WorldWideScience

Sample records for earthquake size distribution

  1. Earthquake Size Distribution: Power-Law with Exponent Beta = 1/2 ?

    CERN Document Server

    Kagan, Yan Y

    2009-01-01

    We propose that the widely observed and universal Gutenberg-Richter relation is a mathematical consequence of the critical branching nature of the earthquake process in a brittle fracture environment. These arguments, though preliminary, are confirmed by recent investigations of the seismic moment distribution in global earthquake catalogs and by results on the distribution of dislocation avalanche sizes in crystals. We consider possible systematic and random errors in determining earthquake size, especially its seismic moment. These effects increase the estimate of the parameter beta of the power-law distribution of earthquake sizes. In particular, we find that the decrease in relative moment uncertainties with earthquake size causes inflation of the beta-value by about 1-3%. Moreover, earthquake clustering greatly influences the beta-parameter. If clusters (aftershock sequences) are taken as the entity to be studied, then the exponent value for their size distribution would decrease by 5-10%. The complexity ...
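
    As an illustration of the kind of power-law fit discussed above, the sketch below (not taken from the paper) shows the standard maximum-likelihood estimator of the exponent beta for seismic moments above a completeness threshold, assuming a Pareto distribution; the threshold and the synthetic test values are placeholders.

      # Hedged sketch: MLE of the Pareto exponent beta for seismic moments,
      # assuming the catalog is complete above m_min (units: N*m).
      import numpy as np

      def estimate_beta(moments, m_min):
          m = np.asarray(moments, dtype=float)
          m = m[m >= m_min]
          beta = len(m) / np.sum(np.log(m / m_min))
          sigma = beta / np.sqrt(len(m))       # approximate 1-sigma uncertainty
          return beta, sigma

      # Synthetic check: draw moments from a Pareto law with beta = 0.5
      rng = np.random.default_rng(0)
      m_min = 1e17
      synthetic = m_min * (1.0 - rng.random(5000)) ** (-1.0 / 0.5)
      print(estimate_beta(synthetic, m_min))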

  2. Convergence of the frequency-size distribution of global earthquakes

    Science.gov (United States)

    Bell, Andrew F.; Naylor, Mark; Main, Ian G.

    2013-06-01

    The Gutenberg-Richter (GR) frequency-magnitude relation is a fundamental empirical law of seismology, but its form remains uncertain for rare extreme events. Here, we show that the temporal evolution of model likelihoods and parameters for the frequency-magnitude distribution of the global Harvard Centroid Moment Tensor catalog is inconsistent with an unbounded GR relation, despite it being the preferred model at the current time. During the recent spate of 12 great earthquakes in the last 8 years, record-breaking events result in profound steps in favor of the unbounded GR relation. However, between such events the preferred model gradually converges to the tapered GR relation, and the form of the convergence cannot be explained by random sampling of an unbounded GR distribution. The convergence properties are consistent with a global catalog composed of superposed randomly sampled regional catalogs, each with a different upper bound, many of which have not yet sampled their largest event.
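
    A hedged sketch of the model comparison described above (an illustration of the approach, not the authors' code): maximized log-likelihoods of an unbounded GR (Pareto) law and a tapered GR (tapered Pareto) law for a set of seismic moments. The starting value for the corner moment is an arbitrary assumption.

      import numpy as np
      from scipy.optimize import minimize

      def loglik_pareto(beta, m, m_min):
          # Unbounded GR: f(M) = beta * m_min**beta / M**(beta + 1), M >= m_min
          return np.sum(np.log(beta) + beta * np.log(m_min) - (beta + 1) * np.log(m))

      def loglik_tapered(params, m, m_min):
          # Tapered GR: survival S(M) = (m_min/M)**beta * exp((m_min - M)/m_corner)
          beta, m_corner = params
          if beta <= 0 or m_corner <= 0:
              return -1e300
          return np.sum(np.log(beta / m + 1.0 / m_corner)
                        + beta * np.log(m_min / m)
                        + (m_min - m) / m_corner)

      def compare_models(m, m_min):
          m = np.asarray(m, float)
          beta_hat = len(m) / np.sum(np.log(m / m_min))      # closed-form GR MLE
          ll_gr = loglik_pareto(beta_hat, m, m_min)
          res = minimize(lambda p: -loglik_tapered(p, m, m_min),
                         x0=[beta_hat, 10.0 * m.max()],       # assumed start point
                         method="Nelder-Mead")
          return ll_gr, -res.fun, res.x

    Re-running such a comparison as the catalog grows in time mimics the convergence analysis described in the abstract.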

  3. Analysis of the relationship between landslides size distribution and earthquake source area

    Science.gov (United States)

    Valagussa, Andrea; Crosta, Giovanni B.; Frattini, Paolo; Xu, Chong

    2014-05-01

    The spatial distribution of earthquake-induced landslides around the seismogenic source has been analysed to better understand the triggering of landslides in seismic areas and to forecast the maximum distance at which an earthquake of a certain magnitude can induce landslides (e.g., Keefer, 1984). However, when applying such approaches to old earthquakes (e.g., the 1929 Buller and 1968 Inangahua earthquakes, New Zealand; Parker, 2013; the 1976 Friuli earthquake, Italy) one should be concerned about the undersampling of smaller landslides, which may have been erased by erosion and landscape evolution. For this reason, it is important to carefully characterize not only the relationship of landslide area and number with distance from the source, but also the size distribution of landslides as a function of distance from the source. In this paper, we analyse the 2008 Wenchuan earthquake landslide inventory (Xu et al., 2013). The earthquake triggered more than 197,000 landslides of different types, including rock avalanches, rockfalls, translational and rotational slides, lateral spreads and debris flows. First, we calculated the landslide intensity (number of landslides per unit area) and spatial density (landslide area per unit area) as a function of distance from the source area of the earthquake. Then, we developed magnitude-frequency curves (MFC) for different distances from the source area. By comparing these curves, we can describe the relation between distance and the frequency density of landslides in seismic areas. Keefer D K (1984) Landslides caused by earthquakes. Geological Society of America Bulletin, 95(4), 406-421. Parker R N (2013) Hillslope memory and spatial and temporal distributions of earthquake-induced landslides, Durham theses, Durham University. Xu, C., Xu, X., Yao, X., & Dai, F. (2013). Three (nearly) complete inventories of landslides triggered by the May 12, 2008 Wenchuan Mw 7.9 earthquake of China and their spatial distribution statistical analysis
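
    A minimal sketch of the distance-binned statistics described in this record (the bin width, input columns and ring areas are assumptions; ring areas would come from a GIS analysis, not from the inventory itself):

      import numpy as np

      def landslide_density_vs_distance(dist_km, ls_area_km2, ring_area_km2=None,
                                        bin_km=5.0):
          # dist_km: distance of each landslide from the source area
          # ls_area_km2: area of each landslide
          # ring_area_km2: optional array with the land area of each distance ring
          d = np.asarray(dist_km, float)
          a = np.asarray(ls_area_km2, float)
          edges = np.arange(0.0, d.max() + bin_km, bin_km)
          idx = np.digitize(d, edges) - 1
          rows = []
          for i in range(len(edges) - 1):
              sel = idx == i
              ring = ring_area_km2[i] if ring_area_km2 is not None else 1.0
              rows.append((edges[i],               # bin start (km)
                           sel.sum() / ring,       # intensity: number per unit area
                           a[sel].sum() / ring))   # spatial density: area per unit area
          return rows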

  4. Tectonic controls on earthquake size distribution and seismicity rate: slab buoyancy and slab bending

    Science.gov (United States)

    Nishikawa, T.; Ide, S.

    2014-12-01

    There are clear variations in maximum earthquake magnitude among Earth's subduction zones. These variations have been studied extensively and attributed to differences in tectonic properties of subduction zones, such as relative plate velocity and subducting plate age [Ruff and Kanamori, 1980]. In addition to maximum earthquake magnitude, the seismicity of medium to large earthquakes also differs among subduction zones, for example in the b-value (i.e., the slope of the earthquake size distribution) and the frequency of seismic events. However, the causal relationship between the seismicity of medium to large earthquakes and subduction zone tectonics has remained unclear. Here we divide Earth's subduction zones into over 100 study regions following Ide [2013] and estimate b-values and the background seismicity rate (the frequency of seismic events excluding aftershocks) for subduction zones worldwide using the maximum likelihood method [Utsu, 1965; Aki, 1965] and the epidemic type aftershock sequence (ETAS) model [Ogata, 1988]. We demonstrate that the b-value varies as a function of subducting plate age and trench depth, and that the background seismicity rate is related to the degree of slab bending at the trench. Large earthquakes tend to occur relatively frequently (lower b-values) in shallower subduction zones with younger slabs, and more earthquakes occur in subduction zones with deeper trenches and steeper dip angles. These results suggest that slab buoyancy, which depends on subducting plate age, controls the earthquake size distribution, and that intra-slab faults due to slab bending, which increase with the steepness of the slab dip angle, influence the frequency of seismic events, because they produce heterogeneity in plate coupling and efficiently inject fluid to elevate pore fluid pressure on the plate interface. This study reveals tectonic factors that control earthquake size distribution and seismicity rate, and these relationships between seismicity and
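
    For reference, the maximum-likelihood b-value estimator cited above [Aki, 1965; Utsu, 1965] can be sketched as follows; the 0.1 magnitude binning correction is an assumption about how the catalog reports magnitudes.

      import numpy as np

      def b_value_mle(mags, m_c, dm=0.1):
          # Aki (1965) estimator with Utsu's binning correction of dm/2
          m = np.asarray(mags, float)
          m = m[m >= m_c]
          b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
          b_err = b / np.sqrt(len(m))        # first-order uncertainty
          return b, b_err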

  5. Bayesian inference on earthquake size distribution: a case study in Italy

    Science.gov (United States)

    Faenza, Licia; Meletti, Carlo; Sandri, Laura

    2010-05-01

    This paper is focused on the study of the statistical distribution of earthquake size using Bayesian inference. The strategy consists in the definition of an a priori distribution based on instrumental seismicity, modeled as a power-law distribution. Using the observed historical data, the power law is then modified to obtain the posterior distribution. The aim of this paper is to define the earthquake size distribution using all the available seismic databases (i.e., instrumental and historical catalogs) and a robust statistical technique. We apply this methodology to Italian seismicity, dividing the territory into source zones as done for the seismic hazard assessment, taken here as a reference model. The results suggest that each area has its own peculiar trend: while the power law is able to capture the mean aspect of the earthquake size distribution, the posterior emphasizes different slopes in different areas. Our results are in general agreement with those used in the seismic hazard assessment in Italy. However, there are areas in which a flattening of the curve is found, indicating a significant departure from power-law behavior and implying that there are some local aspects that a power-law distribution is not able to capture.
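
    A hedged sketch of the general idea, not the authors' implementation: a grid Bayesian update of the GR b-value for one source zone, with a prior centred on the instrumental-catalog estimate and a likelihood from historical magnitudes. The Gaussian prior form and the exponential GR likelihood are assumptions made for illustration.

      import numpy as np

      def posterior_b(instr_b, instr_sigma, hist_mags, m_c,
                      grid=np.linspace(0.5, 2.0, 301)):
          hist = np.asarray(hist_mags, float)
          hist = hist[hist >= m_c]
          # Prior: Gaussian around the instrumental b-value (an assumption)
          log_prior = -0.5 * ((grid - instr_b) / instr_sigma) ** 2
          # Likelihood: magnitudes above m_c exponential with rate beta = b*ln(10)
          beta = grid * np.log(10.0)
          log_like = len(hist) * np.log(beta) - beta * np.sum(hist - m_c)
          log_post = log_prior + log_like
          post = np.exp(log_post - log_post.max())
          return grid, post / (post.sum() * (grid[1] - grid[0]))   # normalized pdf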

  6. Tsunami Size Distributions at Far-Field Locations from Aggregated Earthquake Sources

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2015-12-01

    The distribution of tsunami amplitudes at far-field tide gauge stations is explained by aggregating the probability of tsunamis derived from individual subduction zones and scaled by their seismic moment. The observed tsunami amplitude distributions of both continental (e.g., San Francisco) and island (e.g., Hilo) stations distant from subduction zones are examined. Although the observed probability distributions nominally follow a Pareto (power-law) distribution, there are significant deviations. Some stations exhibit varying degrees of tapering of the distribution at high amplitudes and, in the case of the Hilo station, there is a prominent break in slope on log-log probability plots. There are also differences in the slopes of the observed distributions among stations that can be significant. To explain these differences we first estimate seismic moment distributions of observed earthquakes for major subduction zones. Second, regression models are developed that relate the tsunami amplitude at a station to seismic moment at a subduction zone, correcting for epicentral distance. The seismic moment distribution is then transformed to a site-specific tsunami amplitude distribution using the regression model. Finally, a mixture distribution is developed, aggregating the transformed tsunami distributions from all relevant subduction zones. This mixture distribution is compared to the observed distribution to assess the performance of the method described above. This method allows us to estimate the largest tsunami that can be expected in a given time period at a station.

  7. Power Scaling of the Size Distribution of Economic Loss and Fatalities due to Hurricanes, Earthquakes, Tornadoes, and Floods in the USA

    Science.gov (United States)

    Tebbens, S. F.; Barton, C. C.; Scott, B. E.

    2016-12-01

    Traditionally, the size of natural disaster events such as hurricanes, earthquakes, tornadoes, and floods is measured in terms of wind speed (m/sec), energy released (ergs), or discharge (m3/sec) rather than by economic loss or fatalities. Economic loss and fatalities from natural disasters result from the intersection of the human infrastructure and population with the size of the natural event. This study investigates the size versus cumulative number distribution of individual natural disaster events for several disaster types in the United States. Economic losses are adjusted for inflation to 2014 USD. The cumulative number divided by the time span of the data for each disaster type is the basis for making probabilistic forecasts in terms of the number of events greater than a given size per year and, its inverse, the return time. Such forecasts are of interest to insurers/re-insurers, meteorologists, seismologists, government planners, and response agencies. Plots of size versus cumulative number per year for economic loss and fatalities are well fit by power scaling functions of the form p(x) = C x^(-β), where p(x) is the cumulative number of events with size equal to or greater than x, C is a constant (the activity level), x is the event size, and β is the scaling exponent. Economic loss and fatalities due to hurricanes, earthquakes, tornadoes, and floods are well fit by power functions over one to five orders of magnitude in size. Economic losses for hurricanes and tornadoes have greater scaling exponents, β = 1.1 and 0.9 respectively, whereas earthquakes and floods have smaller scaling exponents, β = 0.4 and 0.6 respectively. Fatalities for tornadoes and floods have greater scaling exponents, β = 1.5 and 1.7 respectively, whereas hurricanes and earthquakes have smaller scaling exponents, β = 0.4 and 0.7 respectively. The scaling exponents can be used to make probabilistic forecasts for time windows ranging from 1 to 1000 years
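
    A minimal sketch of the cumulative size-frequency fit described above, assuming only a list of event sizes and the length of the record; the fit is done by least squares in log-log space, as is common for this kind of plot (not necessarily the authors' exact procedure).

      import numpy as np

      def fit_power_law(sizes, record_years):
          # p(x) = C * x**(-beta), with p(x) the annual number of events >= x
          x = np.sort(np.asarray(sizes, float))[::-1]            # descending sizes
          cum_per_year = np.arange(1, len(x) + 1) / record_years # rank / record length
          slope, intercept = np.polyfit(np.log10(x), np.log10(cum_per_year), 1)
          beta, c = -slope, 10.0 ** intercept
          return beta, c

      # Example use: expected annual count of events with loss >= 1e9 USD is
      # c * (1e9) ** (-beta), and the return time is its inverse.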

  8. Prediction of earthquake-triggered landslide event sizes

    Science.gov (United States)

    Braun, Anika; Havenith, Hans-Balder; Schlögel, Romy

    2016-04-01

    Seismically induced landslides are a major environmental effect of earthquakes, which may significantly contribute to related losses. Moreover, in paleoseismology, landslide event sizes are an important proxy for estimating the intensity and magnitude of past earthquakes, and thus allow seismic hazard assessment to be improved over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. We present here a review of factors contributing to earthquake-triggered slope failures based on an "event-by-event" classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake occurs. Five main factors, 'Intensity', 'Fault', 'Topographic energy', 'Climatic conditions' and 'Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. The relative weight of these factors was extracted from published data for numerous past earthquakes; topographic inputs were checked in Google Earth and through geographic information systems. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970) the combination and relative weight of the factors was calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be cross-checked. One of our main findings is that the 'Fault' factor, which is based on characteristics of the fault, the surface rupture and its location with respect to mountain areas, has the most important

  9. Scaling of Seismic Memory with Earthquake Size

    CERN Document Server

    Zheng, Zeyu; Tenenbaum, Joel; Podobnik, Boris; Stanley, H Eugene

    2011-01-01

    It has been observed that earthquake events possess short-term memory, i.e. that events occurring in a particular location depend on the recent history of that location. We conduct an analysis to see whether real-time earthquake data also possess long-term memory and, if so, whether such autocorrelations depend on the size of earthquakes within close spatiotemporal proximity. We analyze the seismic waveform database recorded by 64 stations in Japan, including the 2011 "Great East Japan Earthquake", one of the five most powerful earthquakes ever recorded, which resulted in a tsunami and devastating nuclear accidents. We explore the question of seismic memory through the use of mean conditional intervals and detrended fluctuation analysis (DFA). We find that the waveform sign series show long-range power-law anticorrelations while the interval series show long-range power-law correlations. We find size dependence in earthquake autocorrelations: as earthquake size increases, both of these correlation beha...
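
    A hedged sketch of first-order detrended fluctuation analysis (DFA-1), the generic technique named in the abstract; the preprocessing into sign and interval series is omitted, and the scale choices are placeholders.

      import numpy as np

      def dfa(series, scales=None, order=1):
          x = np.asarray(series, float)
          y = np.cumsum(x - x.mean())                       # integrated profile
          if scales is None:
              scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
          flucts = []
          for n in scales:
              n_seg = len(y) // n
              segs = y[:n_seg * n].reshape(n_seg, n)
              t = np.arange(n)
              resid = []
              for seg in segs:
                  coef = np.polyfit(t, seg, order)          # local polynomial trend
                  resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              flucts.append(np.sqrt(np.mean(resid)))        # fluctuation F(n)
          alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
          return scales, np.array(flucts), alpha            # alpha > 0.5: correlations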

  10. Extreme value distribution of earthquake magnitude

    Science.gov (United States)

    Zi, Jun Gan; Tung, C. C.

    1983-07-01

    Probability distribution of maximum earthquake magnitude is first derived for an unspecified probability distribution of earthquake magnitude. A model for energy release of large earthquakes, similar to that of Adler-Lomnitz and Lomnitz, is introduced from which the probability distribution of earthquake magnitude is obtained. An extensive set of world data for shallow earthquakes, covering the period from 1904 to 1980, is used to determine the parameters of the probability distribution of maximum earthquake magnitude. Because of the special form of probability distribution of earthquake magnitude, a simple iterative scheme is devised to facilitate the estimation of these parameters by the method of least-squares. The agreement between the empirical and derived probability distributions of maximum earthquake magnitude is excellent.
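
    A worked sketch of the generic extreme-value step used in studies of this kind (a plain GR assumption for the parent distribution, not the paper's specific energy-release model): the CDF of the maximum magnitude of N independent events is the parent CDF raised to the power N.

      import numpy as np

      def cdf_max_magnitude(m, n_events, b=1.0, m_c=5.0):
          # Parent CDF: GR (exponential) law above completeness m_c
          f = 1.0 - 10.0 ** (-b * (np.asarray(m, float) - m_c))
          return np.clip(f, 0.0, 1.0) ** n_events

      # e.g. probability that the largest of 1000 M>=5 events does not exceed M7:
      # cdf_max_magnitude(7.0, 1000)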

  11. Landslide size distribution in seismic areas

    Science.gov (United States)

    Valagussa, Andrea; Frattini, Paolo; Crosta, Giovanni B.

    2015-04-01

    In seismic areas, the analysis of the landslide size distribution as a function of distance from the seismic source is very important for hazard zoning and land planning. From numerical modelling (Bourdeau et al., 2004), it has been observed that the area of the sliding mass tends to increase with the ground-motion amplitude up to a certain threshold input acceleration. This has also been observed empirically for the 1989 Loma Prieta earthquake (Keefer and Manson, 1998) and the 1999 Chi Chi earthquake (Khazai and Sitar, 2003). Based on this, it is possible to assume that landslide size decreases with increasing distance from the seismic source. In this research, we analysed six earthquake-induced landslide inventories (Papua New Guinea earthquake, 1993; Northridge earthquake, 1994; Niigata-Chuetsu earthquake, 2004; Iwate-Miyagi Nairiku earthquake, 2008; Wenchuan earthquake, 2008; Tohoku earthquake, 2011) with magnitudes ranging between Mw 6.6 and 9.0. For each earthquake, we first analysed the size of landslides as a function of different factors such as lithology, PGA, relief, and distance from the seismic sources (both fault and epicentre). Then, we analysed the magnitude-frequency curves for different distances from the source area and for each lithology. We found that a clear relationship between the size distribution and the distance from the seismic source is not evident, probably due to the combined effect of the different influencing factors and to the non-linear relationship between the ground-motion intensity and the distance from the seismic source.

  12. Foreshocks Are Not Predictive of Future Earthquake Size

    Science.gov (United States)

    Page, M. T.; Felzer, K. R.; Michael, A. J.

    2014-12-01

    The standard model for the origin of foreshocks is that they are earthquakes that trigger aftershocks larger than themselves (Reasenberg and Jones, 1989). This can be formally expressed in terms of a cascade model. In this model, aftershock magnitudes follow the Gutenberg-Richter magnitude-frequency distribution, regardless of the size of the triggering earthquake, and aftershock timing and productivity follow Omori-Utsu scaling. An alternative hypothesis is that foreshocks are triggered incidentally by a nucleation process, such as pre-slip, that scales with mainshock size. If this were the case, foreshocks would potentially have predictive power of the mainshock magnitude. A number of predictions can be made from the cascade model, including the fraction of earthquakes that are foreshocks to larger events, the distribution of differences between foreshock and mainshock magnitudes, and the distribution of time lags between foreshocks and mainshocks. The last should follow the inverse Omori law, which will cause the appearance of an accelerating seismicity rate if multiple foreshock sequences are stacked (Helmstetter and Sornette, 2003). All of these predictions are consistent with observations (Helmstetter and Sornette, 2003; Felzer et al. 2004). If foreshocks were to scale with mainshock size, this would be strong evidence against the cascade model. Recently, Bouchon et al. (2013) claimed that the expected acceleration in stacked foreshock sequences before interplate earthquakes is higher prior to M≥6.5 mainshocks than smaller mainshocks. Our re-analysis fails to support the statistical significance of their results. In particular, we find that their catalogs are not complete to the level assumed, and their ETAS model underestimates inverse Omori behavior. To conclude, seismicity data to date is consistent with the hypothesis that the nucleation process is the same for earthquakes of all sizes.

  13. Pore size distribution mapping

    OpenAIRE

    Strange, John H.; J. Beau W. WEBBER; Schmidt, S.D.

    1996-01-01

    Pore size distribution mapping has been demonstrated using NMR cryoporometry in the presence of a magnetic field gradient. This novel method is extendable to 2D and 3D mapping. It offers a unique nondestructive method of obtaining full pore-size distributions in the range 3 to 100 nm at any point within a bulk sample.

  14. Origin and Nonuniversality of the Earthquake Interevent Time Distribution

    Science.gov (United States)

    Touati, Sarah; Naylor, Mark; Main, Ian G.

    2009-04-01

    Many authors have modeled regional earthquake interevent times using a gamma distribution, whereby data collapse occurs under a simple rescaling of the data from different regions or time periods. We show, using earthquake data and simulations, that the distribution is fundamentally a bimodal mixture distribution dominated by correlated aftershocks at short waiting times and independent events at longer times. The much-discussed power-law segment often arises as a crossover between these two. We explain the variation of the distribution with region size and show that it is not universal.
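
    A minimal sketch, assuming only a list of event origin times in days: compute interevent times and a histogram in log time, where the bimodal mixture of short aftershock-dominated intervals and longer independent-event intervals described above shows up as two peaks rather than a single gamma-like hump.

      import numpy as np

      def interevent_log_histogram(origin_times_days, bins=40):
          t = np.sort(np.asarray(origin_times_days, float))
          dt = np.diff(t)
          dt = dt[dt > 0]                       # drop identical timestamps
          hist, edges = np.histogram(np.log10(dt), bins=bins, density=True)
          return hist, edges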

  15. The GIS and analysis of earthquake damage distribution of the 1303 Hongtong M=8 earthquake

    Institute of Scientific and Technical Information of China (English)

    高孟潭; 金学申; 安卫平; 吕晓健

    2004-01-01

    A geographic information system (GIS) for the 1303 Hongtong M=8 earthquake has been established. Using the spatial analysis functions of GIS, the spatial distribution characteristics of the damage and the isoseismals of the earthquake are studied. By comparison with the standard earthquake intensity attenuation relationship, an anomalous damage distribution of the earthquake is found, and its relationship with tectonics, site conditions and basins is analyzed. The influence on ground motion of the earthquake source and of the underground structures near the source is also studied. The implications of the anomalous damage distribution for seismic zonation, earthquake-resistant design, earthquake prediction and earthquake emergency response are discussed.

  16. Size dependent rupture growth at the scale of real earthquake

    Science.gov (United States)

    Colombelli, Simona; Festa, Gaetano; Zollo, Aldo

    2017-04-01

    When an earthquake starts, the rupture process may evolve in a variety of ways, resulting in earthquakes of different magnitudes, with variable areal extent and slip, and this may produce an unpredictable damage distribution around the fault zone. The cause of the observed diversity of rupture evolution is unknown. There are studies supporting the idea that all earthquakes arise in the same way, with the mechanical conditions of the fault zone determining whether a small or large earthquake is generated. Other studies show that small and large earthquakes differ from the initial stage of rupture onset. Among them, Colombelli et al. (2014) observed that the initial slope of the P-wave peak displacement could be a discriminant for the final earthquake size, so that small and large ruptures show a different behavior in their initial stage. In this work we perform a detailed analysis of the time evolution of the P-wave peak amplitude for a small set of co-located events during the 2008 Iwate-Miyagi (Japan) earthquake sequence. The events have magnitudes between 3.2 and 7.2 and their epicentral coordinates vary in a narrow range, with a maximum distance among the epicenters of about 15 km. After applying a refined technique for data processing, we measured the initial peak displacement (Pd) as the absolute value of the vertical component of the displacement records, starting from the P-wave arrival time and progressively expanding the time window. For each event, we corrected the observed Pd values at different stations for the distance effect and computed the average logarithm of Pd as a function of time. The overall shape of the Pd curves (in log-lin scale) is consistent with what has been previously observed for a larger dataset by Colombelli et al. (2014). The initial amplitude begins with small values and then increases with time, until a plateau level is reached. However, we observed essential differences in the
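
    A minimal sketch of the expanding-window Pd measurement described above, assuming a vertical displacement trace that has already been filtered and has a picked P arrival; window lengths, sampling and units are illustrative, and the distance correction applied before averaging across stations is left out.

      import numpy as np

      def pd_curve(displacement, dt, p_onset_s, max_window_s=30.0, step_s=0.5):
          """Return window lengths (s) and log10 of the running absolute peak."""
          i0 = int(round(p_onset_s / dt))
          windows = np.arange(step_s, max_window_s + step_s, step_s)
          pd = []
          for w in windows:
              i1 = i0 + int(round(w / dt))
              pd.append(np.max(np.abs(displacement[i0:i1])))   # peak so far
          return windows, np.log10(np.array(pd))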

  17. Business size distributions

    Science.gov (United States)

    D'Hulst, R.; Rodgers, G. J.

    2001-10-01

    In a recent work, we introduced two models for the dynamics of customers trying to find the business that best corresponds to their expectation for the price of a commodity. In agreement with the empirical data, a power-law distribution for the business sizes was obtained, taking the number of customers of a business as a proxy for its size. Here, we extend one of our previous models in two different ways. First, we introduce a business aggregation rate that is fitness dependent, which allows us to reproduce a spread in empirical data from one country to another. Second, we allow the bankruptcy rate to take a different functional form, to be able to obtain a log-normal distribution with power-law tails for the size of the businesses.

  18. Hail Size Distribution Mapping

    Science.gov (United States)

    2008-01-01

    A 3-D weather radar visualization software program was developed and implemented as part of an experimental Launch Pad 39 Hail Monitor System. 3DRadPlot, a radar plotting program, is one of several software modules that form building blocks of the hail data processing and analysis system (the complete software processing system under development). The spatial and temporal mapping algorithms were originally developed through research at the University of Central Florida, funded by NASA s Tropical Rainfall Measurement Mission (TRMM), where the goal was to merge National Weather Service (NWS) Next-Generation Weather Radar (NEXRAD) volume reflectivity data with drop size distribution data acquired from a cluster of raindrop disdrometers. In this current work, we adapted these algorithms to process data from a cluster of hail disdrometers positioned around Launch Pads 39A or 39B, along with the corresponding NWS radar data. Radar data from all NWS NEXRAD sites is archived at the National Climatic Data Center (NCDC). That data can be readily accessed at . 3DRadPlot plots Level III reflectivity data at four scan elevations (this software is available at Open Channel Software, ). By using spatial and temporal interpolation/extrapolation based on hydrometeor fall dynamics, we can merge the hail disdrometer array data coupled with local Weather Surveillance Radar-1988, Doppler (WSR-88D) radial velocity and reflectivity data into a 4-D (3-D space and time) picture of hail size distributions. Hail flux maps can then be generated and used for damage prediction and assessment over specific surfaces corresponding to structures within the disdrometer array volume. Immediately following a hail storm, specific damage areas and degree of damage can be identified for inspection crews.

  19. Likelihood analysis of earthquake focal mechanism distributions

    CERN Document Server

    Kagan, Y Y

    2014-01-01

    In a paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, so their performance is questionable. In this work we apply a conventional likelihood method to measure forecast skill. The advantage of such an approach is that earthquake rate prediction can in principle be adequately combined with the focal mechanism forecast, if both are based on likelihood scores, resulting in a general forecast optimization. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random. For a double-couple source orientation the random probability distribution function is not uniform, which complicates the calculation of the likelihood value. To better understand the resulting complexities we calculate the information (likelihood) score for two rota...

  20. Distribution characteristics of earthquake-induced landslide with the earthquake source fault-the cases of recent strong earthquakes in eastern Japan

    Science.gov (United States)

    Hasi, B.; Ishii, Y.; Maruyama, K.; Terada, H.

    2009-12-01

    In recent years, three strong earthquakes, the Mid-Niigata earthquake (M6.8, October 23, 2004), the Noto Peninsula earthquake (M6.9, March 25, 2007), and the Chuetsu-offshore earthquake (M6.8, July 16, 2007), struck eastern Japan. All of these earthquakes occurred inland on reverse faults, with hypocenter depths of 11-17 km, triggered a large number of landslides, and caused serious damage to the affected regions due to these landslides. To clarify the distribution characteristics of the landslides induced by these earthquakes, we interpreted landslides using aerial photographs taken immediately after the earthquakes, and analyzed the landslide distributions with respect to the peak ground acceleration (PGA), the seismic intensity (on the Japan Meteorological Agency intensity scale), and the source fault of the mainshock of each earthquake. The analysis revealed that: 1) most of the landslides occurred in areas where the PGA is larger than 500 gal and the maximum seismic intensity is larger than 5 plus; 2) the landslides occurred at short distances from the source fault (the shortest distance from the surface projection of the top tip of the fault), with about 80% occurring within a distance of 20 km; 3) more than 80% of the landslides occurred on the hanging wall, and their size (length, width, area) is larger than that of landslides on the footwall of the source fault; 4) the number and size of landslides tend to decrease with distance from the source fault. Our results suggest that the distance from the source fault could be a parameter for analyzing landslide occurrence induced by strong earthquakes.

  1. Capsizing icebergs release earthquake-sized energies

    Science.gov (United States)

    Schultz, Colin

    2012-03-01

    A large iceberg can carry a tremendous amount of gravitational potential energy. While all icebergs float with the bulk of their mass submerged beneath the water's surface, some drift around in precarious orientations—they are temporarily stable, but an outside push would send them tumbling over. Large icebergs, like those that split from the Jakobshavn Isbræ glacier in Greenland, can release the energy equivalent to a magnitude 6 or 7 earthquake when they capsize. A 1995 event demonstrated the potential for destruction, as a tsunami spawned from a capsizing iceberg devastated a coastal Greenland community. Measuring how energy is dispersed during capsizing is crucial to understanding the risk associated with these events but is also key to determining their larger role in surface ocean dynamics

  2. The Upper Limit Size of Reservoir-Induced Earthquakes

    Institute of Scientific and Technical Information of China (English)

    Huang Fuqiong; Zhang Yan; Wu Zhongliang; Ma Lijie

    2008-01-01

    We show the relation between the magnitude of induced earthquakes and the reservoir storage and dam height, based on the global catalog from 1967 to 1989 compiled by Ding Yuanzhang (1989). By multiplying the reservoir storage by the dam height, we introduced a new parameter named EE. We found that, for a given EE, the observed magnitudes do not exceed a certain limit. Based on a discussion of its physics, we call EE the equivalent energy. We consider this limit to be the upper limit of magnitude for reservoir-induced earthquakes. The result is supported by recent cases occurring in China. This size limitation can be used as a helpful consideration in reservoir design.

  3. Kinetic narrowing of size distribution

    Science.gov (United States)

    Dubrovskii, V. G.

    2016-05-01

    We present a model that reveals an interesting possibility for narrowing the size distribution of nanostructures when the deterministic growth rate changes its sign from positive to negative at a certain stationary size. Such a behavior occurs in self-catalyzed one-dimensional III-V nanowires and more generally whenever a negative "adsorption-desorption" term in the growth rate is compensated by a positive "diffusion flux." By asymptotically solving the Fokker-Planck equation, we derive an explicit representation for the size distribution that describes either Poissonian broadening or self-regulated narrowing depending on the parameters. We show how the fluctuation-induced spreading of the size distribution can be completely suppressed in systems with size self-stabilization. These results can be used for obtaining size-uniform ensembles of different nanostructures.

  4. Finite data-size scaling of clustering in earthquake networks

    CERN Document Server

    Abe, Sumiyoshi; Suzuki, Norikazu

    2010-01-01

    The earthquake network introduced in the work [S. Abe and N. Suzuki, Europhys. Lett. 65, 581 (2004)] is known to be of the small-world type. The values of the network characteristics, however, depend not only on the cell size (i.e., the scale of coarse graining needed for constructing the network) but also on the size of the seismic data set. Here, the discovery of a scaling law for the clustering coefficient in terms of the data size, referred to here as finite data-size scaling, is reported. Its universality is shown to be supported by detailed analysis of data taken from California, Japan, and Iran.
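
    A hedged sketch of the cell-based network construction of Abe and Suzuki and of a finite data-size check: build the network from the first n events for increasing n and track the clustering coefficient. The cell size, the event counts and the use of an undirected simple graph are assumptions for illustration.

      import numpy as np
      import networkx as nx

      def cell_index(lat, lon, cell_deg):
          return (int(np.floor(lat / cell_deg)), int(np.floor(lon / cell_deg)))

      def clustering_vs_size(lats, lons, cell_deg=0.1, sizes=(1000, 2000, 5000, 10000)):
          cells = [cell_index(la, lo, cell_deg) for la, lo in zip(lats, lons)]
          results = {}
          for n in sizes:
              if n > len(cells):
                  break
              g = nx.Graph()
              # successive events connect the cells that contain them
              g.add_edges_from(zip(cells[:n - 1], cells[1:n]))
              g.remove_edges_from(list(nx.selfloop_edges(g)))
              results[n] = nx.average_clustering(g)
          return results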

  5. Earthquakes economic costs through rank-size laws

    Science.gov (United States)

    Ficcadenti, Valerio; Cerqueti, Roy

    2017-08-01

    This paper is devoted to assessing the presence of regularities in the magnitudes of the earthquakes that occurred in Italy between January 24th, 2016 and January 24th, 2017, and to proposing an earthquake cost indicator. The data considered include the catastrophic events in Amatrice and in the Marche region. For this purpose, we implement two types of rank-size analysis: the classical Zipf-Mandelbrot law and the so-called universal law proposed by Ausloos and Cerqueti (2016 PLoS One 11 e0166011). The proposed generic measure of the economic impact of earthquakes starts from the assumption of a cause-effect relation between earthquake magnitudes and economic costs. To this aim, we hypothesize that such a relation can be formalized in a functional way to show how infrastructure resistance affects the cost. The results allow us to clarify the impact of an earthquake on the social context and might serve to strengthen the struggle against the dramatic outcomes of such natural phenomena.
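
    An illustrative sketch, not the authors' code: fitting the Zipf-Mandelbrot rank-size law s(r) = C / (r + q)^gamma to a list of event sizes (here taken to be magnitudes), by nonlinear least squares on the logarithm of the sizes.

      import numpy as np
      from scipy.optimize import curve_fit

      def zipf_mandelbrot_fit(sizes):
          s = np.sort(np.asarray(sizes, float))[::-1]          # rank 1 = largest
          r = np.arange(1, len(s) + 1, dtype=float)
          log_model = lambda r, logC, q, gamma: logC - gamma * np.log(r + q)
          p0 = [np.log(s[0]), 1.0, 1.0]                        # assumed start values
          (logC, q, gamma), _ = curve_fit(log_model, r, np.log(s), p0=p0,
                                          bounds=([-np.inf, 0.0, 0.0], np.inf))
          return np.exp(logC), q, gamma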

  6. The origin and non-universality of the earthquake inter-event time distribution

    Science.gov (United States)

    Touati, S.; Naylor, M.; Main, I. G.

    2009-04-01

    Understanding the form and origin of the earthquake inter-event time distribution is vital for both the advancement of seismic hazard assessment models and the development of physically-based models of earthquake dynamics. Many authors have modelled regional earthquake inter-event times using a gamma distribution, whereby data collapse occurs under a simple rescaling of the data from different regions or time periods. We use earthquake data and simulations to present a new understanding of the form of the earthquake inter-event time distribution as essentially bimodal, and a physically-motivated explanation for its origin in terms of the interaction of separate aftershock sequences within the earthquake time series. Our insight into the origin of the bimodality is through stochastic simulations of the Epidemic-Type Aftershock Sequences (ETAS) model, a point process model based on well-known empirical laws of seismicity, in which we are able to keep track of the triggering "family" structure in the catalogue unlike with real seismicity. We explain the variation of the distribution shape with region size and show that it is not universal under rescaling by the mean event rate. The power-law segment in the gamma distribution usually used to model inter-earthquake times arises under some conditions as a crossover between the two peaks; the previous results supporting universality can be explained by strong data selection criteria in the form of a requirement for short-term stationarity in the event rate.
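
    A compact branching sketch of the ETAS point process mentioned above (hedged: the parameter values are illustrative placeholders and must keep the branching ratio below one; real studies fit them to the catalog). Background events are Poissonian, and each event triggers offspring with GR magnitudes, Omori-law delays, and productivity proportional to 10^(alpha*(m - m_c)).

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_etas(t_end=1000.0, mu=0.2, k=0.05, alpha=0.8, c=0.01, p=1.2,
                        b=1.0, m_c=3.0):
          # GR magnitudes above m_c are exponential with rate b*ln(10)
          mags = lambda n: m_c + rng.exponential(1.0 / (b * np.log(10)), n)
          n_bg = rng.poisson(mu * t_end)                     # background events
          events = list(zip(rng.uniform(0, t_end, n_bg), mags(n_bg)))
          queue = list(events)
          while queue:
              t0, m0 = queue.pop()
              n_child = rng.poisson(k * 10 ** (alpha * (m0 - m_c)))
              # Omori-law waiting times via inverse-CDF sampling of (t + c)**(-p)
              u = rng.random(n_child)
              dt = c * ((1 - u) ** (1.0 / (1.0 - p)) - 1.0)
              for ti, mi in zip(t0 + dt, mags(n_child)):
                  if ti < t_end:
                      events.append((ti, mi))
                      queue.append((ti, mi))
          return sorted(events)                              # (time, magnitude) pairs

    Interevent times computed from such a synthetic catalogue can be compared against the real data, which is the spirit of the simulation-based argument in the abstract.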

  7. Earthquake rate and magnitude distributions of great earthquakes for use in global forecasts

    Science.gov (United States)

    Kagan, Yan Y.; Jackson, David D.

    2016-07-01

    principle that equates the seismic moment rate with the tectonic moment rate inferred from geodesy and geology, we obtain a consistent estimate of the corner moment largely independent of seismic history. These evaluations confirm the above-mentioned corner magnitude value. The new estimates of corner magnitudes are important both for the forecast part based on seismicity as well as the part based on geodetic strain rates. We examine rate variations as expressed by annual earthquake numbers. Earthquakes larger than magnitude 6.5 obey the Poisson distribution. For smaller events the negative-binomial distribution fits much better because it allows for earthquake clustering.
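
    A hedged sketch of the count-model comparison reported above: Poisson versus negative-binomial fits to annual numbers of earthquakes above a magnitude threshold, compared via maximized log-likelihoods. The moment-based starting values and the optimizer settings are assumptions.

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize

      def compare_count_models(annual_counts):
          n = np.asarray(annual_counts, int)
          # Poisson: the MLE of the rate is the sample mean
          lam = n.mean()
          ll_pois = stats.poisson.logpmf(n, lam).sum()
          # Negative binomial (r, p): crude moment-based start values
          var = n.var(ddof=1)
          r0 = lam ** 2 / max(var - lam, 1e-6)
          p0 = r0 / (r0 + lam)
          neg_ll = lambda x: -stats.nbinom.logpmf(n, x[0], x[1]).sum()
          res = minimize(neg_ll, x0=[r0, p0],
                         bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
          return ll_pois, -res.fun, res.x      # the clustered model should win for small events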

  8. Centaur size distribution with DECam

    Science.gov (United States)

    Fuentes, Cesar; Trilling, David E.; Schlichting, Hilke

    2014-11-01

    We present the results of the 2014 Centaur search campaign with the Dark Energy Camera (DECam) at Cerro Tololo, Chile. This is the largest debiased Centaur survey to date, measuring for the first time the size distribution of small Centaurs (1-10 km) and directly detecting for the first time the sizes of planetesimals from which the entire Solar System formed. The theoretical model for the coagulation and collisional evolution of the outer Solar System proposed in Schlichting et al. 2013 predicts a steep rise in the size distribution of TNOs smaller than 10 km. These objects are below the detection limit of current TNO surveys but feasible for the Centaur population. By constraining the number of Centaurs and this feature in their size distribution we can confirm the collisional evolution of the Solar System and estimate the rate at which material is being transferred from the outer to the inner Solar System. If the shallow power-law behavior of the TNO size distribution at ~40 km could be extrapolated to 1 km, the size of Jupiter Family Comets (JFC), there would not be enough small TNOs to supply the JFC population (Volk & Malhotra, 2008), debunking the link between TNOs and JFCs. We also obtain the colors of small Centaurs and TNOs, providing a signature of collisional evolution by measuring whether there is in fact a relationship between color and size. If objects smaller than the break in the TNO size distribution are being ground down by collisions then their surfaces should be fresh, and thus appear bluer in the optical than larger TNOs that are not experiencing collisions.

  9. "Universal" Distribution of Inter-Earthquake Times Explained

    CERN Document Server

    Saichev, A

    2006-01-01

    We propose a simple theory for the "universal" scaling law previously reported for the distributions of waiting times between earthquakes. It is based on a widely used benchmark model of seismicity, which simply assumes no difference in the physics of foreshocks, mainshocks and aftershocks. Our theoretical calculations provide good fits to the data and show that universality is only approximate. We conclude that the distributions of inter-event times do not reveal more information than what is already known from the Gutenberg-Richter and the Omori power laws. Our results reinforce the view that triggering of earthquakes by other earthquakes is a key physical mechanism to understand seismicity.

  10. Urban aerosol number size distributions

    Directory of Open Access Journals (Sweden)

    T. Hussein

    2004-01-01

    Aerosol number size distributions have been measured since 5 May 1997 in Helsinki, Finland. The presented aerosol data represent size distributions within the particle diameter range 8-400 nm during the period from May 1997 to March 2003. The daily, monthly and annual patterns of the aerosol particle number concentrations were investigated. The temporal variation of the particle number concentration showed close correlations with traffic activities. The highest total number concentrations were observed during workdays, especially on Fridays, and the lowest concentrations occurred during weekends, especially Sundays. Seasonally, the highest total number concentrations were observed during winter and spring and lower concentrations were observed during June and July. More than 80% of the number size distributions had three modes: nucleation mode (Dp < 30 nm), Aitken mode (20-100 nm) and accumulation mode (Dp > 90 nm). Less than 20% of the number size distributions had either two modes or consisted of more than three modes. Two different measurement sites were used; at the first (Siltavuori, 5.5.1997-5.3.2001), the arithmetic means of the particle number concentrations were 7000 cm−3, 6500 cm−3, and 1000 cm−3, respectively, for the nucleation, Aitken, and accumulation modes. At the second site (Kumpula, 6.3.2001-28.2.2003) they were 5500 cm−3, 4000 cm−3, and 1000 cm−3. The total number concentrations in the nucleation and Aitken modes were usually significantly higher during workdays than during weekends. The temporal variations in the accumulation mode were less pronounced. The lower concentrations at Kumpula were mainly due to building construction and also a slight overall decreasing trend during these years. During the site change, a period of simultaneous measurements over two weeks was performed, showing good correlation at both sites.

  11. The interannual earthquake distributions and its peculiarity.

    Science.gov (United States)

    Levin, Boris; Sasorova, Elena

    2010-05-01

    The study of the periodicity of seismic activation at different energy levels is a topical problem in seismology, and it might help to illuminate the physical mechanisms that govern the preparation and generation of earthquakes. It has been observed throughout written history that seismic events occur in various regions of the Earth significantly more often in some months of the year than in others. In the last decade, there has been growing interest in problems related to the search for global spatio-temporal regularities in the distribution of seismic events on the Earth. The objective of our work is to test the hypothesis that within-year variability exists for events of various energy levels, to determine the lithospheric depth boundary that divides seismic events into two groups (those subject to external (tidal) forces and those not), and to search for global regularities in the spatio-temporal distribution of seismic events in the Pacific. The whole region was subdivided into 31 subregions located along the perimeter of the Pacific. All events in each subregion were subdivided into five subsets according to magnitude level (from magnitude 4 upward), and into two depth groups, "shallow" (H ≤ Htr) and "deep" (H > Htr), where H is the hypocenter depth and Htr is the threshold value of the earthquake source depth. Htr was first set equal to 80 km. We then checked whether the distributions of events over the year are uniform or non-uniform. Our data sets are binned, with two discrete time scales used simultaneously (monthly and 10-day). We test separately each region, each magnitude level and each depth level. The null hypothesis of a uniform distribution of earthquakes over the year was rejected for most samples of "shallow" earthquakes, but was confirmed for "deep" earthquakes. We use the Chi-Square test for well-filled sequences (no less than 5 events in each discrete interval) and the method
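
    A minimal sketch of the uniformity test described above, assuming event origin times are available as month numbers (1-12) and applying the test only when every bin is well filled (at least 5 events), as the abstract requires; the 10-day binning variant would work the same way.

      import numpy as np
      from scipy.stats import chisquare

      def monthly_uniformity_test(event_months):
          counts = np.bincount(np.asarray(event_months, int), minlength=13)[1:13]
          if counts.min() < 5:
              return None                       # test not applicable for sparse samples
          stat, p_value = chisquare(counts)     # expected: equal counts in each month
          return stat, p_value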

  12. Evidence for Truncated Exponential Probability Distribution of Earthquake Slip

    KAUST Repository

    Thingbaijam, Kiran K. S.

    2016-07-13

    Earthquake ruptures comprise spatially varying slip on the fault surface, where slip represents the displacement discontinuity between the two sides of the rupture plane. In this study, we analyze the probability distribution of coseismic slip, which provides important information to better understand earthquake source physics. Although the probability distribution of slip is crucial for generating realistic rupture scenarios for simulation-based seismic and tsunami-hazard analysis, the statistical properties of earthquake slip have received limited attention so far. Here, we use the online database of earthquake source models (SRCMOD) to show that the probability distribution of slip follows a truncated exponential law. This law agrees with rupture-specific physical constraints limiting the maximum possible slip on the fault, similar to physical constraints on maximum earthquake magnitudes. We show that the parameters of the best-fitting truncated exponential distribution scale with average coseismic slip. This scaling property reflects the control of the underlying stress distribution and fault strength on the rupture dimensions, which determine the average slip. Thus, the scale-dependent behavior of slip heterogeneity is captured by the probability distribution of slip. We conclude that the truncated exponential law accurately quantifies the coseismic slip distribution and therefore allows for more realistic modeling of rupture scenarios. © 2016, Seismological Society of America. All rights reserved.
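
    An illustrative sketch, not the SRCMOD workflow: maximum-likelihood fit of an exponential distribution truncated at the maximum slip to a set of coseismic slip values. The choice of the maximum observed slip as the truncation point and the root bracket are assumptions.

      import numpy as np
      from scipy.optimize import brentq

      def fit_truncated_exponential(slip):
          s = np.asarray(slip, float)
          s = s[s > 0]
          s_max, s_bar = s.max(), s.mean()

          # MLE condition for the rate lam of an exponential truncated at s_max:
          # mean(s) = 1/lam - s_max / (exp(lam * s_max) - 1)
          def score(lam):
              return 1.0 / lam - s_max / np.expm1(lam * s_max) - s_bar

          # Bracket assumes mean(s) < s_max/2, which holds for right-skewed slip data
          lam = brentq(score, 1e-8 / s_bar, 1e3 / s_bar)
          return lam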

  13. Recurrent frequency-size distribution of characteristic events

    Directory of Open Access Journals (Sweden)

    S. G. Abaimov

    2009-04-01

    Statistical frequency-size (frequency-magnitude) properties of earthquake occurrence play an important role in seismic hazard assessments. The behavior of earthquakes is represented by two different statistics: interoccurrent behavior in a region and recurrent behavior at a given point on a fault (or on a given fault). The interoccurrent frequency-size behavior has been investigated by many authors and generally obeys the power-law Gutenberg-Richter distribution to a good approximation. It is expected that the recurrent frequency-size behavior should obey different statistics. However, this problem has received little attention because historic earthquake sequences do not contain enough events to reconstruct the necessary statistics. To overcome this lack of data, this paper investigates the recurrent frequency-size behavior for several problems. First, the sequences of creep events on a creeping section of the San Andreas fault are investigated. The applicability of the Brownian passage-time, lognormal, and Weibull distributions to the recurrent frequency-size statistics of slip events is tested, and the Weibull distribution is found to be the best-fit distribution. To verify this result, the behaviors of numerical slider-block and sand-pile models are investigated, and the Weibull distribution is confirmed as the applicable distribution for these models as well. Exponents β of the best-fit Weibull distributions for the observed creep event sequences and for the slider-block model are found to have similar values, ranging from 1.6 to 2.2, with the corresponding aperiodicities CV of the applied distribution ranging from 0.47 to 0.64. We also note similarities between recurrent time-interval statistics and recurrent frequency-size statistics.
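
    An illustrative sketch of the kind of fit reported above: fitting a Weibull distribution to recurrent event sizes with scipy and returning the shape exponent β and the aperiodicity CV = std/mean, the two quantities quoted in the abstract. Forcing the location parameter to zero is an assumption.

      import numpy as np
      from scipy.stats import weibull_min

      def weibull_fit_with_cv(values):
          x = np.asarray(values, float)
          beta, loc, scale = weibull_min.fit(x, floc=0.0)      # shape, location, scale
          mean, var = weibull_min.stats(beta, loc=0.0, scale=scale, moments="mv")
          cv = float(np.sqrt(var) / mean)                      # aperiodicity
          return beta, scale, cv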

  14. Size distributions and failure initiation of submarine and subaerial landslides

    Science.gov (United States)

    ten Brink, U.S.; Barkan, R.; Andrews, B.D.; Chaytor, J.D.

    2009-01-01

    Landslides are often viewed together with other natural hazards, such as earthquakes and fires, as phenomena whose size distribution obeys an inverse power law. Inverse power law distributions are the result of additive avalanche processes, in which the final size cannot be predicted at the onset of the disturbance. Volume and area distributions of submarine landslides along the U.S. Atlantic continental slope follow a lognormal distribution and not an inverse power law. Using Monte Carlo simulations, we generated area distributions of submarine landslides that show a characteristic size and with few smaller and larger areas, which can be described well by a lognormal distribution. To generate these distributions we assumed that the area of slope failure depends on earthquake magnitude, i.e., that failure occurs simultaneously over the area affected by horizontal ground shaking, and does not cascade from nucleating points. Furthermore, the downslope movement of displaced sediments does not entrain significant amounts of additional material. Our simulations fit well the area distribution of landslide sources along the Atlantic continental margin, if we assume that the slope has been subjected to earthquakes of magnitude ??? 6.3. Regions of submarine landslides, whose area distributions obey inverse power laws, may be controlled by different generation mechanisms, such as the gradual development of fractures in the headwalls of cliffs. The observation of a large number of small subaerial landslides being triggered by a single earthquake is also compatible with the hypothesis that failure occurs simultaneously in many locations within the area affected by ground shaking. Unlike submarine landslides, which are found on large uniformly-dipping slopes, a single large landslide scarp cannot form on land because of the heterogeneous morphology and short slope distances of tectonically-active subaerial regions. However, for a given earthquake magnitude, the total area

  15. Urban aerosol number size distributions

    Directory of Open Access Journals (Sweden)

    T. Hussein

    2003-10-01

    Aerosol number size distributions were measured continuously in Helsinki, Finland from 5 May 1997 to 28 February 2003. The daily, monthly and annual patterns were investigated. The temporal variation of the particle number concentration was seen to follow the traffic density. The highest total particle number concentrations were usually observed during workdays, especially on Fridays, and the lower concentrations occurred during weekends, especially Sundays. Seasonally, the highest total number concentrations were usually observed during winter and spring and the lowest during June and July. More than 80% of the particle number size distributions were tri-modal: nucleation mode (Dp < 30 nm), Aitken mode (20–100 nm) and accumulation mode (Dp > 90 nm). Less than 20% of the particle number size distributions had either two modes or consisted of more than three modes. Two different measurement sites were used; at the first (Siltavuori, 5 May 1997–5 March 2001), the overall means of the integrated particle number concentrations were 7100 cm−3, 6320 cm−3, and 960 cm−3, respectively, for the nucleation, Aitken, and accumulation modes. At the second site (Kumpula, 6 March 2001–28 February 2003) they were 5670 cm−3, 4050 cm−3, and 900 cm−3. The total number concentrations in the nucleation and Aitken modes were usually significantly higher during weekdays than during weekends. The variations in the accumulation mode were less pronounced. The smaller concentrations in Kumpula were mainly due to building construction and also a slight overall decreasing trend during these years. During the site change, a period of simultaneous measurements over two weeks was performed, showing good correlation at both sites.

  16. Earthquake Risk Reduction to Istanbul Natural Gas Distribution Network

    Science.gov (United States)

    Zulfikar, Can; Kariptas, Cagatay; Biyikoglu, Hikmet; Ozarpa, Cevat

    2017-04-01

    Istanbul Natural Gas Distribution Corporation (IGDAS) is one of the end users of the Istanbul Earthquake Early Warning (EEW) signal. IGDAS, the primary natural gas provider in Istanbul, operates an extensive system of 9,867 km of gas lines with 750 district regulators and 474,000 service boxes. The natural gas arrives at the Istanbul city border at 70 bar in a 30-inch-diameter steel pipeline. The gas pressure is reduced to 20 bar at RMS stations and distributed to district regulators inside the city. 110 of the 750 district regulators are instrumented with strong-motion accelerometers in order to cut the gas flow during an earthquake if ground motion parameters exceed certain threshold levels. Also, state-of-the-art protection systems automatically cut the natural gas flow when breaks in the gas pipelines are detected. IGDAS uses a sophisticated SCADA (supervisory control and data acquisition) system to monitor the state of health of its pipeline network. This system provides real-time information about quantities related to pipeline monitoring, including input-output pressure, drawing information, positions of station and RTU (remote terminal unit) gates, and slam-shut mechanism status at the 750 district regulator sites. The IGDAS real-time earthquake risk reduction algorithm follows 4 stages: 1) Real-time ground motion data are transmitted from 110 IGDAS and 110 KOERI (Kandilli Observatory and Earthquake Research Institute) acceleration stations to the IGDAS SCADA Center and the KOERI data center. 2) During an earthquake event, EEW information is sent from the IGDAS SCADA Center to the IGDAS stations. 3) Automatic shut-off is applied at IGDAS district regulators, and calculated parameters are sent from the stations to the IGDAS SCADA Center and KOERI. 4) Integrated building and gas pipeline damage maps are prepared immediately after the earthquake event. Today's technology allows us to rapidly estimate the

  17. Slip Distribution of Two Recent Large Earthquakes in the Guerrero Segment of the Mexican Subduction Zone, and Their Relation to Previous Earthquakes, Silent Slip Events and Seismic Gaps

    Science.gov (United States)

    Hjorleifsdottir, V.; Ji, C.; Iglesias, A.; Cruz-Atienza, V. M.; Singh, S. K.

    2016-12-01

    In 2012 and 2014, megathrust earthquakes occurred approximately 300 km apart in the state of Guerrero, Mexico. The westernmost half of the segment between them has not had a large earthquake in at least 100 years, and most of the easternmost half last broke in 1957. However, silent slip events have been reported down dip of both earthquakes, as well as in the gap between them (Kostoglodov et al. 2003, Graham 2014). There are indications that the westernmost half has different frictional properties than the areas surrounding it. However, the two events at the edges of the zone also seem to behave in different manners, indicating a broad range of frictional properties in this area, with changes occurring over short distances. The 2012/03/20 M7.5 earthquake occurred near the Guerrero-Oaxaca border, between the towns of Ometepec (Gro.) and Pinotepa Nacional (Oax.). This earthquake is noteworthy for breaking the same asperities as two previously recorded earthquakes, the M7.2 1937 and M6.9 1982(a) earthquakes, in very large "repeating earthquakes". Furthermore, the density of repeating smaller events is larger in this zone than in other parts of the subduction zone (Dominguez et al., submitted), and this earthquake has had very many aftershocks for its size (UNAM Seis. group, 2013). The 2012 event may have broken two asperities (UNAM Seis. group, 2013). How the two asperities relate to the previous relatively smaller "large events", to the repeating earthquakes, to the high number of aftershocks and to the slow slip event is not clear. The 2014/04/18 M7.2 earthquake broke a patch on the edge of the Guerrero gap that previously broke in the 1979 M7.4 earthquake as well as in the 1943 M7.4 earthquake. This earthquake, despite being smaller, had a much longer duration, few aftershocks, and clearly ruptured two separate patches (UNAM Seis. group 2015). In this work we estimate the slip distributions for the 2012 and 2014 earthquakes, by combining the data used separately in

  18. Earthquake potential revealed by tidal influence on earthquake size-frequency statistics

    Science.gov (United States)

    Ide, Satoshi; Yabe, Suguru; Tanaka, Yoshiyuki

    2016-11-01

    The possibility that tidal stress can trigger earthquakes has long been debated. In particular, a clear causal relationship between small earthquakes and the phase of tidal stress is elusive. However, tectonic tremors deep within subduction zones are highly sensitive to tidal stress levels, with tremor rate increasing exponentially with rising tidal stress. Thus, slow deformation and the possibility of earthquakes at subduction plate boundaries may be enhanced during periods of large tidal stress. Here we calculate the tidal stress history, and specifically the amplitude of tidal stress, on a fault plane in the two weeks before large earthquakes globally, based on data from the global, Japanese, and Californian earthquake catalogues. We find that very large earthquakes, including the 2004 Sumatra earthquake, the 2010 Maule earthquake in Chile and the 2011 Tohoku-Oki earthquake in Japan, tend to occur near the time of maximum tidal stress amplitude. This tendency is not obvious for small earthquakes. However, we also find that the fraction of large earthquakes increases (the b-value of the Gutenberg-Richter relation decreases) as the amplitude of tidal shear stress increases. This is reasonable, considering the well-known relationship between stress and the b-value, and suggests that the probability of a tiny rock failure expanding to a gigantic rupture increases with increasing tidal stress levels. We conclude that large earthquakes are more probable during periods of high tidal stress.
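    A small, self-contained illustration of the b-value measurement underlying this result: the Aki maximum-likelihood estimator applied to magnitude subsets binned by tidal shear-stress amplitude. The catalog, the stress values and the trend built into them are synthetic; only the estimator itself is standard.

```python
import numpy as np

def b_value_ml(magnitudes, mc, dm=0.0):
    """Aki (1965) maximum-likelihood b-value; dm is the magnitude bin width
    (Utsu's correction), zero here because the synthetic magnitudes are continuous."""
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic catalog: magnitudes plus a hypothetical tidal shear-stress amplitude
# per event, with a weak built-in trend purely for illustration.
rng = np.random.default_rng(42)
n = 5000
tidal_stress = rng.uniform(0.0, 3.0, n)            # kPa, hypothetical
true_b = 1.1 - 0.05 * tidal_stress                  # hypothetical trend
mags = 2.0 + rng.exponential(scale=np.log10(np.e) / true_b)

# Estimate b in three stress bins; b should decrease as stress increases.
edges = [0.0, 1.0, 2.0, 3.0]
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (tidal_stress >= lo) & (tidal_stress < hi)
    print(f"stress {lo:.0f}-{hi:.0f} kPa: b = {b_value_ml(mags[sel], mc=2.0):.2f}")
```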

  19. Correlation between hypocenter depth, antecedent precipitation and earthquake-induced landslide spatial distribution

    Science.gov (United States)

    Fukuoka, Hiroshi; Watanabe, Eisuke

    2017-04-01

    Since Keefer's 1984 paper on earthquake magnitude versus affected area and the maximum epicentral/fault distance of induced landslides, which presented the envelope of such plots, many studies on this topic have been conducted. It has generally been supposed that landslides are triggered by shallow quakes and that more landslides are likely to occur when heavy rainfall immediately precedes the quake. In order to test this, we have collected 22 case records of earthquake-induced landslide distributions in Japan and examined the effects of hypocenter depth and antecedent precipitation. The JMA (Japan Meteorological Agency) magnitudes of the cases range from 4.5 to 9.0. Analysis of hypocenter depth showed that deeper quakes cause wider distributions. Antecedent precipitation was evaluated using the Soil Water Index (SWI), which was developed by JMA for issuing landslide alerts. We could not find a meaningful correlation between SWI and the earthquake-induced landslide distribution. Additionally, we found that a smaller minimum size of collected landslides results in a wider distribution, especially between 1,000 and 100,000 m2.

  20. Slip distribution of the 2015 Lefkada earthquake and its implications for fault segmentation

    Science.gov (United States)

    Bie, Lidong; González, Pablo J.; Rietbrock, Andreas

    2017-07-01

    It is widely accepted that fault segmentation limits earthquake rupture propagation and therefore earthquake size. While along-strike segmentation of continental strike-slip faults is well observed, direct evidence for segmentation of offshore strike-slip faults is rare. A comparison of rupture behaviour in multiple earthquakes might help reveal the characteristics of fault segmentation. In this work, we study the 2015 Lefkada earthquake, which ruptured a major active strike-slip fault offshore Lefkada Island, Greece. We report ground deformation, mainly on Lefkada Island, measured by interferometric synthetic aperture radar (InSAR), and infer a coseismic distributed slip model. To investigate how the fault location affects the inferred displacement based on our InSAR observations, we conduct a suite of inversions taking various fault locations from different studies as priors. The result of these test inversions suggests that the Lefkada fault trace is located just offshore Lefkada Island. Our preferred model shows that the main slip patches of the 2015 earthquake are confined to shallow depth. A comparison of aftershock locations and the coseismic slip distribution shows that most aftershocks appear near the edges of the main coseismic slip patches.

  1. Inversion for slip distribution using teleseismic P waveforms: North Palm Springs, Borah Peak, and Michoacan earthquakes

    Science.gov (United States)

    Mendoza, C.; Hartzell, S.H.

    1988-01-01

    We have inverted the teleseismic P waveforms recorded by stations of the Global Digital Seismograph Network for the 8 July 1986 North Palm Springs, California, the 28 October 1983 Borah Peak, Idaho, and the 19 September 1985 Michoacan, Mexico, earthquakes to recover the distribution of slip on each of the faults, using a point-by-point inversion method with smoothing and positivity constraints. Results of the inversion indicate that the Global Digital Seismograph Network data are useful for deriving fault dislocation models for moderate to large events. However, a wide range of frequencies is necessary to infer the distribution of slip on the earthquake fault. Although the long-period waveforms define the size (dimensions and seismic moment) of the earthquake, data at shorter periods provide additional constraints on the variation of slip on the fault. Dislocation models obtained for all three earthquakes are consistent with a heterogeneous rupture process where failure is controlled largely by the size and location of high-strength asperity regions. -from Authors
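    A slip inversion with smoothing and positivity constraints can be sketched generically as a regularized non-negative least-squares problem, roughly as below. The Green's functions, data and smoothing weight are random placeholders, not values from the study.

```python
# Generic regularized, positivity-constrained slip inversion sketch.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_data, n_subfaults = 200, 40
G = rng.normal(size=(n_data, n_subfaults))        # placeholder Green's functions
true_slip = np.clip(rng.normal(1.0, 0.5, n_subfaults), 0, None)
d = G @ true_slip + 0.05 * rng.normal(size=n_data)

# First-difference smoothing operator along the (1-D) subfault index.
L = np.diff(np.eye(n_subfaults), axis=0)
lam = 2.0                                          # smoothing weight (tunable)

# Augmented system: minimize ||G m - d||^2 + lam^2 ||L m||^2 subject to m >= 0.
A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
slip, residual = nnls(A, b)
print("recovered slip (first 5 subfaults):", np.round(slip[:5], 2))
```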

  2. Microbubble Size Distributions Data Collection and Analysis

    Science.gov (United States)

    2016-06-13

    A technique for determining the size distribution of micron-size bubbles from underway measurements at sea is described. A camera ... Properties of micron-sized bubble aggregates in sea water were investigated to determine their influence on the ... problem during this study. This paper discusses bubble size and size distribution measurements in sea water while underway. A technique to detect

  3. The spatial distribution of earthquake stress rotations following large subduction zone earthquakes

    Science.gov (United States)

    Hardebeck, Jeanne L.

    2017-01-01

    Rotations of the principal stress axes due to great subduction zone earthquakes have been used to infer low differential stress and near-complete stress drop. The spatial distribution of coseismic and postseismic stress rotation as a function of depth and along-strike distance is explored for three recent M ≥ 8.8 subduction megathrust earthquakes. In the down-dip direction, the largest coseismic stress rotations are found just above the Moho depth of the overriding plate. This zone has been identified as hosting large patches of large slip in great earthquakes, based on the lack of high-frequency radiated energy. The large continuous slip patches may facilitate near-complete stress drop. There is seismological evidence for high fluid pressures in the subducted slab around the Moho depth of the overriding plate, suggesting low differential stress levels in this zone due to high fluid pressure, also facilitating stress rotations. The coseismic stress rotations have similar along-strike extent as the mainshock rupture. Postseismic stress rotations tend to occur in the same locations as the coseismic stress rotations, probably due to the very low remaining differential stress following the near-complete coseismic stress drop. The spatial complexity of the observed stress changes suggests that an analytical solution for finding the differential stress from the coseismic stress rotation may be overly simplistic, and that modeling of the full spatial distribution of the mainshock static stress changes is necessary.

  4. Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2016-12-01

    The statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data, and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been recently developed, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes and California's faults with slip rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.
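    A toy version of this assignment problem, posed as a mixed-integer program with SciPy (requires a recent SciPy providing scipy.optimize.milp). The catalog, slip values, slip-rate bounds and the simplified objective (maximize the number of on-fault events subject to slip-rate bounds) are all assumptions for illustration; the study's actual L1-norm formulation on UCERF3 data is considerably larger.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n_eq, n_faults, T = 60, 3, 1000.0             # events, faults, catalog length (yr)
slip_per_event = rng.uniform(0.5, 3.0, n_eq)   # m, hypothetical
rate_min = np.array([1.0e-3, 2.0e-3, 0.5e-3])  # m/yr lower bounds (hypothetical)
rate_max = np.array([2.0e-3, 3.0e-3, 1.5e-3])  # m/yr upper bounds (hypothetical)

# Decision vector x[e, f] flattened row-major; x = 1 places event e on fault f.
nvar = n_eq * n_faults
c = -np.ones(nvar)                             # maximize number of on-fault events

# Each event is placed on at most one fault (the rest is off-fault seismicity).
A_assign = np.zeros((n_eq, nvar))
for e in range(n_eq):
    A_assign[e, e * n_faults:(e + 1) * n_faults] = 1.0
assign_con = LinearConstraint(A_assign, -np.inf, 1.0)

# Accumulated slip on each fault must fall between the slip-rate bounds times T.
A_slip = np.zeros((n_faults, nvar))
for e in range(n_eq):
    for f in range(n_faults):
        A_slip[f, e * n_faults + f] = slip_per_event[e]
slip_con = LinearConstraint(A_slip, rate_min * T, rate_max * T)

res = milp(c, constraints=[assign_con, slip_con],
           integrality=np.ones(nvar), bounds=Bounds(0, 1))
if res.x is None:
    print("no feasible assignment:", res.message)
else:
    x = np.round(res.x).reshape(n_eq, n_faults)
    print("events assigned per fault:", x.sum(axis=0).astype(int))
    print("accumulated slip per fault (m):", np.round(A_slip @ res.x, 2))
```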

  5. City-size distribution and the size of urban systems.

    Science.gov (United States)

    Thomas, I

    1985-07-01

    "This paper is an analysis of the city-size distribution for thirty-five countries of the world in 1975; the purpose is to explain statistically the regularity of the rank-size distribution by the number of cities included in the urban systems. The rank-size parameters have been computed for each country and also for four large urban systems in which several population thresholds have been defined. These thresholds seem to have more influence than the number of cities included in the urban system on the regularity of the distribution." The data are from the U.N. Demographic Yearbook. excerpt

  6. Modeling particle size distributions by the Weibull distribution function

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Zhigang (Rogers Tool Works, Rogers, AR (United States)); Patterson, B.R.; Turner, M.E. Jr (Univ. of Alabama, Birmingham, AL (United States))

    1993-10-01

    A method is proposed for modeling two- and three-dimensional particle size distributions using the Weibull distribution function. Experimental results show that, for tungsten particles in liquid phase sintered W-14Ni-6Fe, the experimental cumulative section size distributions were well fit by the Weibull probability function, which can also be used to compute the corresponding relative frequency distributions. Modeling the two-dimensional section size distributions facilitates the use of the Saltykov or other methods for unfolding three-dimensional (3-D) size distributions with minimal irregularities. Fitting the unfolded cumulative 3-D particle size distribution with the Weibull function enables computation of the statistical distribution parameters from the parameters of the fit Weibull function.
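    A minimal sketch of fitting a two-parameter Weibull to section-size data, in the spirit of the modeling described above; the sample here is synthetic rather than the W-14Ni-6Fe measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
section_sizes = stats.weibull_min.rvs(c=2.1, scale=4.0, size=500, random_state=rng)

# Fix the location at zero so only the shape and scale are estimated.
shape, loc, scale = stats.weibull_min.fit(section_sizes, floc=0.0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")

# Compare the fitted CDF with the empirical cumulative distribution.
x = np.sort(section_sizes)
empirical = np.arange(1, x.size + 1) / x.size
fitted = stats.weibull_min.cdf(x, shape, loc=0.0, scale=scale)
print("max |empirical - fitted CDF| =", round(float(np.abs(empirical - fitted).max()), 3))
```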

  7. The finite, kinematic rupture properties of great-sized earthquakes since 1990

    Science.gov (United States)

    Hayes, Gavin

    2017-01-01

    Here, I present a database of >160 finite fault models for all earthquakes of M 7.5 and above since 1990, created using a consistent modeling approach. The use of a common approach facilitates easier comparisons between models, and reduces uncertainties that arise when comparing models generated by different authors, data sets and modeling techniques. I use this database to verify published scaling relationships, and for the first time show a clear and intriguing relationship between maximum potency (the product of slip and area) and average potency for a given earthquake. This relationship implies that earthquakes do not reach the potential size given by the tectonic load of a fault (sometimes called “moment deficit,” calculated via a plate rate over time since the last earthquake, multiplied by geodetic fault coupling). Instead, average potency (or slip) scales with but is less than maximum potency (dictated by tectonic loading). Importantly, this relationship facilitates a more accurate assessment of maximum earthquake size for a given fault segment, and thus has implications for long-term hazard assessments. The relationship also suggests earthquake cycles may not completely reset after a large earthquake, and thus repeat rates of such events may appear shorter than is expected from tectonic loading. This in turn may help explain the phenomenon of “earthquake super-cycles” observed in some global subduction zones.

  8. Aggregate size distributions in hydrophobic flocculation

    Directory of Open Access Journals (Sweden)

    Chairoj Rattanakawin

    2003-07-01

    The evolution of aggregate (floc) size distributions resulting from hydrophobic flocculation has been investigated using a laser light scattering technique. By measuring floc size distributions it is possible to distinguish clearly among floc formation, growth and breakage. Hydrophobic flocculation of hematite suspensions with sodium oleate under a variety of agitating conditions produces uni-modal size distributions. The size distribution of the primary particles is shifted to larger floc sizes when the dispersed suspension is coagulated by pH adjustment. By adding sodium oleate to the pre-coagulated suspension, the distribution progresses further to the larger size. However, prolonged agitation degrades the formed flocs, regressing the distribution to the smaller size. Median floc size derived from the distribution is also used as a performance criterion. The median floc size increases rapidly at the initial stage of the flocculation, and decreases with extended agitation time and intensity. Relatively weak flocs are produced, which may be due to the low dosage of sodium oleate used in this flocculation study. It is suggested that further investigation should focus on optimum reagent dosage and non-polar oil addition to strengthen these weak flocs.

  9. Body size distribution of the dinosaurs.

    Directory of Open Access Journals (Sweden)

    Eoin J O'Gorman

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  10. Body size distribution of the dinosaurs.

    Science.gov (United States)

    O'Gorman, Eoin J; Hone, David W E

    2012-01-01

    The distribution of species body size is critically important for determining resource use within a group or clade. It is widely known that non-avian dinosaurs were the largest creatures to roam the Earth. There is, however, little understanding of how maximum species body size was distributed among the dinosaurs. Do they share a similar distribution to modern day vertebrate groups in spite of their large size, or did they exhibit fundamentally different distributions due to unique evolutionary pressures and adaptations? Here, we address this question by comparing the distribution of maximum species body size for dinosaurs to an extensive set of extant and extinct vertebrate groups. We also examine the body size distribution of dinosaurs by various sub-groups, time periods and formations. We find that dinosaurs exhibit a strong skew towards larger species, in direct contrast to modern day vertebrates. This pattern is not solely an artefact of bias in the fossil record, as demonstrated by contrasting distributions in two major extinct groups and supports the hypothesis that dinosaurs exhibited a fundamentally different life history strategy to other terrestrial vertebrates. A disparity in the size distribution of the herbivorous Ornithischia and Sauropodomorpha and the largely carnivorous Theropoda suggests that this pattern may have been a product of a divergence in evolutionary strategies: herbivorous dinosaurs rapidly evolved large size to escape predation by carnivores and maximise digestive efficiency; carnivores had sufficient resources among juvenile dinosaurs and non-dinosaurian prey to achieve optimal success at smaller body size.

  11. City size distributions and spatial economic change.

    Science.gov (United States)

    Sheppard, E

    1982-10-01

    "The concept of the city size distribution is criticized for its lack of consideration of the effects of interurban interdependencies on the growth of cities. Theoretical justifications for the rank-size relationship have the same shortcomings, and an empirical study reveals that there is little correlation between deviations from rank-size distributions and national economic and social characteristics. Thus arguments suggesting a close correspondence between city size distributions and the level of development of a country, irrespective of intranational variations in city location and socioeconomic characteristics, seem to have little foundation." (summary in FRE, ITA, JPN, ) excerpt

  12. Measuring the size of mining-induced earthquakes: a proposal

    CSIR Research Space (South Africa)

    Ebrahim-Trollope, R

    2013-10-01

    ... Seismology, Academic Press, New York. Gutenberg, B. and Richter, C.F. (1956). Magnitude and energy of earthquakes, Annali di Geofisica, IX, pp. 1-15. Hanks, T.C. and Kanamori, H. (1979). A moment magnitude scale, J. Geophys. Res., 84, 2348...

  13. City-size distribution and the size of urban systems

    OpenAIRE

    Thomas, I.

    1985-01-01

    This paper is an analysis of the city-size distribution for thirty-five countries of the world in 1975; the purpose is to explain statistically the regularity of the rank-size distribution by the number of cities included in the urban systems. The rank-size parameters have been computed for each country and also for four large urban systems in which several population thresholds have been defined. These thresholds seem to have more influence than the number of cities included in the urban sys...

  14. Experimental determination of size distributions: analyzing proper sample sizes

    Science.gov (United States)

    Buffo, A.; Alopaeus, V.

    2016-04-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer and reaction rates, which depend on the interfacial area between the different phases, or to the assessment of yield stresses of polycrystalline metal/alloy samples. The experimental determination of such distributions often involves laborious sampling procedures, and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such a methodology can be adopted regardless of the measurement technique used.
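    As a hedged illustration (not necessarily the authors' exact procedure), a standard normal-approximation calculation of how many particles must be sized so that the sample mean is within a chosen tolerance at a given confidence level looks like this.

```python
import math
from scipy import stats

def required_sample_size(sigma: float, tolerance: float, confidence: float = 0.95) -> int:
    """Smallest n such that a confidence-level CI half-width on the mean is <= tolerance."""
    z = stats.norm.ppf(0.5 + confidence / 2.0)
    return math.ceil((z * sigma / tolerance) ** 2)

# Example: pilot measurements suggest a spread (std) of 12 µm; we want the mean
# diameter known to within 2 µm with 95% confidence.
print(required_sample_size(sigma=12.0, tolerance=2.0))   # -> 139
```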

  15. Robust Distributed Earthquake Monitoring with CISN software in Northern California

    Science.gov (United States)

    Neuhauser, D. S.; Lombard, P. N.; Dietz, L. D.; Zuzlewski, S.; Luetgert, J. H.; Kohler, W.; Hellweg, M.; Oppenheimer, D. H.; Romanowicz, B. A.

    2009-12-01

    Realtime earthquake monitoring in Northern California passed a milestone this June, when the original joint notification system operated by UC Berkeley's Seismological Laboratory and the USGS in Menlo Park was replaced by the CISN Earthquake Monitoring system. The database plays an integral part in this system, providing coordination for processing and publishing event information, as well as being the repository for event data, instrument metadata and waveforms. Several recent developments to the software system were prerequisites for the transition. (1) Due to the distributed nature of the Northern California network operations at both the BSL and USGS/MP, we enhanced the CISN software to allow distributed continuous waveform processing and the ability to seamlessly merge the processed waveform data from multiple redundant sites into the CISN real-time earthquake processing system. (2) Initially, the CISN code ignored leapseconds, which rendered it incompatible with the Northern California database. In a major CISN-wide effort, all codes were converted to be leapsecond compliant. To maintain compatibility with the Southern California database, one can select whether time output to or received from the database includes leapseconds or not. (3) The project involved revising and improving Caltech's wrapper for the UC Berkeley moment tensor analysis program. This wrapper controls both automatic processing and web review. We developed database tables to receive all supporting information used to calculate the moment tensor, so that any solution can be recalculated, and added many new options to the web interface. By December 2009, the package will use the complete moment tensor program. Since the web interface became available, we have recalculated nearly all the moment tensors for local events in the UC Berkeley moment tensor catalog, so that the mechanisms can be served from the database. (4) With the CISN software transition, Northern California temporarily lost the

  16. Aggregate size distributions in sweep flocculation

    Directory of Open Access Journals (Sweden)

    Chairoj Rattanakawin

    2005-09-01

    The evolution of aggregate size distributions resulting from sweep flocculation has been investigated using a laser light scattering technique. By measuring the (volume) distributions of floc size, it is possible to distinguish clearly among floc formation, growth and breakage. Sweep flocculation of stable kaolin suspensions with ferric chloride under conditions of the rapid/slow mixing protocol produces uni-modal size distributions. The size distribution is shifted to larger floc sizes, especially during the rapid mixing step. The variation of the distributions is also shown in the plot of cumulative percent finer against floc size. From this plot, the distributions maintain the same S-shape curves over the range of the mixing intensities/times studied. A parallel shift of the curves indicates that a self-preserving size distribution occurred in this flocculation. It is suggested that some parameters from mathematical functions derived from the curves could be used to construct a model and predict the flocculating performance. These parameters will be useful for water treatment process selection, design criteria, and process control strategies. Thus the use of these parameters should be employed in any further study.

  17. Unusual geologic evidence of coeval seismic shaking and tsunamis shows variability in earthquake size and recurrence in the area of the giant 1960 Chile earthquake

    Science.gov (United States)

    Cisternas, M; Garrett, E; Wesson, Robert L.; Dura, T.; Ely, L. L

    2017-01-01

    An uncommon coastal sedimentary record combines evidence for seismic shaking and coincident tsunami inundation since AD 1000 in the region of the largest earthquake recorded instrumentally: the giant 1960 southern Chile earthquake (Mw 9.5). The record reveals significant variability in the size and recurrence of megathrust earthquakes and ensuing tsunamis along this part of the Nazca-South American plate boundary. A 500-m long coastal outcrop on Isla Chiloé, midway along the 1960 rupture, provides continuous exposure of soil horizons buried locally by debris-flow diamicts and extensively by tsunami sand sheets. The diamicts flattened plants that yield geologically precise ages to correlate with well-dated evidence elsewhere. The 1960 event was preceded by three earthquakes that probably resembled it in their effects, in AD 898 - 1128, 1300 - 1398 and 1575, and by five relatively smaller intervening earthquakes. Earthquakes and tsunamis recurred exceptionally often between AD 1300 and 1575. Their average recurrence interval of 85 years only slightly exceeds the time already elapsed since 1960. This inference is of serious concern because no earthquake has been anticipated in the region so soon after the 1960 event, and current plate locking suggests that some segments of the boundary are already capable of producing large earthquakes. This long-term earthquake and tsunami history of one of the world's most seismically active subduction zones provides an example of variable rupture mode, in which earthquake size and recurrence interval vary from one earthquake to the next.

  18. On the Deepwater Horizon drop size distributions

    Science.gov (United States)

    Ryerson, T. B.; Atlas, E. L.; Blake, D. R.; De Gouw, J. A.; Warneke, C.; Peischl, J.; Brock, C. A.; McKeen, S. A.

    2014-12-01

    Model simulations of the fate of gas and oil released following the Deepwater Horizon blowout in 2010 depend critically on the assumed drop size distributions. We use direct observations of surfacing time, surfacing location, and atmospheric chemical composition to infer an average drop size distribution for June 10, 2010, providing robust first-order constraints on parameterizations in models. We compare the inferred drop size distribution to published work on Deepwater Horizon and discuss the ability of this approach to determine the efficacy of subsurface dispersant injection.

  19. Particle size distribution instrument. Topical report 13

    Energy Technology Data Exchange (ETDEWEB)

    Okhuysen, W.; Gassaway, J.D.

    1995-04-01

    The development of an instrument to measure the concentration of particles in gas is described in this report. An in situ instrument was designed and constructed which sizes individual particles and counts the number of occurrences for several size classes. Although this instrument was designed to detect the size distribution of slag and seed particles generated at an experimental coal-fired magnetohydrodynamic power facility, it can be used as a nonintrusive diagnostic tool for other hostile industrial processes involving the formation and growth of particulates. Two of the techniques developed are extensions of the widely used crossed beam velocimeter, providing simultaneous measurement of the size distribution and velocity of particles.

  20. On the Size Distribution of Sand

    DEFF Research Database (Denmark)

    Sørensen, Michael

    2016-01-01

    A model is presented of the development of the size distribution of sand while it is transported from a source to a deposit. The model provides a possible explanation of the log-hyperbolic shape that is frequently found in unimodal grain size distributions in natural sand deposits, as pointed out ... -distribution, by taking into account that individual grains do not have the same travel time from the source to the deposit. The travel time is assumed to be random so that the wear on the individual grains varies randomly. The model provides an interpretation of the parameters of the NIG-distribution, and relates the mean ...

  1. ON POTENTIAL REPRESENTATIONS OF THE DISTRIBUTION LAW OF RARE STRONGEST EARTHQUAKES

    Directory of Open Access Journals (Sweden)

    M. V. Rodkin

    2015-09-01

    Assessment of long-term seismic hazard is critically dependent on the behavior of the tail of the distribution function of rare strongest earthquakes. Analyses of empirical data cannot, however, yield a credible solution of this problem because instrumental earthquake catalogs are available only for rather short time intervals, and the uncertainty in estimates of the magnitude of paleoearthquakes is high. From the available data, it was possible only to propose a number of alternative models characterizing the distribution of rare strongest earthquakes: the model based on the Gutenberg-Richter law, suggested to be valid up to a maximum possible seismic event (Mmax); models of 'bend down' of the earthquake recurrence curve; and the characteristic earthquake model. We discuss these models from general physical concepts supported by the theory of extreme values (with reference to the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD)) and the multiplicative cascade model of the seismic regime. In terms of the multiplicative cascade model, the seismic regime is treated as a large number of episodes of avalanche-type relaxation of metastable states which take place in a set of metastable sub-systems. The model of magnitude-unlimited continuation of the Gutenberg-Richter law is invalid from the physical point of view because it corresponds to an infinite mean value of seismic energy and infinite capacity of the process generating seismicity. A model with an abrupt cut of this law at a maximum possible event, Mmax, is not fully logical either. A model with a 'bend-down' of the earthquake recurrence curve can ensure both continuity of the distribution law and finiteness of the seismic energy value. Results of studies using the theory of extreme values provide convincing support for the model of 'bend-down' of the earthquake recurrence curve. Moreover, they also testify that the

  2. Bayesian forecasting of the recurrent earthquakes and its predictive performance for a small sample size

    Science.gov (United States)

    Nomura, S.; Ogata, Y.

    2010-12-01

    This study is concerned with the probability forecast given by the Brownian Passage Time (BPT) model, especially in the case where only a few records of recurrent earthquakes from an active fault are available. We adopt the Bayesian predictive distribution that takes the relevant prior information and all possibilities for the model parameters into account. We utilize the size of single-event displacements U and the slip rate V across the segment to calculate the mean recurrence time T = U/V around which the past recurrence intervals are distributed, as in Figure 1. We then make use of the best-fitting prior distribution for the BPT variation coefficient (the shape parameter, α), selected by the Akaike Bayesian Information Criterion (ABIC), whereas the ERC uses the same common estimate α = 0.24. Applying this prior distribution, we see from Figure 2 that α takes various values among the faults but shows some locational tendencies. For example, α values tend to be higher in the center of Honshu island, where the faults are densely populated. We compare the goodness of fit and probability forecasts of the conventional models and our proposed model on historical and simulated datasets. The Bayesian predictor shows very stable and superior performance for small samples or variable recurrence times. Figure 1: The relation between mean recurrence time from slip data and past recurrence intervals with error bars. Figure 2: The map of active faults on land and subduction zones in Japan, whose colors show the Bayes estimates of the variation coefficient α.
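    As a concrete reference point, a plug-in BPT forecast (without the Bayesian averaging over parameters described above) can be computed directly from the inverse Gaussian distribution; the mean recurrence time, aperiodicity, elapsed time and forecast window below are hypothetical.

```python
from scipy import stats

def bpt_conditional_probability(t_elapsed, dt, mean_T, alpha):
    """P(event in (t_elapsed, t_elapsed + dt] | no event up to t_elapsed).

    The BPT distribution is the inverse Gaussian with mean mean_T and shape
    parameter mean_T / alpha**2 (alpha = coefficient of variation). In scipy's
    parameterization this is invgauss(mu=alpha**2, scale=mean_T / alpha**2).
    """
    dist = stats.invgauss(mu=alpha**2, scale=mean_T / alpha**2)
    survival_now = dist.sf(t_elapsed)
    survival_later = dist.sf(t_elapsed + dt)
    return (survival_now - survival_later) / survival_now

# Example: mean recurrence 1200 yr (e.g., slip per event divided by slip rate),
# aperiodicity 0.24, 800 yr elapsed since the last event, 30-yr forecast window.
print(f"{bpt_conditional_probability(800.0, 30.0, 1200.0, 0.24):.4f}")
```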

  3. The exponential age distribution and the Pareto firm size distribution

    OpenAIRE

    Coad, Alex

    2008-01-01

    Recent work drawing on data for large and small firms has shown a Pareto distribution of firm size. We mix a Gibrat-type growth process among incumbents with an exponential distribution of firm’s age, to obtain the empirical Pareto distribution.
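    A quick numerical check of this mechanism (not code from the paper): simulating Gibrat-type log-size diffusion over exponentially distributed firm ages and reading off the tail exponent of the resulting size distribution. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n_firms = 200_000
ages = rng.exponential(scale=20.0, size=n_firms)          # years since entry

# Gibrat growth: log-size performs a random walk in age, so at age t the
# log-size is Normal(mu*t, sigma^2*t) around the (log) entry size.
mu, sigma, log_entry_size = 0.0, 0.3, 0.0
log_sizes = log_entry_size + mu * ages + sigma * np.sqrt(ages) * rng.standard_normal(n_firms)
sizes = np.exp(log_sizes)

# Crude tail-exponent check: slope of the log survival function for large sizes.
tail = np.sort(sizes)[-5000:]
ccdf = np.arange(tail.size, 0, -1) / sizes.size
slope, _ = np.polyfit(np.log(tail), np.log(ccdf), 1)
print(f"approximate Pareto tail exponent: {-slope:.2f}")
```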

  4. Bubble Size Distributions in Coastal Seas

    NARCIS (Netherlands)

    Leeuw, G. de; Cohen, L.H.

    1995-01-01

    Bubble size distributions have been measured with an optical system that is based on imaging of a small sample volume with a CCD camera system, and processing of the images to obtain the size of individual bubbles in the diameter range from 30 to 1000 µm. This bubble measuring system is deployed from

  5. Earthquake hazards to domestic water distribution systems in Salt Lake County, Utah

    Science.gov (United States)

    Highland, Lynn M.

    1985-01-01

    A magnitude-7.5 earthquake occurring along the central portion of the Wasatch Fault, Utah, may cause significant damage to Salt Lake County's domestic water system. This system is composed of water treatment plants, aqueducts, distribution mains, and other facilities that are vulnerable to ground shaking, liquefaction, fault movement, and slope failures. Recent investigations into surface faulting, landslide potential, and earthquake intensity provide basic data for evaluating the potential earthquake hazards to water-distribution systems in the event of a large earthquake. Water supply system components may be vulnerable to one or more earthquake-related effects, depending on site geology and topography. Case studies of water-system damage by recent large earthquakes in Utah and in other regions of the United States offer valuable insights in evaluating water system vulnerability to earthquakes.

  6. Main Factors Affecting the Distribution of the Macroscopic Destruction Field of Earthquake

    Institute of Scientific and Technical Information of China (English)

    Li Minfeng; Li Shengqiang; Chen Yong; Mi Hongliang

    2001-01-01

    It is proposed that some possible macroseismic epicenters can be determined quickly from the relationship between microseismic epicenters located by instruments and faults. Based on these so-called macroseismic epicenters, a fast seismic hazard estimate can be made after a shock by using the empirical distribution model of seismic intensity. In comparison with the method that uses the microseismic epicenters directly, this approach can increase the precision of fast seismic hazard estimation. A statistical analysis of 133 main earthquakes in China was made. The result shows that the deviation distance between the microseismic epicenter and the macroseismic epicenter falls within 35 km for 88% of the earthquakes and within 35 to 75 km for the remaining ones. We can therefore take the area centered on the microseismic epicenter with a radius of 35 km as the area for emphatic analysis, and the area within 75 km of the microseismic epicenter as the area for general analysis. The relation between 66 earthquake cases on the N-S Seismic Belt in China and the spatial distribution characteristics of faults and the results of focal mechanism solutions was analyzed in detail. The analysis shows that the error of instrumental epicenter determination is not the only factor that affects the deviation of the macroseismic epicenter; the fault size, fault distribution, fault activity, fault intersection types, earthquake magnitude, etc. are also main affecting factors. By sorting out, processing and analyzing these affecting factors, the principle and procedures for quickly determining the possible position of the macroseismic epicenter were set up. On this basis, and with a nationwide database of faults containing the relevant factors, it is possible to apply this method in practical fast estimation of seismic hazard.

  7. Analysis on Depth Distribution and Precursor Mechanism of Small and Moderate Earthquakes

    Institute of Scientific and Technical Information of China (English)

    Wang Jian

    2001-01-01

    In this paper, the focal depth distribution of earthquakes of each magnitude has been analyzed. Statistical data show that the lower the magnitude, the wider the distribution of focal depth. With larger magnitudes, the foci tend to be concentrated in the upper or middle crustal layers. We analyzed the cause of the focal depth distribution and explained the precursor mechanism of small and moderate earthquakes in terms of the occurrence conditions and characteristics of strong earthquakes. The results of this paper may be applied to determine risk sites of strong earthquakes.

  8. Characteristics of stress distribution in trapezoid-shaped CSG dam during earthquake

    Energy Technology Data Exchange (ETDEWEB)

    Kondo, M.; Kawasaki, H. [Ministry of Land, Infrastructure and Transport (Japan). Water Management and Dam Division; Sasaki, T. [Public Works Research Institute, Tsukuba (Japan). Hydraulic Engineering Research Group

    2004-07-01

    There is currently a shortage of dam sites with optimal conditions in Japan. Dam design and construction technologies must also respond to a growing demand for cost reductions and environmental concerns. Cemented Sand and Gravel (CSG) is a new dam construction material that reduces the costs of material production. However, it is not as strong as concrete. The trapezoid shape was proposed to resolve this problem, as a trapezoidal cross section can minimize stress inside the dam body and reduce fluctuations during earthquakes. This paper examines the effects of dam size and the deformability of the foundation ground on the dynamic behavior of a trapezoid-shaped CSG dam during an earthquake, as well as examining the differences between the dynamic behaviors of trapezoid-shaped CSG dams and conventional concrete gravity dams. Finite element models of both dams were used to conduct the comparison. Analysis results included stress distribution during usual loading conditions. It was concluded that stress generated inside the dam body of a trapezoid-shaped CSG dam during earthquakes is considerably lower than in concrete gravity dams with a conventional triangular shape. In addition, stress distribution inside the dam body is affected largely by the deformability of the foundation relative to the CSG. 4 refs., 2 tabs., 12 figs.

  9. Stress release model and proxy measures of earthquake size. Application to Italian seismogenic sources

    Science.gov (United States)

    Varini, Elisa; Rotondi, Renata; Basili, Roberto; Barba, Salvatore

    2016-07-01

    This study presents a series of self-correcting models that are obtained by integrating information about seismicity and fault sources in Italy. Four versions of the stress release model are analyzed, in which the evolution of the system over time is represented by the level of strain, moment, seismic energy, or energy scaled by the moment. We carry out the analysis on a regional basis by subdividing the study area into eight tectonically coherent regions. In each region, we reconstruct the seismic history and statistically evaluate the completeness of the resulting seismic catalog. Following the Bayesian paradigm, we apply Markov chain Monte Carlo methods to obtain parameter estimates and a measure of their uncertainty expressed by the simulated posterior distribution. The comparison of the four models through the Bayes factor and an information criterion provides evidence (to different degrees depending on the region) in favor of the stress release model based on the energy and the scaled energy. Therefore, among the quantities considered, this turns out to be the measure of the size of an earthquake to use in stress release models. At any instant, the time to the next event turns out to follow a Gompertz distribution, with a shape parameter that depends on time through the value of the conditional intensity at that instant. In light of this result, the issue of forecasting is tackled through both retrospective and prospective approaches. Retrospectively, the forecasting procedure is carried out on the occurrence times of the events recorded in each region, to determine whether the stress release model reproduces the observations used in the estimation procedure. Prospectively, the estimates of the time to the next event are compared with the dates of the earthquakes that occurred after the end of the learning catalog, in the 2003-2012 decade.
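    For readers unfamiliar with the model class, the standard stress release formulation and the Gompertz waiting-time result mentioned above can be written as follows. This is the common textbook form with an exponential hazard function, which may differ in detail from the four variants fitted in the study.

```latex
% Common formulation of the stress release model (not necessarily the exact
% variant used in the study above): stress-like state X(t) and intensity.
\[
  X(t) = X(0) + \rho\, t - \sum_{i:\, t_i < t} S_i ,
  \qquad
  \lambda(t) = \Psi\bigl(X(t)\bigr) = \exp\bigl(\mu + \nu X(t)\bigr).
\]
% Between events X(t) grows linearly, so the hazard of the waiting time
% measured from an instant t_0 increases exponentially,
\[
  h(u) = \lambda(t_0)\, e^{\nu\rho u},
  \qquad
  \Pr(T_w > u) = \exp\!\left[-\frac{\lambda(t_0)}{\nu\rho}\bigl(e^{\nu\rho u} - 1\bigr)\right],
\]
% which is a Gompertz distribution whose shape depends on the conditional
% intensity \lambda(t_0) at that instant, as stated in the abstract.
```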

  10. Weakest-Link Scaling and Finite Size Effects on Recurrence Times Distribution

    CERN Document Server

    Hristopulos, Dionissios T; Kaniadakis, Giorgio

    2013-01-01

    Tectonic earthquakes result from the fracturing of the Earth's crust due to the loading induced by the motion of the tectonic plates. Hence, the statistical laws of earthquakes must be intimately connected to the statistical laws of fracture. The Weibull distribution is a commonly used model of earthquake recurrence times (ERT). Nevertheless, deviations from Weibull scaling have been observed in ERT data and in fracture experiments on quasi-brittle materials. We propose that the weakest-link-scaling theory for finite-size systems leads to the kappa-Weibull function, which implies a power-law tail for the ERT distribution. We show that the ERT hazard rate function decreases linearly after a waiting time which is proportional to the system size (in terms of representative volume elements) raised to the inverse of the Weibull modulus. We also demonstrate that the kappa-Weibull can be applied to strongly correlated systems by means of simulations of a fiber bundle model.

  11. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
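    A generic spectral-synthesis sketch of the core idea of drawing a random stress field with a prescribed power spectral density. The power-law exponent and normalization here are illustrative placeholders; the report derives its own expression for the expected spectrum and adds a two-stage nucleation procedure that is not reproduced.

```python
import numpy as np

def random_stress_field(n=256, spectral_exponent=3.0, seed=0):
    """Filter white noise in the wavenumber domain by a power-law amplitude."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    k[0, 0] = np.inf                                  # suppress the mean (k = 0) term
    amplitude = k ** (-spectral_exponent / 2.0)       # amplitude ~ sqrt(PSD)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
    field = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (field - field.mean()) / field.std()       # zero mean, unit std

stress = random_stress_field()
print(stress.shape, round(float(stress.std()), 2))
```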

  12. Distribution characteristics of historical earthquake classes in Jiangsu Province and South Huanghai Sea region

    Institute of Scientific and Technical Information of China (English)

    田建明; 徐徐; 谢华章; 杨云; 丁政

    2004-01-01

    Based on an analysis of the characteristics of historical earthquakes in Jiangsu Province and the South Huanghai Sea region, the historical earthquakes in the study area are divided into two kinds: a "comparatively safe class" and a "comparatively dangerous class". The statistical results for earthquake class and the characteristics of geographical distribution and geological structures are then studied. The study shows: a) in Jiangsu Province and the South Huanghai Sea region, the majority of historical strong earthquakes belong to the "comparatively safe class"; only 13.8% belong to the "comparatively dangerous class"; b) most historical earthquakes belong to the "comparatively safe class" in the land area of Jiangsu, the sea area east of the Yangtze River mouth and the northern depression of the South Huanghai Sea region; however, along the coast of middle Jiangsu Province and in the sea area of the South Huanghai Sea, the distribution of historical earthquake classes is complex and the earthquake series of the "comparatively dangerous class" and the "comparatively safe class" are equivalent in number; c) in the study area, the statistical results for historical earthquake classes and the characteristics of their spatial distribution accord very well with present-day earthquake series. This shows that seismic activity in the region has the characteristic of succession, and the results of this study can be used as a reference for early post-seismic judgment in earthquake emergency work in Jiangsu Province.

  13. Distribution Characteristics of the Seismicity of Zipingpu Reservoir Region after the Wenchuan Earthquake

    Institute of Scientific and Technical Information of China (English)

    Li Hai'ou; Ma Wentao; Xu Xiwei; Xie Ronghua; Yuan Jingli; Xu Changpeng

    2011-01-01

    815 earthquakes recorded by 12 seismic stations of the Zipingpu reservoir seismic network in 2009 were relocated using the double-difference algorithm to analyze the seismic activity of the Zipingpu reservoir. Relocation results show that the earthquakes are relatively concentrated in three zones. The distribution characteristics of focal depth differ markedly among the different concentration zones, which means that earthquakes in different concentration zones may have different causes. Compared with relocations of earthquakes that took place before the Wenchuan earthquake, done by other researchers, the seismic concentration zones in the reservoir area shifted obviously after the Wenchuan earthquake. These variations are related to local stress adjustment in the reservoir area and may also be related to the diffusion depth and range of increased pore pressure caused by rock failure in the course of the Wenchuan earthquake.

  14. Size dependent pore size distribution of shales by gas physisorption

    Science.gov (United States)

    Roshan, Hamid; Andersen, Martin S.; Yu, Lu; Masoumi, Hossein; Arandian, Hamid

    2017-04-01

    Gas physisorption, in particular nitrogen adsorption-desorption, is a traditional technique for the characterization of geomaterials, including organic-rich shales. Low-pressure nitrogen is used together with adsorption-desorption physical models to study the pore size distribution (PSD) and porosity of porous samples. The samples are usually crushed to a certain fragment size to measure these properties; however, no consistent standard size for sample crushing has yet been proposed. Crushing significantly increases the surface area of the fragments; this created external surface area is differentiated from that of the pores using the BET technique. In this study, we show that smaller fragment sizes lead to higher cumulative pore volume and smaller pore diameters. It is also shown that some of the micro-pores are left unaccounted for because of the correction for the external surface area. To illustrate this, nitrogen physisorption is first conducted on identical organic-rich shale samples crushed to different sizes: 20-25, 45-50 and 63-71 µm. We then show that such effects are not only a function of pore structure changes induced by crushing, but are also linked to the inability of the physical models to differentiate between the external (BET) surface area and micro-pores for different crushing sizes at relatively low nitrogen pressure. We also discuss models currently used in nanotechnology, such as the t-method, to address this issue, and their advantages and shortcomings for shale rock characterization.
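    For reference, the standard linearized BET relation that underlies the external-surface-area correction discussed above is shown below; this is the textbook form, not the paper's specific correction procedure.

```latex
% Standard linearized BET equation used to obtain the monolayer capacity and
% hence the specific surface area from adsorption data.
\[
  \frac{1}{v\left[(p_0/p) - 1\right]}
    = \frac{c - 1}{v_m c}\,\frac{p}{p_0} + \frac{1}{v_m c},
\]
% where v is the adsorbed gas quantity at relative pressure p/p_0, v_m is the
% monolayer capacity, and c is the BET constant; a linear fit of the left-hand
% side against p/p_0 yields v_m and therefore the surface area.
```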

  15. Bayesian forecasting of recurrent earthquakes and predictive performance for a small sample size

    Science.gov (United States)

    Nomura, S.; Ogata, Y.; Komaki, F.; Toda, S.

    2011-04-01

    This paper presents a Bayesian method of probability forecasting for a renewal of earthquakes. When only limited records of characteristic earthquakes on a fault are available, relevant prior distributions for renewal model parameters are essential to computing unbiased, stable time-dependent earthquake probabilities. We also use event slip and geological slip rate data combined with historical earthquake records to improve our forecast model. We apply the Brownian Passage Time (BPT) model and make use of the best fit prior distribution for its coefficient of variation (the shape parameter, alpha) relative to the mean recurrence time because the Earthquake Research Committee (ERC) of Japan uses the BPT model for long-term forecasting. Currently, more than 110 active faults have been evaluated by the ERC, but most include very few paleoseismic events. We objectively select the prior distribution with the Akaike Bayesian Information Criterion using all available recurrence data including the ERC datasets. These data also include mean recurrence times estimated from slip per event divided by long-term slip rate. By comparing the goodness of fit to the historical record and simulated data, we show that the proposed predictor provides more stable performance than plug-in predictors, such as maximum likelihood estimates and the predictor currently adopted by the ERC.

  16. Determination of size distribution using neural networks

    NARCIS (Netherlands)

    Stevens, JH; Nijhuis, JAG; Spaanenburg, L; Mohammadian, M

    1999-01-01

    In this paper we present a novel approach to the estimation of size distributions of grains in water from images. External conditions such as the concentrations of grains in water cannot be controlled. This poses problems for local image analysis which tries to identify and measure single grains.

  17. Earthquake

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    A serious earthquake happened in Wenchuan, Sichuan. Over 60,000 people died in the earthquake, and millions of people lost their homes. After the earthquake, people showed their love in different ways. Some gave food, medicine and everything necessary, some gave money,

  18. Characterization of the tail of the distribution of earthquake magnitudes by combining the GEV and GPD descriptions of Extreme Value Theory

    CERN Document Server

    Pisarenko, V F; Sornette, D; Rodkin, M V

    2008-01-01

    We present a generic and powerful approach to study the statistics of extreme phenomena (meteorology, finance, biology...) that we apply to the statistical estimation of the tail of the distribution of earthquake sizes. The chief innovation is to combine the two main limit theorems of Extreme Value Theory (EVT) that allow us to derive the distribution of T-maxima (maximum magnitude occurring in sequential time intervals of duration T) for arbitrary T. We propose a method for the estimation of the unknown parameters involved in the two limit theorems corresponding to the Generalized Extreme Value distribution (GEV) and to the Generalized Pareto Distribution (GPD). We establish the direct relations between the parameters of these distributions, which permit to evaluate the distribution of the T-maxima for arbitrary T. The duality between the GEV and GPD provides a new way to check the consistency of the estimation of the tail characteristics of the distribution of earthquake magnitudes for earthquake occurring ...
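    For orientation, the two limit distributions combined in this approach have the following standard textbook forms; the paper's estimated parameter values and the explicit dependence on the time window T are not reproduced here.

```latex
% Generalized Extreme Value (GEV) distribution for block (T-)maxima and
% Generalized Pareto Distribution (GPD) for excesses over a threshold u.
\[
  \text{GEV:}\quad
  G_{\xi,\mu,\sigma}(x) = \exp\!\left\{-\Bigl[1 + \xi\,\frac{x-\mu}{\sigma}\Bigr]^{-1/\xi}\right\},
  \qquad 1 + \xi\,\frac{x-\mu}{\sigma} > 0,
\]
\[
  \text{GPD:}\quad
  H_{\xi,s}(x) = 1 - \Bigl(1 + \xi\,\frac{x-u}{s}\Bigr)^{-1/\xi},
  \qquad x \ge u.
\]
% Both share the same shape parameter \xi (the tail index), which is the basis
% of the duality between T-maxima and threshold exceedances exploited in the paper.
```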

  19. Size from Specular Highlights for Analyzing Droplet Size Distributions

    Science.gov (United States)

    Jalba, Andrei C.; Westenberg, Michel A.; Grooten, Mart H. M.

    In mechanical engineering, heat-transfer models by dropwise condensation are under development. The condensation process is captured by taking many pictures, which show the formation of droplets, of which the size distribution and area coverage are of interest for model improvement. The current analysis method relies on manual measurements, which is time consuming. In this paper, we propose an approach to automatically extract the positions and radii of the droplets from an image. Our method relies on specular highlights that are visible on the surfaces of the droplets. We show that these highlights can be reliably extracted, and that they provide sufficient information to infer the droplet size. The results obtained by our method compare favorably with those obtained by laborious and careful manual measurements. The processing time per image is reduced by two orders of magnitude.

  20. Coseismic slip distribution of the 1923 Kanto earthquake, Japan

    Science.gov (United States)

    Pollitz, F.F.; Nyst, M.; Nishimura, T.; Thatcher, W.

    2005-01-01

    The slip distribution associated with the 1923 M = 7.9 Kanto, Japan, earthquake is reexamined in light of new data and modeling. We utilize a combination of first-order triangulation, second-order triangulation, and leveling data in order to constrain the coseismic deformation. The second-order triangulation data, which have not been utilized in previous studies of 1923 coseismic deformation, are associated with only slightly smaller errors than the first-order triangulation data and expand the available triangulation data set by about a factor of 10. Interpretation of these data in terms of uniform-slip models in a companion study by Nyst et al. shows that a model involving uniform coseismic slip on two distinct rupture planes explains the data very well and matches or exceeds the fit obtained by previous studies, even one which involved distributed slip. Using the geometry of the Nyst et al. two-plane slip model, we perform inversions of the same geodetic data set for distributed slip. Our preferred model of distributed slip on the Philippine Sea plate interface has a moment magnitude of 7.86. We find slip maxima of ~8-9 m beneath Odawara and ~7-8 m beneath the Miura peninsula, with a roughly 2:1 ratio of strike-slip to dip-slip motion, in agreement with a previous study. However, the Miura slip maximum is imaged as a more broadly extended feature in our study, with the high-slip region continuing from the Miura peninsula to the southern Boso peninsula region. The second-order triangulation data provide good evidence for ~3 m right-lateral strike slip on a 35-km-long splay structure occupying the volume between the upper surface of the descending Philippine Sea plate and the southern Boso peninsula. Copyright 2005 by the American Geophysical Union.

  1. Earthquake probabilities and magnitude distribution (M≥6.7) along the Haiyuan fault, northwestern China

    Institute of Scientific and Technical Information of China (English)

    冉洪流

    2004-01-01

    In recent years, researchers have studied the paleoearthquakes along the Haiyuan fault and revealed many paleoearthquake events. All available information allows a more reliable analysis of earthquake recurrence intervals and earthquake rupture patterns along the Haiyuan fault. Based on this paleoseismological information, the recurrence probability and magnitude distribution for M≥6.7 earthquakes in the next 100 years along the Haiyuan fault can be obtained through weighted computation, using Poisson and Brownian passage time models and considering different rupture patterns. The result shows that the recurrence probability of MS≥6.7 earthquakes is about 0.035 in the next 100 years along the Haiyuan fault.

  2. Singular statistics to model the distribution of large and small magnitude earthquakes

    CERN Document Server

    Maslov, Lev A

    2014-01-01

    The solution of the Generalized Logistic Equation is obtained to study earthquake statistics for large and small magnitudes. It is shown that the same solution fits the distributions of both small and large magnitude earthquakes, qualitatively and quantitatively. The Gutenberg-Richter cumulative frequency-magnitude empirical formula is derived from the solution of this equation.
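    For readers who want to relate this record to the Gutenberg-Richter law itself, the sketch below fits the GR relation log10 N = a - bM to a catalog with the standard Aki (1965) maximum-likelihood b-value estimator and the usual binning correction. This is a generic illustration, not the paper's generalized-logistic approach; the synthetic catalog and binning width are assumptions.

```python
# Generic illustration (not the paper's generalized-logistic approach):
# maximum-likelihood estimate of the Gutenberg-Richter b-value, log10 N = a - b*M,
# using the standard Aki (1965) estimator with a magnitude-binning correction.
import numpy as np

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for events with M >= m_c."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    b_err = b / np.sqrt(len(m))  # approximate standard error
    return b, b_err

# Example with a synthetic GR catalog (true b = 1.0), binned to 0.1 units:
rng = np.random.default_rng(0)
m_c, dm, b_true = 4.0, 0.1, 1.0
mags = np.round(m_c - dm / 2 + rng.exponential(scale=np.log10(np.e) / b_true, size=5000), 1)
print(b_value_mle(mags, m_c=m_c, dm=dm))  # expect b close to 1.0
```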

  3. Research Progress on the Problem of Fluid, Heat and Energy Distribution near the Earthquake Source Area

    Institute of Scientific and Technical Information of China (English)

    Yan Rui; Jiang Changsheng; Shao Zhigang; Zhou Longquan; Li Yingchun

    2011-01-01

    Fluid, heat and energy distribution near earthquake sources during earthquake generation are basic problems in seismology and have long been leading subjects of concern to seismologists. A growing body of research shows that fluid is present around earthquake source areas and plays an important role in the process of earthquake preparation and generation. However, there is considerable controversy over the source of fluid in the deep crust. As for the problem of heat around earthquake source areas, different models have been proposed to explain the stress/heat flow paradox. Among them, the dynamic weakening model has been thought to be the key to solving the heat flow paradox issue. After large earthquakes, energy distribution is directly related to frictional heat. It is of timely and important practical significance to implement deep drilling and in-situ surveying to gain understanding of fluid, frictional heat and energy distribution during earthquake generation. The latest international progress in fluid, heat and energy distribution research is reviewed in this paper, which provides important insight into the understanding of earthquake preparation and occurrence.

  4. The size distribution of 'gold standard' nanoparticles.

    Science.gov (United States)

    Bienert, Ralf; Emmerling, Franziska; Thünemann, Andreas F

    2009-11-01

    The spherical gold nanoparticle reference materials RM 8011, RM 8012, and RM 8013, with nominal radii of 5, 15, and 30 nm, respectively, have been available since 2008 from NIST. These materials are recommended as standards for nanoparticle size measurements and for the study of the biological effects of nanoparticles, e.g., in pre-clinical biomedical research. We report on the determination of the size distributions of these gold nanoparticles using different small-angle X-ray scattering (SAXS) instruments. Measurements with a classical Kratky-type SAXS instrument are compared with a synchrotron SAXS technique. Samples were investigated in situ, positioned in capillaries and in levitated droplets. The number-weighted size distributions were determined applying model scattering functions based on (a) Gaussian, (b) log-normal, and (c) Schulz distributions. The mean radii are 4.36 ± 0.04 nm (RM 8011), 12.20 ± 0.03 nm (RM 8012), and 25.74 ± 0.27 nm (RM 8013). Low polydispersities, defined as the relative width of the distributions, were detected, with values of 0.067 ± 0.006 (RM 8011), 0.103 ± 0.003 (RM 8012), and 0.10 ± 0.01 (RM 8013). The results are in agreement with integral values determined from classical evaluation procedures, such as the radius of gyration (Guinier) and particle volume (Kratky). No indications of particle aggregation or particle interactions (repulsive or attractive) were found. We recommend SAXS as a standard method for fast and precise determination of size distributions of nanoparticles.

  5. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    Science.gov (United States)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which intervals between consecutive events are independently and identically distributed, are frequently used to describe the repeating earthquake mechanism and to forecast the next earthquakes. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have few, or only one, observed earthquakes, which often have poorly constrained historical and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield a high probability for the next event in a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend on average on the long-term slip rate caused by tectonic motion. In addition, recurrence times also fluctuate due to nearby earthquakes or fault activities which encourage or discourage the surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, this paper introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it by using spatial statistics. Spatial variations of the mean and variance parameters of recurrence times are estimated in a Bayesian framework and the next earthquakes are forecasted by Bayesian predictive distributions. The proposed model is applied to a recurrent earthquake catalog in Japan and its result is compared with the current forecast adopted by the Earthquake Research Committee of Japan.

  6. Velocity Distributions in Inelastic Granular Gases with Continuous Size Distributions

    Institute of Scientific and Technical Information of China (English)

    LI Rui; ZHANG Duan-Ming; LI Zhi-Hao

    2011-01-01

    We study by numerical simulation the properties of velocity distributions of granular gases with a power-law size distribution, driven by uniform heating and boundary heating. It is found that the form of the velocity distribution is primarily controlled by the restitution coefficient and by q, the ratio between the average number of heatings and the average number of collisions in the system. Furthermore, we show that uniform and boundary heating can be understood as different limits of q, with q > 1 and q ≤ 1, respectively.

  7. Prediction of the size distribution of precipitates

    Energy Technology Data Exchange (ETDEWEB)

    Prikhodovsky, A. [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Werkstoffe und Verfahren der Energietechnik 2: Werkstoffstruktur und Eigenschaften

    2001-12-01

    Modelling has proven to be an efficient way of cutting the time and costs associated with the investigation of materials properties. A new mathematical model for the prediction of the particle size distribution of precipitates has been developed. The model allows the description of all stages of the precipitation process: nucleation, growth and Ostwald ripening of particles. The incorporation of existing thermodynamic databases allows the simulation of the formation of dispersed phases in commercial multicomponent alloys. The influence of the model parameters on the final particle size distribution was investigated with the example of NbC formation in austenite. It was shown that the interfacial energy of the particle-matrix interface has the most significant effect on the final particle arrangement. A pre-exponential factor, which is the subject of nucleation theories, plays a less significant role in the final particle arrangement. (orig.)

  8. Crystallite size distributions of marine gas hydrates

    Energy Technology Data Exchange (ETDEWEB)

    Klapp, S.A.; Bohrmann, G.; Abegg, F. [Bremen Univ., Bremen (Germany). Research Center of Ocean Margins; Hemes, S.; Klein, H.; Kuhs, W.F. [Gottingen Univ., Gottingen (Germany). Dept. of Crystallography

    2008-07-01

    Experimental studies were conducted to determine the crystallite size distributions of natural gas hydrate samples retrieved from the Gulf of Mexico, the Black Sea, and a hydrate ridge located offshore Oregon. Synchrotron radiation technology was used to provide the high photon fluxes and high penetration depths needed to accurately analyze the bulk sediment samples. A new beam collimation diffraction technique was used to measure gas hydrate crystallite sizes. The analyses showed that gas hydrate crystals were globular in shape. Mean crystallite sizes ranged from 200 to 400 µm for hydrate samples taken from the sea floor. Larger grain sizes in the hydrate ridge samples suggested differences in hydrate formation ages or processes. A comparison with laboratory-produced methane hydrate samples showed half a lognormal curve with a mean value of 40 µm. Results of the study showed that a cautious approach must be adopted when transposing crystallite-size sensitive physical data from laboratory-made gas hydrates to natural settings. It was concluded that crystallite size information may also be used to resolve the formation ages of gas hydrates when formation processes and conditions are constrained. 48 refs., 1 tab., 9 figs.

  9. Anomalous Power Law Distribution of Total Lifetimes of Branching Processes Relevant to Earthquakes

    CERN Document Server

    Saichev, A

    2004-01-01

    We consider a branching model of triggered seismicity, the ETAS (epidemic-type aftershock sequence) model, which assumes that each earthquake can trigger other earthquakes ("aftershocks"). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. Due to the large fluctuations of the number of aftershocks triggered directly by any earthquake ("productivity" or "fertility"), there is a large variability of the total number of aftershocks from one sequence to another, for the same mainshock magnitude. We study the regime where the distribution of fertilities μ is characterized by a power law ~1/μ^(1+γ) and the bare Omori law for the memory of previous triggering mothers decays slowly as ~1/t^(1+θ), with 0 < θ < 1 relevant for earthquakes. Using the tool of generating probability functions and a quasistatic approximation which is shown to be exact asymptotically for large durations, we show that the density distribution of to...

  10. The pecularities of shear crack pre-rupture evolution and distribution of seismicity before strong earthquakes

    Directory of Open Access Journals (Sweden)

    D. Kiyashchenko

    2001-01-01

    Several methods are presently suggested for investigating pre-earthquake evolution of the regions of high tectonic activity based on analysis of the seismicity spatial distribution. Some precursor signatures are detected before strong earthquakes: decrease in fractal dimension of the continuum of earthquake epicenters, cluster formation, concentration of seismic events near one of the nodal planes of the future earthquake, and others. In the present paper, it is shown that such peculiarities are typical of the evolution of the shear crack network under external stresses in elastic bodies with inhomogeneous distribution of strength. The results of computer modeling of crack network evolution are presented. It is shown that variations of the fractal dimension of the earthquake epicenters’ continuum and other precursor signatures contain information about the evolution of the destruction process towards the main rupture.

  11. A New Technique to Recover Source-Time Functions of Intermediate-Sized Earthquakes

    Science.gov (United States)

    Plourde, A. P.; Bostock, M. G.

    2016-12-01

    Most studies of source-time functions (STFs) for intermediate-sized earthquakes remove propagation effects through seismogram deconvolution with a smaller earthquake known as an empirical Green's function (EGF). Improved stability over simple spectral division can be achieved through Projected Landweber Deconvolution (PLD), which imposes positivity and duration constraints. We investigate a new procedure for recovering STFs that does not assume an EGF, but instead the availability of two (or more) earthquakes that share a common Green's function. Under this condition one can show u2*s1 - u1*s2 = 0, where * denotes convolution and ui and si are the seismogram and STF for a given earthquake, respectively. This system can be augmented with a scaling constraint and written as Ax = z, where the matrix A has a block-Toeplitz structure and x contains the target STFs. We form an objective function from this linear system, which we minimize with a combined Newton and Conjugate-Gradient algorithm. As is the case in EGF deconvolution, success is dependent on proper constraints. In our time-domain implementation duration constraints are easily enforced by shortening x to the allowed duration prior to inversion, and removing the corresponding columns of A. During line searches we preserve positivity of the STFs by 'bending' the Newton direction as needed. If the magnitude difference of the earthquakes makes one a suitable EGF for the other, using PLD to obtain a starting model for the larger STF is a prudent approach. We demonstrate the effectiveness of this algorithm using synthetic tests and real examples, and we suggest such methods should improve our estimates for a variety of earthquake source parameters.
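    A minimal sketch of the common-Green's-function idea described above: build the convolution (block-Toeplitz) system u2*s1 - u1*s2 = 0 with a scaling constraint and solve for positive STFs. For simplicity this uses non-negative least squares instead of the authors' Newton/Conjugate-Gradient scheme, assumes the two seismograms are trimmed to equal length, and uses a single allowed STF duration for both events; these are assumptions, not the paper's setup.

```python
# Minimal sketch of the common-Green's-function system u2*s1 - u1*s2 = 0 with a
# scaling constraint, solved here by non-negative least squares (NNLS) instead
# of the authors' Newton/Conjugate-Gradient scheme. Both STFs are given the same
# allowed duration and the seismograms are assumed trimmed to equal length.
import numpy as np
from scipy.optimize import nnls

def conv_matrix(u, n):
    """Full-convolution matrix C such that C @ s == np.convolve(u, s)."""
    C = np.zeros((len(u) + n - 1, n))
    for j in range(n):
        C[j:j + len(u), j] = u
    return C

def recover_stfs(u1, u2, n):
    """Recover positive STFs s1 and s2 (each n samples) from seismograms u1, u2."""
    A = np.hstack([conv_matrix(u2, n), -conv_matrix(u1, n)])
    z = np.zeros(A.shape[0])
    # Scaling constraint sum(s1) = 1 prevents the trivial zero solution.
    scale_row = np.concatenate([np.ones(n), np.zeros(n)])
    A = np.vstack([A, scale_row])
    z = np.append(z, 1.0)
    x, _ = nnls(A, z)
    return x[:n], x[n:]
```

    For multiple stations the same construction would simply stack one such block per station before appending the scaling row.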

  12. Remote Laser Diffraction Particle Size Distribution Analyzer

    Energy Technology Data Exchange (ETDEWEB)

    Batcheller, Thomas Aquinas; Huestis, Gary Michael; Bolton, Steven Michael

    2001-03-01

    In support of a radioactive slurry sampling and physical characterization task, an “off-the-shelf” laser diffraction (classical light scattering) particle size analyzer was utilized for remote particle size distribution (PSD) analysis. Spent nuclear fuel was previously reprocessed at the Idaho Nuclear Technology and Engineering Center (INTEC, formerly recognized as the Idaho Chemical Processing Plant), which is on DOE’s INEEL site. The acidic, radioactive aqueous raffinate streams from these processes were transferred to 300,000 gallon stainless steel storage vessels located in the INTEC Tank Farm area. Due to the transfer piping configuration in these vessels, complete removal of the liquid cannot be achieved. Consequently, a “heel” slurry remains at the bottom of an “emptied” vessel. Particle size distribution characterization of the settled solids in this remaining heel slurry, as well as of suspended solids in the tank liquid, is the goal of this remote PSD analyzer task. A Horiba Instruments Inc. Model LA-300 PSD analyzer, which has a 0.1 to 600 micron measurement range, was modified for remote application in a “hot cell” (gamma radiation) environment. This technology provides rapid and simple PSD analysis, especially in the fine and microscopic particle size regime. Particle size analysis of these radioactive slurries in this smaller range was not previously achievable, making this technology far superior to the traditional methods used. Successful acquisition of this data, in conjunction with other characterization analyses, provides important information that can be used in the myriad of potential radioactive waste management alternatives.

  13. Measurement of nonvolatile particle number size distribution

    Science.gov (United States)

    Gkatzelis, G. I.; Papanastasiou, D. K.; Florou, K.; Kaltsonoudis, C.; Louvaris, E.; Pandis, S. N.

    2016-01-01

    An experimental methodology was developed to measure the nonvolatile particle number concentration using a thermodenuder (TD). The TD was coupled with a high-resolution time-of-flight aerosol mass spectrometer, measuring the chemical composition and mass size distribution of the submicrometer aerosol and a scanning mobility particle sizer (SMPS) that provided the number size distribution of the aerosol in the range from 10 to 500 nm. The method was evaluated with a set of smog chamber experiments and achieved almost complete evaporation (> 98 %) of secondary organic as well as freshly nucleated particles, using a TD temperature of 400 °C and a centerline residence time of 15 s. This experimental approach was applied in a winter field campaign in Athens and provided a direct measurement of number concentration and size distribution for particles emitted from major pollution sources. During periods in which the contribution of biomass burning sources was dominant, more than 80 % of particle number concentration remained after passing through the thermodenuder, suggesting that nearly all biomass burning particles had a nonvolatile core. These remaining particles consisted mostly of black carbon (60 % mass contribution) and organic aerosol (OA; 40 %). Organics that had not evaporated through the TD were mostly biomass burning OA (BBOA) and oxygenated OA (OOA) as determined from AMS source apportionment analysis. For periods during which traffic contribution was dominant 50-60 % of the particles had a nonvolatile core while the rest evaporated at 400 °C. The remaining particle mass consisted mostly of black carbon with an 80 % contribution, while OA was responsible for another 15-20 %. Organics were mostly hydrocarbon-like OA (HOA) and OOA. These results suggest that even at 400 °C some fraction of the OA does not evaporate from particles emitted from common combustion processes, such as biomass burning and car engines, indicating that a fraction of this type of OA

  14. Aerosol Size Distribution in the marine regions

    Science.gov (United States)

    Markuszewski, Piotr; Petelski, Tomasz; Zielinski, Tymon; Pakszys, Paulina; Strzalkowska, Agata; Makuch, Przemyslaw; Kowalczyk, Jakub

    2014-05-01

    We would like to present the data obtained during the regular research cruises of the S/Y Oceania over the period 2009-2012. The Baltic Sea is a very interesting test area for aerosol measurements; however, it is also a difficult one, because mostly mixtures of continental and marine aerosols are observed. It is possible to measure clear marine aerosol, but also advections of dust from southern Europe or even Africa. This variability of the data allows different conditions to be compared. The data are also compared with our measurements from the Arctic Seas, which were made during the ARctic EXperiment (AREX). The Arctic Seas are very suitable for marine aerosol investigations since continental advections of aerosols are far less frequent than in other European sea regions. The aerosol size distribution was measured using the TSI Laser Aerosol Spectrometer model 3340 (99 channels, measurement range 0.09 μm to 7 μm), a condensation particle counter (range 0.01 μm to 3 μm) and a laser particle counter PMS CSASP-100-HV-SP (range 0.5 μm to 47 μm in 45 channels). Studies of marine aerosol production and transport are important for many Earth sciences such as cloud physics, atmospheric optics, environmental pollution studies and the interaction between ocean and atmosphere. All equipment was placed on one of the masts of S/Y Oceania. Measurements using the laser aerosol spectrometer and condensation particle counter were made at one level (8 meters above sea level). Measurements with the laser particle counter were performed at five different levels above the sea level (8, 11, 14, 17 and 20 m). Based on the aerosol size distributions, parameterizations with log-normal and power-law distributions were made. The aerosol source functions characteristic for the region were also determined. Additionally, the poor precision of sea spray emission determination when using only aerosol concentration data was confirmed. The emission of sea spray depends

  15. Charge and Size Distributions of Electrospray Drops

    Science.gov (United States)

    de Juan L; de la Mora JF

    1997-02-15

    The distributions of charge q and diameter d of drops emitted from electrified liquid cones in the cone-jet mode are investigated with two aerosol instruments. A differential mobility analyzer (DMA, Vienna type) first samples the spray drops, selects those with electrical mobilities within a narrow band, and either measures the associated current or passes them to a second instrument. The drops may also be individually counted optically and sized by sampling them into an aerodynamic size spectrometer (API's Aerosizer). For a given cone-jet, the distribution of charge q for the main electrospray drops is some 2.5 times broader than their distribution of diameters d, with qmax/qmin approximately 4. But mobility-selected drops have relative standard deviations of only 5% for both d and q, showing that the support of the (q, d) distribution is a narrow band centered around a curve q(d). The approximate one-dimensionality of this support region is explained through the mechanism of jet breakup, which is a random process with only one degree of freedom: the wavelength of axial modulation of the jet. The observed near constancy of the charge over volume ratio (q approximately d^3) shows that the charge is frozen in the liquid surface at the time scale of the breakup process. The charge over volume ratio of the primary drops varies between 98 and 55% of the ratio of spray current I over liquid flow rate Q, and decreases at increasing Q. I/Q is therefore an unreliable measure of the charge density of these drops.

  16. Bimodal micropore size distribution in active carbons

    Energy Technology Data Exchange (ETDEWEB)

    Vartapetyan, R.S.; Voloshchuk, A.M.; Limonov, N.A.; Romanov, Y.A. (Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Physical Chemistry)

    1993-03-01

    The porous structure of active carbon was compared with that of the original mineral coal and its carbonization products. The parameters of the porous structure were calculated from the adsorption isotherms of CO2 (298 K) and H2O (293 K). It was shown that carbonization of the original coal at 1120 K causes changes in the chemical composition, consolidation of the part which is amorphous to X-rays, generation of an ordered defect-containing structure on its basis, an increase in the volume of the micropores, and a decrease in the mean diameter. Activation of the carbonized coal affords a microporous structure with a bimodal size distribution.

  17. Parameterizing Size Distribution in Ice Clouds

    Energy Technology Data Exchange (ETDEWEB)

    DeSlover, Daniel; Mitchell, David L.

    2009-09-25

    An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translate to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed phase clouds. Mixed phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice

  18. Mw7.7 2013 Balochistan Earthquake. Slip-Distribution and Deformation Field in Oblique Tectonic Context

    Science.gov (United States)

    Klinger, Y.; Vallage, A.; Grandin, R.; Delorme, A.; Rosu, A. M.; Pierro-Deseilligny, M.

    2014-12-01

    The Mw7.7 2013 Balochistan earthquake ruptured 200 km of the Hoshab fault, the southern end of the Chaman fault. The azimuth of the fault changes by more than 30° along the rupture, from a well-oriented strike-slip fault to a more thrust-prone direction. We use the MicMac optical image software to correlate pairs of Landsat images taken before and after the earthquake to obtain the horizontal displacement field associated with the earthquake. We combine the horizontal displacement with radar image correlation in range and radar interferometry to derive the co-seismic slip on the fault. The combination of these different datasets actually provides the 3D displacement field. We note that although the earthquake was mainly strike-slip along the entire rupture length, some patches of vertical motion exist, whose locations seem to be controlled by kilometric-scale variations of the fault geometry. Five pairs of SPOT images were also correlated to derive a 2.5 m pixel-size horizontal displacement field, providing a unique opportunity to look at deformation in the near field and to obtain high-resolution strike-slip and normal slip distributions. We note a significant difference, especially in the normal component, between the slip localized at depth on the fault plane and the slip localized closer to the surface, with more apparent slip at the surface. A high-resolution map of the ground rupture allows us to locate the distribution of the deformation over the whole rupture length. The rupture map also highlights multiple fault geometric complexities where we could quantify details of the slip distribution. At the rupture length scale, the local azimuth variations between segments have a large impact on the expression of the localized slip at the surface. The combination of those datasets gives an overview of the broad distribution of the deformation in the near field, corresponding to the co-seismic damage zone.

  19. The temporal distribution of seismic radiation during deep earthquake rupture.

    Science.gov (United States)

    Houston, H; Vidale, J E

    1994-08-01

    The time history of energy release during earthquakes illuminates the process of failure, which remains enigmatic for events deeper than about 100 kilometers. Stacks of teleseismic records from regional arrays for 122 intermediate (depths of 100 to 350 kilometers) and deep (depths of 350 to 700 kilometers) earthquakes show that the temporal pattern of short-period seismic radiation has a systematic variation with depth. On average, for intermediate depth events more radiation is released toward the beginning of the rupture than near the end, whereas for deep events radiation is released symmetrically over the duration of the event, with an abrupt beginning and end of rupture. These findings suggest a variation in the style of rupture related to decreasing fault heterogeneity with depth.

  20. Estimation of source parameters and scaling relations for moderate size earthquakes in North-West Himalaya

    Science.gov (United States)

    Kumar, Vikas; Kumar, Dinesh; Chopra, Sumer

    2016-10-01

    The scaling relations and self-similarity of the earthquake process have been investigated by estimating the source parameters of 34 moderate-size earthquakes (mb 3.4-5.8) that occurred in the NW Himalaya. Spectral analysis of body waves from 217 accelerograms recorded at 48 sites has been carried out in the present analysis. Brune's ω^-2 model has been adopted for this purpose. The average ratio of the P-wave corner frequency, fc(P), to the S-wave corner frequency, fc(S), has been found to be 1.39, with fc(P) > fc(S) for 90% of the events analyzed here. This implies a shift in the corner frequency, in agreement with many other similar studies done for different regions. The static stress drop values for all the events analyzed here lie in the range 10-100 bars, with an average stress drop of the order of 43 ± 19 bars for the region. This suggests that the likely estimate of the dynamic stress drop, which is 2-3 times the static stress drop, is in the range of about 80-120 bars. This suggests relatively high seismic hazard in the NW Himalaya, as high-frequency strong ground motions are governed by the stress drop. The estimated values of stress drop do not show significant variation with seismic moment for the range 5 × 10^14 to 2 × 10^17 N m. This observation, along with the cube-root scaling of corner frequencies, suggests the self-similarity of the moderate-size earthquakes in the region. The scaling relation between seismic moment and corner frequency, M0 fc^3 = 3.47 × 10^16 N m/s^3, estimated in the present study can be utilized to estimate the source dimension, given the seismic moment of the earthquake, for hazard assessment. The present study puts constraints on the important parameters stress drop and source dimension required for the synthesis of strong ground motion from future expected earthquakes in the region. Therefore, the present study is useful for seismic hazard and risk related studies for the NW Himalaya.
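    The following sketch shows how the reported scaling relation and Brune's relations translate a seismic moment into a corner frequency, source radius and static stress drop. The shear-wave velocity and the example moment are assumed values; the numbers are order-of-magnitude illustrations only, not results from the study.

```python
# Sketch of Brune-model source parameters using the scaling relation reported
# above (M0 * fc^3 = 3.47e16 N m / s^3). The shear-wave velocity is an assumed
# value; results are order-of-magnitude illustrations only.
import numpy as np

BETA = 3500.0  # assumed shear-wave velocity near the source (m/s)

def corner_frequency(m0, scaling=3.47e16):
    """Corner frequency (Hz) implied by the M0 * fc^3 scaling relation."""
    return (scaling / m0) ** (1.0 / 3.0)

def brune_source(m0, fc, beta=BETA):
    """Brune source radius (m) and static stress drop (bars)."""
    radius = 2.34 * beta / (2.0 * np.pi * fc)
    stress_drop_pa = 7.0 * m0 / (16.0 * radius**3)
    return radius, stress_drop_pa / 1.0e5

m0 = 1.0e16  # example seismic moment (N m), within the range studied above
fc = corner_frequency(m0)
r, dsigma = brune_source(m0, fc)
print(f"fc = {fc:.2f} Hz, radius = {r/1000:.2f} km, stress drop = {dsigma:.0f} bars")
```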

  1. Anomalous power law distribution of total lifetimes of branching processes: application to earthquake aftershock sequences.

    Science.gov (United States)

    Saichev, A; Sornette, D

    2004-10-01

    We consider a general stochastic branching process, which is relevant to earthquakes, and study the distributions of global lifetimes of the branching processes. In the earthquake context, this amounts to the distribution of the total durations of aftershock sequences including aftershocks of arbitrary generation number. Our results extend previous results on the distribution of the total number of offspring (direct and indirect aftershocks in seismicity) and of the total number of generations before extinction. We consider a branching model of triggered seismicity, the epidemic-type aftershock sequence model, which assumes that each earthquake can trigger other earthquakes ("aftershocks"). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. Due to the large fluctuations of the number of aftershocks triggered directly by any earthquake ("productivity" or "fertility"), there is a large variability of the total number of aftershocks from one sequence to another, for the same mainshock magnitude. We study the regime where the distribution of fertilities μ is characterized by a power law ~1/μ^(1+γ) and the bare Omori law for the memory of previous triggering mothers decays slowly as ~1/t^(1+θ), with 0 < θ < 1 relevant for earthquakes. Using the tool of generating probability functions and a quasistatic approximation, which is shown to be exact asymptotically for large durations, we show that the density distribution of total aftershock lifetimes scales as ~1/t^(1+θ/γ) when the average branching ratio is critical (n = 1). The exponent 1 + θ/γ reflects the interplay between the increase of the number of aftershocks with mainshock magnitude m (productivity), the power-law distribution of fertilities, and the critical nature of the branching cascade process. In the subcritical case n < 1, the crossover from ~1/t^(1+θ/γ) at early times to ~1/t^(1+θ) at longer times is described. More generally, our results apply to any stochastic

  2. ANALYSIS OF REGULARITIES IN DISTRIBUTION OF EARTHQUAKES BY FOCAL DISPLACEMENT IN THE KURIL-OKHOTSK REGION BEFORE THE CATASTROPHIC SIMUSHIR EARTHQUAKE OF 15 NOVEMBER 2006

    Directory of Open Access Journals (Sweden)

    Timofei K. Zlobin

    2015-09-01

    The catastrophic Simushir earthquake occurred on 15 November 2006 in the Kuril-Okhotsk region, in the Middle Kuril Islands, which is a transition zone between the Eurasian continent and the Pacific Ocean. It was followed by numerous strong earthquakes. It is established that the catastrophic earthquake was prepared at a site characterized by increased relative effective pressures, located at the border of the low-pressure area (Figure 1). Based on data from GlobalCMT (Harvard), earthquake focal mechanisms were reconstructed, and tectonic stresses, the seismotectonic setting and the earthquake distribution pattern were studied for analysis of the stress field in the region prior to the Simushir earthquake (Figures 2 and 3; Table 1). Five areas of various types of movement were determined. Three of them are stretched along the Kuril Islands. It is established that seismodislocations in earthquake focal areas are regularly distributed. In each of the determined areas, displacements of a specific type (shear or reverse shear) are concentrated and give evidence of the alternation of zones characterized by horizontal stretching and compression. The presence of the horizontal stretching and compression zones can be explained by a model of subduction (Figure 4). Detailed studies of the state of stresses of the Kuril region confirm such zones (Figure 5). The established specific features of tectonic stresses before the catastrophic Simushir earthquake of 15 November 2006 contribute to studies of earthquake forecasting problems. The state of stresses and the geodynamic conditions suggesting the occurrence of new earthquakes can be assessed from the data on the distribution of horizontal compression, stretching and shear areas of the Earth’s crust and the upper mantle in the Kuril region.

  3. Atmospheric Ion Clusters: Properties and Size Distributions

    Science.gov (United States)

    D'Auria, R.; Turco, R. P.

    2002-12-01

    Ions are continuously generated in the atmosphere by the action of galactic cosmic radiation. Measured charge concentrations are of the order of 10^3 cm^-3 throughout the troposphere, increasing to about 5 x 10^3 cm^-3 in the lower stratosphere [Cole and Pierce, 1965; Paltridge, 1965, 1966]. The lifetimes of these ions are sufficient to allow substantial clustering with common trace constituents in air, including water, nitric and sulfuric acids, ammonia, and a variety of organic compounds [e.g., D'Auria and Turco, 2001 and references cited therein]. The populations of the resulting charged molecular clusters represent a pre-nucleation phase of particle formation, and in this regard comprise a key segment of the over-all nucleation size spectrum [e.g., Castleman and Tang, 1972]. It has been suggested that these clusters may catalyze certain heterogeneous reactions, and given their characteristic crystal-like structures may act as freezing nuclei for supercooled droplets. To investigate these possibilities, basic information on cluster thermodynamic properties and chemical kinetics is needed. Here, we present new results for several relevant atmospheric ion cluster families. In particular, predictions based on quantum mechanical simulations of cluster structure, and related thermodynamic parameters, are compared against laboratory data. We also describe a hybrid approach for modeling cluster sequences that combines laboratory measurements and quantum predictions with the classical liquid droplet (Thomson) model to treat a wider range of cluster sizes. Calculations of cluster mass distributions based on this hybrid model are illustrated, and the advantages and limitations of such an analysis are summarized. References: Castleman, A. W., Jr., and I. N. Tang, Role of small clusters in nucleation about ions, J. Chem. Phys., 57, 3629-3638, 1972. Cole, R. K., and E. T. Pierce, Electrification in the Earth's atmosphere for altitudes between 0 and 100 kilometers, J

  4. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    Science.gov (United States)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

    Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.

  5. Evaluation of Factors Affecting Size and Size Distribution of Chitosan-Electrosprayed Nanoparticles.

    Science.gov (United States)

    Abyadeh, Morteza; Karimi Zarchi, Ali Akbar; Faramarzi, Mohammad Ali; Amani, Amir

    2017-01-01

    The size and size distribution of polymeric nanoparticles have an important effect on their properties for pharmaceutical applications. In this study, chitosan nanoparticles were prepared by the electrospray method (electrohydrodynamic atomization) and the parameters that simultaneously affect the size and/or size distribution of chitosan nanoparticles were optimized. The effect of three independent formulation/processing parameters, namely concentration, flow rate and applied voltage, on the particle size and size distribution of the generated nanoparticles was investigated using a Box-Behnken experimental design. All the studied factors showed important effects on the average size and size distribution of the nanoparticles. A decrease in size and size distribution was obtainable with decreasing flow rate and concentration and increasing applied voltage. Eventually, a sample with minimum size and polydispersity was obtained with polymer concentration, flow rate and applied voltage values of 0.5 %w/v, 0.05 ml/hr and 15 kV, respectively. The experimentally prepared nanoparticles, expected to have the lowest size and size distribution values, had a size of 105 nm, a size distribution of 36 and a zeta potential of 59.3 mV. The results showed that the optimum condition for production of chitosan nanoparticles with the minimum size and a narrow size distribution was the minimum value of flow rate and the highest value of applied voltage, along with an optimum chitosan concentration.

  6. Fault Slip Distribution of the 2016 Fukushima Earthquake Estimated from Tsunami Waveforms

    Science.gov (United States)

    Gusman, Aditya Riadi; Satake, Kenji; Shinohara, Masanao; Sakai, Shin'ichi; Tanioka, Yuichiro

    2017-08-01

    The 2016 Fukushima normal-faulting earthquake (Mjma 7.4) occurred 40 km off the coast of Fukushima within the upper crust. The earthquake generated a moderate tsunami which was recorded by coastal tide gauges and offshore pressure gauges. First, the sensitivity of tsunami waveforms to fault dimensions and depths was examined and the best size and depth were determined. Tsunami waveforms computed based on four available focal mechanisms showed that a simple fault striking northeast-southwest and dipping southeast (strike = 45°, dip = 41°, rake = -95°) yielded the best fit to the observed waveforms. This fault geometry was then used in a tsunami waveform inversion to estimate the fault slip distribution. A large slip of 3.5 m was located near the surface and the major slip region covered an area of 20 km × 20 km. The seismic moment, calculated assuming a rigidity of 2.7 × 10^10 N/m^2, was 3.70 × 10^19 Nm, equivalent to Mw = 7.0. This is slightly larger than the moments from the moment tensor solutions (Mw 6.9). Large secondary tsunami peaks arrived approximately an hour after clear initial peaks were recorded by the offshore pressure gauges and the Sendai and Ofunato tide gauges. Our tsunami propagation model suggests that the large secondary tsunami signals were from tsunami waves reflected off the Fukushima coast. A rather large tsunami amplitude of 75 cm at Kuji, about 300 km north of the source, was comparable to those recorded at stations located much closer to the epicenter, such as Soma and Onahama. Tsunami simulations and ray tracing for both real and artificial bathymetry indicate that a significant portion of the tsunami wave was refracted to the coast located around Kuji and Miyako due to bathymetry effects.
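    The moment bookkeeping quoted above (rigidity of 2.7 × 10^10 N/m^2, seismic moment of 3.70 × 10^19 Nm, Mw 7.0) can be reproduced with a short sketch: sum rigidity × area × slip over subfaults and convert to moment magnitude with Mw = (2/3)(log10 M0 - 9.1). The uniform 3.5 m of slip over a 20 km × 20 km grid below is an illustrative stand-in for the inverted slip model, not the actual distribution.

```python
# Sketch of the moment bookkeeping described above: seismic moment from a
# subfault slip distribution, M0 = rigidity * sum(area_i * slip_i), and moment
# magnitude Mw = (2/3) * (log10(M0) - 9.1). The uniform-slip grid below is an
# illustrative stand-in for an inverted slip model.
import numpy as np

RIGIDITY = 2.7e10  # N/m^2, as assumed in the study above

def seismic_moment(slip, subfault_areas, rigidity=RIGIDITY):
    """M0 in N m from per-subfault slip (m) and areas (m^2)."""
    return rigidity * np.sum(np.asarray(slip) * np.asarray(subfault_areas))

def moment_magnitude(m0):
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

# Crude check against the abstract: ~3.5 m of slip over roughly 20 km x 20 km
areas = np.full(16, (5.0e3) ** 2)   # 16 subfaults of 5 km x 5 km
slip = np.full(16, 3.5)             # metres
m0 = seismic_moment(slip, areas)
print(f"M0 = {m0:.2e} N m, Mw = {moment_magnitude(m0):.2f}")  # ~3.8e19 N m, Mw ~7.0
```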

  7. Imaging the distribution of transient viscosity after the 2016 Mw 7.1 Kumamoto earthquake

    Science.gov (United States)

    Moore, James D. P.; Yu, Hang; Tang, Chi-Hsien; Wang, Teng; Barbot, Sylvain; Peng, Dongju; Masuti, Sagar; Dauwels, Justin; Hsu, Ya-Ju; Lambert, Valère; Nanjundiah, Priyamvada; Wei, Shengji; Lindsey, Eric; Feng, Lujia; Shibazaki, Bunichiro

    2017-04-01

    The deformation of mantle and crustal rocks in response to stress plays a crucial role in the distribution of seismic and volcanic hazards, controlling tectonic processes ranging from continental drift to earthquake triggering. However, the spatial variation of these dynamic properties is poorly understood as they are difficult to measure. We exploited the large stress perturbation incurred by the 2016 earthquake sequence in Kumamoto, Japan, to directly image localized and distributed deformation. The earthquakes illuminated distinct regions of low effective viscosity in the lower crust, notably beneath the Mount Aso and Mount Kuju volcanoes, surrounded by larger-scale variations of viscosity across the back-arc. This study demonstrates a new potential for geodesy to directly probe rock rheology in situ across many spatial and temporal scales.

  8. New Approach to the Characterization of Mmax and of the Tail of the Distribution of Earthquake Magnitudes

    CERN Document Server

    Pisarenko, V F; Sornette, A; Sornette, D

    2007-01-01

    We develop a new method for the statistical estimation of the tail of the distribution of earthquake sizes recorded in the Worldwide Harvard catalog of seismic moments converted to mW-magnitudes (1977-2004 and 1977-2006). We show that using the set of maximum magnitudes (the set of T-maxima) in windows of duration T days provides a significant improvement over existing methods, in particular (i) by minimizing the negative impact of time-clustering of foreshock/main shock/aftershock sequences in the estimation of the tail of the magnitude distribution, and (ii) by providing via a simulation method reliable estimates of the biases in the Moment estimation procedure (which turns out to be more efficient than the Maximum Likelihood estimation). Using a simulation method, we have determined the optimal window size of the T-maxima to be T = 500 days. We have estimated the following quantiles of the distribution of T-maxima of earthquake magnitudes for the whole period 1977-2006: Q_{0.16}(Mmax)=9.3, Q_{0.5}(Mmax)=9...
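    The T-maxima construction described above is straightforward to reproduce: split the catalog into consecutive windows of T = 500 days and keep the largest magnitude in each window, then examine the quantiles of those maxima. In the sketch below the catalog arrays are placeholders, and the quantile call merely mirrors the kind of Q_{0.16}, Q_{0.5} values quoted, not the paper's full estimation procedure.

```python
# Sketch of the T-maxima construction described above: split the catalog into
# consecutive windows of T days and keep the largest magnitude in each window.
# 'times' (days) and 'mags' are placeholder arrays standing in for a real catalog.
import numpy as np

def t_maxima(times, mags, t_window=500.0):
    """Maximum magnitude in each consecutive window of t_window days."""
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    bins = np.floor((times - times.min()) / t_window).astype(int)
    maxima = [mags[bins == b].max() for b in np.unique(bins)]
    return np.array(maxima)

# Empirical quantiles of the T-maxima, analogous to the quoted Q_0.16, Q_0.5 values:
# maxima = t_maxima(times, mags)
# print(np.quantile(maxima, [0.16, 0.5, 0.84]))
```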

  9. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    Science.gov (United States)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    The earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are quite established probability models for this recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for some alternative sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To contemplate the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data events more closely compared to the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
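    A minimal sketch of the three-parameter exponentiated (generalized) exponential model mentioned above, assuming the cumulative form F(t) = (1 - exp(-λ(t - γ)))^α for t > γ: the conditional probability of an event within a forecast window, given the elapsed time, follows directly from the CDF. The parameter values below are illustrative and are not the estimates obtained in the paper.

```python
# Sketch of the three-parameter exponentiated (generalized) exponential model:
# F(t) = (1 - exp(-lam * (t - loc)))**shape for t > loc. The conditional
# probability of an event within 'window' years, given 'elapsed' years without
# one, follows directly from the CDF. Parameter values are illustrative only.
import numpy as np

def gexp_cdf(t, shape, lam, loc=0.0):
    t = np.asarray(t, dtype=float)
    z = np.clip(t - loc, 0.0, None)
    return (1.0 - np.exp(-lam * z)) ** shape

def conditional_probability(elapsed, window, shape, lam, loc=0.0):
    f_now = gexp_cdf(elapsed, shape, lam, loc)
    f_later = gexp_cdf(elapsed + window, shape, lam, loc)
    return (f_later - f_now) / (1.0 - f_now)

# e.g. illustrative parameters for recurrence intervals measured in years:
print(conditional_probability(elapsed=17.0, window=10.0, shape=1.8, lam=0.12, loc=0.0))
```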

  10. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one to several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historical data.
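    A sketch of the joint two-parameter estimation mentioned above, assuming the tapered Pareto density f(x) = (β/x + 1/xc)(xt/x)^β exp((xt - x)/xc) for x ≥ xt and a numerical maximum-likelihood fit. The synthetic pure-Pareto sample is an assumption chosen to illustrate the paper's point that the corner parameter is poorly constrained by limited catalogs.

```python
# Sketch of joint maximum-likelihood estimation of the tapered Pareto
# distribution, f(x) = (beta/x + 1/xc) * (xt/x)**beta * exp((xt - x)/xc), x >= xt,
# as used for sizes of tsunamis, floods and earthquakes above. 'sizes' is a
# placeholder for observed event sizes above the threshold xt.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, sizes, xt):
    beta, xc = params
    if beta <= 0 or xc <= 0:
        return np.inf
    term = np.log(beta / sizes + 1.0 / xc)
    return -np.sum(term + beta * np.log(xt / sizes) + (xt - sizes) / xc)

def fit_tapered_pareto(sizes, xt):
    sizes = np.asarray(sizes, dtype=float)
    res = minimize(neg_loglik, x0=[1.0, 10.0 * sizes.max()],
                   args=(sizes, xt), method="Nelder-Mead")
    beta_hat, xc_hat = res.x
    return beta_hat, xc_hat

# Example: synthetic pure-Pareto sample (beta = 0.8); the corner estimate is
# expected to be poorly constrained, illustrating the point made above.
rng = np.random.default_rng(1)
xt = 1.0
sample = xt * (1.0 - rng.random(500)) ** (-1.0 / 0.8)
print(fit_tapered_pareto(sample, xt))
```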

  11. Fault roughness and strength heterogeneity control earthquake size and stress drop

    KAUST Repository

    Zielke, Olaf

    2017-01-13

    An earthquake's stress drop is related to the frictional breakdown during sliding and constitutes a fundamental quantity of the rupture process. High-speed laboratory friction experiments that emulate the rupture process imply stress drop values that greatly exceed those commonly reported for natural earthquakes. We hypothesize that this stress drop discrepancy is due to fault-surface roughness and strength heterogeneity: an earthquake's moment release and its recurrence probability depend not only on stress drop and rupture dimension but also on the geometric roughness of the ruptured fault and the location of failing strength asperities along it. Using large-scale numerical simulations for earthquake ruptures under varying roughness and strength conditions, we verify our hypothesis, showing that smoother faults may generate larger earthquakes than rougher faults under identical tectonic loading conditions. We further discuss the potential impact of fault roughness on earthquake recurrence probability. This finding provides important information, also for seismic hazard analysis.

  12. Evaluation of droplet size distributions using univariate and multivariate approaches.

    Science.gov (United States)

    Gaunø, Mette Høg; Larsen, Crilles Casper; Vilhelmsen, Thomas; Møller-Sonnergaard, Jørn; Wittendorff, Jørgen; Rantanen, Jukka

    2013-01-01

    Pharmaceutically relevant material characteristics are often analyzed based on univariate descriptors instead of utilizing the whole information available in the full distribution. One example is the droplet size distribution, which is often described by the median droplet size and the width of the distribution. The current study aimed to compare univariate and multivariate approaches in evaluating droplet size distributions. As a model system, the atomization of a coating solution from a two-fluid nozzle was investigated. The effect of three process parameters (concentration of ethyl cellulose in ethanol, atomizing air pressure, and flow rate of coating solution) on the droplet size and droplet size distribution was investigated using a full mixed factorial design. The droplet size produced by a two-fluid nozzle was measured by laser diffraction and reported as a volume-based size distribution. Investigation of loading and score plots from principal component analysis (PCA) revealed additional information on the droplet size distributions, and it was possible to identify univariate statistics (volume median droplet size) that were similar despite originating from varying droplet size distributions. The multivariate data analysis was proven to be an efficient tool for evaluating the full information contained in a distribution.
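    A minimal sketch of the univariate-versus-multivariate comparison described above: compute the volume median diameter (Dv50) of each measured distribution and, separately, project the full volume-based distributions onto their first principal components. The data matrix is a placeholder and the normalization choice is an assumption; the point is that distributions sharing the same Dv50 can still separate clearly in PCA score space.

```python
# Sketch of the univariate-vs-multivariate comparison described above: compute
# the volume median diameter (Dv50) per measurement and, separately, PCA scores
# of the full volume-based size distributions. 'distributions' is a placeholder
# matrix (rows = measurements, columns = size classes), 'sizes' the class centres.
import numpy as np
from sklearn.decomposition import PCA

def dv50(sizes, volume_fractions):
    """Volume median diameter from a volume-based size distribution."""
    cdf = np.cumsum(volume_fractions) / np.sum(volume_fractions)
    return np.interp(0.5, cdf, sizes)

def pca_scores(distributions, n_components=2):
    """Scores of each full distribution on the first principal components."""
    X = distributions / distributions.sum(axis=1, keepdims=True)  # normalize rows
    model = PCA(n_components=n_components)
    return model.fit_transform(X), model.explained_variance_ratio_

# Two distributions can share the same Dv50 while separating clearly in PCA space.
```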

  13. Unimodal tree size distributions possibly result from relatively strong conservatism in intermediate size classes.

    Directory of Open Access Journals (Sweden)

    Yue Bin

    Tree size distributions have long been of interest to ecologists and foresters because they reflect fundamental demographic processes. Previous studies have assumed that size distributions are often associated with population trends or with the degree of shade tolerance. We tested these associations for 31 tree species in a 20 ha plot in a Dinghushan south subtropical forest in China. These species varied widely in growth form and shade-tolerance. We used 2005 and 2010 census data from that plot. We found that 23 species had reversed J shaped size distributions, and eight species had unimodal size distributions in 2005. On average, modal species had lower recruitment rates than reversed J species, while showing no significant difference in mortality rates, per capita population growth rates or shade-tolerance. We compared the observed size distributions with the equilibrium distributions projected from observed size-dependent growth and mortality. We found that observed distributions generally had the same shape as predicted equilibrium distributions in both unimodal and reversed J species, but there were statistically significant, important quantitative differences between observed and projected equilibrium size distributions in most species, suggesting that these populations are not at equilibrium and that this forest is changing over time. Almost all modal species had U-shaped size-dependent mortality and/or growth functions, with turning points of both mortality and growth at intermediate size classes close to the peak in the size distribution. These results show that modal size distributions do not necessarily indicate either population decline or shade-intolerance. Instead, the modal species in our study were characterized by a life history strategy of relatively strong conservatism in an intermediate size class, leading to very low growth and mortality in that size class, and thus to a peak in the size distribution at intermediate sizes.

  14. Fractal analysis of the spatial distribution of earthquakes along the Hellenic Subduction Zone

    Science.gov (United States)

    Papadakis, Giorgos; Vallianatos, Filippos; Sammonds, Peter

    2014-05-01

    The Hellenic Subduction Zone (HSZ) is the most seismically active region in Europe. Many destructive earthquakes have taken place along the HSZ in the past. The evolution of such active regions is expressed through seismicity and is characterized by complex phenomenology. Understanding the tectonic evolution process and the physical state of subducting regimes is crucial in earthquake prediction. In recent years, there has been growing interest in an approach to seismicity based on the science of complex systems (Papadakis et al., 2013; Vallianatos et al., 2012). In this study we calculate the fractal dimension of the spatial distribution of earthquakes along the HSZ and we aim to understand the significance of the obtained values for the tectonic and geodynamic evolution of this area. We use the external seismic sources provided by Papaioannou and Papazachos (2000) to create a dataset regarding the subduction zone. Following these authors, we define five seismic zones. We then construct an earthquake dataset based on the updated and extended earthquake catalogue for Greece and the adjacent areas by Makropoulos et al. (2012), covering the period 1976-2009. The fractal dimension of the spatial distribution of earthquakes is calculated for each seismic zone and for the HSZ as a unified system using the box-counting method (Turcotte, 1997; Robertson et al., 1995; Caneva and Smirnov, 2004). Moreover, the variation of the fractal dimension is demonstrated in different time windows. These spatiotemporal variations could be used as an additional index to inform us about the physical state of each seismic zone. The use of the fractal dimension as a precursor in earthquake forecasting appears to be a very interesting direction for future work. Acknowledgements: Giorgos Papadakis wishes to acknowledge the Greek State Scholarships Foundation (IKY). References: Caneva, A., Smirnov, V., 2004. Using the fractal dimension of earthquake distributions and the
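    A minimal sketch of the box-counting estimate used above: count the number of occupied boxes N(ε) covering the epicentre set for several box sizes ε and take the fractal dimension as the slope of log N(ε) versus log(1/ε). The coordinate array and box sizes are placeholders; real applications must also check the scaling range and catalog completeness.

```python
# Minimal sketch of the box-counting fractal dimension of epicentre locations:
# count occupied boxes N(eps) at several box sizes and fit the slope of
# log N(eps) versus log(1/eps). 'points' is a placeholder (n, 2) array of
# epicentre coordinates (e.g. in km).
import numpy as np

def box_counting_dimension(points, box_sizes):
    points = np.asarray(points, dtype=float)
    origin = points.min(axis=0)
    counts = []
    for eps in box_sizes:
        idx = np.floor((points - origin) / eps).astype(int)
        counts.append(len({tuple(row) for row in idx}))
    # D is the slope of log N(eps) versus log(1/eps)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Example: points along a line should give a dimension close to 1
line = np.column_stack([np.linspace(0, 100, 2000), np.linspace(0, 50, 2000)])
print(box_counting_dimension(line, box_sizes=[1, 2, 4, 8, 16]))
```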

  15. Application of the extreme value approaches to the apparent magnitude distribution of the earthquakes

    Science.gov (United States)

    Tinti, S.; Mulargia, F.

    1985-03-01

    The apparent magnitude of an earthquake y is defined as the observed magnitude value and differs from the true magnitude m because of the experimental noise n. If f(m) is the density distribution of the magnitude m, and if g(n) is the density distribution of the error n, then the density distribution of y is simply computed by convolving f and g, i.e. h(y) = f*g. If the distinction between y and m is not made, any statistical analysis based on the frequency-magnitude relation of earthquakes is bound to produce questionable results. In this paper we investigate the impact of the apparent magnitude idea on the statistical methods that study the earthquake distribution by taking into account only the largest (or extremal) earthquakes. We use two approaches: the Gumbel method based on Gumbel theory (Gumbel, 1958), and the Poisson method introduced by Epstein and Lomnitz (1966). Both methods are concerned with the asymptotic properties of the magnitude distributions. Therefore, we study and compare the asymptotic behaviour of the distributions h(y) and f(m) under suitable hypotheses on the nature of the experimental noise. We investigate in detail two distinct cases: first, the two-sided limited symmetrical noise, i.e. noise that is bound to assume values inside a limited region, and second, the normal noise, i.e. noise that is distributed according to a normal symmetric distribution. We further show that disregarding the noise generally leads to biased results and that, in the framework of the apparent magnitude, the Poisson approach preserves its usefulness, while the Gumbel method gives rise to a curious paradox.
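    The convolution h = f*g at the heart of the apparent-magnitude idea can be illustrated numerically: take a Gutenberg-Richter (exponential) density for the true magnitude and a Gaussian density for the noise and convolve them on a common grid. The b-value, threshold and noise standard deviation below are assumed values chosen only to show the smearing effect.

```python
# Sketch of the apparent-magnitude density h = f * g: numerically convolve a
# Gutenberg-Richter (exponential) true-magnitude density f with a Gaussian
# noise density g. The b-value and noise standard deviation are illustrative.
import numpy as np

dm = 0.01
m = np.arange(4.0, 9.0, dm)                 # true-magnitude grid, m >= 4
beta = 1.0 * np.log(10.0)                   # b = 1.0
f = beta * np.exp(-beta * (m - m[0]))       # GR density truncated at the threshold

n = np.arange(-1.0, 1.0 + dm, dm)           # noise grid
sigma = 0.1
g = np.exp(-0.5 * (n / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

h = np.convolve(f, g) * dm                  # density of the apparent magnitude y
y = np.arange(len(h)) * dm + (m[0] + n[0])  # grid on which h is defined
# h spreads below the m = 4 threshold and its counts above a fixed magnitude are
# inflated relative to f, illustrating the kind of bias discussed above.
```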

  16. Analysis of the spatial distribution between successive earthquakes in aftershocks series

    Directory of Open Access Journals (Sweden)

    Elisaveta Georgieva Marekova

    2014-10-01

    Full Text Available The spatial distribution of earthquakes is studied using catalogs of several recent aftershock series. The quality of the available data is examined, taking magnitude completeness into account. Based on the analysis of the catalogs, it was determined that the probability densities of the inter-event distance distribution collapse onto a single curve when the data are rescaled. The collapse of the data provides a clear illustration of aftershock-occurrence self-similarity in space.

  17. The Negative Binomial Distribution as a Renewal Model for the Recurrence of Large Earthquakes

    Science.gov (United States)

    Tejedor, Alejandro; Gómez, Javier B.; Pacheco, Amalio F.

    2015-01-01

    The negative binomial distribution is presented as the waiting time distribution of a cyclic Markov model. This cycle simulates the seismic cycle in a fault. As an example, this model, which can describe recurrences with aperiodicities between 0 and 0.5, is used to fit the Parkfield, California earthquake series on the San Andreas Fault. The forecasting performance of the model is expressed in terms of error diagrams and compared with other recurrence models from the literature.
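
    A sketch of the renewal-model idea with a method-of-moments negative binomial fit to recurrence intervals; this is not the Markov-cycle construction of the paper, and the interval values used are the commonly quoted Parkfield recurrence times, included purely for illustration.

```python
# Method-of-moments fit of a negative binomial to earthquake recurrence
# intervals (in whole years), as a sketch of the renewal-model idea.
# The interval values are the commonly quoted Parkfield recurrence times.
import numpy as np
from scipy import stats

intervals = np.array([24, 20, 21, 12, 32, 38])        # years between events (illustrative)

mu, var = intervals.mean(), intervals.var(ddof=1)
p = mu / var                                           # NB success probability
r = mu ** 2 / (var - mu)                               # NB shape parameter (requires var > mean)

nb = stats.nbinom(r, p)
print(f"r = {r:.2f}, p = {p:.2f}, mean = {nb.mean():.1f} yr, "
      f"aperiodicity = {nb.std() / nb.mean():.2f}")

# Conditional probability of an event within the next 5 years, given 20 years elapsed:
t, dt = 20, 5
cond = (nb.cdf(t + dt) - nb.cdf(t)) / nb.sf(t)
print(f"P(event within {dt} yr | {t} yr elapsed) = {cond:.2f}")
```

    With these numbers the implied aperiodicity is roughly 0.4, inside the 0 to 0.5 range that the abstract states the model can describe.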

  18. The Distribution of Bubble Sizes During Reionization

    CERN Document Server

    Lin, Yin; Furlanetto, Steven R; Sutter, P M

    2015-01-01

    A key physical quantity during reionization is the size of HII regions. Previous studies found a characteristic bubble size which increases rapidly during reionization, with apparent agreement between simulations and analytic excursion set theory. Using four different methods, we critically examine this claim. In particular, we introduce the use of the watershed algorithm -- widely used for void finding in galaxy surveys -- which we show to be an unbiased method with the lowest dispersion and best performance on Monte-Carlo realizations of a known bubble size PDF. We find that a friends-of-friends algorithm declares most of the ionized volume to be occupied by a network of volume-filling regions connected by narrow tunnels. For methods tuned to detect those volume-filling regions, previous apparent agreement between simulations and theory is spurious, and due to a failure to correctly account for the window function of measurement schemes. The discrepancy is already obvious from visual inspection. Instead, HI...

  19. How dense can one pack spheres of arbitrary size distribution?

    Science.gov (United States)

    Reis, S. D. S.; Araújo, N. A. M.; Andrade, J. S., Jr.; Herrmann, Hans J.

    2012-01-01

    We present the first systematic algorithm to estimate the maximum packing density of spheres when the grain sizes are drawn from an arbitrary size distribution. With an Apollonian filling rule, we implement our technique for disks in 2d and spheres in 3d. As expected, the densest packing is achieved with power-law size distributions. We also test the method on homogeneous and on empirical real distributions, and we propose a scheme to obtain experimentally accessible distributions of grain sizes with low porosity. Our method should be helpful in the development of ultra-strong ceramics and high-performance concrete.

  20. Distribution of Earthquake Interevent Times in Northeast India and Adjoining Regions

    Science.gov (United States)

    Pasari, Sumanta; Dikshit, Onkar

    2015-10-01

    This study analyzes earthquake interoccurrence times of northeast India and its vicinity using eleven probability distributions, namely the exponential, Frechet, gamma, generalized exponential, inverse Gaussian, Levy, lognormal, Maxwell, Pareto, Rayleigh, and Weibull distributions. Parameters of these distributions are estimated by the method of maximum likelihood, and their respective asymptotic variances as well as confidence bounds are calculated using Fisher information matrices. Three model selection criteria, namely the Chi-square criterion, the maximum likelihood criterion, and the Kolmogorov-Smirnov minimum distance criterion, are used to compare model suitability for the present earthquake catalog (Yadav et al. in Pure Appl Geophys 167:1331-1342, 2010). It is observed that the gamma, generalized exponential, and Weibull distributions provide the best fit, the exponential, Frechet, inverse Gaussian, and lognormal distributions provide an intermediate fit, and the rest, namely the Levy, Maxwell, Pareto, and Rayleigh distributions, fit the present data poorly. The conditional probabilities for a future earthquake and related conditional probability curves are presented towards the end of this article.
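
    A minimal sketch of the model-comparison step, fitting several of the candidate distributions by maximum likelihood and ranking them with the Kolmogorov-Smirnov distance; the interevent times below are synthetic, not the northeast India catalogue.

```python
# Fit candidate distributions to interevent times by maximum likelihood and
# compare them with the Kolmogorov-Smirnov distance and log-likelihood.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = rng.gamma(shape=1.4, scale=180.0, size=80)    # synthetic interevent times (days)

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "inverse Gaussian": stats.invgauss,
}

for name, dist in candidates.items():
    params = dist.fit(times, floc=0.0)                # MLE with location fixed at zero
    ks = stats.kstest(times, dist.cdf, args=params)
    loglik = np.sum(dist.logpdf(times, *params))
    print(f"{name:18s}  K-S D = {ks.statistic:.3f}  log-likelihood = {loglik:.1f}")
```

    Smaller K-S distances and larger log-likelihoods indicate better-fitting models, which is the kind of ranking the abstract reports for its eleven candidates.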

  1. Predicting Posttraumatic Stress Symptom Prevalence and Local Distribution after an Earthquake with Scarce Data.

    Science.gov (United States)

    Dussaillant, Francisca; Apablaza, Mauricio

    2017-08-01

    After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict where the populations most in need are located. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake and to test whether there are algorithms that require few input data and are still reasonably predictive. A rich database of PTS symptoms, reported after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R2 for the most liberal specifications (in terms of the number of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When only peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form were included, predictive capacity was still good (adjusted R2 from 0.59 to 0.67). Information about local poverty, household damage, and PGA can be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake. This can help improve the assignment of mental health personnel to the affected localities. Dussaillant F, Apablaza M. Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.

  2. Pareto tails and lognormal body of US cities size distribution

    Science.gov (United States)

    Luckstead, Jeff; Devadoss, Stephen

    2017-01-01

    We consider a distribution, which consists of a lower tail Pareto, a lognormal body, and an upper tail Pareto, to estimate the size distribution of all US cities. This distribution fits the data more accurately than a distribution comprising only a lognormal body and an upper tail Pareto.

  3. Changes of firm size distribution: The case of Korea

    Science.gov (United States)

    Kang, Sang Hoon; Jiang, Zhuhua; Cheong, Chongcheul; Yoon, Seong-Min

    2011-01-01

    In this paper, the distribution and inequality of firm sizes are evaluated for the Korean firms listed on the stock markets. Using the amount of sales, total assets, capital, and the number of employees, respectively, as proxies for firm size, we find that the upper tail of the Korean firm size distribution can be described by power-law distributions rather than lognormal distributions. Then, we estimate the Zipf parameters of the firm sizes and assess the changes in the magnitude of the exponents. The results show that the calculated Zipf exponents increased prior to the financial crisis, but decreased after the crisis. This pattern implies that the degree of inequality in Korean firm sizes had severely deepened prior to the crisis, but lessened after it. Overall, the distribution of Korean firm sizes changes over time, and Zipf's law is not universal but does hold as a special case.

  4. Heterogeneity and anomalous critical indices in the aftershocks distribution of the L'Aquila earthquake

    CERN Document Server

    Innocenti, D; Poccia, N; Ricci, A; Caputo, M; Bianconi, A

    2009-01-01

    The data analysis of aftershock events of the L'Aquila earthquake in the Apennines following the main Mw 6.3 event of April 6, 2009 has been carried out with standard statistical geophysical tools. The results show the heterogeneity of seismic activity in five different geographical sub-regions, indicated by anomalous critical indices of power-law distributions: the exponents of the Omori law, the b-values of the Gutenberg-Richter magnitude-frequency distribution, and the distribution of waiting times. A heterogeneous distribution of dynamic stress and a different morphology in the five sub-regions have been found, and two anomalous sub-regions have been identified.
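
    A sketch of the two standard estimates this kind of analysis relies on: Aki's maximum-likelihood b-value for the Gutenberg-Richter distribution and a least-squares fit of the modified Omori law. All input numbers below are synthetic and assumed for illustration.

```python
# Aki's maximum-likelihood b-value and a least-squares fit of the
# modified Omori law n(t) = K / (t + c)**p, on synthetic inputs.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# --- Gutenberg-Richter b-value (Aki's estimator) ---
mc = 2.0                                                    # completeness magnitude (assumed)
mags = mc + rng.exponential(1.0 / np.log(10.0), size=500)   # synthetic sample with true b = 1
b = np.log10(np.e) / (mags.mean() - mc)                     # add a half-bin correction (e.g. 0.05) for 0.1-binned catalogues
print(f"b-value (Aki) = {b:.2f}")

# --- modified Omori law ---
def omori(t, K, c, p):
    return K / (t + c) ** p

t_days = np.arange(1.0, 61.0)                               # days after the mainshock
rate = omori(t_days, 250.0, 0.5, 1.1) * rng.normal(1.0, 0.1, t_days.size)  # synthetic daily counts
(K, c, p), _ = curve_fit(omori, t_days, rate, p0=(100.0, 1.0, 1.0))
print(f"Omori fit: K = {K:.0f}, c = {c:.2f}, p = {p:.2f}")
```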

  5. The distribution of bubble sizes during reionization

    Science.gov (United States)

    Lin, Yin; Oh, S. Peng; Furlanetto, Steven R.; Sutter, P. M.

    2016-09-01

    A key physical quantity during reionization is the size of H II regions. Previous studies found a characteristic bubble size which increases rapidly during reionization, with apparent agreement between simulations and analytic excursion set theory. Using four different methods, we critically examine this claim. In particular, we introduce the use of the watershed algorithm - widely used for void finding in galaxy surveys - which we show to be an unbiased method with the lowest dispersion and best performance on Monte Carlo realizations of a known bubble size probability density function (PDF). We find that a friends-of-friends algorithm declares most of the ionized volume to be occupied by a network of volume-filling regions connected by narrow tunnels. For methods tuned to detect the volume-filling regions, previous apparent agreement between simulations and theory is spurious, and due to a failure to correctly account for the window function of measurement schemes. The discrepancy is already obvious from visual inspection. Instead, H II regions in simulations are significantly larger (by factors of 10-1000 in volume) than analytic predictions. The size PDF is narrower, and evolves more slowly with time, than predicted. It becomes more sharply peaked as reionization progresses. These effects are likely caused by bubble mergers, which are inadequately modelled by analytic theory. Our results have important consequences for high-redshift 21 cm observations, the mean free path of ionizing photons, and the visibility of Lyα emitters, and point to a fundamental failure in our understanding of the characteristic scales of the reionization process.

  6. An evaluation of earthquake hazard parameters in the Iranian Plateau based on the Gumbel III distribution

    Science.gov (United States)

    Mohammadi, Hiwa; Bayrak, Yusuf

    2016-04-01

    Gumbel's third asymptotic distribution (GIII) of the extreme value method is employed to evaluate the earthquake hazard parameters in the Iranian Plateau. This research quantifies spatial mapping of earthquake hazard parameters such as the annual and 100-year modes, together with their 90% probability of not being exceeded (NBE), in the Iranian Plateau. We used a homogeneous and complete earthquake catalogue for the period 1900-2013 with magnitude Mw ≥ 4.0, and the Iranian Plateau was divided into an equal-area mesh of 1° lat × 1° long. The estimated annual mode with 90% probability of NBE is expected to exceed Mw 6.0 in the eastern part of Makran, most parts of Central and East Iran, Kopeh Dagh, Alborz, Azerbaijan, and SE Zagros. The 100-year mode with 90% probability of NBE is expected to exceed Mw 7.0 in the eastern part of Makran, Central and East Iran, Alborz, Kopeh Dagh, and Azerbaijan. The spatial distribution of the 100-year mode with 90% probability of NBE reveals that the high values of the earthquake hazard parameters are frequently associated with the main tectonic regimes of the studied area. There appears to be a close connection between the seismicity and the tectonics of the region.
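
    A sketch of the extreme-value step, assuming a generalized extreme value (GEV) fit to annual maximum magnitudes (the Gumbel III case corresponds to the bounded, Weibull-type tail) and reading off the magnitude not expected to be exceeded, with 90% probability, over 1 and 100 years; the annual maxima are synthetic and the mapping to "mode with 90% NBE" is a simplification of the paper's procedure.

```python
# Fit a GEV model to annual maximum magnitudes and compute the magnitude
# not expected to be exceeded, with 90% probability, in T years.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max = 4.0 + rng.weibull(2.0, size=100) * 1.5     # synthetic annual maximum magnitudes

shape, loc, scale = stats.genextreme.fit(annual_max)
gev = stats.genextreme(shape, loc, scale)

for T in (1, 100):
    # magnitude m with P(annual maximum <= m)**T = 0.90
    m_90 = gev.ppf(0.90 ** (1.0 / T))
    print(f"{T:3d}-year value, 90% probability of not being exceeded: Mw {m_90:.1f}")
```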

  7. The size distribution of inhabited planets

    Science.gov (United States)

    Simpson, Fergus

    2016-02-01

    Earth-like planets are expected to provide the greatest opportunity for the detection of life beyond the Solar system. However, our planet cannot be considered a fair sample, especially if intelligent life exists elsewhere. Just as a person's country of origin is a biased sample among countries, so too their planet of origin may be a biased sample among planets. The magnitude of this effect can be substantial: over 98 per cent of the world's population live in a country larger than the median. In the context of a simple model where the mean population density is invariant to planet size, we infer that a given inhabited planet (such as our nearest neighbour) has a radius r planets hosting advanced life, but also for those which harbour primitive life forms. Further, inferences may be drawn for any variable which influences population size. For example, since population density is widely observed to decline with increasing body mass, we conclude that most intelligent species are expected to exceed 300 kg.

  8. Estimation of Nanoparticle Size Distributions by Image Analysis

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael; Hansen, Mikkel Fougt

    2000-01-01

    Knowledge of the nanoparticle size distribution is important for the interpretation of experimental results in many studies of nanoparticle properties. An automated method is needed for accurate and robust estimation of particle size distribution from nanoparticle images with thousands of particl...

  9. Knife mill operating factors effect on switchgrass particle size distributions.

    Science.gov (United States)

    Bitra, Venkata S P; Womac, Alvin R; Yang, Yuechuan T; Igathinathane, C; Miu, Petre I; Chevanan, Nehru; Sokhansanj, Shahab

    2009-11-01

    Biomass particle size impacts handling, storage, conversion, and dust control systems. Switchgrass (Panicum virgatum L.) particle size distributions created by a knife mill were determined for integral classifying screen sizes from 12.7 to 50.8 mm, operating speeds from 250 to 500 rpm, and mass input rates from 2 to 11 kg/min. Particle distributions were classified with standardized sieves for forage analysis that included horizontal sieving motion with machined-aluminum sieves of thickness proportional to sieve opening dimensions. Then, a wide range of analytical descriptors was examined to mathematically represent the range of particle sizes in the distributions. Correlation coefficients of geometric mean length with knife mill screen size, feed rate, and speed were 0.872, 0.349, and 0.037, respectively. Hence, knife mill screen size largely determined particle size of switchgrass chop. Feed rate had an unexpected influence on particle size, though to a lesser degree than screen size. The Rosin-Rammler function fit the chopped switchgrass size distribution data with an R(2)>0.982. Mass relative span was greater than 1, which indicated a wide distribution of particle sizes. The uniformity coefficient was more than 4.0, which indicated a large assortment of particles and also represented a well-graded particle size distribution. Knife mill chopping of switchgrass produced 'strongly fine skewed mesokurtic' particles with 12.7-25.4 mm screens and 'fine skewed mesokurtic' particles with the 50.8 mm screen. Results of this extensive analysis of particle sizes can be applied to the selection of knife mill operating parameters to produce a particular size of switchgrass chop, and will serve as a guide for relations among the various analytic descriptors of biomass particle distributions.
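
    A brief sketch of fitting the Rosin-Rammler (Weibull-form) cumulative passing curve P(d) = 1 - exp(-(d/d63)^n) to sieve data; the sieve sizes and passing fractions below are made up for illustration, not the switchgrass measurements.

```python
# Least-squares fit of the Rosin-Rammler cumulative passing curve to sieve data.
import numpy as np
from scipy.optimize import curve_fit

sieve_mm = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 25.4])      # sieve opening (mm), illustrative
passing = np.array([0.08, 0.18, 0.35, 0.58, 0.80, 0.95, 0.99])  # cumulative mass fraction passing

def rosin_rammler(d, d63, n):
    """Cumulative fraction passing size d; d63 is the 63.2% passing size, n the spread."""
    return 1.0 - np.exp(-(d / d63) ** n)

(d63, n), _ = curve_fit(rosin_rammler, sieve_mm, passing, p0=(5.0, 1.0))
resid = passing - rosin_rammler(sieve_mm, d63, n)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((passing - passing.mean()) ** 2)
print(f"d63 = {d63:.1f} mm, n = {n:.2f}, R^2 = {r2:.3f}")
```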

  10. Droplet size distribution in homogeneous isotropic turbulence

    Science.gov (United States)

    Perlekar, Prasad; Biferale, Luca; Sbragaglia, Mauro; Srivastava, Sudhir; Toschi, Federico

    2012-06-01

    We study the physics of droplet breakup in a statistically stationary homogeneous and isotropic turbulent flow by means of high resolution numerical investigations based on the multicomponent lattice Boltzmann method. We verified the validity of the criterion proposed by Hinze [AIChE J. 1, 289 (1955)] for droplet breakup and we measured the full probability distribution function of droplets radii at different Reynolds numbers and for different volume fractions. By means of a Lagrangian tracking we could follow individual droplets along their trajectories, define a local Weber number based on the velocity gradients, and study its cross-correlation with droplet deformation.

  11. The Magnitude Distribution of Earthquakes Near Southern California Faults

    Science.gov (United States)

    2011-12-16

    We do not consider time dependence in this study, but focus instead on the magnitude distribution for this fault (Bakun and Lindh, 1985; Jackson and Kagan, 2006). Cited references include Bakun, W. H., and A. G. Lindh (1985), The Parkfield, California, earthquake prediction experiment, Science, 229(4714), 619–624.

  12. Induced Seismicity: What is the Size of the Largest Expected Earthquake?

    Science.gov (United States)

    Zoeller, G.; Holschneider, M.

    2014-12-01

    The injection of fluids is a well-known cause of triggered earthquake sequences. The growing number of projects related to enhanced geothermal systems, fracking and others has led to the question of which maximum earthquake magnitude can be expected as a consequence of fluid injection. This question is addressed from the perspective of statistical analysis. Using basic empirical laws of earthquake statistics, we estimate the magnitude MT of the maximum expected earthquake in a pre-defined future time window T. A case study of the fluid injection site at Paradox Valley, Colorado, USA, demonstrates that the magnitude m=4.3 of the largest observed earthquake, on 27 May 2000, lies well within the expectation from past seismicity without adjusting any parameters. Conversely, for a given maximum tolerable earthquake at an injection site, we can constrain the corresponding amount of injected fluids that must not be exceeded within pre-defined confidence bounds.
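
    A sketch of the basic statistical ingredient behind such estimates: with a Poisson rate of events above a completeness magnitude and a Gutenberg-Richter b-value, the distribution of the maximum magnitude in a future window T follows in closed form. The rate, b-value and completeness magnitude below are assumed numbers, not the Paradox Valley values.

```python
# Largest expected earthquake in a future window T from Gutenberg-Richter
# statistics: P(M_max <= m in T) = exp(-lam * T * 10**(-b * (m - mc))).
import numpy as np

lam, b, mc = 120.0, 1.0, 1.0      # events/yr above mc, b-value, completeness magnitude (assumed)
T, alpha = 5.0, 0.90               # 5-year window, 90% confidence

# Magnitude m_T such that P(M_max <= m_T in T) = alpha
m_T = mc + np.log10(lam * T / -np.log(alpha)) / b
print(f"With probability {alpha:.0%}, no event above M {m_T:.1f} is expected within {T:.0f} years")
```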

  13. Powder Size and Distribution in Ultrasonic Gas Atomization

    Science.gov (United States)

    Rai, G.; Lavernia, E.; Grant, N. J.

    1985-08-01

    Ultrasonic gas atomization (USGA) produces powder sizes dependent on the ratio of the nozzle jet diameter to the distance of spread, dt/R. The powder size distribution is attributed to the spread of atomizing gas jets during travel from the nozzle exit to the metal stream. The spread diminishes at higher gas atomization pressures. In this paper, calculated powder sizes and distributions are compared with experimentally determined values.

  14. Earthquake Facts

    Science.gov (United States)

    The largest recorded earthquake in the United States was a magnitude 9.2 that struck Prince William Sound, ... we know, there is no such thing as "earthquake weather". Statistically, there is an equal distribution of ...

  15. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
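
    A sketch of the distribution-fitting step described above: a maximum-likelihood fit of the tapered Pareto survival function S(x) = (a/x)^beta exp((a - x)/xc) to amplitudes above a threshold a. The amplitudes are synthetic and the threshold and starting values are assumptions.

```python
# Maximum-likelihood fit of a tapered Pareto distribution to amplitudes above
# a threshold a (synthetic data; parameters are illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
a = 0.1                                                   # threshold amplitude (m), assumed
x = a * (1.0 + rng.pareto(0.8, size=300)) * np.exp(-rng.exponential(0.05, size=300))
x = np.clip(x, a, None)                                   # synthetic amplitudes above the threshold

def negloglik(params):
    beta, xc = params
    if beta <= 0 or xc <= 0:
        return np.inf
    # pdf(x) = (beta/x + 1/xc) * (a/x)**beta * exp((a - x)/xc)
    return -np.sum(np.log(beta / x + 1.0 / xc) + beta * np.log(a / x) + (a - x) / xc)

res = minimize(negloglik, x0=[1.0, 1.0], method="Nelder-Mead")
beta_hat, xc_hat = res.x
print(f"power-law exponent beta = {beta_hat:.2f}, corner amplitude xc = {xc_hat:.2f} m")
```

    The corner amplitude xc plays the role discussed in the abstract: short empirical catalogues tend to underestimate it because the largest amplitudes have not yet been observed.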

  18. Vapor intrusion in soils with multimodal pore-size distribution

    OpenAIRE

    Alfaro Soto Miguel; Hung Kiang Chang

    2016-01-01

    The Johnson and Ettinger [1] model and its extensions are at this time the most widely used algorithms for estimating subsurface vapor intrusion into buildings (API [2]). The functions which describe capillary pressure curves are utilized in quantitative analyses, although these are applicable for porous media with a unimodal or lognormal pore-size distribution. However, unaltered soils may have a heterogeneous pore distribution and consequently a multimodal pore-size distribution [3], which ...

  19. Study on the Cause of Complex Spatial Distribution of the Tangshan Earthquake Sequence

    Institute of Scientific and Technical Information of China (English)

    Liu Puxiong; Xiaojian

    2012-01-01

    By analyzing higher-accuracy location data of the Tangshan earthquake sequence, a clear distribution pattern of three aftershock belts in the NE, NWW, and NW directions has been obtained. The analysis reveals the three rupture planes of the strong Ms7.8, Ms7.1 and Ms6.9 events in the sequence. It indicates that the complex pattern is closely related to the earthquake source and to the NE-, NWW- and NW-trending regional fault zones, which had been revealed by research on pre-seismicity anomalies. In summary, the source is located at the junction of the three fault zones, and the rupture planes of the three strong events located in the source can be regarded as the locked segments on the three fault zones. On these grounds, the paper explains the complexity of the source and the epicentral distribution of aftershocks.

  20. AIDA – Seismic data acquisition, processing, storage and distribution at the National Earthquake Center, INGV

    Directory of Open Access Journals (Sweden)

    Salvatore Mazza

    2012-10-01

    Full Text Available On May 4, 2012, a new system, known as the AIDA (Advanced Information and Data Acquisition) system for seismology, became operational as the primary tool to monitor, analyze, store and distribute seismograms from the Italian National Seismic Network. Only 16 days later, on May 20, 2012, northern Italy was struck by a Ml 5.9 earthquake that caused seven casualties. This was followed by numerous small to moderate earthquakes, some over Ml 5. Then, on May 29, 2012, a Ml 5.8 earthquake resulted in 17 more victims and left about 14,000 people homeless. This sequence produced more than 2,100 events over 40 days, and it was still active at the end of June 2012, with minor earthquakes occurring at a rate of about 20 events per day. The new AIDA data management system was designed and implemented, among other things, to exploit the recent major upgrade of the Italian Seismic Network (in terms of the number and quality of stations) and to overcome the limitations of the previous system.

  1. Widespread ground motion distribution caused by rupture directivity during the 2015 Gorkha, Nepal earthquake.

    Science.gov (United States)

    Koketsu, Kazuki; Miyake, Hiroe; Guo, Yujia; Kobayashi, Hiroaki; Masuda, Tetsu; Davuluri, Srinagesh; Bhattarai, Mukunda; Adhikari, Lok Bijaya; Sapkota, Soma Nath

    2016-06-23

    The ground motion and damage caused by the 2015 Gorkha, Nepal earthquake can be characterized by their widespread distributions to the east. Evidence from strong ground motions, regional acceleration duration, and teleseismic waveforms indicates that rupture directivity contributed significantly to these distributions. This phenomenon has been thought to occur only if a strike-slip or dip-slip rupture propagates to a site in the along-strike or updip direction, respectively. However, even though the earthquake was a dip-slip faulting event and its source fault strike was nearly eastward, evidence for rupture directivity is found in the eastward direction. Here, we explore the reasons for this apparent inconsistency by performing a joint source inversion of seismic and geodetic datasets, and conducting ground motion simulations. The results indicate that the earthquake occurred on the underthrusting Indian lithosphere, with a low dip angle, and that the fault rupture propagated in the along-strike direction at a velocity just slightly below the S-wave velocity. This low dip angle and fast rupture velocity produced rupture directivity in the along-strike direction, which caused widespread ground motion distribution and significant damage extending far eastwards, from central Nepal to Mount Everest.

  2. Probabilistic Assessment of Earthquake Hazards: a Comparison among Gamma, Weibull, Generalized Exponential and Lognormal Distributions

    Science.gov (United States)

    Pasari, S.

    2013-05-01

    Earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. The Weibull, gamma, generalized exponential and lognormal distributions are well-established probability models for recurrence interval estimation, and they share many important characteristics. In this paper, we compare the effectiveness of these models in recurrence interval estimation and, eventually, in hazard analysis. To assess the appropriateness of these models, we use a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20°-32° N and 87°-100° E). The model parameters are estimated using the modified maximum likelihood estimator (MMLE). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
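
    A conditional-probability sketch of the hazard quantity referred to above, using the exponentiated (generalized) exponential CDF F(t) = (1 - exp(-lam*t))^alpha; the parameter values are assumed for illustration, not the fitted North-East Himalayan values.

```python
# Conditional probability of the next large event within dt years, given t years
# elapsed, for an assumed generalized exponential recurrence model:
#   P(T <= t + dt | T > t) = (F(t + dt) - F(t)) / (1 - F(t)).
import numpy as np

alpha, lam = 2.5, 0.15             # shape and rate (1/yr), assumed for illustration

def F(t):
    """Exponentiated-exponential CDF of the recurrence interval."""
    return (1.0 - np.exp(-lam * t)) ** alpha

elapsed = 17.0                      # years since the last M >= 7.0 event
for dt in (5.0, 10.0, 20.0):
    cond = (F(elapsed + dt) - F(elapsed)) / (1.0 - F(elapsed))
    print(f"P(event within {dt:4.0f} yr | {elapsed:.0f} yr elapsed) = {cond:.2f}")
```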

  3. The Collisional Divot in the Kuiper belt Size Distribution

    CERN Document Server

    Fraser, Wesley C

    2009-01-01

    This paper presents the results of collisional evolution calculations for the Kuiper belt starting from an initial size distribution similar to that produced by accretion simulations of that region - a steep power-law large object size distribution that breaks to a shallower slope at r ~1-2 km, with collisional equilibrium achieved for objects r ~0.5 km. We find that the break from the steep large object power-law causes a divot, or depletion of objects at r ~10-20 km, which in turn greatly reduces the disruption rate of objects with r > 25-50 km, preserving the steep power-law behavior for objects at this size. Our calculations demonstrate that the roll-over observed in the Kuiper belt size distribution is naturally explained as an edge of a divot in the size distribution; the radius at which the size distribution transitions away from the power-law, and the shape of the divot from our simulations, are consistent with the size of the observed roll-over and the size distribution for smaller bodies. Both the kink r...

  4. Evaluation of droplet size distributions using univariate and multivariate approaches

    DEFF Research Database (Denmark)

    Gauno, M.H.; Larsen, C.C.; Vilhelmsen, T.

    2013-01-01

    of the distribution. The current study aimed to compare univariate and multivariate approaches in evaluating droplet size distributions. As a model system, the atomization of a coating solution from a two-fluid nozzle was investigated. The effect of three process parameters (concentration of ethyl cellulose... Investigation of loading and score plots from principal component analysis (PCA) revealed additional information on the droplet size distributions, and it was possible to identify univariate statistics (volume median droplet size) which were similar despite originating from varying droplet size distributions... The multivariate data analysis was proven to be an efficient tool for evaluating the full information contained in a distribution. © 2013 Informa Healthcare USA, Inc.

  5. Complexity in Size, Recurrence and Source of Historical Earthquakes and Tsunamis in Central Chile

    Science.gov (United States)

    Cisternas, M.

    2013-05-01

    Central Chile has a 470-year-long written earthquake history, the longest of any part of the country. Thanks to the early and continuous Spanish settlement of this part of Chile (32°- 35° S), records document destructive earthquakes and tsunamis in 1575, 1647, 1730, 1822, 1906 and 1985. This sequence has promoted the idea that central Chile's large subduction inter-plate earthquakes recur at regular intervals of about 80 years. The last of these earthquakes, in 1985, was even forecast as filling a seismic gap on the thrust boundary between the subducting Nazca Plate and the overriding South America Plate. Following this logic, the next large earthquake in metropolitan Chile will not occur until late in the 21st century. However, here I challenge this conclusion by reporting recently discovered historical evidence in Spain, Japan, Peru, and Chile. This new evidence augments the historical catalog in central Chile, strongly suggests that one of these earthquakes previously assumed to occur on the inter-plate interface in fact occurred elsewhere, and forces the conclusion that another of these earthquakes (and its accompanying tsunami) dwarfed the others. These findings complicate the task of assessing the hazard of future earthquakes in Chile's most populated region.

  6. Inversion of spheroid particle size distribution in wider size range and aspect ratio range

    Directory of Open Access Journals (Sweden)

    Tang Hong

    2013-01-01

    Full Text Available Non-spherical particle sizing is very important in aerosol science, and it can be determined by light extinction measurement. This paper studies the effect of the size range and aspect ratio range on the inversion of the spheroid particle size distribution by the dependent mode algorithm. The T-matrix method and the geometric optics approximation method are used to calculate the extinction efficiency of spheroids over different size ranges and aspect ratio ranges, and the inversion of the spheroid particle size distribution in these different ranges is conducted. Numerical simulation indicates that a fairly reasonable representation of the spheroid particle size distribution can be obtained when the size range and aspect ratio range are suitably chosen.

  7. Pore-size-distribution of cationic polyacrylamide hydrogels. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Kremer, M.; Prausnitz, J.M.

    1992-06-01

    The pore size distribution of a AAm/MAPTAC (acrylamide copolymerized with (3-methacrylamidopropyl)trimethylammonium chloride) hydrogel was investigated using Kuga's mixed-solute-exclusion method, taking into account the wall effect. A Brownian-motion model is also used. Results show the feasibility of determining pore-size distribution of porous materials using the mixed-solute-exclusion method in conjunction with solution of the Fredholm equation; good agreement was obtained with experiment, even for bimodal pore structures. However, different pore size distributions were calculated for the two different probe-solutes (Dextran and poly(ethylene glycol/oxide)). Future work is outlined. 32 figs, 25 refs.

  8. Pore-size-distribution of cationic polyacrylamide hydrogels

    Energy Technology Data Exchange (ETDEWEB)

    Kremer, M.; Prausnitz, J.M.

    1992-06-01

    The pore size distribution of a AAm/MAPTAC (acrylamide copolymerized with (3-methacrylamidopropyl)trimethylammonium chloride) hydrogel was investigated using Kuga's mixed-solute-exclusion method, taking into account the wall effect. A Brownian-motion model is also used. Results show the feasibility of determining pore-size distribution of porous materials using the mixed-solute-exclusion method in conjunction with solution of the Fredholm equation; good agreement was obtained with experiment, even for bimodal pore structures. However, different pore size distributions were calculated for the two different probe-solutes (Dextran and poly(ethylene glycol/oxide)). Future work is outlined. 32 figs, 25 refs.

  9. Spatial distribution of landslides triggered from the 2007 Niigata Chuetsu–Oki Japan Earthquake

    Science.gov (United States)

    Collins, Brian; Kayen, Robert; Tanaka, Yasuo

    2012-01-01

    Understanding the spatial distribution of earthquake-induced landslides from specific earthquakes provides an opportunity to recognize what to expect from future events. The July 16, 2007 Mw 6.6 (MJMA 6.8) Niigata Chuetsu–Oki Japan earthquake triggered hundreds of landslides in the area surrounding the coastal city of Kashiwazaki and provides one such opportunity to evaluate the impacts of an offshore, magnitude 6+ earthquake on a steep coastal region. As part of a larger effort to document all forms of geotechnical damage from this earthquake, we performed landslide inventory mapping throughout the epicentral area and analyzed the resulting data for spatial, seismic-motion, and geologic correlations to describe the pattern of landsliding. Coupled with examination of a third-party, aerial-photo-based landslide inventory, our analyses reveal several areas of high landslide concentration that are not readily explained by either traditional epicentral and fault–plane-distance metrics or by recorded and inferred ground motions. Whereas landslide concentrations averaged less than 1 landslide per square kilometer (LS/km2) across the region, some areas reached up to 2 LS/km2 in the Nishiyama Hills to the northeast of Kashiwazaki and between 2 and 11 LS/km2 in coastal areas to the north and south of the city. Correlation with seismometer-based and monument-overturning back-calculated ground motions suggests that a minimum peak ground acceleration (PGA) of approximately 0.2 g was necessary for landsliding throughout the region, but does not explain the subregional areas of high landslide concentration. However, analysis of topographic slope and the distribution of generally weak, dip-slope geologic units does sufficiently explain why, on a sub-regional scale, high landslide concentrations occurred where they did. These include: (1) an inland region of steep, dip-slope, anticlinal sedimentary strata with associated fold belt compression and uplift of the anticline and (2

  10. Scale invariance of incident size distributions in response to sizes of their causes.

    Science.gov (United States)

    Englehardt, James D

    2002-04-01

    Incidents can be defined as low-probability, high-consequence events and lesser events of the same type. Lack of data on extremely large incidents makes it difficult to determine distributions of incident size that reflect such disasters, even though they represent the great majority of total losses. If the form of the incident size distribution can be determined, then predictive Bayesian methods can be used to assess incident risks from limited available information. Moreover, incident sizes have generally been observed to follow scale-invariant, or power-law, distributions over broad ranges. Scale invariance in the distributions of sizes of outcomes of complex dynamical systems has been explained based on mechanistic models of natural and built systems, such as models of self-organized criticality. In this article, scale invariance is shown to result also as the maximum Shannon entropy distribution of incident sizes arising as the product of arbitrary functions of cause sizes. Entropy is shown by simulation and derivation to be maximized as a result of dependence, diversity, abundance, and entropy of multiplicative cause sizes. The result represents an information-theoretic explanation of invariance, parallel to those of mechanistic models. For example, distributions of incident size resulting from 30 partially dependent causes are shown to be scale invariant over several orders of magnitude. Empirical validation of power-law distributions of incident size is reviewed, and the Pareto (power-law) distribution is validated against oil spill, hurricane, and insurance data. The applicability of the Pareto distribution, in particular, for assessment of total losses over a planning period is discussed. Results justify the use of an analytical, predictive Bayesian version of the Pareto distribution, derived previously, to assess incident risk from available data.
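
    A toy simulation in the spirit of the argument above: incident sizes are formed as products of many random cause factors, and the heaviness of the resulting upper tail is summarised with a Hill (maximum-likelihood Pareto) exponent. The cause model and all numbers are invented purely for illustration, not the paper's construction.

```python
# Incident sizes as products of random cause factors, with the upper tail
# summarised by a Hill estimate of the Pareto exponent.
import numpy as np

rng = np.random.default_rng(5)
n_incidents, n_causes = 50_000, 30
causes = rng.lognormal(mean=0.0, sigma=0.6, size=(n_incidents, n_causes))
sizes = causes.prod(axis=1)                       # incident size = product of cause sizes

def hill_exponent(x, tail_fraction=0.01):
    """Maximum-likelihood (Hill) estimate of the Pareto tail exponent from the largest values."""
    tail = np.sort(x)[-int(len(x) * tail_fraction):]
    return 1.0 / np.mean(np.log(tail / tail.min()))

print(f"Pareto tail exponent (Hill) ~ {hill_exponent(sizes):.2f}")
```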

  11. Exploring Unintended Social Side Effects of Tent Distribution Practices in Post-Earthquake Haiti

    Directory of Open Access Journals (Sweden)

    Carmen Helen Logie

    2013-09-01

    Full Text Available The January 2010 earthquake devastated Haiti's social, economic and health infrastructure, leaving 2 million persons—one-fifth of Haiti's population—homeless. Internally displaced persons relocated to camps, where human rights remain compromised due to increased poverty, reduced security, and limited access to sanitation and clean water. This article draws on findings from 3 focus groups conducted with internally displaced young women and 3 focus groups with internally displaced young men (aged 18–24) in Leogane, Haiti, to explore post-earthquake tent distribution practices. Focus group findings highlighted that community members were not engaged in developing tent distribution strategies. Practices that distributed tents to both children and parents, and linked food and tent distribution, inadvertently contributed to "chaos", vulnerability to violence and family network breakdown. Moving forward, we recommend that tent distribution strategies in disaster contexts engage with community members, separate food and tent distribution, and support agency and strategies of self-protection among displaced persons.

  12. Environmental control of natural gap size distribution in tropical forests

    Science.gov (United States)

    Goulamoussène, Youven; Bedeau, Caroline; Descroix, Laurent; Linguet, Laurent; Hérault, Bruno

    2017-01-01

    Natural disturbances are the dominant form of forest regeneration and dynamics in unmanaged tropical forests. Monitoring the size distribution of treefall gaps is important to better understand and predict the carbon budget in response to land use and other global changes. In this study, we model the size frequency distribution of natural canopy gaps with a discrete power-law distribution. We use a Bayesian framework to introduce and test, using Markov chain Monte Carlo and Kuo-Mallick algorithms, the effect of the local physical environment on gap size distribution. We apply our methodological framework to an original light detection and ranging (lidar) dataset in which natural forest gaps were delineated over 30,000 ha of unmanaged forest. We highlight strong links between gap size distribution and environment, primarily hydrological conditions and topography, with large gaps being more frequent on floodplains and in wind-exposed areas. In the future, we plan to apply our methodological framework on a larger scale using satellite data. Additionally, although gap size distribution variation is clearly under environmental control, variation in gap size distribution in time should be tested against climate variability.

  13. Periodicity in the spatial-temporal earthquake distributions for the Pacific region: observation and modeling.

    Science.gov (United States)

    Sasorova, Elena; Levin, Boris

    2014-05-01

    Over the course of the last century, cyclic increases and decreases of the Earth's seismic activity (SA) have been observed. Variations of the SA for events with M>=7.0 from 1900 to the present were studied. Two subsets of the worldwide NEIC (USGS) catalog were used: the USGS/NEIC catalog from 1973 to 2012 and the catalog of significant worldwide earthquakes (2150 B.C. - 1994 A.D.) compiled by USGS/NEIC from the NOAA agency. Preliminary standardization of magnitudes and elimination of aftershocks from the list of events were performed. The entire period of observations was subdivided into 5-year intervals. The temporal distributions of earthquake (EQ) density and released energy density were calculated separately for the Southern Hemisphere (SH), for the Northern Hemisphere (NH), and for eighteen latitudinal belts: 90°-80°N, 80°-70°N, 70°-60°N, 60°-50°N and so on (each belt spanning 10°). The periods of SA were compared for the different latitudinal belts of the Earth. The peaks and decays of the seismicity do not coincide in time for different latitudinal belts, and especially not for belts located in the NH and SH. The peaks and decays of the SA for events with M>=8 were marked in the temporal distributions of EQs for all studied latitudinal belts. The two-dimensional distributions (over latitude and time) of EQ density and released energy density show that the periods of amplification of the SA are approximately 30-35 years. Next, we check for the existence of a non-random component in the EQ occurrence between the NH and the SH. All events were ordered along the time axis according to their origin time. We treat the set of EQs in the studied catalog as a sequence of events in which each event has one of two possible outcomes (occurrence in the NH or in the SH). A nonparametric runs test was used to test the hypothesis that a nonrandom component exists in the examined sequence of
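
    A minimal sketch of the nonparametric runs (Wald-Wolfowitz) test mentioned above, applied to a hemisphere-label sequence; the sequence below is synthetic, and the normal approximation is the textbook large-sample form.

```python
# Wald-Wolfowitz runs test for randomness of a two-outcome sequence
# (0 = Northern Hemisphere, 1 = Southern Hemisphere), on synthetic labels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
labels = rng.choice([0, 1], size=400, p=[0.55, 0.45])    # synthetic NH/SH labels in time order

n0, n1 = np.sum(labels == 0), np.sum(labels == 1)
runs = 1 + np.sum(labels[1:] != labels[:-1])             # number of runs in the sequence

mean_runs = 1 + 2 * n0 * n1 / (n0 + n1)
var_runs = 2 * n0 * n1 * (2 * n0 * n1 - n0 - n1) / ((n0 + n1) ** 2 * (n0 + n1 - 1))
z = (runs - mean_runs) / np.sqrt(var_runs)
p_value = 2 * stats.norm.sf(abs(z))                      # two-sided normal approximation
print(f"runs = {runs}, expected = {mean_runs:.1f}, z = {z:.2f}, p = {p_value:.3f}")
```

    Too few runs (long stretches of one hemisphere) or too many runs (rapid alternation) relative to the expectation would indicate a nonrandom component in the NH/SH occurrence.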

  14. Effects of Data Frame Size Distribution on Wireless Lans | Aneke ...

    African Journals Online (AJOL)

    Effects of Data Frame Size Distribution on Wireless Lans. ... Nigerian Journal of Technology ... to replace cables and deploy mobile devices in the communications industry has led to very active research on the utilization of wireless networks.

  15. Size distribution measurements and chemical analysis of aerosol components

    Energy Technology Data Exchange (ETDEWEB)

    Pakkanen, T.A.

    1995-12-31

    The principal aims of this work were to improve the existing methods for size distribution measurements and to draw conclusions about atmospheric and in-stack aerosol chemistry and physics by utilizing size distributions of various aerosol components measured. A sample dissolution with dilute nitric acid in an ultrasonic bath and subsequent graphite furnace atomic absorption spectrometric analysis was found to result in low blank values and good recoveries for several elements in atmospheric fine particle size fractions below 2 µm of equivalent aerodynamic particle diameter (EAD). Furthermore, it turned out that a substantial amount of analyses associated with insoluble material could be recovered since suspensions were formed. The size distribution measurements of in-stack combustion aerosols indicated two modal size distributions for most components measured. The existence of the fine particle mode suggests that a substantial fraction of such elements with two modal size distributions may vaporize and nucleate during the combustion process. In southern Norway, size distributions of atmospheric aerosol components usually exhibited one or two fine particle modes and one or two coarse particle modes. Atmospheric relative humidity values higher than 80% resulted in significant increase of the mass median diameters of the droplet mode. Important local and/or regional sources of As, Br, I, K, Mn, Pb, Sb, Si and Zn were found to exist in southern Norway. The existence of these sources was reflected in the corresponding size distributions determined, and was utilized in the development of a source identification method based on size distribution data. On the Finnish south coast, atmospheric coarse particle nitrate was found to be formed mostly through an atmospheric reaction of nitric acid with existing coarse particle sea salt but reactions and/or adsorption of nitric acid with soil derived particles also occurred. Chloride was depleted when acidic species reacted

  16. Long‐term creep rates on the Hayward Fault: evidence for controls on the size and frequency of large earthquakes

    Science.gov (United States)

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Bilham, Roger; Ponce, David A.; Boatwright, John; Caskey, S. John

    2012-01-01

    The Hayward fault (HF) in California exhibits large (Mw 6.5–7.1) earthquakes with short recurrence times (161±65 yr), probably kept short by a 26%–78% aseismic release rate (including postseismic). Its interseismic release rate varies locally over time, as we infer from many decades of surface creep data. Earliest estimates of creep rate, primarily from infrequent surveys of offset cultural features, revealed distinct spatial variation in rates along the fault, but no detectable temporal variation. Since the 1989 Mw 6.9 Loma Prieta earthquake (LPE), monitoring on 32 alinement arrays and 5 creepmeters has greatly improved the spatial and temporal resolution of creep rate. We now identify significant temporal variations, mostly associated with local and regional earthquakes. The largest rate change was a 6‐yr cessation of creep along a 5‐km length near the south end of the HF, attributed to a regional stress drop from the LPE, ending in 1996 with a 2‐cm creep event. North of there near Union City starting in 1991, rates apparently increased by 25% above pre‐LPE levels on a 16‐km‐long reach of the fault. Near Oakland in 2007 an Mw 4.2 earthquake initiated a 1–2 cm creep event extending 10–15 km along the fault. Using new better‐constrained long‐term creep rates, we updated earlier estimates of depth to locking along the HF. The locking depths outline a single, ∼50‐km‐long locked or retarded patch with the potential for an Mw∼6.8 event equaling the 1868 HF earthquake. We propose that this inferred patch regulates the size and frequency of large earthquakes on HF.

  17. A statistical approach to estimate the 3D size distribution of spheres from 2D size distributions

    Science.gov (United States)

    Kong, M.; Bhattacharya, R.N.; James, C.; Basu, A.

    2005-01-01

    Size distribution of rigidly embedded spheres in a groundmass is usually determined from measurements of the radii of the two-dimensional (2D) circular cross sections of the spheres in random flat planes of a sample, such as in thin sections or polished slabs. Several methods have been devised to find a simple factor to convert the mean of such 2D size distributions to the actual 3D mean size of the spheres without a consensus. We derive an entirely theoretical solution based on well-established probability laws and not constrained by limitations of absolute size, which indicates that the ratio of the means of measured 2D and estimated 3D grain size distributions should be π/4 (= 0.785). Actual 2D size distribution of the radii of submicron sized, pure Fe0 globules in lunar agglutinitic glass, determined from backscattered electron images, is tested to fit the gamma size distribution model better than the log-normal model. Numerical analysis of 2D size distributions of Fe0 globules in 9 lunar soils shows that the average mean of the 2D/3D ratio is 0.84, which is very close to the theoretical value. These results converge with the ratio 0.8 that Hughes (1978) determined for millimeter-sized chondrules from empirical measurements. We recommend that a factor of 1.273 (reciprocal of 0.785) be used to convert the determined 2D mean size (radius or diameter) of a population of spheres to estimate their actual 3D size. © 2005 Geological Society of America.
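
    A worked example of the conversion recommended above; the measured 2D mean value is an assumed, illustrative number.

```python
# The 2-D (cross-section) mean radius underestimates the 3-D mean radius by a
# factor of pi/4, so the 3-D mean is recovered by multiplying by 4/pi ~ 1.273.
import numpy as np

mean_radius_2d = 0.42                               # measured 2-D mean radius, e.g. in micrometres (illustrative)
mean_radius_3d = mean_radius_2d * 4.0 / np.pi       # estimated true 3-D mean radius
print(f"2-D mean = {mean_radius_2d:.3f} um  ->  estimated 3-D mean = {mean_radius_3d:.3f} um")
```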

  18. Size Segregation in Rapid Flows of Inelastic Particles with Continuous Size Distributions

    Institute of Scientific and Technical Information of China (English)

    LI Rui; ZHANG Duan-Ming; LI Zhi-Hao

    2012-01-01

    Two-dimensional numerical simulations are employed to gain insight into the segregation behavior of granular mixtures with a power-law particle size distribution in the presence of a granular temperature gradient. It is found that particles of all sizes move toward regions of low granular temperature. Species segregation is also observed. Large particles demonstrate a higher affinity for the low-temperature regions and accumulate in these cool regions to a greater extent than their smaller counterparts. Furthermore, the local particle size distribution maintains the same form as the overall (including all particles) size distribution.

  19. Distribution of Earthquakes as Described by the Generalized Logistic Equation and the Gutenberg-Richter Magnitude-Frequency Formula

    CERN Document Server

    Maslov, Lev A

    2012-01-01

    In this work we developed a new differential equation to study the statistics of earthquake distributions. We call this equation the generalized logistic equation. We used the solution of this equation to analyze earthquake data from the following regions: the Central Atlantic, the Canary Islands, the Magellan Mountains, and the Sea of Japan. Our solution showed excellent correspondence with the observed cumulative distribution of earthquakes for all magnitudes. Historically, the Gutenberg-Richter frequency-magnitude formula has been used to study the distribution of earthquakes. However, the Gutenberg-Richter formula is only accurate for large magnitudes. As shown in our analysis, the Gutenberg-Richter formula is a special case of the solution to our generalized logistic equation for large magnitudes.

  20. Vapor intrusion in soils with multimodal pore-size distribution

    Directory of Open Access Journals (Sweden)

    Alfaro Soto Miguel

    2016-01-01

    Full Text Available The Johnson and Ettinger [1] model and its extensions are at this time the most widely used algorithms for estimating subsurface vapor intrusion into buildings (API [2]). The functions which describe capillary pressure curves are utilized in quantitative analyses, although these are applicable for porous media with a unimodal or lognormal pore-size distribution. However, unaltered soils may have a heterogeneous pore distribution and consequently a multimodal pore-size distribution [3], which may be the result of specific granulometry or the formation of secondary porosity related to genetic processes. The present paper was designed to present the application of the Vapor Intrusion Model (SVI_Model) to unsaturated soils with multimodal pore-size distribution. Simulations with data from the literature show that the use of a multimodal model in soils with such pore distribution characteristics could provide more reliable results for indoor air concentration than conventional models.

  1. A multivariate rank test for comparing mass size distributions

    KAUST Repository

    Lombard, F.

    2012-04-01

    Particle size analyses of a raw material are commonplace in the mineral processing industry. Knowledge of particle size distributions is crucial in planning milling operations to enable an optimum degree of liberation of valuable mineral phases, to minimize plant losses due to an excess of oversize or undersize material or to attain a size distribution that fits a contractual specification. The problem addressed in the present paper is how to test the equality of two or more underlying size distributions. A distinguishing feature of these size distributions is that they are not based on counts of individual particles. Rather, they are mass size distributions giving the fractions of the total mass of a sampled material lying in each of a number of size intervals. As such, the data are compositional in nature, using the terminology of Aitchison [1]; that is, multivariate vectors the components of which add to 100%. In the literature, various versions of Hotelling's T2 have been used to compare matched pairs of such compositional data. In this paper, we propose a robust test procedure based on ranks as a competitor to Hotelling's T2. In contrast to the latter statistic, the power of the rank test is not unduly affected by the presence of outliers or of zeros among the data. © 2012 Copyright Taylor and Francis Group, LLC.

  2. Modelling complete particle-size distributions from operator estimates of particle-size

    Science.gov (United States)

    Roberson, Sam; Weltje, Gert Jan

    2014-05-01

    Estimates of particle-size made by operators in the field and laboratory represent a vast and relatively untapped data archive. The wide spatial distribution of particle-size estimates makes them ideal for constructing geological models and soil maps. This study uses a large data set from the Netherlands (n = 4837) containing both operator estimates of particle size and complete particle-size distributions measured by laser granulometry. This study introduces a logit-based constrained-cubic-spline (CCS) algorithm to interpolate complete particle-size distributions from operator estimates. The CCS model is compared to four other models: (i) a linear interpolation; (ii) a log-hyperbolic interpolation; (iii) an empirical logistic function; and (iv) an empirical arctan function. Operator estimates were found to be both inaccurate and imprecise; only 14% of samples were successfully classified using the Dutch classification scheme for fine sediment. Operator estimates of sediment particle-size encompass the same range of values as particle-size distributions measured by laser analysis. However, the distributions measured by laser analysis show that most of the sand percentage values lie between zero and one, so the majority of the variability in the data is lost because operator estimates are made to the nearest 1% at best, and more frequently to the nearest 5%. A method for constructing complete particle-size distributions from operator estimates of sediment texture using a logit constrained cubic spline (CCS) interpolation algorithm is presented. This model and four other previously published methods are compared to establish the best approach to modelling particle-size distributions. The logit-CCS model is the most accurate method, although both logit-linear and log-linear interpolation models provide reasonable alternatives. Models based on empirical distribution functions are less accurate than interpolation algorithms for modelling particle-size distributions in
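
    As a rough illustration of the interpolation idea, the sketch below reconstructs a cumulative particle-size curve from a handful of coarse class estimates by interpolating in (log size, logit fraction) space. A monotone PCHIP spline stands in for the paper's constrained cubic spline, and the size classes and percentages are hypothetical, not the Dutch data set.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def logit(p):
    return np.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical operator estimate: cumulative mass fraction passing each
# grain-size class boundary (microns). Values are illustrative only.
size_um = np.array([2.0, 16.0, 63.0, 125.0, 500.0, 2000.0])
cum_frac = np.array([0.08, 0.22, 0.55, 0.75, 0.95, 0.999])

# Interpolate in (log size, logit fraction) space; PCHIP keeps the curve
# monotone, mimicking the constrained-spline idea of the study.
spline = PchipInterpolator(np.log(size_um), logit(cum_frac))

fine_sizes = np.geomspace(2.0, 2000.0, 50)
full_distribution = inv_logit(spline(np.log(fine_sizes)))
print(full_distribution[:5])
```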

  3. Surface Rupture and Slip Distribution Resulting from the 2013 M7.7 Balochistan, Pakistan Earthquake

    Science.gov (United States)

    Reitman, N. G.; Gold, R. D.; Briggs, R. W.; Barnhart, W. D.; Hayes, G. P.

    2014-12-01

    The 24 September 2013 M7.7 earthquake in Balochistan, Pakistan, produced a ~200 km long left-lateral strike-slip surface rupture along a portion of the Hoshab fault, a moderately dipping (45-75º) structure in the Makran accretionary prism. The rupture is remarkably continuous and crosses only two (0.7 and 1.5 km wide) step-overs along its arcuate path through southern Pakistan. Displacements are dominantly strike-slip, with a minor component of reverse motion. We remotely mapped the surface rupture at 1:5,000 scale and measured displacements using high resolution (0.5 m) pre- and post-event satellite imagery. We mapped 295 laterally faulted stream channels, terrace margins, and roads to quantify near-field displacement proximal (±10 m) to the rupture trace. The maximum near-field left-lateral offset is 15±2 m (average of ~7 m). Additionally, we used pre-event imagery to digitize 254 unique landforms in the "medium-field" (~100-200 m from the rupture) and then measured their displacements compared to the post-event imagery. At this scale, maximum left-lateral offset approaches 17 m (average of ~8.5 m). The width (extent of observed surface faulting) of the rupture zone varies from ~1 m to 3.7 km. Near- and medium-field offsets show similar slip distributions that are inversely correlated with the width of the fault zone at the surface (larger offsets correspond to narrow fault zones). The medium-field offset is usually greater than the near-field offset. The along-strike surface slip distribution is highly variable, similar to the slip distributions documented for the 2002 Denali M7.9 earthquake and 2001 Kunlun M7.8 earthquake, although the Pakistan offsets are larger in magnitude. The 2013 Pakistan earthquake ranks among the largest documented continental strike-slip displacements, possibly second only to the 18+ m surface displacements attributed to the 1855 Wairarapa M~8.1 earthquake.

  4. Granule Size Distribution and Porosity of Granule Packing

    Institute of Scientific and Technical Information of China (English)

    DAI Shu-hua; SHEN Feng-man; YU Ai-bing

    2008-01-01

    The granule size distribution and the porosity of the granule packing process were researched. For realizing the optimizing control of the whole sintering production process, researchers must know the factors influencing the granule size distribution and the porosity. Therefore, tests were carried out in the laboratory with regard to the influences of the size and size distribution of raw materials and the total moisture content on the size and size distribution of granule. Moreover, tests for finding out the influences of the moisture content and the granule volume fraction on the porosity were also carried out. The results show that (1) the raw material has little influence on granulation when its size is in the range of 0.51 mm to 1.0 mm; (2) the influence of the material size on granule size plays a dominant role, and in contrast, the moisture content has a minor effect on granule size; (3) in a binary packing system, with the increase in the constituent volume fraction, the porosity initially increases and then decreases, and there is a minimum value on the porosity curve of the binary mixture system; (4) the minimum value of the porosity in the binary packing system occurs at different locations for different moisture contents, and this value shifts from right to left on the porosity curve with increasing moisture content; (5) the addition of small granules to the same size component cannot create a significant influence on the porosity, whereas the addition of large granules to the same system can greatly change the porosity.

  5. Particle size and shape distributions of hammer milled pine

    Energy Technology Data Exchange (ETDEWEB)

    Westover, Tyler Lott [Idaho National Lab. (INL), Idaho Falls, ID (United States); Matthews, Austin Colter [Idaho National Lab. (INL), Idaho Falls, ID (United States); Williams, Christopher Luke [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ryan, John Chadron Benjamin [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

    Particle size and shape distributions impact particle heating rates and diffusion of volatized gases out of particles during fast pyrolysis conversion, and consequently must be modeled accurately in order for computational pyrolysis models to produce reliable results for bulk solid materials. For this milestone, lodge pole pine chips were ground using a Thomas-Wiley #4 mill using two screen sizes in order to produce two representative materials that are suitable for fast pyrolysis. For the first material, a 6 mm screen was employed in the mill and for the second material, a 3 mm screen was employed in the mill. Both materials were subjected to RoTap sieve analysis, and the distributions of the particle sizes and shapes were determined using digital image analysis. The results of the physical analysis will be fed into computational pyrolysis simulations to create models of materials with realistic particle size and shape distributions. This milestone was met on schedule.

  6. Particle size and shape distributions of hammer milled pine

    Energy Technology Data Exchange (ETDEWEB)

    Westover, Tyler Lott [Idaho National Lab. (INL), Idaho Falls, ID (United States); Matthews, Austin Colter [Idaho National Lab. (INL), Idaho Falls, ID (United States); Williams, Christopher Luke [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ryan, John Chadron Benjamin [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

    Particle size and shape distributions impact particle heating rates and diffusion of volatized gases out of particles during fast pyrolysis conversion, and consequently must be modeled accurately in order for computational pyrolysis models to produce reliable results for bulk solid materials. For this milestone, lodge pole pine chips were ground using a Thomas-Wiley #4 mill using two screen sizes in order to produce two representative materials that are suitable for fast pyrolysis. For the first material, a 6 mm screen was employed in the mill and for the second material, a 3 mm screen was employed in the mill. Both materials were subjected to RoTap sieve analysis, and the distributions of the particle sizes and shapes were determined using digital image analysis. The results of the physical analysis will be fed into computational pyrolysis simulations to create models of materials with realistic particle size and shape distributions. This milestone was met on schedule.

  7. Scaling Analysis of Time Distribution between Successive Earthquakes in Aftershock Sequences

    Science.gov (United States)

    Marekova, Elisaveta

    2016-08-01

    The earthquake inter-event time distribution is studied, using catalogs for different recent aftershock sequences. For aftershock sequences following the Modified Omori's Formula (MOF) it seems clear that the inter-event distribution is a power law. The parameters of this law are defined and they prove to be higher than the calculated value (2 − 1/p). Based on the analysis of the catalogs, it is determined that the probability densities of the inter-event time distribution collapse into a single master curve when the data is rescaled with instantaneous intensity, R(t; Mth), defined by MOF. The curve is approximated by a gamma distribution. The collapse of the data provides a clear view of aftershock-occurrence self-similarity.
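
    A minimal sketch of the rescaling described above: simulate an aftershock sequence whose rate follows a Modified Omori law, rescale each inter-event time by the instantaneous rate, and fit a gamma distribution to the rescaled times. The Omori parameters and the time-rescaling construction are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(1)

# Assumed Modified Omori parameters for a synthetic aftershock sequence.
K, c, p = 500.0, 0.05, 1.1   # events/day, days, dimensionless

def omori_rate(t):
    return K / (c + t) ** p

# Event times via time-rescaling: a unit-rate Poisson process in the
# integrated-rate coordinate, mapped back through the inverse of
# Lambda(t) = K * (c**(1-p) - (c+t)**(1-p)) / (p-1).
lam_max = K * c ** (1 - p) / (p - 1)
s = np.cumsum(rng.exponential(size=5000))
s = s[s < 0.95 * lam_max]
t = (c ** (1 - p) - s * (p - 1) / K) ** (1 / (1 - p)) - c

# Rescale each inter-event time by the instantaneous rate at its start,
# as in the collapse described by the abstract, then fit a gamma PDF.
dt = np.diff(t)
x = dt * omori_rate(t[:-1])
shape, loc, scale = gamma.fit(x, floc=0)
print(f"gamma shape = {shape:.2f}, scale = {scale:.2f}, n = {x.size}")
```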

  8. Scaling Analysis of Time Distribution between Successive Earthquakes in Aftershock Sequences

    Directory of Open Access Journals (Sweden)

    Marekova Elisaveta

    2016-08-01

    Full Text Available The earthquake inter-event time distribution is studied, using catalogs for different recent aftershock sequences. For aftershock sequences following the Modified Omori’s Formula (MOF it seems clear that the inter-event distribution is a power law. The parameters of this law are defined and they prove to be higher than the calculated value (2 – 1/p. Based on the analysis of the catalogs, it is determined that the probability densities of the inter-event time distribution collapse into a single master curve when the data is rescaled with instantaneous intensity, R(t; Mth, defined by MOF. The curve is approximated by a gamma distribution. The collapse of the data provides a clear view of aftershock-occurrence self-similarity.

  9. Packing fraction of particles with lognormal size distribution.

    Science.gov (United States)

    Brouwers, H J H

    2014-05-01

    This paper addresses the packing and void fraction of polydisperse particles with a lognormal size distribution. It is demonstrated that a binomial particle size distribution can be transformed into a continuous particle-size distribution of the lognormal type. Furthermore, an original and exact expression is derived that predicts the packing fraction of mixtures of particles with a lognormal distribution, which is governed by the standard deviation, mode of packing, and particle shape only. For a number of particle shapes and their packing modes (close, loose) the applicable values are given. This closed-form analytical expression governing the packing fraction is thoroughly compared with empirical and computational data reported in the literature, and good agreement is found.

  10. Packing fraction of particles with lognormal size distribution

    Science.gov (United States)

    Brouwers, H. J. H.

    2014-05-01

    This paper addresses the packing and void fraction of polydisperse particles with a lognormal size distribution. It is demonstrated that a binomial particle size distribution can be transformed into a continuous particle-size distribution of the lognormal type. Furthermore, an original and exact expression is derived that predicts the packing fraction of mixtures of particles with a lognormal distribution, which is governed by the standard deviation, mode of packing, and particle shape only. For a number of particle shapes and their packing modes (close, loose) the applicable values are given. This closed-form analytical expression governing the packing fraction is thoroughly compared with empirical and computational data reported in the literature, and good agreement is found.

  11. Cell-size distribution in epithelial tissue formation and homeostasis.

    Science.gov (United States)

    Puliafito, Alberto; Primo, Luca; Celani, Antonio

    2017-03-01

    How cell growth and proliferation are orchestrated in living tissues to achieve a given biological function is a central problem in biology. During development, tissue regeneration and homeostasis, cell proliferation must be coordinated by spatial cues in order for cells to attain the correct size and shape. Biological tissues also feature a notable homogeneity of cell size, which, in specific cases, represents a physiological need. Here, we study the temporal evolution of the cell-size distribution by applying the theory of kinetic fragmentation to tissue development and homeostasis. Our theory predicts self-similar probability density function (PDF) of cell size and explains how division times and redistribution ensure cell size homogeneity across the tissue. Theoretical predictions and numerical simulations of confluent non-homeostatic tissue cultures show that cell size distribution is self-similar. Our experimental data confirm predictions and reveal that, as assumed in the theory, cell division times scale like a power-law of the cell size. We find that in homeostatic conditions there is a stationary distribution with lognormal tails, consistently with our experimental data. Our theoretical predictions and numerical simulations show that the shape of the PDF depends on how the space inherited by apoptotic cells is redistributed and that apoptotic cell rates might also depend on size.

  12. Characteristics of Spatial Distribution for Peak Ground Acceleration in 3 Aug 2014 Ms6.5 Ludian Earthquake, Yunnan, China

    Science.gov (United States)

    kun, Chen; YanXiang, Yu

    2016-04-01

    Considering the geological context, focal mechanism solutions, aftershock distribution, and attenuation characteristics of ground motion in western China, shakemaps of PGA (Peak Ground Acceleration) for the Ludian Ms6.5 earthquake of 3 Aug 2014 were produced using a method for rapid generation of ShakeMaps that accounts for site effects, with the peak ground accelerations recorded at 62 stations used for interpolation. The PGA distribution was then amended by using the PGA observations to correct the systematic bias of the theoretical estimates in areas without observations. The results show that the attenuation of ground motion with distance for this earthquake was faster than the Wang Su-Yun (2000) relation, and the bias-corrected result was more consistent with the attenuation behaviour of this earthquake. After adjustment, the area with PGA greater than 40 cm/s2 was nearly 8000 km2, a reduction of about 40%.

  13. Nowcasting Earthquakes

    Science.gov (United States)

    Rundle, J. B.; Donnellan, A.; Grant Ludwig, L.; Turcotte, D. L.; Luginbuhl, M.; Gail, G.

    2016-12-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system, and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large earthquake cycles in the region. We then compute the earthquake potential score (EPS), which is defined as the cumulative probability distribution P(n < n(t)) for the current count n(t) of small earthquakes in the region. From the count of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)). EPS is therefore the current level of hazard, and assigns a number between 0% and 100% to every region so defined, thus providing a unique measure. Physically, the EPS corresponds to an estimate of the level of progress through the earthquake cycle in the defined region at the current time.

  14. Nowcasting earthquakes

    Science.gov (United States)

    Rundle, J. B.; Turcotte, D. L.; Donnellan, A.; Grant Ludwig, L.; Luginbuhl, M.; Gong, G.

    2016-11-01

    Nowcasting is a term originating from economics and finance. It refers to the process of determining the uncertain state of the economy or markets at the current time by indirect means. We apply this idea to seismically active regions, where the goal is to determine the current state of the fault system and its current level of progress through the earthquake cycle. In our implementation of this idea, we use the global catalog of earthquakes, using "small" earthquakes to determine the level of hazard from "large" earthquakes in the region. Our method does not involve any model other than the idea of an earthquake cycle. Rather, we define a specific region and a specific large earthquake magnitude of interest, ensuring that we have enough data to span at least 20 or more large earthquake cycles in the region. We then compute the earthquake potential score (EPS) which is defined as the cumulative probability distribution P(n < n(t)) for the current count n(t) for the small earthquakes in the region. From the count of small earthquakes since the last large earthquake, we determine the value of EPS = P(n < n(t)). EPS is therefore the current level of hazard and assigns a number between 0% and 100% to every region so defined, thus providing a unique measure. Physically, the EPS corresponds to an estimate of the level of progress through the earthquake cycle in the defined region at the current time.
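
    A hedged sketch of the earthquake potential score as described: build the distribution of small-earthquake counts over completed large-earthquake cycles and evaluate its empirical CDF at the current count. The magnitude thresholds and the synthetic catalog are placeholders; a real application would use a regional catalog spanning 20 or more large-earthquake cycles.

```python
import numpy as np

def earthquake_potential_score(mags, small_mag, large_mag):
    """EPS = empirical P(n < n(t)) for the count of small events
    since the last large event, following the nowcasting idea."""
    large_idx = np.flatnonzero(mags >= large_mag)
    # Counts of small events within each completed large-earthquake cycle.
    cycle_counts = []
    for start, end in zip(large_idx[:-1], large_idx[1:]):
        window = mags[start + 1:end]
        cycle_counts.append(np.sum(window >= small_mag))
    cycle_counts = np.asarray(cycle_counts)
    # Current (open) cycle: small events since the most recent large event.
    current = np.sum(mags[large_idx[-1] + 1:] >= small_mag)
    return np.mean(cycle_counts < current), current

# Synthetic catalog with a Gutenberg-Richter magnitude distribution (b = 1).
rng = np.random.default_rng(2)
catalog = 4.0 + rng.exponential(scale=1.0 / np.log(10), size=20000)
eps, n_now = earthquake_potential_score(catalog, small_mag=4.0, large_mag=6.0)
print(f"current small-event count = {n_now}, EPS = {eps:.0%}")
```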

  15. Construction and operation of a system for secure and precise medical material distribution in disaster areas after Wenchuan earthquake.

    Science.gov (United States)

    Cheng, Yongzhong; Xu, Jiankang; Ma, Jian; Cheng, Shusen; Shi, Yingkang

    2009-11-01

    After the Wenchuan Earthquake on May 12th , 2008, under the strong leadership of the Sichuan Provincial Party Committee, the People's Government of Sichuan Province, and the Ministry of Health of the People's Republic of China, the Medical Security Team working at the Sichuan Provincial Headquarters for Wenchuan Earthquake and Disaster Relief Work constructed a secure medical material distribution system through coordination and interaction among and between regions, systems, and departments.

  16. Mega-city and great earthquake distributions: the search of basic links.

    Science.gov (United States)

    Levin, Boris; Sasorova, Elena; Domanski, Andrej

    2013-04-01

    The ever-increasing population density in large metropolitan cities near major active faults (e.g. Tokyo, Lisbon, San Francisco) and recent catastrophic earthquakes in Japan, Indonesia and Haiti (with losses of life exceeding 500,000) highlight the need to search for causal relationships between the distributions of earthquake epicenters and mega-cities on the Earth [1]. The latitudinal distribution of mega-cities, calculated using an Internet database, reveals a curious peculiarity: the number density of large cities per 10-degree latitude interval shows two maxima in the middle latitudes (±30-40°) on both sides of the equator. These maxima are separated by a clear local minimum near the equator, and such objects (mega-cities) are practically absent at high latitudes. In the last two decades, it was shown [2, 3, 4] that the seismic activity of the Earth is described by a similar bimodal latitudinal distribution. The similarity between the bimodal distributions of geophysical phenomena and mega-city locations attracts attention. The peak values in both distributions (near ±35°) correspond to the location of the well-known "critical latitudes" of the planet. These latitudes were determined [5] as the lines of intersection of a sphere and a spheroid of equal volume (±35°15'52″). An increase in the angular velocity of a celestial body's rotation leads to growth of the oblateness of the planet, and vice versa, the oblateness decreases with a reduction in the velocity of rotation. Thus, the well-known instability of the Earth's rotation leads to small pulsations of the geoid. At the critical latitudes, the geoid radius-vector is equal to the radius of the sphere. The zones near the critical latitudes are characterized by a high density of faults in the Earth's crust and the manifestation of some geological peculiarities (hot spot distribution, large ore deposit distribution, etc.). The existence of active faults has led to an emanation of deep fluids, which created the good

  17. Dynamics of multifractal and correlation characteristics of the spatio-temporal distribution of regional seismicity before the strong earthquakes

    Directory of Open Access Journals (Sweden)

    D. Kiyashchenko

    2003-01-01

    Full Text Available Investigations of the distribution of regional seismicity and the results of numerical simulations of the seismic process show an increase of inhomogeneity in the spatio-temporal distribution of seismicity prior to large earthquakes and the formation of inhomogeneous clusters over a wide range of scales. Consequently, the multifractal approach is appropriate for investigating the details of such dynamics. Here we analyze the dynamics of the seismicity distribution before a number of strong earthquakes that occurred in two seismically active regions of the world: Japan and Southern California. In order to study the evolution of the spatial inhomogeneity of the seismicity distribution, we consider variations of two multifractal characteristics: the information entropy of the multifractal measure generation process and the higher-order generalized fractal dimension of the continuum of earthquake epicenters. We also studied the dynamics of the level of spatio-temporal correlations in the seismicity distribution. It is found that the two aforementioned multifractal characteristics tend to decrease and the level of spatio-temporal correlations tends to increase before the majority of the considered strong earthquakes. Such a tendency can be considered an earthquake precursory signature. Therefore, the results obtained show the possibility of using multifractal and correlation characteristics of the spatio-temporal distribution of regional seismicity for seismic hazard evaluation.

  18. Size Distributions of Solar Proton Events: Methodological and Physical Restrictions

    Science.gov (United States)

    Miroshnichenko, L. I.; Yanke, V. G.

    2016-12-01

    Based on the new catalogue of solar proton events (SPEs) for the period of 1997 - 2009 (Solar Cycle 23) we revisit the long-studied problem of the event-size distributions in the context of those constructed for other solar-flare parameters. Recent results on the problem of size distributions of solar flares and proton events are briefly reviewed. Even a cursory acquaintance with this research field reveals a rather mixed and controversial picture. We concentrate on three main issues: i) SPE size distribution for > 10 MeV protons in Solar Cycle 23; ii) size distribution of > 1 GV proton events in 1942 - 2014; iii) variations of annual numbers for > 10 MeV proton events on long time scales (1955 - 2015). Different results are critically compared; most of the studies in this field are shown to suffer from vastly different input datasets as well as from insufficient knowledge of underlying physical processes in the SPEs under consideration. New studies in this field should be made on more distinct physical and methodological bases. It is important to note the evident similarity in size distributions of solar flares and superflares in Sun-like stars.

  19. Modelling and validation of particle size distributions of supported nanoparticles using the pair distribution function technique

    Energy Technology Data Exchange (ETDEWEB)

    Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; Martinez-Inesta, Maria

    2017-04-13

    The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.

  20. Size Dependency of Income Distribution and Its Implications

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jiang; WANG You-Gui

    2011-01-01

    We systematically study the size dependency of income distributions, i.e. income distribution versus the population of a country. Using the generalized Lotka-Volterra model to fit the empirical income data for 1996-2007 in the U.S.A., we find an important parameter A that scales with a β power of the size (population) of the U.S.A. in that year. We point out that the size dependency of income distributions, which is a very important property but seldom addressed in previous studies, has two non-trivial implications: (1) the allometric growth pattern, i.e. the power-law relationship between population and GDP in different years, can be mathematically derived from the size-dependent income distributions and is also supported by the empirical data; (2) the connection with the anomalous scaling for the probability density function in critical phenomena, since the re-scaled form of the income distributions has asymptotically exactly the same mathematical expression as the limit distribution of the sum of many correlated random variables.

  1. Lognormal Behavior of the Size Distributions of Animation Characters

    Science.gov (United States)

    Yamamoto, Ken

    This study investigates the statistical property of the character sizes of animation, superhero series, and video game. By using online databases of Pokémon (video game) and Power Rangers (superhero series), the height and weight distributions are constructed, and we find that the weight distributions of Pokémon and Zords (robots in Power Rangers) follow the lognormal distribution in common. For the theoretical mechanism of this lognormal behavior, the combination of the normal distribution and the Weber-Fechner law is proposed.
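
    The proposed mechanism is simple to check numerically: if the perceived design scale of a character is normally distributed and, following the Weber-Fechner law, perception is logarithmic in the physical quantity, then the physical weights are the exponential of a normal variate and hence lognormal. The sketch below only illustrates that argument; the mean and spread are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import normaltest

rng = np.random.default_rng(3)

# Perceived "design scale" is assumed normal (arbitrary mean/sd).
perceived = rng.normal(loc=3.0, scale=0.8, size=10000)

# Weber-Fechner: perception ~ log(stimulus), so physical weight ~ exp(perceived).
weight = np.exp(perceived)

# The log of the weights should pass a normality test, i.e. weights are lognormal.
stat, pvalue = normaltest(np.log(weight))
print(f"normality test on log(weight): p = {pvalue:.3f}")
print(f"median weight = {np.median(weight):.1f}, mean = {weight.mean():.1f}")
```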

  2. Particle size distribution in ferrofluid macro-clusters

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Wah-Keat, E-mail: wklee@bnl.gov [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, 9700S. Cass Avenue, Argonne, IL 60439 (United States); Ilavsky, Jan [X-ray Science Division, Advanced Photon Source, Argonne National Laboratory, 9700S. Cass Avenue, Argonne, IL 60439 (United States)

    2013-03-15

    Under an applied magnetic field, many commercial and concentrated ferrofluids agglomerate and form large micron-sized structures. Although large diameter particles have been implicated in the formation of these macro-clusters, the question of whether the particle size distribution of the macro-clusters is the same as that of the original fluid remains open. Some studies suggest that these macro-clusters consist of larger particles, while others have shown that there is no difference in the particle size distribution between the macro-clusters and the original fluid. In this study, we use X-ray imaging to aid in a sample (diluted EFH-1 from Ferrotec) separation process and conclusively show that the average particle size in the macro-clusters is significantly larger than that in the original sample. The average particle size in the macro-clusters is 19.6 nm while the average particle size of the original fluid is 11.6 nm. - Highlights: • X-ray imaging was used to isolate ferrofluid macro-clusters under an applied field. • Small angle X-ray scattering was used to determine particle size distributions. • Results show that macro-clusters consist of particles that are larger than average.

  3. Formation and size distribution of self-assembled vesicles

    Science.gov (United States)

    Huang, Changjin; Quinn, David; Suresh, Subra

    2017-01-01

    When detergents and phospholipid membranes are dispersed in aqueous solutions, they tend to self-assemble into vesicles of various shapes and sizes by virtue of their hydrophobic and hydrophilic segments. A clearer understanding of such vesiculation processes holds promise for better elucidation of human physiology and disease, and paves the way to improved diagnostics, drug development, and drug delivery. Here we present a detailed analysis of the energetics and thermodynamics of vesiculation by recourse to nonlinear elasticity, taking into account large deformation that may arise during the vesiculation process. The effects of membrane size, spontaneous curvature, and membrane stiffness on vesiculation and vesicle size distribution were investigated, and the critical size for vesicle formation was determined and found to compare favorably with available experimental evidence. Our analysis also showed that the critical membrane size for spontaneous vesiculation was correlated with membrane thickness, and further illustrated how the combined effects of membrane thickness and physical properties influenced the size, shape, and distribution of vesicles. These findings shed light on the formation of physiological extracellular vesicles, such as exosomes. The findings also suggest pathways for manipulating the size, shape, distribution, and physical properties of synthetic vesicles, with potential applications in vesicle physiology, the pathobiology of cancer and other diseases, diagnostics using in vivo liquid biopsy, and drug delivery methods. PMID:28265065

  4. Mass size distribution of particle-bound water

    Science.gov (United States)

    Canepari, S.; Simonetti, G.; Perrino, C.

    2017-09-01

    The thermal-ramp Karl-Fischer method (tr-KF) for the determination of PM-bound water has been applied to size-segregated PM samples collected in areas subjected to different environmental conditions (protracted atmospheric stability, desert dust intrusion, urban atmosphere). This method, based on the use of a thermal ramp for the desorption of water from PM samples and the subsequent analysis by the coulometric KF technique, had previously been shown to differentiate water contributions retained with different strengths and associated with different chemical components in the atmospheric aerosol. The application of the method to size-segregated samples has revealed that water showed a typical mass size distribution in each one of the three environmental situations that were taken into consideration. A very similar size distribution was shown by the chemical PM components that prevailed during each event: ammonium nitrate in the case of atmospheric stability, crustal species in the case of desert dust, road-dust components in the case of urban sites. The shape of the tr-KF curve varied according to the size of the collected particles. Considering the size ranges that better characterize each event (fine fraction for atmospheric stability, coarse fraction for dust intrusion, bi-modal distribution for urban dust), this shape is coherent with the typical tr-KF shape shown by water bound to the chemical species that predominate in the same PM size range (ammonium nitrate, crustal species, secondary/combustion species - road dust components).

  5. Molecular theory of size exclusion chromatography for wide pore size distributions.

    Science.gov (United States)

    Sepsey, Annamária; Bacskay, Ivett; Felinger, Attila

    2014-02-28

    Chromatographic processes can conveniently be modeled at a microscopic level using the molecular theory of chromatography. This molecular or microscopic theory is completely general; therefore it can be used for any chromatographic process such as adsorption, partition, ion-exchange or size exclusion chromatography. The molecular theory of chromatography allows taking into account the kinetics of the pore ingress and egress processes, the heterogeneity of the pore sizes and polymer polydispersion. In this work, we assume that the pore size in the stationary phase of chromatographic columns is governed by a wide lognormal distribution. This property is integrated into the molecular model of size exclusion chromatography and the moments of the elution profiles were calculated for several kinds of pore structure. Our results demonstrate that wide pore size distributions have strong influence on the retention properties (retention time, peak width, and peak shape) of macromolecules. The novel model allows us to estimate the real pore size distribution of commonly used HPLC stationary phases, and the effect of this distribution on the size exclusion process. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Global patterns of city size distributions and their fundamental drivers.

    Directory of Open Access Journals (Sweden)

    Ethan H Decker

    Full Text Available Urban areas and their voracious appetites are increasingly dominating the flows of energy and materials around the globe. Understanding the size distribution and dynamics of urban areas is vital if we are to manage their growth and mitigate their negative impacts on global ecosystems. For over 50 years, city size distributions have been assumed to universally follow a power function, and many theories have been put forth to explain what has become known as Zipf's law (the instance where the exponent of the power function equals unity. Most previous studies, however, only include the largest cities that comprise the tail of the distribution. Here we show that national, regional and continental city size distributions, whether based on census data or inferred from cluster areas of remotely-sensed nighttime lights, are in fact lognormally distributed through the majority of cities and only approach power functions for the largest cities in the distribution tails. To explore generating processes, we use a simple model incorporating only two basic human dynamics, migration and reproduction, that nonetheless generates distributions very similar to those found empirically. Our results suggest that macroscopic patterns of human settlements may be far more constrained by fundamental ecological principles than more fine-scale socioeconomic factors.
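
    A toy model in the spirit of the two dynamics named in the abstract (reproduction and migration) can be sketched as follows. The specific rules and rates here are assumptions, not the authors' model; the output is only meant to show how a multiplicative-growth-plus-migration process produces a broad, roughly lognormal body of settlement sizes with a heavy upper tail.

```python
import numpy as np

rng = np.random.default_rng(4)

n_cities, n_steps = 2000, 400
pop = np.full(n_cities, 1000.0)

for _ in range(n_steps):
    # Reproduction: multiplicative growth with a random rate (Gibrat-like).
    pop *= np.exp(rng.normal(0.0, 0.05, size=n_cities))
    # Migration: a small fraction of every city's population moves, choosing a
    # destination with probability proportional to the destination's size.
    migrants = 0.02 * pop
    pop -= migrants
    dest = rng.choice(n_cities, size=n_cities, p=pop / pop.sum())
    np.add.at(pop, dest, migrants)

log_pop = np.log(pop)
print(f"log-size mean = {log_pop.mean():.2f}, sd = {log_pop.std():.2f}")
print("largest / median city:", round(pop.max() / np.median(pop), 1))
```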

  7. Production, depreciation and the size distribution of firms

    Science.gov (United States)

    Ma, Qi; Chen, Yongwang; Tong, Hui; Di, Zengru

    2008-05-01

    Many empirical studies indicate that firm size distributions in different industries or countries exhibit some similar characteristics. Among them, the fact that many firm size distributions obey a power law, especially in the upper tail, has been most widely discussed. Here we present an agent-based model to describe the evolution of manufacturing firms. Some basic economic behaviors are taken into account: production with decreasing marginal returns, preferential allocation of investments, and stochastic depreciation. The model gives a steady size distribution of firms which obeys a power law. The effect of the parameters on the power exponent is analyzed. The theoretical results are given based on both the Fokker-Planck equation and the Kesten process. They are well consistent with the numerical results.
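
    Since the abstract links the stationary firm-size distribution to the Kesten process, a minimal sketch of that mechanism is given below: multiplicative growth/depreciation with a small additive investment term, followed by a crude Hill estimate of the tail exponent. All parameter values are placeholders and the estimator is intentionally simple; this is not the authors' agent-based model.

```python
import numpy as np

rng = np.random.default_rng(5)

n_firms, n_steps = 5000, 2000
size = np.ones(n_firms)

for _ in range(n_steps):
    # Kesten recursion: random growth/depreciation (multiplicative) plus a
    # small additive inflow of new investment; E[log a] < 0 is required for
    # a stationary distribution with a power-law tail.
    a = rng.lognormal(mean=-0.02, sigma=0.2, size=n_firms)
    b = rng.exponential(scale=0.1, size=n_firms)
    size = a * size + b

# Crude Hill estimate of the tail exponent from the largest 5% of firms.
tail = np.sort(size)[-int(0.05 * n_firms):]
alpha = 1.0 / np.mean(np.log(tail / tail[0]))
print(f"estimated tail exponent alpha ~ {alpha:.2f}")
```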

  8. Particle size distribution and particle size-related crystalline silica content in granite quarry dust.

    Science.gov (United States)

    Sirianni, Greg; Hosgood, Howard Dean; Slade, Martin D; Borak, Jonathan

    2008-05-01

    Previous studies indicate that the relationship between empirically derived particle counts, particle mass determinations, and particle size-related silica content are not constant within mines or across mine work tasks. To better understand the variability of particle size distributions and variations in silica content by particle size in a granite quarry, exposure surveys were conducted with side-by-side arrays of four closed face cassettes, four cyclones, four personal environmental monitors, and a real-time particle counter. In general, the proportion of silica increased as collected particulate size increased, but samples varied in an inconstant way. Significant differences in particle size distributions were seen depending on the extent of ventilation and the nature and activity of work performed. Such variability raises concerns about the adequacy of silica exposure assessments based on only limited numbers of samples or short-term samples.

  9. The Frequency Distribution of Inter-Event Times of M ≥ 3 Earthquakes in the Taipei Metropolitan Area: 1973 - 2010

    Directory of Open Access Journals (Sweden)

    Jeen-Hwa Wang

    2012-01-01

    Full Text Available M ≥ 3 earthquakes which occurred in the Taipei Metropolitan Area from 1973 through 2010 are used to study the seismicity of the area. First, the epicentral distribution, depth distribution, and temporal sequences of earthquake magnitudes are described. The earthquakes can be divided into two groups: one for shallow events with focal depths ranging from 0 to 40 km and the other with focal depths deeper than 60 km. Shallow earthquakes are mainly located in the depth range of 0 - 10 km north of 25.1°N, and down to 35 km for those south of 25.1°N. Deep events are located in the subduction zone, with a dip angle of about 70°. Three statistical models, the gamma, power-law, and exponential functions, are applied to describe the single frequency distribution of inter-occurrence times between two consecutive events for both shallow and deep earthquakes. Numerical tests suggest that the most appropriate time interval for counting the frequency of events for statistical analysis is 10 days. Results show that among the three functions, the power-law function is the most appropriate for describing the data points, while the exponential function is the least appropriate; thus, the time series of earthquakes under consideration are not Poissonian. The gamma function describes the observations better than the exponential function but not as well as the power-law function. The scaling exponent of the power-law function decreases linearly with increasing lower-bound magnitude. The slope value of the regression equation is smaller for shallow earthquakes than for deep events. Meanwhile, the power-law function cannot work when the lower-bound magnitude is 4.2 for shallow earthquakes and 4.3 for deep events.
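
    A sketch of the model-comparison step, assuming maximum-likelihood fits and AIC as the selection criterion (the paper's binning choices and magnitude thresholds are not reproduced); the inter-event times here are synthetic stand-ins for a catalog.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic inter-event times (days); a gamma sample stands in for a catalog.
dt = rng.gamma(shape=0.6, scale=5.0, size=800)

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "power-law (Pareto)": stats.pareto,
}

for name, dist in candidates.items():
    params = dist.fit(dt, floc=0)          # fix the location at zero
    loglik = np.sum(dist.logpdf(dt, *params))
    k = len(params) - 1                    # free parameters (loc is fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name:20s} AIC = {aic:8.1f}")
```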

  10. Theory of Nanocluster Size Distributions from Ion Beam Synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, C.W.; Yi, D.O.; Sharp, I.D.; Shin, S.J.; Liao, C.Y.; Guzman, J.; Ager III, J.W.; Haller, E.E.; Chrzan, D.C.

    2008-06-13

    Ion beam synthesis of nanoclusters is studied via both kinetic Monte Carlo simulations and the self-consistent mean-field solution to a set of coupled rate equations. Both approaches predict the existence of a steady state shape for the cluster size distribution that depends only on a characteristic length determined by the ratio of the effective diffusion coefficient to the ion flux. The average cluster size in the steady state regime is determined by the implanted species/matrix interface energy.

  11. Particle size distributions in the Eastern Mediterranean troposphere

    Science.gov (United States)

    Kalivitis, N.; Birmili, W.; Stock, M.; Wehner, B.; Massling, A.; Wiedensohler, A.; Gerasopoulos, E.; Mihalopoulos, N.

    2008-11-01

    Atmospheric particle size distributions were measured on Crete island, Greece in the Eastern Mediterranean during an intensive field campaign between 28 August and 20 October, 2005. Our instrumentation combined a differential mobility particle sizer (DMPS) and an aerodynamic particle sizer (APS) and measured number size distributions in the size range 0.018 μm - 10 μm. Four time periods with distinct aerosol characteristics were discriminated, two corresponding to marine and polluted air masses, respectively. In marine air, the sub-μm size distributions showed two particle modes centered at 67 nm and 195 nm having total number concentrations between 900 and 2000 cm-3. In polluted air masses, the size distributions were mainly unimodal with a mode typically centered at 140 nm, with number concentrations varying between 1800 and 2900 cm-3. Super-μm particles showed number concentrations in the range from 0.01 to 2.5 cm-3 without any clear relation to air mass origin. A small number of short-lived particle nucleation events were recorded, where the calculated particle formation rates ranged between 1.1 and 1.7 cm-3 s-1. However, no particle nucleation and growth events comparable to those typical for the continental boundary layer were observed. Concentrations of the smallest particles indicated that the particle population was governed mainly by coagulation and that particle formation was absent during most days.

  12. Modal character of atmospheric black carbon size distributions

    Science.gov (United States)

    Berner, A.; Sidla, S.; Galambos, Z.; Kruisz, C.; Hitzenberger, R.; ten Brink, H. M.; Kos, G. P. A.

    1996-08-01

    Samples of atmospheric aerosols, collected with cascade impactors in the urban area of Vienna (Austria) and at a coastal site on the North Sea, were investigated for black carbon (BC) as the main component of absorbing material and for mass. The size distributions are structured. The BC distributions of these samples show a predominant mode, the accumulation aerosol, in the upper submicron size range, a less distinct finer mode attributable to fresh emissions from combustion sources, and a distinct coarse mode of unclear origin. It is important to note that some parameters of the accumulation aerosol are related statistically, indicating the evolution of the atmospheric accumulation aerosol.

  13. Correction of bubble size distributions from transmission electron microscopy observations

    Energy Technology Data Exchange (ETDEWEB)

    Kirkegaard, P.; Eldrup, M.; Horsewell, A.; Skov Pedersen, J.

    1996-01-01

    Observations by transmission electron microscopy of a high density of gas bubbles in a metal matrix yield a distorted size distribution due to bubble overlap and bubble escape from the surface. A model is described that reconstructs 3-dimensional bubble size distributions from 2-dimensional projections on taking these effects into account. Mathematically, the reconstruction is an ill-posed inverse problem, which is solved by regularization technique. Extensive Monte Carlo simulations support the validity of our model. (au) 1 tab., 32 ills., 32 refs.

  14. Size distribution of Portuguese firms between 2006 and 2012

    Science.gov (United States)

    Pascoal, Rui; Augusto, Mário; Monteiro, A. M.

    2016-09-01

    This study aims to describe the size distribution of Portuguese firms, as measured by annual sales and total assets, between 2006 and 2012, giving an economic interpretation for the evolution of the distribution over time. Three distributions are fitted to the data: the lognormal, the Pareto (and as a particular case Zipf), and the Simplified Canonical Law (SCL). We present the main arguments found in the literature to justify the use of these distributions and emphasize the interpretation of the SCL coefficients. Methods of estimation include Maximum Likelihood, modified Ordinary Least Squares in log-log scale, and Nonlinear Least Squares using the Levenberg-Marquardt algorithm. When applying these approaches to the Portuguese firm data, we analyze whether the evolution of the estimated parameters of the lognormal, power-law and SCL fits is in accordance with the known existence of a recession period after 2008. This is confirmed for sales but not for assets, leading to the conclusion that the first variable is a better proxy for firm size.
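
    Two of the estimation routes mentioned above can be sketched as follows: maximum likelihood for the lognormal and ordinary least squares on the log-log rank-size plot for the Pareto/Zipf exponent. The "sales" sample is a synthetic placeholder, not the Portuguese firm data, and the tail cut-off is an arbitrary choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Placeholder "annual sales" sample; real data would come from firm records.
sales = rng.lognormal(mean=12.0, sigma=1.4, size=3000)

# 1) Lognormal via maximum likelihood (location fixed at zero).
shape, loc, scale = stats.lognorm.fit(sales, floc=0)
print(f"lognormal: sigma = {shape:.2f}, median = {scale:.0f}")

# 2) Pareto/Zipf exponent via OLS on the log-log rank-size relation,
#    using only the upper tail (largest 10% of firms).
tail = np.sort(sales)[::-1][: int(0.1 * len(sales))]
ranks = np.arange(1, len(tail) + 1)
slope, intercept = np.polyfit(np.log(tail), np.log(ranks), 1)
print(f"rank-size slope (approx. -Pareto exponent): {slope:.2f}")
```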

  15. The degree distribution of fixed act-size collaboration networks

    Indian Academy of Sciences (India)

    Qinggui Zhao; Xiangxing Kong; Zhenting Hou

    2009-11-01

    In this paper, we investigate a special evolving model of collaboration networks, where the act-size is fixed. Based on the first-passage probability of Markov chain theory, this paper provides a rigorous proof for the existence of a limiting degree distribution of this model and proves that the degree distribution obeys the power-law form with the exponent adjustable between 2 and 3.

  16. Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions

    Science.gov (United States)

    Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.

    2014-12-01

    The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to create possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution; the neutral model selected new size differentials centered on zero, allowing for an equal chance of larger or smaller ostracods at each speciation. Conversely, the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that the overall direction of ostracod evolution has been following a model that directionally pushes mean ostracod size down, shying away from a neutral model. Our model was able to match the magnitude of size decrease, although our models had a constant linear decrease while the actual data had a much more rapid initial rate followed by a constant size. The nuance of the observed trends ultimately suggests a more complex method of size evolution. In conclusion, probabilistic methods can provide valuable insight into possible evolutionary mechanisms determining size evolution in ostracods.
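
    A minimal sketch of the branching model described: at each step a lineage may go extinct or speciate, and a daughter's log-size is offset by a normal draw centred on zero (neutral) or on a negative drift (directional). The speciation/extinction probabilities and step sizes are assumptions for illustration, not the rates derived from the Treatise data.

```python
import numpy as np

rng = np.random.default_rng(8)

def branching_size_model(drift, p_speciate=0.2, p_extinct=0.18, n_steps=60):
    """Return the mean log-size of living lineages through time."""
    sizes = [0.0]                      # log biovolume of living lineages
    means = []
    for _ in range(n_steps):
        new = []
        for s in sizes:
            if rng.random() < p_extinct:
                continue               # lineage goes extinct
            new.append(s)
            if rng.random() < p_speciate:
                # Daughter size: normal step centred on `drift`.
                new.append(s + rng.normal(loc=drift, scale=0.3))
        if not new:                    # whole clade died; restart one lineage
            new = [0.0]
        sizes = new
        means.append(np.mean(sizes))
    return np.array(means)

neutral = branching_size_model(drift=0.0)
directional = branching_size_model(drift=-0.05)
print("final mean log-size, neutral:    ", round(neutral[-1], 2))
print("final mean log-size, directional:", round(directional[-1], 2))
```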

  17. Size distribution of native cytosolic proteins of Thermoplasma acidophilum.

    Science.gov (United States)

    Sun, Na; Tamura, Noriko; Tamura, Tomohiro; Knispel, Roland Wilhelm; Hrabe, Thomas; Kofler, Christine; Nickell, Stephan; Nagy, István

    2009-07-01

    We used molecular sieve chromatography in combination with LC-MS/MS to identify protein complexes that can serve as templates in the template matching procedures of visual proteomics approaches. By this method the sample complexity was lowered sufficiently to identify 464 proteins and - on the basis of size distribution and bioinformatics analysis - 189 of them could be assigned as subunits of macromolecular complexes over the size of 300 kDa. From these we purified six stable complexes of Thermoplasma acidophilum whose size and subunit composition - analyzed by electron microscopy and MALDI-TOF-MS, respectively - verified the accuracy of our method.

  18. Aerosol mobility imaging for rapid size distribution measurements

    Science.gov (United States)

    Wang, Jian; Hering, Susanne Vera; Spielman, Steven Russel; Kuang, Chongai

    2016-07-19

    A parallel plate dimensional electrical mobility separator and laminar flow water condensation provide rapid, mobility-based particle sizing at concentrations typical of the remote atmosphere. Particles are separated spatially within the electrical mobility separator, enlarged through water condensation, and imaged onto a CCD array. The mobility separation distributes particles in accordance with their size. The condensation enlarges size-separated particles by water condensation while they are still within the gap of the mobility drift tube. Once enlarged the particles are illuminated by a laser. At a pre-selected frequency, typically 10 Hz, the position of all of the individual particles illuminated by the laser are captured by CCD camera. This instantly records the particle number concentration at each position. Because the position is directly related to the particle size (or mobility), the particle size spectra is derived from the images recorded by the CCD.

  19. Quantifying the PAH Size Distribution in H II-Regions

    Science.gov (United States)

    Allamandola, Louis

    We propose to determine the astronomical PAH size distribution for 20 compact H II-regions from the ISO H II-regions spectroscopic archive (catalog). The selected sample includes H II-regions at a range of distances, all with angular sizes captured by the ISO aperture. This is the first time that the PAH size distribution will be put on an accurate, quantitative footing and that a breakdown of the overall PAH population into different size bins is possible. Since the PAH properties that influence the astronomical environment are PAH-size dependent, this new knowledge will provide a deeper understanding of the specific, and sometimes critical, roles that PAHs play in different astronomical environments. This research will be carried out using the PAH spectra and tools that are available through the NASA Ames PAH IR Spectroscopic Database (www.astrochemistry.org/pahdb/). The ISO compact H II-regions spectroscopic catalog contains the 2.3 - 196 µm spectra from some 45 H II-regions. Of these, 20 capture the PAH spectrum with high enough quality between 2.5 - 15 µm to carry out the proposed work. From the outset of the PAH hypothesis it has been thought that the 3.3/11.2 µm PAH band strength ratio is a qualitative proxy for PAH size and a rough measure of variations in the astronomical PAH size distribution between objects or within extended objects. However, because of the intrinsic uncertainties for most of the observational data available for these two bands, and the very limited spectroscopic data available for PAHs representative of the astronomical PAH population, only very crude estimates of the astronomical PAH size distribution have been possible up to now. The work proposed here overcomes these two limitations, allowing astronomers to quantitatively and accurately determine the astronomical PAH size distribution for the first time. The spectra and tools from the NASA Ames PAH IR Spectroscopic Database will be used to determine the astronomical PAH size

  20. Selecting series size where the generalized Pareto distribution best fits

    Science.gov (United States)

    Ben-Zvi, Arie

    2016-10-01

    Rates of arrival and magnitudes of hydrologic variables are frequently described by the Poisson and the generalized Pareto (GP) distributions. Variations of their goodness-of-fit to nested series are studied here. The variable employed is depth of rainfall events at five stations of the Israel Meteorological Service. Series sizes range from about 50 (number of years on records) to about 1000 (total number of recorded events). The goodness-of-fit is assessed by the Anderson-Darling test. Three versions of this test are applied here. These are the regular two-sided test (of which the statistic is designated here by A2), the upper one-sided test (UA2) and the adaptation to the Poisson distribution (PA2). Very good fits, with rejection significance levels higher than 0.5 for A2 and higher than 0.25 for PA2, are found for many series of different sizes. Values of the shape parameter of the GP distribution and of the predicted rainfall depths widely vary with series size. Small coefficients of variation are found, at each station, for the 100-year rainfall depths, predicted through the series with very good fit of the GP distribution. Therefore, predictions through series of very good fit appear more consistent than through other selections of series size. Variations of UA2, with series size, are found narrower than those of A2. Therefore, it is advisable to predict through the series of low UA2. Very good fits of the Poisson distribution to arrival rates are found for series with low UA2. But, a reversed relation is not found here. Thus, the model of Poissonian arrival rates and GP distribution of magnitudes suits here series with low UA2. It is recommended to predict through the series, to which the lowest UA2 is obtained.
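
    The selection idea can be sketched as follows: fit a generalized Pareto distribution to nested partial-duration series of increasing size and compute an Anderson-Darling-type statistic for each, then prefer the series size with the lowest statistic. The statistic below is the standard two-sided A2 computed against the fitted CDF; the paper's one-sided variant, Poisson adaptation, and significance levels are not reproduced, and the rainfall depths are synthetic.

```python
import numpy as np
from scipy import stats

# Synthetic record of event rainfall depths (mm) with a GP-like upper tail.
depths = stats.genpareto.rvs(c=0.1, loc=10.0, scale=15.0, size=1000,
                             random_state=42)

def anderson_darling(x, cdf):
    """Two-sided A^2 statistic for a fully specified CDF."""
    x = np.sort(x)
    n = len(x)
    u = np.clip(cdf(x), 1e-10, 1 - 1e-10)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

best = None
for n_largest in (50, 100, 200, 400, 800):
    series = np.sort(depths)[-n_largest:]
    threshold = series[0]
    c, loc, scale = stats.genpareto.fit(series - threshold, floc=0)
    a2 = anderson_darling(series - threshold,
                          lambda x: stats.genpareto.cdf(x, c, loc, scale))
    print(f"n = {n_largest:4d}  A2 = {a2:6.3f}  shape = {c:+.2f}")
    if best is None or a2 < best[1]:
        best = (n_largest, a2)

print("selected series size:", best[0])
```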

  1. Landslides triggered by the 12 January 2010 Mw 7.0 Port-au-Prince, Haiti, earthquake: visual interpretation, inventory compiling and spatial distribution statistical analysis

    Directory of Open Access Journals (Sweden)

    C. Xu

    2014-02-01

    Full Text Available The 12 January 2010 Port-au-Prince, Haiti, earthquake (Mw 7.0) triggered tens of thousands of landslides. The purpose of this study is to investigate the correlations of the occurrence of landslides and their erosion thicknesses with topographic factors, seismic parameters, and their distance from roads. A total of 30 828 landslides triggered by the earthquake covered a total area of 15.736 km2, distributed over an area of more than 3000 km2, and the volume of landslide accumulation materials is estimated to be about 29 700 000 m3. These landslides are of various types, mostly shallow disrupted landslides and rock falls, but they also include coherent deep-seated landslides and rock slides. These landslides were delineated using pre- and post-earthquake high-resolution satellite images. Spatial distribution maps and contour maps of landslide number density, landslide area percentage, and landslide erosion thickness were constructed in order to analyze the spatial distribution patterns of co-seismic landslides. Statistics of size distribution and morphometric parameters of co-seismic landslides were carried out and were compared with other earthquake events in the world. Four proxies of co-seismic landslide abundance, including landslide centroid number density (LCND), landslide top number density (LTND), landslide area percentage (LAP), and landslide erosion thickness (LET), were used to correlate co-seismic landslides with various landslide controlling parameters. These controlling parameters include elevation, slope angle, slope aspect, slope curvature, topographic position, distance from drainages, lithology, distance from the epicenter, distance from the Enriquillo–Plantain Garden fault, distance along the fault, and peak ground acceleration (PGA). A comparison of these impact parameters on co-seismic landslides shows that slope angle is the strongest impact parameter on co-seismic landslide occurrence. Our co-seismic landslide inventory is

  2. Global abundance and size distribution of streams and rivers

    NARCIS (Netherlands)

    Downing, J.A.; Cole, J.J.; Duarte, C.M.; Middelburg, J.J.; Melack, J.M.; Prairie, Y.T.; Kortelainen, P.; Striegl, R.G.; McDowell, W.H.; Tranvik, L.J.

    2012-01-01

    To better integrate lotic ecosystems into global cycles and budgets, we provide approximations of the size-distribution and areal extent of streams and rivers. One approach we used was to employ stream network theory combined with data on stream width. We also used detailed stream networks on 2 cont

  3. Comparison of aerosol size distribution in coastal and oceanic environments

    NARCIS (Netherlands)

    Kusmierczyk-Michulec, J.T.; Eijk, A.M.J. van

    2006-01-01

    The results of applying the empirical orthogonal functions (EOF) method to decomposition and approximation of aerosol size distributions are presented. A comparison was made for two aerosol data sets, representing coastal and oceanic environments. The first data set includes measurements collected a

  4. Casein Micelles: Size Distribution in Milks from Individual Cows

    NARCIS (Netherlands)

    de Kruif, C.G.; Huppertz, T.

    2012-01-01

    The size distribution and protein composition of casein micelles in the milk of Holstein-Friesian cows was determined as a function of stage and number of lactations. Protein composition did not vary significantly between the milks of different cows or as a function of lactation stage. Differences i

  5. Global abundance and size distribution of streams and rivers

    NARCIS (Netherlands)

    Downing, J.A.; Cole, J.J.; Duarte, C.M.; Middelburg, J.J.; Melack, J.M.; Prairie, Y.T.; Kortelainen, P.; Striegl, R.G.; McDowell, W.H.; Tranvik, L.J.

    2012-01-01

    To better integrate lotic ecosystems into global cycles and budgets, we provide approximations of the size-distribution and areal extent of streams and rivers. One approach we used was to employ stream network theory combined with data on stream width. We also used detailed stream networks on 2

  6. Effects of Mixtures on Liquid and Solid Fragment Size Distributions

    Science.gov (United States)

    2016-05-01

  7. Modeling of Microporosity Size Distribution in Aluminum Alloy A356

    Science.gov (United States)

    Yao, Lu; Cockcroft, Steve; Zhu, Jindong; Reilly, Carl

    2011-12-01

    Porosity is one of the most common defects to degrade the mechanical properties of aluminum alloys. Prediction of pore size, therefore, is critical to optimize the quality of castings. Moreover, to the design engineer, knowledge of the inherent pore population in a casting is essential to avoid potential fatigue failure of the component. In this work, the size distribution of the porosity was modeled based on the assumptions that the hydrogen pores are nucleated heterogeneously and that the nucleation site distribution is a Gaussian function of hydrogen supersaturation in the melt. The pore growth is simulated as a hydrogen-diffusion-controlled process, which is driven by the hydrogen concentration gradient at the pore liquid interface. Directionally solidified A356 (Al-7Si-0.3Mg) alloy castings were used to evaluate the predictive capability of the proposed model. The cast pore volume fraction and size distributions were measured using X-ray microtomography (XMT). Comparison of the experimental and simulation results showed that good agreement could be obtained in terms of both porosity fraction and size distribution. The model can effectively evaluate the effect of hydrogen content, heterogeneous pore nucleation population, cooling conditions, and degassing time on microporosity formation.

  8. Collisional processes and size distribution in spatially extended debris discs

    CERN Document Server

    Thebault, Philippe

    2007-01-01

    We present a new multi-annulus code for the study of collisionally evolving extended debris discs. We first aim to confirm results obtained for a single-annulus system, namely that the size distribution in "real" debris discs always departs from the theoretical collisional equilibrium $dN \propto R^{-3.5}\,dR$ power law, especially in the crucial size range of observable particles (<1 cm), where it displays a characteristic wavy pattern. We also aim at studying how debris disc density distributions, scattered light luminosity profiles, and SEDs are affected by the coupled effect of collisions and radial mixing due to radiation pressure-affected small grains. The size distribution evolution is modeled from micron-sized grains to 50 km-sized bodies. The model takes into account the crucial influence of radiation pressure-affected small grains. We consider the collisional evolution of a fiducial a = 120 AU radius disc with an initial surface density $\Sigma(a) \propto a^{\alpha}$. We show that the system's radial e...

  9. The Detection and Measurement of the Activity Size Distributions

    Science.gov (United States)

    Ramamurthi, Mukund

    The infiltration of radon into the indoor environment may cause the exposure of the public to excessive amounts of radioactivity and has spurred renewed research interest over the past several years into the occurrence and properties of radon and its decay products in indoor air. The public health risks posed by the inhalation and subsequent lung deposition of the decay products of Rn-222 have particularly warranted the study of their diffusivity and attachment to molecular cluster aerosols in the ultrafine particle size range (0.5-5 nm) and to accumulation mode aerosols. In this research, a system for the detection and measurement of the activity size distributions and concentration levels of radon decay products in indoor environments has been developed. The system is microcomputer-controlled and involves a combination of multiple wire screen sampler -detector units operated in parallel. The detection of the radioactivity attached to the aerosol sampled in these units permits the determination of the radon daughter activity -weighted size distributions and concentration levels in indoor air on a semi-continuous basis. The development of the system involved the design of the detection and measurement system, its experimental characterization and testing in a radon-aerosol chamber, and numerical studies for the optimization of the design and operating parameters of the system. Several concepts of utility to aerosol size distribution measurement methods sampling the ultrafine cluster size range evolved from this study, and are discussed in various chapters of this dissertation. The optimized multiple wire screen (Graded Screen Array) system described in this dissertation is based on these concepts. The principal facet of the system is its ability to make unattended measurements of activity size distributions and concentration levels of radon decay products on a semi-continuous basis. Thus, the capability of monitoring changes in the activity concentrations and size

  10. Remnant lipoprotein size distribution profiling via dynamic light scattering analysis.

    Science.gov (United States)

    Chandra, Richa; Mellis, Birgit; Garza, Kyana; Hameed, Samee A; Jurica, James M; Hernandez, Ana V; Nguyen, Mia N; Mittal, Chandra K

    2016-11-01

    Remnant lipoproteins (RLP) are a metabolically derived subpopulation of triglyceride-rich lipoproteins (TRL) in human blood that are involved in the metabolism of dietary fats or triglycerides. RLP, the smaller and denser variants of TRL particles, are strongly correlated with cardiovascular disease (CVD) and were listed as an emerging atherogenic risk factor by the AHA in 2001. Varying analytical techniques used in clinical studies in the size determination of RLP contribute to conflicting hypotheses in regard to whether larger or smaller RLP particles contribute to CVD progression, though multiple pathways may exist. We demonstrated a unique combinatorial bioanalytical approach involving the preparative immunoseparation of RLP, and dynamic light scattering for size distribution analysis. This is a new facile and robust methodology for the size distribution analysis of RLP that in conjunction with clinical studies may reveal the mechanisms by which RLP cause CVD progression.

  11. Size distribution and structure of Barchan dune fields

    Directory of Open Access Journals (Sweden)

    O. Durán

    2011-07-01

    Full Text Available Barchans are isolated mobile dunes often organized in large dune fields. Dune fields seem to present a characteristic dune size and spacing, which suggests a cooperative behavior based on dune interaction. In Duran et al. (2009), we propose that the redistribution of sand by collisions between dunes is a key element for the stability and size selection of barchan dune fields. This approach was based on a mean-field model ignoring the spatial distribution of dune fields. Here, we present a simplified dune field model that includes the spatial evolution of individual dunes as well as their interaction through sand exchange and binary collisions. As a result, the dune field evolves towards a steady state that depends on the boundary conditions. Comparing our results with measurements of Moroccan dune fields, we find that the simulated fields have the same dune size distribution as in real fields but fail to reproduce their homogeneity along the wind direction.

  12. Casein micelles: size distribution in milks from individual cows.

    Science.gov (United States)

    de Kruif, C G Kees; Huppertz, Thom

    2012-05-09

    The size distribution and protein composition of casein micelles in the milk of Holstein-Friesian cows was determined as a function of stage and number of lactations. Protein composition did not vary significantly between the milks of different cows or as a function of lactation stage. Differences in the size and polydispersity of the casein micelles were observed between the milks of different cows, but not as a function of stage of milking or stage of lactation and not even over successive lactations periods. Modal radii varied from 55 to 70 nm, whereas hydrodynamic radii at a scattering angle of 73° (Q² = 350 μm⁻²) varied from 77 to 115 nm and polydispersity varied from 0.27 to 0.41, in a log-normal distribution. Casein micelle size in the milks of individual cows was not correlated with age, milk production, or lactation stage of the cows or fat or protein content of the milk.

  13. Size distribution and structure of Barchan dune fields

    DEFF Research Database (Denmark)

    Duran, O.; Schwämmle, Veit; Lind, P. G.;

    2011-01-01

    Barchans are isolated mobile dunes often organized in large dune fields. Dune fields seem to present a characteristic dune size and spacing, which suggests a co-operative behavior based on dune interaction. In Duran et al. (2009), we propose that the redistribution of sand by collisions between...... dunes is a key element for the stability and size selection of barchan dune fields. This approach was based on a mean-field model ignoring the spatial distribution of dune fields. Here, we present a simplified dune field model that includes the spatial evolution of individual dunes as well...... as their interaction through sand exchange and binary collisions. As a result, the dune field evolves towards a steady state that depends on the boundary conditions. Comparing our results with measurements of Moroccan dune fields, we find that the simulated fields have the same dune size distribution as in real fields...

  14. Thoron progeny size distribution in monazite storage facility.

    Science.gov (United States)

    Rogozina, Marina; Zhukovsky, Michael; Ekidin, Aleksey; Vasyanovich, Maksim

    2014-11-01

    Field experiments were conducted in the atmosphere of monazite warehouses with a high (220)Rn progeny concentration. The size distribution of aerosol particles was measured with the combined use of a diffusion battery with varied capture elements and a cascade impactor. Four (212)Pb aerosol modes were detected: three in the ultrafine region (aerosol median thermodynamic diameters ∼0.3, 1 and 5 nm) and one with an aerosol median aerodynamic diameter of 500 nm. The activity fraction of aerosol particles with sizes <10 nm is nearly 20-25%. The dose conversion factor for EEC₂₂₀Rn exposure, obtained on the basis of the aerosol size distribution and existing research data on lung absorption types of (212)Pb aerosols, is close to 180 nSv per Bq h m−3.

  15. Thresholded Power Law Size Distributions of Instabilities in Astrophysics

    CERN Document Server

    Aschwanden, Markus J

    2015-01-01

    Power law-like size distributions are ubiquitous in astrophysical instabilities. There are at least four natural effects that cause deviations from ideal power law size distributions, which we model here in a generalized way: (1) a physical threshold of an instability; (2) incomplete sampling of the smallest events below a threshold $x_0$; (3) contamination by an event-unrelated background $x_b$; and (4) truncation effects at the largest events due to a finite system size. These effects can be modeled in simplest terms with a "thresholded power law" distribution function (also called generalized Pareto [type II] or Lomax distribution), $N(x)\,dx \propto (x+x_0)^{-a}\,dx$, where $x_0 > 0$ is positive for a threshold effect, while $x_0 < 0$ is negative for background contamination. We analytically derive the functional shape of this thresholded power law distribution function from an exponential-growth evolution model, which produces avalanches only when a disturbance exceeds a critical threshold $x_0$. We app...
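    The "thresholded power law" named above is, up to normalization, the Lomax (Pareto type II) density, so a hedged sketch of sampling and refitting it can be written directly with SciPy; the exponent and threshold values below are illustrative assumptions, not values from the paper.

```python
# Sketch: the thresholded power law N(x) ~ (x + x0)^(-a) corresponds to a Lomax
# density with shape c = a - 1 and scale x0.  Parameter values are illustrative.
import numpy as np
from scipy import stats

a, x0 = 1.8, 5.0                       # assumed power-law exponent and threshold
c = a - 1.0                            # Lomax shape: pdf ~ (x + x0)^-(c+1) = (x + x0)^-a
sizes = stats.lomax.rvs(c=c, scale=x0, size=5000, random_state=1)

# Maximum-likelihood recovery of the exponent and threshold (location fixed at 0).
c_hat, loc_hat, scale_hat = stats.lomax.fit(sizes, floc=0.0)
print(f"fitted exponent a = {c_hat + 1:.2f}, threshold x0 = {scale_hat:.2f}")
```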

  16. An Inhomogeneous Distribution Model of Strong Earthquakes along Strike-Slip Active Fault Segments on the Chinese Continent and Its Implication in Engineering Seismology

    Institute of Scientific and Technical Information of China (English)

    Zhou Bengang; Ran Hongliu; Song Xinchu; Zhou Qin

    2004-01-01

    Through the statistical analysis of earthquake distribution along 51 strike-slip active fault segments on the Chinese continent, we found that the distribution of strong earthquakes along seismogenic fault segments is inhomogeneous and that the distribution probability density can be stated as p(K) = 1.1206 exp(-3.947 K^2), in which K = S/(L/2), S is the distance from the earthquake epicenter to the center of a fault segment, and L is the length of the fault segment. The above model can be utilized to modify the probability density of earthquake occurrence of the maximum magnitude interval in a potential earthquake source. Nevertheless, it is only suitable for potential earthquake sources delineated along a single seismogenic fault. This inhomogeneous model has certain effects on seismic risk assessment, especially for potential earthquake sources with higher recurrence rates of the maximum magnitude interval. In general, a higher recurrence rate of the maximum magnitude interval and a lower exceedance probability level produce larger differences in the results of seismic risk analysis when the inhomogeneous model is adopted: the PGA values increase within the potential earthquake source but decrease in the vicinity of and outside the potential earthquake source. Taking the Tangyin potential earthquake source as an example, with exceedance probabilities of 10% and 2% in 50 years, the difference in PGA values between the inhomogeneous and homogeneous models can reach 12%.
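    A small sketch evaluating the reported density is given below; the numerical constants are taken from the abstract, while the function name and the example segment length are ours.

```python
# Sketch: evaluate the along-fault probability density reported in the abstract,
# p(K) = 1.1206 * exp(-3.947 * K^2) with K = S / (L/2).  Function name is ours.
import numpy as np

def along_fault_density(S_km, L_km):
    """Probability density for an epicenter at distance S from the segment center."""
    K = S_km / (L_km / 2.0)
    return 1.1206 * np.exp(-3.947 * K**2)

# Example: a 100 km long segment; density at the center vs. near its end.
print(along_fault_density(0.0, 100.0), along_fault_density(45.0, 100.0))
```

    As a quick sanity check, numerically integrating p(K) over K between -1 and 1 gives a value close to 1, consistent with a normalized density over the fault segment.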

  17. Packing fraction of particles with a Weibull size distribution

    Science.gov (United States)

    Brouwers, H. J. H.

    2016-07-01

    This paper addresses the void fraction of polydisperse particles with a Weibull (or Rosin-Rammler) size distribution. It is demonstrated that the governing parameters of this distribution can be uniquely related to those of the lognormal distribution. Hence, an existing closed-form expression that predicts the void fraction of particles with a lognormal size distribution can be transformed into an expression for Weibull distributions. Both expressions contain the contraction coefficient β. Like the monosized void fraction φ1, it is a physical parameter that depends only on the particles' shape and their state of compaction. Based on a consideration of the scaled binary void contraction, a linear relation for (1 - φ1)β as a function of φ1 is proposed, with proportionality constant B, depending on the state of compaction only. This is validated using computational and experimental packing data concerning random close and random loose packing arrangements. Finally, using this β, the closed-form analytical expression governing the void fraction of Weibull distributions is thoroughly compared with empirical data reported in the literature, and good agreement is found. Furthermore, the present analysis yields an algebraic equation relating the void fraction of monosized particles at different compaction states. This expression appears to be in good agreement with a broad collection of random close and random loose packing data.
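    The abstract states that the Weibull (Rosin-Rammler) parameters can be uniquely related to lognormal ones. One simple way to construct such a mapping, shown below as an assumption rather than the paper's actual derivation, is to match the mean and variance of the two distributions.

```python
# Sketch: map a Weibull (Rosin-Rammler) size distribution onto an "equivalent"
# lognormal by matching mean and variance.  This moment-matching rule is our
# assumption; the paper's exact mapping may differ.
import math

def weibull_to_lognormal(k, lam):
    """Return (mu, sigma) of a lognormal with the same mean and variance
    as a Weibull with shape k and scale lam."""
    mean = lam * math.gamma(1.0 + 1.0 / k)
    var = lam**2 * (math.gamma(1.0 + 2.0 / k) - math.gamma(1.0 + 1.0 / k) ** 2)
    sigma2 = math.log(1.0 + var / mean**2)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

print(weibull_to_lognormal(k=2.0, lam=100.0))  # e.g. a mid-range powder (illustrative)
```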

  18. Raindrop size distribution: Fitting performance of common theoretical models

    Science.gov (United States)

    Adirosi, E.; Volpi, E.; Lombardo, F.; Baldini, L.

    2016-10-01

    Modelling the raindrop size distribution (DSD) is a fundamental issue in connecting remote sensing observations with reliable precipitation products for hydrological applications. To date, various standard probability distributions have been proposed to build DSD models. Relevant questions to ask are how often and how well such models fit empirical data, given that advances in both data availability and the technology used to estimate DSDs have allowed many of the deficiencies of early analyses to be mitigated. Therefore, we present a comprehensive follow-up of a previous study on the comparison of statistical fitting of three common DSD models against 2D-Video Distrometer (2DVD) data, which are unique in that the size of individual drops is determined accurately. Using the maximum likelihood method, we fit models based on the lognormal, gamma and Weibull distributions to more than 42,000 1-minute drop-by-drop records taken from the field campaigns of the NASA Ground Validation program of the Global Precipitation Measurement (GPM) mission. To check the agreement between the models and the measured data, we investigate the goodness of fit of each distribution using the Kolmogorov-Smirnov test. Then, we apply a specific model selection technique to evaluate the relative quality of each model. Results show that the gamma distribution has the lowest KS rejection rate, while the Weibull distribution is the most frequently rejected. Ranking for each minute the statistical models that pass the KS test, it can be argued that probability distributions whose tails are exponentially bounded, i.e. light-tailed distributions, seem to be adequate to model the natural variability of DSDs. However, in line with our previous study, we also found that frequency distributions of empirical DSDs could be heavy-tailed in a number of cases, which may result in severe uncertainty in estimating statistical moments and bulk variables.
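    A hedged sketch of the per-minute fitting procedure described above (maximum likelihood fits of lognormal, gamma and Weibull models followed by a Kolmogorov-Smirnov check); the drop-diameter sample, the zero location pinned during the fit, and the 5% level are illustrative assumptions.

```python
# Sketch: fit lognormal, gamma and Weibull models to one minute of drop
# diameters by maximum likelihood and check each with a Kolmogorov-Smirnov test.
# The diameter sample is synthetic.
import numpy as np
from scipy import stats

diam_mm = stats.gamma.rvs(a=3.0, scale=0.5, size=400, random_state=7)  # synthetic 1-min sample

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(diam_mm, floc=0.0)               # ML fit, location pinned at zero
    ks_stat, p_value = stats.kstest(diam_mm, dist.cdf, args=params)
    verdict = "pass" if p_value > 0.05 else "reject"
    print(f"{name:10s} KS = {ks_stat:.3f}  p = {p_value:.3f} -> {verdict}")
```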

  19. Particle size distributions in the Eastern Mediterranean troposphere

    Directory of Open Access Journals (Sweden)

    N. Kalivitis

    2008-04-01

    Full Text Available Atmospheric particle size distributions were measured on Crete island, Greece, in the Eastern Mediterranean during an intensive field campaign between 28 August and 20 October 2005. Our instrumentation combined a differential mobility particle sizer (DMPS) and an aerodynamic particle sizer (APS) and measured number size distributions in the size range 0.018 μm–10 μm. Four time periods with distinct aerosol characteristics were discriminated, two corresponding to marine and polluted air masses, respectively. In marine air, the sub-μm size distributions showed two particle modes centered at 67 nm and 195 nm having total number concentrations between 900 and 2000 cm−3. In polluted air masses, the size distributions were mainly unimodal with a mode typically centered at 140 nm, with number concentrations varying between 1800 and 2900 cm−3. Super-μm particles showed number concentrations in the range from 0.01 to 2.5 cm−3 without any clear relation to air mass origin. A small number of short-lived particle nucleation events were recorded, where the calculated particle formation rates ranged between 1.1–1.7 cm−3 s−1. However, no particle nucleation and growth events comparable to those typical for the continental boundary layer were observed. Particle concentrations (diameter <50 nm) were low compared to continental boundary layer conditions, with an average concentration of 300 cm−3. The production of sulfuric acid and its subsequent condensation on preexisting particles was examined with the use of a simplistic box model. These calculations suggested that the day-time evolution of the Aitken particle population was governed mainly by coagulation and that particle formation was absent during most days.

  20. Particle size distributions in the Eastern Mediterranean troposphere

    Directory of Open Access Journals (Sweden)

    N. Kalivitis

    2008-11-01

    Full Text Available Atmospheric particle size distributions were measured on Crete island, Greece, in the Eastern Mediterranean during an intensive field campaign between 28 August and 20 October 2005. Our instrumentation combined a differential mobility particle sizer (DMPS) and an aerodynamic particle sizer (APS) and measured number size distributions in the size range 0.018 μm–10 μm. Four time periods with distinct aerosol characteristics were discriminated, two corresponding to marine and polluted air masses, respectively. In marine air, the sub-μm size distributions showed two particle modes centered at 67 nm and 195 nm having total number concentrations between 900 and 2000 cm−3. In polluted air masses, the size distributions were mainly unimodal with a mode typically centered at 140 nm, with number concentrations varying between 1800 and 2900 cm−3. Super-μm particles showed number concentrations in the range from 0.01 to 2.5 cm−3 without any clear relation to air mass origin. A small number of short-lived particle nucleation events were recorded, where the calculated particle formation rates ranged between 1.1–1.7 cm−3 s−1. However, no particle nucleation and growth events comparable to those typical for the continental boundary layer were observed. Particle concentrations (diameter <50 nm) were low compared to continental boundary layer conditions, with an average concentration of 300 cm−3. The production of sulfuric acid and its subsequent condensation on preexisting particles was examined with the use of a simplistic box model. These calculations suggested that the day-time evolution of the Aitken particle population was governed mainly by coagulation and that particle formation was absent during most days.

  1. Tectonics Earthquake Distribution Pattern Analysis Based Focal Mechanisms (Case Study Sulawesi Island, 1993–2012)

    OpenAIRE

    Ismullah M, Muh.Fawzy; Lantu; Aswad, Sabrianto; MASSINAI, MUH.ALTIN

    2015-01-01

    Indonesia is the meeting zone of three main world plates: the Eurasian Plate, the Pacific Plate, and the Indo–Australian Plate. Therefore, Indonesia has a high degree of seismicity, and Sulawesi is one of the regions with a high seismicity level. Earthquake centres lie in fault zones, so earthquake data give a tectonic visualization of a given place. The purpose of this research is to identify a tectonic model of Sulawesi by using earthquake data from 1993 to 2012. Data used in this research is the earthquake...

  2. Late Quaternary paleoseismic sedimentary archive from deep central Gulf of Corinth: time distribution of inferred earthquake-induced layers

    Directory of Open Access Journals (Sweden)

    Corina Campos

    2014-02-01

    Full Text Available A sedimentary archive corresponding to the last 17 cal kyr BP has been studied by means of a giant piston core retrieved on board R/V MARION-DUFRESNE in the North Central Gulf of Corinth. Based on previous methodological improvements, grain-size distribution and Magnetic Susceptibility Anisotropy (MSA) have been analysed in order to detect earthquake-induced deposits. We identified 36 specific layers - Homogenites+Turbidites (HmTu) - intercalated within continuous hemipelagic-type sediments (biogenic or bio-induced fraction and fine-grained siliciclastic fraction). The whole succession is divided into a non-marine lower half and a marine upper half. The "events" are distributed through the entire core and they are composed of two terms, sharply separated: a coarse-grained lower term and an upper homogeneous fine-grained term. Their average time recurrence interval could be estimated for the entire MD01-2477 core. The non-marine and the marine sections yielded close estimated values for event recurrence times of around 400 yrs to 500 yrs.

  3. Rock sampling. [method for controlling particle size distribution

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  4. MOLECULAR THERMODYNAMICS OF MICELLIZATION: MICELLE SIZE DISTRIBUTIONS AND GEOMETRY TRANSITIONS

    Directory of Open Access Journals (Sweden)

    M. S. Santos

    Full Text Available Abstract Surfactants are amphiphilic molecules that can spontaneously self-assemble in solution, forming structures known as micelles. Variations in temperature, pH, and electrolyte concentration imply changes in the interactions between surfactants and micelle stability conditions, including micelle size distribution and micelle shape. Here, molecular thermodynamics is used to describe and predict conditions of micelle formation in surfactant solutions by directly calculating the minimum Gibbs free energy of the system, corresponding to the most stable condition of the surfactant solution. In order to find it, the proposed methodology takes into account the micelle size distribution and two possible geometries (spherical and spherocylindrical. We propose a numerical optimization methodology where the minimum free energy can be reached faster and in a more reliable way. The proposed models predict the critical micelle concentration well when compared to experimental data, and also predict the effect of salt on micelle geometry transitions.

  5. Size distribution of FeNiB nanoparticles

    Directory of Open Access Journals (Sweden)

    Lackner P.

    2014-07-01

    Full Text Available Two samples of amorphous FeNiB nanoparticles, one with a SiO2 sheath around the core and one without, were investigated by transmission electron microscopy and magnetic measurements. The coating gives mean particle diameters of 4.3 nm compared to 7.2 nm for the uncoated particles. Magnetic measurements prove superparamagnetic behaviour above 160 K (350 K) for the coated (uncoated) sample. With use of the effective anisotropy constant Keff – determined from hysteresis loops – size distributions are determined both from ZFC curves and from relaxation measurements. Both are in good agreement and are very similar for both samples. Comparison with the size distribution determined from TEM pictures shows that the magnetic clusters consist of only a few physical particles.

  6. Global Correlation between the Size of Subduction Earthquakes and the Magnitude of Crustal Normal Fault Aftershocks in the Forearc

    Science.gov (United States)

    Aron, F.; Allmendinger, R. W.; Jensen Siles, E.

    2013-12-01

    subduction event. Given the relatively large magnitude and shallow depth of these triggered earthquakes, understanding their behavior in the context of the subduction seismic cycle becomes important for seismic hazards evaluation. In general, the Mw 7.0 crustal events in both Chile and Japan struck in sparsely populated areas with relatively good building codes and basic infrastructure, though there was a triggered normal fault with surface rupture just 60 km south of the Fukushima nuclear plant. However, as population increases with concomitant land use and development, large crustal aftershocks pose a significant hazard to critical infrastructure. This documented correlation between the size of the main shock and that of the intraplate aftershocks, along with field studies of these faults, suggests that the forearc structures should be incorporated in any seismic hazard assessment of subduction zone regions.

  7. Spatial distribution features of sequence types of moderate and strong earthquake in Chinese mainland

    Institute of Scientific and Technical Information of China (English)

    JIANG Hai-kun; LI Yong-li; QU Yan-jun; HUA Ai-jun; ZHENG Jian-chang; DAI Lei; HOU Hai-feng

    2006-01-01

    Based on 294 earthquake sequences with magnitude greater than or equal to 5.0 that have occurred in the Chinese mainland since 1970, the spatial distribution features of sequence types have been studied. In southwestern China, the mainshock-aftershock type (MAT) dominates in the Chuan-Dian rhombic block and the associated Xianshuihe-Anninghe-Xiaojiang seismic belt, as well as in the Jinshajiang-Honghe seismic belt. The multiple mainshock type (MMT) is mainly distributed in western Yunnan and in the Longlin and Lancang areas of the Tengchong-Baoshan block west of the Nujiang-Lancangjiang fault zone. A few sequences of the isolated earthquake type (IET) occurred in northwestern Sichuan, and no IET occurred in the Yunnan region. In northwestern China, MAT dominates in the western segment of the South Tianshan in the Xinjiang region; some MMT sequences also occurred in this area, at the intersection of the Kalpin block and the Puchang fault zone. IET dominates in the middle Tianshan in Xinjiang. Along the Qilianshan seismic belt, most sequences are MAT. In the Qinghai region MAT dominates, but the regional pattern of the spatial distribution of sequence types is not very clear. In North China, MAT dominates in the Yinshan-Yanshan-Bohai seismic belt on the northern edge of North China, in the Hebei plain seismic belt, and in the sub-plate of the lower reaches of the Yangtze River. At the intersection of the northern segment of the Shanxi seismic belt and the NW-trending Yinshan-Yanshan-Bohai seismic belt, several moderate or strong MMT sequences with magnitudes from 5.0 to 6.0 occurred. In the southern part of North China, around the latitude of 35°N, IET dominates. The spatial distribution of sequence types is relevant to the patterns of tectonic movements. MAT is mostly produced by the rupture of locked units or asperities, or of newly formed separating segments inside fault zones. MMT is generally relevant to conjugate structures or the intersection of many tectonic settings

  8. Energy conservation potential of Portland Cement particle size distribution control

    Energy Technology Data Exchange (ETDEWEB)

    Tresouthick, S.W.

    1985-01-01

    The main objective of Phase 3 is to develop practical economic methods of controlling the particle size distribution of portland cements using existing or modified mill circuits with the principal aim of reducing electrical energy requirements for cement manufacturing. The work of Phase 3, because of its scope, will be carried out in 10 main tasks, some of which will be handled simultaneously. Progress on each of these tasks is discussed in this paper.

  9. Numerical Shake Prediction for Earthquake Early Warning: More Precise and Rapid Prediction even for Deviated Distribution of Ground Shaking of M6-class Earthquakes

    Science.gov (United States)

    Hoshiba, M.; Ogiso, M.

    2015-12-01

    In many methods of present EEW systems, the hypocenter and magnitude are determined quickly, and the strengths of ground motion are then predicted from the hypocentral distance and magnitude using a ground motion prediction equation (GMPE), which usually leads to the prediction of a concentric distribution. However, actual ground shaking is not always concentric, even when site amplification is corrected for. At a given site, the strength of shaking may differ greatly among earthquakes even when their hypocentral distances and magnitudes are almost the same; in some cases PGA differs by more than a factor of 10, which leads to imprecise prediction in EEW. Recently, the Numerical Shake Prediction method was proposed (Hoshiba and Aoki, 2015), in which the present, ongoing wavefield of ground shaking is estimated using a data assimilation technique, and the future wavefield is then predicted based on the physics of wave propagation. Information on hypocentral location and magnitude is not required in this method. Because the future is predicted from the present condition, it is possible to address the issue of non-concentric distributions: once a deviated distribution is actually observed in the ongoing wavefield, the future distribution is predicted accordingly to be non-concentric. We present examples of M6-class earthquakes that occurred in central Japan, for which the strengths of shaking were observed to be distributed non-concentrically, and show their predictions using the Numerical Shake Prediction method. The deviated distribution may be explained by an inhomogeneous distribution of attenuation. Even without an attenuation structure, it is possible to address the issue of a non-concentric distribution to some extent once the deviated distribution is actually observed in the ongoing wavefield. If an attenuation structure is introduced, we can predict it before actual observation. The information on attenuation structure leads to more precise and rapid prediction with the Numerical Shake Prediction method for EEW.

  10. Power law olivine crystal size distributions in lithospheric mantle xenoliths

    Science.gov (United States)

    Armienti, P.; Tarquini, S.

    2002-12-01

    Olivine crystal size distributions (CSDs) have been measured in three suites of spinel- and garnet-bearing harzburgites and lherzolites found as xenoliths in alkaline basalts from the Canary Islands, Africa; Victoria Land, Antarctica; and Pali Aike, South America. The xenoliths derive from the lithospheric mantle, from depths ranging from 80 to 20 km. Their textures vary from coarse to porphyroclastic and mosaic-porphyroclastic up to cataclastic. Data have been collected by processing digital images acquired optically from standard petrographic thin sections. The acquisition method is based on a high-resolution colour scanner that allows image capturing of a whole thin section. Image processing was performed using the VISILOG 5.2 package, resolving crystals larger than about 150 μm and applying stereological corrections based on the Schwartz-Saltykov algorithm. Taking account of truncation effects due to resolution limits and thin section size, all samples show scale invariance of crystal size distributions over almost three orders of magnitude (0.2-25 mm). Power law relations show fractal dimensions varying between 2.4 and 3.8, a range of values observed for distributions of fragment sizes in a variety of other geological contexts. A fragmentation model can reproduce the fractal dimensions around 2.6, which correspond to well-equilibrated granoblastic textures. Fractal dimensions >3 are typical of porphyroclastic and cataclastic samples. Slight bends in some linear arrays suggest selective tectonic crushing of crystals with size larger than 1 mm. The scale invariance shown by lithospheric mantle xenoliths in a variety of tectonic settings from distant geographic regions indicates that this is a common characteristic of the upper mantle and should be taken into account in rheological models and in the evaluation of metasomatic models.

  11. Spatial damage distribution of August 16, 2003, Inner Mongolia, China, MS=5.9 earthquake and analysis

    Institute of Scientific and Technical Information of China (English)

    GAO Meng-tan; XU Li-sheng; GUO Wen-sheng; WAN Bo; YU Yan-xiang

    2005-01-01

    The spatial damage distribution of the August 16, 2003, Inner Mongolia, China, MS=5.9 earthquake is summarized through field investigation. The moment tensor solution and focal mechanism are inverted using the digital long-period waveform records of the China Digital Seismograph Network (CDSN). The relation between the spatial damage distribution and the focal mechanism is analyzed according to the focal mechanism, the aftershock distribution and the spatial damage distribution. The possible relation between the characteristics of ground motion and the tectonic background of the source region is discussed in terms of the global ground motion records, historical earthquake documents and the damage distribution. The investigation reveals that the meizoseismal region is oriented in the east-west direction, which is consistent with a nodal plane of the focal mechanism inversion. The meizoseismal area is relatively large, and the damage to single-story adobe or masonry houses is more severe. This may be related to the local seismotectonic environment.

  12. Is Earthquake Triggering Driven by Small Earthquakes?

    CERN Document Server

    Helmstetter, A

    2002-01-01

    Using a catalog of seismicity for Southern California, we measure how the number of triggered earthquakes increases with the earthquake magnitude. The trade-off between this scaling and the distribution of earthquake magnitudes controls the relative role of small compared to large earthquakes. We show that seismicity triggering is driven by the smallest earthquakes, which trigger fewer aftershocks than larger earthquakes, but which are much more numerous. We propose that the non-trivial scaling of the number of aftershocks emerges from the fractal spatial distribution of aftershocks.
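    The trade-off described above can be summarized in a short back-of-the-envelope form; the notation below (productivity exponent α, Gutenberg-Richter exponent b) is ours and sketches only the standard argument, not the paper's full analysis.

```latex
% Productivity: aftershocks per mainshock of magnitude m
\[ n(m) \propto 10^{\alpha m} \]
% Gutenberg--Richter: number of potential mainshocks of magnitude m
\[ P(m)\,\mathrm{d}m \propto 10^{-b m}\,\mathrm{d}m \]
% Total triggering contributed by the magnitude band [m, m + dm]
\[ n(m)\,P(m)\,\mathrm{d}m \propto 10^{(\alpha - b) m}\,\mathrm{d}m \]
% For \alpha < b the sum is dominated by its lower limit, i.e. by the smallest
% earthquakes -- the regime reported here for Southern California.
```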

  13. An overview of aerosol particle sensors for size distribution measurement

    Directory of Open Access Journals (Sweden)

    Panich Intra

    2007-08-01

    Full Text Available "Fine aerosol" generally refers to airborne particles with diameters in the submicron or nanometer size range. Measurement capabilities are required to gain an understanding of the dynamics of these particles. One of the most important physical and chemical parameters is the particle size distribution. The aim of this article is to give an overview of recent developments of existing sensors for particle size distribution measurement based on electrical mobility determination. Available instruments for particle size measurement include a scanning mobility particle sizer (SMPS), an electrical aerosol spectrometer (EAS), an engine exhaust particle sizer (EEPS), a bipolar charge aerosol classifier (BCAC), a fast aerosol spectrometer (FAS), a differential mobility spectrometer (DMS), and a CMU electrical mobility spectrometer (EMS). The operating principles, as well as detailed physical characteristics, of these instruments and their main components, consisting of a particle charger, a mobility classifier, and a signal detector, are described. Typical measurements of aerosols from various sources by these instruments, compared with an electrical low pressure impactor (ELPI), are also presented.

  14. Estimation of coal particle size distribution by image segmentation

    Institute of Scientific and Technical Information of China (English)

    Zhang Zelin; Yang Jianguo; Ding Lihua; Zhao Yuemin

    2012-01-01

    Several industrial coal processes are largely determined by the distribution of particle sizes in their feed. Currently these parameters are measured by manual sampling, which is time-consuming and cannot provide real-time feedback for automatic control purposes. In this paper, an approach using image segmentation on images of overlapped coal particles is described. The estimation of the particle size distribution by number is also described. The particle overlap problem was solved using image enhancement algorithms that converted those image parts representing material in lower layers to black. Exponential high-pass filter (EHPF) algorithms were used to remove the texture from particles on the surface. Finally, the edges of the surface particles were identified by morphological edge detection. These algorithms are described in detail, as is the method of extracting the coal particle size. Tests indicate that using more coal images gives a higher accuracy estimate. The positive absolute error of 50 random tests was consistently less than 2.5% and the errors were reduced as the size of the fraction increased.
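    A minimal sketch of the kind of pipeline described above, not the authors' implementation: a Gaussian high-pass step stands in for the exponential high-pass filter (EHPF), lower-layer material is clipped to black, and a morphological gradient marks particle edges. All parameter values are illustrative.

```python
# Sketch of the processing chain described above (not the authors' code).
import numpy as np
from scipy import ndimage

def surface_particle_edges(gray):
    """gray: 2-D float array in [0, 1].  Returns a binary edge map."""
    # High-pass: original minus a heavily smoothed copy (suppresses slow texture).
    highpass = gray - ndimage.gaussian_filter(gray, sigma=8)
    # Darken lower-layer material: clip everything below the median to zero.
    enhanced = np.where(highpass > np.median(highpass), highpass, 0.0)
    # Morphological gradient (dilation - erosion) outlines particle boundaries.
    grad = ndimage.grey_dilation(enhanced, size=3) - ndimage.grey_erosion(enhanced, size=3)
    return grad > grad.mean() + 2 * grad.std()

edges = surface_particle_edges(np.random.rand(256, 256))  # placeholder image
```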

  15. Use of the truncated shifted Pareto distribution in assessing size distribution of oil and gas fields

    Science.gov (United States)

    Houghton, J.C.

    1988-01-01

    The truncated shifted Pareto (TSP) distribution, a variant of the two-parameter Pareto distribution in which one parameter is added to shift the distribution right or left and the right-hand side is truncated, is used to model size distributions of oil and gas fields for resource assessment. Assumptions about limits to the left-hand and right-hand side reduce the number of parameters to two. The TSP distribution has advantages over the more customary lognormal distribution because it has a simple analytic expression, allowing exact computation of several statistics of interest, has a "J-shape," and has more flexibility in the thickness of the right-hand tail. Oil field sizes from the Minnelusa play in the Powder River Basin, Wyoming and Montana, are used as a case study. Probability plotting procedures allow easy visualization of the fit and help the assessment.
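    The exact parametrization used in the paper is not reproduced here; the sketch below assumes a plausible form of the truncated shifted Pareto density (a Pareto tail shifted by s and truncated at x_max) purely for illustration.

```python
# Sketch of a truncated shifted Pareto (TSP) density: a Pareto tail shifted by
# "s" and truncated at x_max, renormalized on [0, x_max].  This parametrization
# is an assumption; the paper's exact form may differ.
import numpy as np

def tsp_pdf(x, alpha, s, x_max):
    x = np.asarray(x, dtype=float)
    unnorm = (x + s) ** (-(alpha + 1.0))
    # Normalizing constant: integral of (x + s)^-(alpha+1) from 0 to x_max.
    Z = (s**-alpha - (x_max + s) ** -alpha) / alpha
    return np.where((x >= 0) & (x <= x_max), unnorm / Z, 0.0)

field_sizes = np.linspace(0, 500, 6)        # illustrative field-size grid
print(tsp_pdf(field_sizes, alpha=0.8, s=5.0, x_max=500.0))
```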

  16. Physically-based modelling of the competition between surface uplift and erosion caused by earthquakes and earthquake sequences.

    Science.gov (United States)

    Hovius, Niels; Marc, Odin; Meunier, Patrick

    2016-04-01

    Large earthquakes deform Earth's surface and drive topographic growth in the frontal zones of mountain belts. They also induce widespread mass wasting, reducing relief. Preliminary studies have proposed that above a critical magnitude an earthquake would induce more erosion than uplift. Other parameters, such as fault geometry or earthquake depth, have not yet been considered. A new seismologically consistent model of earthquake-induced landsliding allows us to explore the importance of parameters such as earthquake depth and landscape steepness. We have compared these eroded-volume predictions with co-seismic surface uplift computed with Okada's deformation theory. We found earthquake depth and landscape steepness to be the most important parameters compared to the fault geometry (dip and rake). In contrast with previous studies, we found that the largest earthquakes will always be constructive and that only intermediate-size earthquakes (Mw ~7) may be destructive. Moreover, for landscapes that are insufficiently steep or earthquake sources that are sufficiently deep, earthquakes are predicted to be always constructive, whatever their magnitude. We have explored the long-term topographic contribution of earthquake sequences, with a Gutenberg-Richter distribution or with a repeating, characteristic earthquake magnitude. In these models, the seismogenic layer thickness, which sets the depth range over which the series of earthquakes is distributed, replaces the individual earthquake source depth. We found that in the case of Gutenberg-Richter behavior, relevant for the Himalayan collision for example, the mass balance could remain negative up to Mw~8 for earthquakes with a sub-optimal uplift contribution (e.g., transpressive or gently-dipping earthquakes). Our results indicate that earthquakes probably have a more ambivalent role in topographic building than previously anticipated, and suggest that some fault systems may not induce average topographic growth over their locked zone during a

  17. Estimation of Slip Distribution of the 2007 Bengkulu Earthquake from GPS Observation Using Least Squares Inversion Method

    Directory of Open Access Journals (Sweden)

    Moehammad Awaluddin

    2012-07-01

    Full Text Available Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the Bengkulu earthquake of September 12, 2007. A maximum horizontal displacement of 2.11 m was observed at the PRKB station, the vertical component at the BSAT station was uplifted by a maximum of 0.73 m, and the vertical component at the LAIS station subsided by -0.97 m. Adding constraints to the inversion of the Bengkulu earthquake slip distribution from GPS observations helps solve a least squares inversion under an under-determined condition. Checkerboard tests were performed to guide the weighting of the constraints on the inversion. The inversion of the Bengkulu earthquake slip distribution yielded an optimum slip distribution with a smoothing-constraint weight of 0.001 and a slip-value constraint of 0 at the edge of the earthquake rupture area. The maximum coseismic slip of the optimal inversion was 5.12 m, in the area beneath the PRKB and BSAT stations. The seismic moment calculated from the optimal slip distribution was 7.14 × 10²¹ Nm, which is equivalent to a magnitude of 8.5.
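    A generic sketch of the kind of smoothed, edge-constrained least squares inversion described above; the Green's function matrix G, the GPS data vector d, the Laplacian smoother, and the weights are all illustrative stand-ins, not the study's actual values.

```python
# Sketch of a smoothed, edge-constrained least-squares slip inversion of the
# general type described above.  All matrices and weights are illustrative.
import numpy as np

def invert_slip(G, d, L, edge_idx, w_smooth=1e-3, w_edge=1e3):
    """Solve min ||G m - d||^2 + w_smooth^2 ||L m||^2 + w_edge^2 ||m[edge]||^2."""
    n = G.shape[1]
    E = np.zeros((len(edge_idx), n))
    E[np.arange(len(edge_idx)), edge_idx] = 1.0           # selects edge patches
    A = np.vstack([G, w_smooth * L, w_edge * E])
    b = np.concatenate([d, np.zeros(L.shape[0] + len(edge_idx))])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m                                              # slip on each fault patch

# Toy numbers only: 20 GPS components, 30 fault patches.
rng = np.random.default_rng(3)
G = rng.normal(size=(20, 30))
d = rng.normal(size=20)
L = -2 * np.eye(30) + np.eye(30, k=1) + np.eye(30, k=-1)  # 1-D Laplacian smoother
slip = invert_slip(G, d, L, edge_idx=[0, 29])
```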

  18. Tohoku earthquake: a surprise?

    CERN Document Server

    Kagan, Yan Y

    2011-01-01

    We consider three issues related to the 2011 Tohoku mega-earthquake: (1) how to evaluate the earthquake maximum size in subduction zones, (2) what is the repeat time for the largest earthquakes in Tohoku area, and (3) what are the possibilities of short-term forecasts during the 2011 sequence. There are two quantitative methods which can be applied to estimate the maximum earthquake size: a statistical analysis of the available earthquake record and the moment conservation principle. The latter technique studies how much of the tectonic deformation rate is released by earthquakes. For the subduction zones, the seismic or historical record is not sufficient to provide a reliable statistical measure of the maximum earthquake. The moment conservation principle yields consistent estimates of maximum earthquake size: for all the subduction zones the magnitude is of the order 9.0--9.7, and for major subduction zones the maximum earthquake size is statistically indistinguishable. Starting in 1999 we have carried out...

  19. Application of flower pollination algorithm for optimal placement and sizing of distributed generation in Distribution systems

    Directory of Open Access Journals (Sweden)

    P. Dinakara Prasad Reddy

    2016-05-01

    Full Text Available Distributed generator (DG) resources are small, self-contained electric generating plants that can provide power to homes, businesses or industrial facilities in distribution feeders. By optimal placement of DG we can reduce power loss and improve the voltage profile. However, the value of DG units depends largely on their type, size and location as they are installed in distribution feeders. The main contribution of the paper is to find the optimal locations and sizes of DG units. The index vector method is used to determine optimal DG locations. In this paper, a new optimization algorithm, the flower pollination algorithm, is proposed to determine the optimal DG size. This paper uses three different types of DG units for compensation. The proposed methods have been tested on 15-bus, 34-bus, and 69-bus radial distribution systems. MATLAB version 8.3 software is used for simulation.

  20. Universal functional form of 1-minute raindrop size distribution?

    Science.gov (United States)

    Cugerone, Katia; De Michele, Carlo

    2015-04-01

    Rainfall remains one of the poorly quantified phenomena of the hydrological cycle, despite its fundamental role. No universal laws describing rainfall behavior are available in the literature. This is probably due to the continuous description of rainfall, which is a discrete phenomenon made of drops. From the statistical point of view, rainfall variability at the particle size scale is described by the drop size distribution (DSD). This term generally indicates either the concentration of raindrops per unit volume and diameter or the probability density function of drop diameter at the ground, according to the specific problem of interest. Raindrops represent the water exchange, in liquid form, between the atmosphere and the Earth's surface, and the number of drops and their sizes have impacts on a wide range of hydrologic, meteorologic, and ecologic phenomena. The DSD is used, for example, to measure multiwavelength rain attenuation for terrestrial and satellite systems, it is an important input for the evaluation of the below-cloud scavenging coefficient of aerosol by precipitation, and it is of primary importance for estimating rainfall rate with radars. In the literature, many distributions have been used for this purpose (gamma and lognormal above all), often without statistical support and in site-specific studies. Here, we present an extensive investigation of the raindrop size distribution based on 18 datasets, consisting of 1-minute disdrometer data sampled using Joss-Waldvogel or Thies instruments at different locations on the Earth's surface. The aim is to understand whether a universal functional form of 1-minute drop diameter variability exists. The study consists of three main steps: analysis of the high-order moments, selection of the model through the AIC index, and testing of the model with goodness-of-fit tests.
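    A hedged sketch of the model-selection step mentioned above, ranking candidate distributions for one minute of drop diameters by the Akaike Information Criterion; the sample and the zero location fixed during the fit are illustrative assumptions.

```python
# Sketch: rank candidate DSD models for one minute of drop diameters with the
# Akaike Information Criterion, AIC = 2k - 2*ln(L_max).  Data are synthetic.
import numpy as np
from scipy import stats

diam_mm = stats.lognorm.rvs(s=0.4, scale=1.2, size=300, random_state=11)

def aic(dist, data):
    params = dist.fit(data, floc=0.0)                  # ML fit, location fixed at 0
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                                # location was not estimated
    return 2 * k - 2 * loglik

scores = {name: aic(dist, diam_mm)
          for name, dist in [("gamma", stats.gamma),
                             ("lognormal", stats.lognorm),
                             ("weibull", stats.weibull_min)]}
print(sorted(scores.items(), key=lambda kv: kv[1]))    # lowest AIC first
```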

  1. Fog-Influenced Submicron Aerosol Number Size Distributions

    Science.gov (United States)

    Zikova, N.; Zdimal, V.

    2013-12-01

    The aim of this work is to evaluate the influence of fog on aerosol particle number size distributions (PNSD) in the submicron range. A five-year continuous time series of SMPS (Scanning Mobility Particle Sizer) data, giving information on the PNSD at a five-minute time step, was compared with detailed meteorological records from the professional meteorological station Kosetice in the Czech Republic. The comparison included the total number concentration and the PNSD in the size range between 10 and 800 nm. The meteorological records consist of the exact start and end times of individual meteorological phenomena (with one-minute precision). Records longer than 90 minutes were considered, and the corresponding SMPS spectra were evaluated. Evaluation of the total number distributions showed a considerably lower concentration during fog periods compared to periods when no meteorological phenomenon was recorded. It was even lower than the average concentration during the presence of hydrometeors (not only fog, but rain, drizzle, snow etc. as well). The typical PNSD computed from all the data recorded in the five years is shown in Figure 1. Not only the median and the 1st and 3rd quartiles are depicted, but also the 5th and 95th percentiles, to show the variability of the concentrations in individual size bins. The most prevailing feature is the accumulation mode, which seems to be least influenced by the presence of fog. On the contrary, the smallest aerosol particles (diameters under 40 nm) are effectively removed, as are the largest particles (diameters over 500 nm). Acknowledgements: This work was supported by the projects GAUK 62213 and SVV-2013-267308. Figure 1. 5th, 25th, 50th, 75th and 95th percentiles of aerosol particle number size distributions recorded during fog events.

  2. Measuring Technique of Bubble Size Distributions in Dough

    Science.gov (United States)

    Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Tsuta, Mizuki

    A novel technique to recognize bubbles in bread dough and analyze their size distribution was developed by using a Micro-Slicer Image Processing System (MSIPS). Samples were taken from the final stage of the mixing process of bread dough, which generally consists of four distinctive stages. Also, to investigate the effect of freeze preservation on the size distribution of bubbles, comparisons were made between fresh dough and dough that had been freeze preserved at -30°C for three months. Bubbles in the dough samples were identified in the MSIPS images as defocused spots due to the difference in focal distance created by vacant spaces. In the case of the fresh dough, a total of 910 bubbles were recognized and their maximum diameter ranged from 0.4 to 70.5 μm with an average of 11.1 μm. On the other hand, a total of 1,195 bubbles were recognized from the freeze-preserved sample, and the maximum diameter ranged from 0.9 to 32.7 μm with an average of 6.7 μm. Small bubbles with maximum diameters less than 10 μm comprised approximately 59% and 78% of the total bubbles for the fresh and freeze-preserved dough samples, respectively. The results indicated that the bubble size of frozen dough is smaller than that of unfrozen dough. The proposed method can provide a novel tool to investigate the effects of mixing and preservation treatments on the size, morphology and distribution of bubbles in bread dough.

  3. Universal scaling of grain size distributions during dislocation creep

    Science.gov (United States)

    Aupart, Claire; Dunkel, Kristina G.; Angheluta, Luiza; Austrheim, Håkon; Ildefonse, Benoît; Malthe-Sørenssen, Anders; Jamtveit, Bjørn

    2017-04-01

    Grain size distributions are major sources of information about the mechanisms involved in ductile deformation processes and are often used as paleopiezometers (stress gauges). Several factors have been claimed to influence the stress vs grain size relation, including the water content (Jung & Karato 2001), the temperature (De Bresser et al., 2001), the crystal orientation (Linckens et al., 2016), the presence of second-phase particles (Doherty et al. 1997; Cross et al., 2015), and heterogeneous stress distributions (Platt & Behr 2011). However, most studies of paleopiezometers have been done in the laboratory under conditions different from those in natural systems. It is therefore essential to complement these studies with observations of naturally deformed rocks. We have measured olivine grain sizes in ultramafic rocks from the Leka ophiolite in Norway and from Alpine Corsica using electron backscatter diffraction (EBSD) data, and calculated the corresponding probability density functions. We compared our results with samples from other studies and localities that formed under a wide range of stress and strain rate conditions. All distributions collapse onto one universal curve in a log-log diagram where grain sizes are normalized by the mean grain size of each sample. The curve is composed of two straight segments with distinct slopes for grains above and below the mean grain size. These observations indicate that a surprisingly simple and universal power-law scaling describes the grain size distribution in ultramafic rocks during dislocation creep, irrespective of stress levels and strain rates. Cross, Andrew J., Susan Ellis, and David J. Prior. 2015. "A Phenomenological Numerical Approach for Investigating Grain Size Evolution in Ductiley Deforming Rocks." Journal of Structural Geology 76 (July): 22-34. doi:10.1016/j.jsg.2015.04.001. De Bresser, J. H. P., J. H. Ter Heege, and C. J. Spiers. 2001. "Grain Size Reduction by Dynamic
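    A small sketch of the collapse test described above, under the assumption that normalizing each sample by its mean grain size and binning logarithmically is all that is needed to overlay different samples on one log-log plot; the synthetic grain sizes are illustrative.

```python
# Sketch: normalize grain sizes by the sample mean and estimate the probability
# density in log-spaced bins, so several samples can be overlaid on log-log axes.
import numpy as np

def normalized_log_density(sizes):
    x = np.asarray(sizes, dtype=float) / np.mean(sizes)
    bins = np.logspace(np.log10(x.min()), np.log10(x.max()), 25)
    counts, edges = np.histogram(x, bins=bins)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])            # geometric bin centers
    density = counts / (counts.sum() * widths)           # integrates to ~1
    return centers, density

centers, density = normalized_log_density(np.random.lognormal(3.0, 0.8, 2000))
```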

  4. Earthquake interevent time distributions reflect the proportion of dependent and independent events pairs and are therefore not universal

    Science.gov (United States)

    Naylor, Mark; Touati, Sarah; Main, Ian; Bell, Andrew

    2010-05-01

    Seismic activity is routinely quantified using event rates or their inverse, interevent times, which are more stable to extreme events [1]. It is common practice to model regional earthquake interevent times using a gamma distribution [2]. However, the use of this gamma distribution is empirically based, not physical. Our recent work has shown that the gamma distribution is an approximation that drops out of a physically based model after the commonly applied filtering of the raw data [3]. We show that in general, interevent time distributions have a fundamentally bimodal shape caused by the mixing of two contributions: correlated aftershocks, which have short interevent times and produce a gamma distribution; and independent events, which tend to be separated by longer intervals and are described by a Poisson distribution. The power-law segment of the gamma distribution arises at the cross over between these distributions. This physically based model is transferable to other fields to explain the form of cascading interevent time series with varying proportions of independent and dependent daughter events. We have found that when the independent or background rate of earthquakes is high, as is the case for earthquake catalogues spanning large regions, significant overlapping of separate aftershock sequences within the time series "masks" the effects of these aftershock sequences on the temporal statistics. The time series qualitatively appears more random; this is confirmed in the interevent time distribution, in the convergence of the mean interevent time, and in the poor performance of temporal ETAS parameter inversions on synthetic catalogues within this regime [4]. The aftershock-triggering characteristics within the data are thus hidden from observation in the time series by a high independent rate of events; spatial information about event occurrence is needed in this case to uncover the triggering structure in the data. We show that earthquake interevent
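    A minimal sketch of the bimodal mixture picture described above: a gamma component for correlated aftershock pairs plus an exponential (Poissonian) component for independent pairs. The weights and parameters are illustrative assumptions, not values fitted to any catalogue.

```python
# Sketch: interevent-time density as a weighted sum of a gamma part (correlated
# aftershocks, short intervals) and an exponential part (independent events,
# long intervals).  All parameters are illustrative.
import numpy as np
from scipy import stats

def interevent_pdf(t, w_aftershock=0.6, k=0.3, theta=0.5, rate_bg=0.02):
    """t in days; w_aftershock is the fraction of correlated event pairs."""
    gamma_part = stats.gamma.pdf(t, a=k, scale=theta)
    poisson_part = stats.expon.pdf(t, scale=1.0 / rate_bg)
    return w_aftershock * gamma_part + (1.0 - w_aftershock) * poisson_part

t = np.logspace(-2, 3, 6)        # 0.01 to 1000 days
print(interevent_pdf(t))
```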

  5. Multimodal Dispersion of Nanoparticles: A Comprehensive Evaluation of Size Distribution with 9 Size Measurement Methods.

    Science.gov (United States)

    Varenne, Fanny; Makky, Ali; Gaucher-Delmas, Mireille; Violleau, Frédéric; Vauthier, Christine

    2016-05-01

    Evaluation of the particle size distribution (PSD) of multimodal dispersions of nanoparticles is a difficult task due to inherent limitations of size measurement methods. The present work reports the evaluation of the PSD of a dispersion of poly(isobutylcyanoacrylate) nanoparticles decorated with dextran, known to be multimodal and developed as a nanomedicine. The nine methods used were classified as batch methods, i.e. Static Light Scattering (SLS) and Dynamic Light Scattering (DLS); single-particle methods, i.e. Electron Microscopy (EM), Atomic Force Microscopy (AFM), Tunable Resistive Pulse Sensing (TRPS) and Nanoparticle Tracking Analysis (NTA); and separative methods, i.e. Asymmetrical Flow Field-Flow Fractionation coupled with DLS (AsFlFFF). The multimodal character of the dispersion was identified using AFM, TRPS and NTA, and the results were consistent with those provided by the method based on a separation step prior to on-line size measurements. None of the light scattering batch methods could reveal the complexity of the PSD of the dispersion. The differences between the PSDs obtained from all size measurement methods tested suggest that studying the PSD of a multimodal dispersion requires analyzing samples by at least one single-particle size measurement method or by a method that uses a separation step prior to PSD measurement.

  6. Fine structure of mass size distributions in an urban environment

    Science.gov (United States)

    Salma, Imre; Ocskay, Rita; Raes, Nico; Maenhaut, Willy

    As part of an urban aerosol research project, aerosol samples were collected by a small deposit area low-pressure impactor and a micro-orifice uniform deposit impactor in downtown Budapest in spring 2002. A total number of 23 samples were obtained with each device for separate daytime periods and nights. The samples were analysed by particle-induced X-ray emission spectrometry for 29 elements, or by gravimetry for particulate mass. The raw size distribution data were processed by the inversion program MICRON utilising the calibrated collection efficiency curve for each impactor stage in order to study the mass size distributions in the size range of about 50 nm to 10 μm in detail. Concentration, geometric mean aerodynamic diameter, and geometric standard deviation for each contributing mode were determined and further evaluated. For the crustal elements, two modes were identified in the mass size distributions: a major coarse mode and a (so-called) intermediate mode, which contained about 4% of the elemental mass. The coarse mode was associated with suspension, resuspension, and abrasion processes, whereby the major contribution likely came from road dust, while the particles of the intermediate mode may have originated from the same but also from the other sources. The typical anthropogenic elements exhibited usually trimodal size distributions including a coarse mode and two submicrometer modes instead of a single accumulation mode. The mode diameter of the upper submicrometer mode was somewhat lower for the particulate mass (PM) and S than for the anthropogenic metals, suggesting different sources and/or source processes. The different relative intensities of the two submicrometer modes for the anthropogenic elements and the PM indicate that the elements and PM have multiple sources. An Aitken mode was unambiguously observed for S, Zn, and K, but in a few cases only. The relatively large coarse mode of Cu and Zn, and the small night-to-daytime period

  7. Building predictive models of soil particle-size distribution

    Directory of Open Access Journals (Sweden)

    Alessandro Samuel-Rosa

    2013-04-01

    Is it possible to build predictive models (PMs) of soil particle-size distribution (psd) in a region with complex geology and a young and unstable land-surface? The main objective of this study was to answer this question. A set of 339 soil samples from a small slope catchment in Southern Brazil was used to build PMs of psd in the surface soil layer. Multiple linear regression models were constructed using terrain attributes (elevation, slope, catchment area, convergence index, and topographic wetness index). The PMs explained more than half of the data variance. This performance is similar to (or even better than) that of the conventional soil mapping approach. For some size fractions, the PM performance can reach 70%. The largest uncertainties were observed in geologically more complex areas. Therefore, significant improvements in the predictions can only be achieved if accurate geological data are made available. Meanwhile, PMs built on terrain attributes are efficient in predicting the psd of soils in regions of complex geology.
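
    A minimal sketch of the kind of terrain-attribute regression described above. The data here are synthetic stand-ins (five attributes and a clay fraction generated with an assumed linear relation plus noise), not the study's samples; only the overall workflow is illustrated.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in: five terrain attributes (elevation, slope, catchment
      # area, convergence index, TWI) and a clay fraction depending on them.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(339, 5))
      y = X @ np.array([2.0, -1.5, 0.8, 0.3, 1.1]) + rng.normal(scale=2.0, size=339)

      model = LinearRegression()
      r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
      print("cross-validated R^2: %.2f +/- %.2f" % (r2.mean(), r2.std()))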

  8. Size Distributions of Solar Flares and Solar Energetic Particle Events

    Science.gov (United States)

    Cliver, E. W.; Ling, A. G.; Belov, A.; Yashiro, S.

    2012-01-01

    We suggest that the flatter size distribution of solar energetic proton (SEP) events relative to that of flare soft X-ray (SXR) events is primarily due to the fact that SEP flares are an energetic subset of all flares. Flares associated with gradual SEP events are characteristically accompanied by fast (≫ 1000 km/s) coronal mass ejections (CMEs) that drive coronal/interplanetary shock waves. For the 1996-2005 interval, the slopes (alpha values) of power-law size distributions of the peak 1-8 Å fluxes of SXR flares associated with (a) >10 MeV SEP events (with peak fluxes ≫ 1 pr/sq cm/s/sr) and (b) fast CMEs were approx 1.3-1.4 compared to approx 1.2 for the peak proton fluxes of >10 MeV SEP events and approx 2 for the peak 1-8 Å fluxes of all SXR flares. The difference of approx 0.15 between the slopes of the distributions of SEP events and SEP SXR flares is consistent with the observed variation of SEP event peak flux with SXR peak flux.
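
    The abstract does not state how the power-law slopes were obtained; a common choice for samples such as peak fluxes is the continuous maximum-likelihood estimator for a probability density proportional to x^(-alpha) above a threshold xmin, sketched below. The threshold and synthetic data are assumptions of the example only.

      import numpy as np

      def powerlaw_alpha(x, xmin):
          """MLE of alpha for a pdf ~ x**(-alpha), x >= xmin, plus its standard error."""
          x = np.asarray(x, dtype=float)
          x = x[x >= xmin]
          alpha = 1.0 + x.size / np.sum(np.log(x / xmin))
          return alpha, (alpha - 1.0) / np.sqrt(x.size)

      # Synthetic peak fluxes drawn from a known power law with alpha = 2.
      rng = np.random.default_rng(0)
      fluxes = 1e-6 * rng.pareto(1.0, size=5000) + 1e-6
      print(powerlaw_alpha(fluxes, xmin=1e-6))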

  9. The space and time distribution characteristics of the shear stress field for the sequence of the Wuding earthquake

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Following Chen and Duda's model of spectral fall-off, the dependence of the peak parameters of ground motion (peak displacement dm, peak velocity vm and peak acceleration am) on the environmental stress value σ0 is studied using near-source digital seismic recordings of the sequence of the Wuding, Yunnan, M = 6.5 earthquake; as a new element, the peak parameters are assumed to be related to the Q-value of the medium. Three formulae for estimating σ0 from the peak parameters of the three types of ground motion are derived. Using these formulae, σ0 is calculated for the sequence of the Wuding earthquake. The results show that the σ0 values obtained from the three formulae are largely consistent, with averages in the range of 5.0-35 MPa for most earthquakes. The sequence is therefore a high-stress earthquake sequence: the high stress values are restricted to a relatively small area close to the epicenter of the main shock. The fine structure of the contours of σ0 is closely related to the strong aftershocks. The analysis of the spatial and temporal features of σ0 suggests that the earthquake sequence was generated, in a rupture process, at a specific intersection zone of seismotectonic structures under a high-stress background.

  10. Fault-slip distribution of the 1995 Colima-Jalisco, Mexico, earthquake

    Science.gov (United States)

    Mendoza, C.; Hartzell, S.

    1999-01-01

    Broadband teleseismic P waves have been analyzed to recover the rupture history of the large (Ms 7.4) Colima-Jalisco, Mexico, shallow interplate thrust earthquake of 9 October 1995. Ground-displacement records in the period range of 1-60 sec are inverted using a linear, finite-fault waveform inversion procedure that allows a variable dislocation duration on a prescribed fault. The method is applied using both a narrow fault that simulates a line source with a dislocation window of 50 sec and a wide fault with a possible rise time of up to 20 sec that additionally allows slip updip and downdip from the hypocenter. The line-source analysis provides a spatio-temporal image of the slip distribution consisting of several large sources located northwest of the hypocenter and spanning a range of rupture velocities. The two-dimensional finite-fault inversion allows slip over this rupture-velocity range and indicates that the greatest coseismic displacement (3-4 m) is located between 70 and 130 km from the hypocenter at depths shallower than about 15 km. Slip in this shallow region consists of two major sources, one of which is delayed by about 10 sec relative to a coherent propagation of rupture along the plate interface. These two slip sources account for about one-third of the total P-wave seismic moment of 8.3 × 10^27 dyne-cm (Mw 7.9) and may have been responsible for the local tsunami observed along the coast following the earthquake.

  11. Grain size effects on He bubbles distribution and evolution

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Physical Science and Technology, Lanzhou University, Lanzhou 730000 (China); Gao, X.; Gao, N. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Wang, Z.G., E-mail: zhgwang@impcas.ac.cn [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Cui, M.H.; Wei, K.F.; Yao, C.F.; Sun, J.R.; Li, B.S.; Zhu, Y.B.; Pang, L.L. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Li, Y.F. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); School of Physical Science and Technology, Lanzhou University, Lanzhou 730000 (China); Wang, D. [Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Xie, E.Q. [School of Physical Science and Technology, Lanzhou University, Lanzhou 730000 (China)

    2015-02-15

    Highlights: • SMAT treated T91 and conventional T91 were implanted by 200 keV He^2+ to 1 × 10^21 He m^-2 at room temperature and annealed at 450 °C for 3.5 h. • He bubbles in nanometer-size-grained T91 are smaller in the as-implanted case. • The bubbles in the matrix of nanograins were hard to detect, and those along the nanograin boundaries coalesced and filled the grain boundaries after annealing. • Brownian motion and coalescence and the Ostwald ripening process might lead to the bubble morphology presented in the nanometer-size-grained T91 after annealing. - Abstract: Grain boundary and grain size effects on He bubble distribution and evolution were investigated by He implantation into nanometer-size-grained T91 obtained by Surface Mechanical Attrition Treatment (SMAT) and into conventional coarse-grained T91. It was found that bubbles in the nanometer-size-grained T91 were smaller than those in the conventional coarse-grained T91 in the as-implanted case, and that bubbles in the matrix of nanograins were undetectable while those at nanograin boundaries (GBs) coalesced and filled the GBs after heat treatment. These results suggested that the grain size of a structural material should be larger than the mean free path of bubble Brownian motion and/or the denuded zone around GBs in order to prevent bubble accumulation at GBs, and that multiple types of defects, instead of one, should be introduced into structural materials to effectively reduce their susceptibility to He embrittlement and improve their irradiation tolerance.

  12. Method for measuring the size distribution of airborne rhinovirus

    Energy Technology Data Exchange (ETDEWEB)

    Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.; Fisk, W.J.

    2002-01-01

    About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor.

  13. Size distribution of wet crushed waste printed circuit boards

    Institute of Scientific and Technical Information of China (English)

    Tan Zhihai; He Yaqun; Xie Weining; Duan Chenlong; Zhou Enhui; Yu Zheng

    2011-01-01

    A wet impact crusher was used to break down waste printed circuit boards (PCBs) in a water medium. The relationship between the yield of crushed product and the operating parameters was established. The crushing mechanism was analyzed and the effects of hammerhead style, rotation speed, and inlet water volume on particle size distribution were investigated. The results show that the highest yield of −1 + 0.75 mm sized product was obtained with an inlet water volume flow rate of 5.97 m3/h and a smooth hammerhead turning at 1246.15 r/min. Cumulative undersize-product yield curves were fitted to a nonlinear function; the fitting correlation coefficient was greater than 0.998. These research results provide a theoretical basis for the highly effective wet crushing of PCBs.

  14. Dust generation in powders: Effect of particle size distribution

    Directory of Open Access Journals (Sweden)

    Chakravarty Somik

    2017-01-01

    This study explores the relationship between the bulk and grain-scale properties of powders and dust generation. A vortex shaker dustiness tester was used to evaluate 8 calcium carbonate test powders with median particle sizes ranging from 2 μm to 136 μm. Respirable aerosols released from the powder samples were characterised by their particle number and mass concentrations. All the powder samples were found to release respirable fractions of dust particles, which decrease with time. The variation of powder dustiness as a function of the particle size distribution was analysed for the powders, which were classified into three groups based on the fraction of particles within the respirable range. The trends we observe might be due to the interplay of several mechanisms, such as de-agglomeration and attrition, and their relative importance.

  15. Influence of strong electromagnetic discharges on the dynamics of earthquakes time distribution in the Bishkek test area (Central Asia

    Directory of Open Access Journals (Sweden)

    P. Tosi

    2006-06-01

    From 08/01/1983 to 28/03/1990, at the Bishkek ElectroMagnetic (EM) test site (Northern Tien Shan and Chu Valley area, Central Asia), strong currents, up to 2.5 kA, were released at a 4.5 km long electrical (grounded) dipole. This area is seismically active, and a catalogue with about 14100 events from 1975 to 1996 has been analyzed. The seismic catalogue was divided into three parts: 1975-1983, a first part with no EM experiments; 1983-1990, a second part during the EM experiments; and 1988-1996, a part after the experiments. Qualitative and quantitative nonlinear time series analysis was applied to the earthquake waiting times of the above three sub-catalogue periods. The qualitative approach includes visual inspection of the reconstructed phase space, Iterated Function Systems (IFS) and Recurrence Quantification Analysis (RQA). The quantitative approach followed correlation integral calculation of the reconstructed phase space of the waiting time distribution, with noise reduction and surrogate testing methods. Moreover, the Lempel-Ziv algorithmic complexity measure (LZC) was calculated. The general dynamics of the temporal distribution of earthquakes around the test area reveals properties of low-dimensional nonlinearity. Strong EM discharges lead to an increase in the extent of regularity in the earthquakes' temporal distribution. After cessation of the EM experiments, the earthquakes' temporal distribution becomes much more random than before the experiments. To avoid invalid conclusions, several tests were applied to our data set: differentiation of the time series was applied to check that the results are not affected by non-stationarity; the surrogate data approach was followed to reject the hypothesis that the dynamics belongs to the colored noise type. Small earthquakes, below the completeness threshold, were added to the analysis to check the robustness of the results.

  16. Measurement of non-volatile particle number size distribution

    Science.gov (United States)

    Gkatzelis, G. I.; Papanastasiou, D. K.; Florou, K.; Kaltsonoudis, C.; Louvaris, E.; Pandis, S. N.

    2015-06-01

    An experimental methodology was developed to measure the non-volatile particle number concentration using a thermodenuder (TD). The TD was coupled with a high-resolution time-of-flight aerosol mass spectrometer, measuring the chemical composition and mass size distribution of the submicrometer aerosol and a scanning mobility particle sizer (SMPS) that provided the number size distribution of the aerosol in the range from 10 to 500 nm. The method was evaluated with a set of smog chamber experiments and achieved almost complete evaporation (> 98 %) of secondary organic as well as freshly nucleated particles, using a TD temperature of 400 °C and a centerline residence time of 15 s. This experimental approach was applied in a winter field campaign in Athens and provided a direct measurement of number concentration and size distribution for particles emitted from major pollution sources. During periods in which the contribution of biomass burning sources was dominant, more than 80 % of particle number concentration remained after passing through the thermodenuder, suggesting that nearly all biomass burning particles had a non-volatile core. These remaining particles consisted mostly of black carbon (60 % mass contribution) and organic aerosol, OA (40 %). Organics that had not evaporated through the TD were mostly biomass burning OA (BBOA) and oxygenated OA (OOA) as determined from AMS source apportionment analysis. For periods during which traffic contribution was dominant 50-60 % of the particles had a non-volatile core while the rest evaporated at 400 °C. The remaining particle mass consisted mostly of black carbon (BC) with an 80 % contribution, while OA was responsible for another 15-20 %. Organics were mostly hydrocarbon-like OA (HOA) and OOA. These results suggest that even at 400 °C some fraction of the OA does not evaporate from particles emitted from common combustion processes, such as biomass burning and car engines, indicating that a fraction of this type

  17. Simulation study of territory size distributions in subterranean termites.

    Science.gov (United States)

    Jeon, Wonju; Lee, Sang-Hee

    2011-06-21

    In this study, on the basis of empirical data, we have simulated the foraging tunnel patterns of two subterranean termites, Coptotermes formosanus Shiraki and Reticulitermes flavipes (Kollar), using a two-dimensional model. We have defined a territory as a convex polygon containing a tunnel pattern and explored the effects of competition among termite colonies on the territory size distribution in the steady state that was attained after a sufficient simulation time. In the model, territorial competition was characterized by a blocking probability P_block that quantitatively describes the ease with which a tunnel stops its advancement when it meets another tunnel; higher P_block values imply easier termination. At the beginning of each simulation run, N = 10, 20, ..., 100 territory seeds, each representing a founding pair, were randomly distributed on a square area. When the territory density was low (N = 20), the differences in the territory size distributions for different P_block values were small because the territories had sufficient space to grow without strong competition. When the territory density was higher (N > 20), the territory sizes increased in accordance with the combined effect of P_block and N. In order to understand these effects better, we introduced an interference coefficient γ. We mathematically derived γ as a function of P_block and N: γ(N, P_block) = a(N) P_block / (P_block + b(N)), where a(N) and b(N) are functions of N/(N+c) and d/(N+c), respectively, and c and d are constants characterizing territorial competition. The γ function is applicable to characterizing the territoriality of various species and increases with both P_block and N; higher γ values imply stronger limitation of the network growth. We used the γ function, fitted the simulation results, and determined the c and d values. In addition, we have briefly discussed the predictability of the present model by comparing it with our previous lattice model
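
    As a worked illustration of the interference coefficient, the sketch below assumes the simplest reading of the abstract, namely a(N) = N/(N + c) and b(N) = d/(N + c); the abstract only says that a and b are functions of these quantities, and the constants c and d used here are placeholders, not fitted values from the study.

      def interference_gamma(n_colonies, p_block, c=20.0, d=0.1):
          """gamma(N, P_block) = a(N) * P_block / (P_block + b(N)),
          assuming a(N) = N/(N+c) and b(N) = d/(N+c); c, d are placeholder values."""
          a = n_colonies / (n_colonies + c)
          b = d / (n_colonies + c)
          return a * p_block / (p_block + b)

      # gamma increases with both the blocking probability and the colony density.
      for n in (20, 60, 100):
          print(n, round(interference_gamma(n, p_block=0.5), 3))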

  18. Simulation of soot size distribution in an ethylene counterflow flame

    KAUST Repository

    Zhou, Kun

    2014-01-06

    Soot, an aggregate of carbonaceous particles produced during the rich combustion of fossil fuels, is an undesirable pollutant and health hazard. Soot evolution involves various dynamic processes: nucleation (soot formation from polycyclic aromatic hydrocarbons, PAHs); condensation (PAHs condensing on the soot particle surface); surface processes (hydrogen-abstraction-C2H2-addition and oxidation); and coagulation (two soot particles coagulating to form a bigger particle). This simulation work investigates soot size distribution and morphology in an ethylene counterflow flame, using (i) Chemkin with a method of moments to deal with the coupling between vapor consumption and soot formation, and (ii) Monte Carlo simulation of soot dynamics.
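
    The abstract names the processes but not the algorithms; as a heavily simplified sketch of the Monte Carlo side only, the snippet below runs a constant-kernel stochastic coagulation (Gillespie-style) on a population of monomers. The kernel, sample volume, end time and units are illustrative assumptions, and none of the nucleation, condensation or surface chemistry of the actual study is included.

      import numpy as np

      def coagulate(n0=2000, kernel=1e-16, volume=1e-12, t_end=5.0, seed=0):
          """Constant-kernel coagulation: any pair of particles merges at rate kernel/volume."""
          rng = np.random.default_rng(seed)
          masses = np.ones(n0)              # start from monomers (arbitrary mass units)
          t = 0.0
          while masses.size > 1:
              n = masses.size
              total_rate = kernel / volume * n * (n - 1) / 2.0   # all pairs combined
              t += rng.exponential(1.0 / total_rate)             # time to next merger
              if t > t_end:
                  break
              i, j = rng.choice(n, size=2, replace=False)        # pick a random pair
              masses[i] += masses[j]
              masses = np.delete(masses, j)
          return masses

      sizes = coagulate()
      print("particles left:", sizes.size, "largest mass:", sizes.max())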

  19. A Maximum Entropy Modelling of the Rain Drop Size Distribution

    Directory of Open Access Journals (Sweden)

    Francisco J. Tapiador

    2011-01-01

    This paper presents a maximum entropy approach to Rain Drop Size Distribution (RDSD) modelling. It is shown that this approach allows (1) the use of a physically consistent rationale to select a particular probability density function (pdf), (2) an alternative method for parameter estimation based on expectations of the population instead of sample moments, and (3) a progressive method of modelling that updates the pdf as new empirical information becomes available. The method is illustrated with both synthetic and real RDSD data, the latter coming from a laser disdrometer network specifically designed to measure the spatial variability of the RDSD.
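
    As a minimal illustration of the maximum entropy rationale (not the paper's actual constraint set, which is not reproduced here): if the only information imposed on drop diameters D > 0 is their mean, the maximum entropy pdf is the exponential f(D) = λ exp(−λD) with λ = 1/mean(D), so the parameter comes directly from an expectation of the population rather than from higher sample moments.

      import numpy as np

      def maxent_exponential_rate(diameters_mm):
          """MaxEnt pdf on (0, inf) under a mean-diameter constraint is exponential;
          return its rate parameter lambda = 1 / mean(D)."""
          d = np.asarray(diameters_mm, dtype=float)
          return 1.0 / d.mean()

      # Synthetic example: drops with a 1.2 mm mean diameter.
      rng = np.random.default_rng(3)
      drops = rng.exponential(scale=1.2, size=10000)
      print("estimated lambda (1/mm):", round(maxent_exponential_rate(drops), 3))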

  20. Saharan Dust Particle Size And Concentration Distribution In Central Ghana

    Science.gov (United States)

    Sunnu, A. K.

    2010-12-01

    The Saharan dust that is transported and deposited over many countries in the West African atmospheric environment (5°N) every year during the months of November to March, known locally as the Harmattan season, has been studied over a 13-year period, between 1996 and 2009, using a location at Kumasi in central Ghana (6°40'N, 1°34'W) as the reference geographical point. The suspended Saharan dust particles were sampled by an optical particle counter, and the particle size distributions and concentrations were analysed. The counter gives the total dust loads as the number of particles per unit volume of air. The optical particle counter used did not discriminate the smoke fractions (due to spontaneous bush fires during the dry season) from the Saharan dust. Within the particle size range measured (0.5-25 μm), the average inter-annual mean particle diameter, number and mass concentrations during the northern winter months of January and February were determined. The average daily number concentrations ranged from 15 particles/cm3 to 63 particles/cm3 with an average of 31 particles/cm3. The average daily mass concentrations ranged from 122 μg/m3 to 1344 μg/m3 with an average of 532 μg/m3. The measured particle concentrations outside the winter period were consistently less than 10 particles/cm3. The overall dust mean particle diameter, analyzed from the peak representative Harmattan periods over the 13-year period, ranged from 0.89 μm to 2.43 μm with an average of 1.5 ± 0.5 μm. The particle size distributions exhibited the typical distribution pattern for

  1. Mass size distributions of elemental aerosols in industrial area

    Directory of Open Access Journals (Sweden)

    Mona Moustafa

    2015-11-01

    Outdoor aerosol particles were characterized in an industrial area of Samalut city (El-Minia, Egypt) using a low-pressure Berner cascade impactor as the aerosol sampler. The impactor operates at a 1.7 m3/h flow rate. Seven elements were investigated, including Ca, Ba, Fe, K, Cu, Mn and Pb, using the atomic absorption technique. The mean mass concentrations of the elements ranged from 0.42 ng/m3 (for Ba) to 89.62 ng/m3 (for Fe). The mass size distributions of the investigated elements were bimodal log-normal distributions corresponding to the accumulation and coarse modes. The enrichment factors of the elements indicate that Ca, Ba, Fe, K, Cu and Mn are mainly emitted into the atmosphere from soil sources, while Pb is mostly due to anthropogenic sources.
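
    The enrichment factor mentioned above is conventionally computed against a crustal reference element (often Fe or Al); a small sketch follows, with the reference element, the crustal abundances and the example numbers left as illustrative inputs rather than the values used in the study.

      def enrichment_factor(x_aerosol, ref_aerosol, x_crust, ref_crust):
          """EF = (X/Ref)_aerosol / (X/Ref)_crust.
          EF close to 1 points to a crustal (soil) source; EF >> 1 suggests an
          anthropogenic contribution."""
          return (x_aerosol / ref_aerosol) / (x_crust / ref_crust)

      # Illustrative numbers only: Pb vs Fe in aerosol (ng/m3) and in average crust (mg/kg).
      print(round(enrichment_factor(5.0, 89.62, 17.0, 39200.0), 1))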

  2. Empirical Reference Distributions for Networks of Different Size

    CERN Document Server

    Smith, Anna; Browning, Christopher R

    2015-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although "normalized" versions of some network statistics exist, we demonstrate via simulation why direct comparison of raw and normalized statistics is often inappropriate. We examine a recent suggestion to normalize network statistics relative to Erdos-Renyi random graphs and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively ...

  3. Evolution of Pore Size Distribution and Mean Pore Size in Lotus-type Porous Magnesium Fabricated with Gasar Process

    Institute of Scientific and Technical Information of China (English)

    Yuan LIU; Yanxiang LI; Huawei ZHANG; Jiang WAN

    2006-01-01

    The effect of gas pressure on the mean pore size, porosity and pore size distribution of lotus-type porous magnesium fabricated with the Gasar process was investigated. Both the theoretical analysis and the experimental results indicate that there exists an optimal ratio of the partial pressure of hydrogen, pH2, to that of argon, pAr, for producing lotus-type structures with a narrower pore size distribution and smaller pore size. The effect of the solidification mode on the pore size distribution and pore size is also discussed.

  4. Optimal placement and sizing of multiple distributed generating units in distribution

    Directory of Open Access Journals (Sweden)

    D. Rama Prabha

    2016-06-01

    Distributed generation (DG) is becoming more important due to the increase in the demand for electrical energy. DG plays a vital role in reducing real power losses and operating cost and in enhancing voltage stability, which together form the objective function in this problem. This paper proposes a multi-objective technique for optimally determining the location and sizing of multiple distributed generation (DG) units in the distribution network with different load models. The loss sensitivity factor (LSF) determines the optimal placement of the DGs. Invasive weed optimization (IWO) is a population-based meta-heuristic algorithm based on the behavior of weeds; this algorithm is used to find the optimal sizing of the DGs. The proposed method has been tested for different load models on IEEE 33-bus and 69-bus radial distribution systems and compared with other nature-inspired optimization methods. The simulated results illustrate the good applicability and performance of the proposed method.

  5. Slip distribution of the 2013 Mw 8.0 Santa Cruz Islands earthquake by tsunami waveforms inversion

    Science.gov (United States)

    Romano, Fabrizio; Molinari, Irene; Lorito, Stefano; Piatanesi, Alessio

    2014-05-01

    On February 6, 2013, a Mw 8.0 interplate earthquake occurred in the Santa Cruz Islands region. The epicenter is located near a complex section of the Australia-Pacific plate boundary, where a short segment of dominantly strike-slip plate motion links the Solomon Trench to the New Hebrides Trench. In this region, the Australia plate subducts beneath the Pacific plate with a convergence rate of ~9 cm/yr. This earthquake generated a tsunami that struck the city of Lata and several villages located on the main island, Nendo, near the epicenter. The tsunami was distinctly recorded by 5 DART buoys located in the Pacific Ocean. In this work we present the slip distribution of the earthquake obtained by inverting the tsunami signals recorded by the DART buoys. In order to honour the complex geometry of the subducting plate, we use a fault model that accounts for the variability of the strike and dip angles along the slipping surface. We use the Green's function approach and a simulated annealing technique to solve the inverse problem. Synthetic checkerboard tests indicate that the azimuthal coverage of the available DART stations is sufficient to retrieve the main features of the rupture process with a minimum subfault area of about 20 x 20 km. We retrieve the slip distribution of the Santa Cruz Islands earthquake, which, to first order, is consistent with previous slip models obtained by using teleseismic data.
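
    The inversion described above combines precomputed Green's functions with simulated annealing; the toy below keeps only the linear structure d = G·s with non-negative slip and solves it by non-negative least squares, so it is a hedged illustration of the forward/inverse setup rather than the authors' algorithm. All array sizes and the noise level are made up.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(1)
      n_samples, n_subfaults = 600, 12          # waveform samples x subfaults (toy sizes)

      # G[i, j]: tsunami waveform sample i produced by unit slip on subfault j.
      G = rng.normal(size=(n_samples, n_subfaults))
      true_slip = np.abs(rng.normal(1.0, 0.5, size=n_subfaults))
      observed = G @ true_slip + 0.05 * rng.normal(size=n_samples)

      slip, misfit = nnls(G, observed)          # slip >= 0 enforced by construction
      print("recovered slip (m):", np.round(slip, 2))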

  6. Vertical Raindrop Size Distribution in Central Spain: A Case Study

    Directory of Open Access Journals (Sweden)

    Roberto Fraile

    2015-01-01

    A precipitation event that took place on 12 October 2008 in Madrid, Spain, is analyzed in detail. Three different devices were used to characterize the precipitation: a disdrometer, a rain gauge, and a Micro Rain Radar (MRR). These instruments determine precipitation intensity indirectly, based on measuring different parameters at different sampling points in the atmosphere. A comparative study was carried out based on the data provided by each of these devices, revealing that the disdrometer and the rain gauge measure similar precipitation intensity values, whereas the MRR measures different rainfall volumes. The distributions of drop sizes show that the mean diameter of the particles varied considerably depending on the altitude considered. The level at which saturation occurs in the atmosphere is decisive for the distribution of drop sizes between 2,700 m and 3,000 m. As time passes, the maximum precipitation intensities are registered at lower heights and are less intense. The maximum precipitation intensities occurred at altitudes above 1,000 m, while the maximum fall speeds are typically found at altitudes below 700 m.

  7. Size Distribution of Main-Belt Asteroids with High Inclination

    CERN Document Server

    Terai, Tsuyoshi

    2010-01-01

    We investigated the size distribution of high-inclination main-belt asteroids (MBAs) to explore asteroid collisional evolution under hypervelocity collisions of around 10 km/s. We performed a wide-field survey for high-inclination sub-km MBAs using the 8.2-m Subaru Telescope with the Subaru Prime Focus Camera (Suprime-Cam). Suprime-Cam archival data were also used. A total of 616 MBA candidates were detected in an area of 9.0 deg^2 with a limiting magnitude of 24.0 mag in the SDSS r filter. Most of the candidate diameters were estimated to be smaller than 1 km. We found a scarcity of sub-km MBAs with high inclination. Cumulative size distributions (CSDs) were constructed using Subaru data and published asteroid catalogs. The power-law indexes of the CSDs were 2.17 +/- 0.02 for low-inclination (< 15 deg) MBAs in the 0.7-50 km diameter range. The high-inclination MBAs had a shallower CSD. We also found that the CSD of S-like MBAs had a small slope with high inclination, whereas the slope did not vary with inclinatio...

  8. Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, A.; Sornette, D.; Rodkin, M. V.

    2014-08-01

    The present work is a continuation and improvement of the method suggested in Pisarenko et al. (Pure Appl Geophys 165:1-42, 2008) for the statistical estimation of the tail of the distribution of earthquake sizes. The chief innovation is to combine the two main limit theorems of Extreme Value Theory (EVT) that allow us to derive the distribution of T-maxima (maximum magnitude occurring in sequential time intervals of duration T) for arbitrary T. This distribution enables one to derive any desired statistical characteristic of the future T-maximum. We propose a method for the estimation of the unknown parameters involved in the two limit theorems corresponding to the Generalized Extreme Value distribution (GEV) and to the Generalized Pareto Distribution (GPD). We establish the direct relations between the parameters of these distributions, which permit evaluation of the distribution of the T-maxima for arbitrary T. The duality between the GEV and GPD provides a new way to check the consistency of the estimation of the tail characteristics of the distribution of earthquake magnitudes for earthquakes occurring over an arbitrary time interval. We develop several procedures and check points to decrease the scatter of the estimates and to verify their consistency. We test our full procedure on the global Harvard catalog (1977-2006) and on the Fennoscandia catalog (1900-2005). For the global catalog, we obtain the following estimates: 9.53 ± 0.52 and 9.21 ± 0.20. For Fennoscandia, we obtain 5.76 ± 0.165 and 5.44 ± 0.073. The estimates of all related parameters for the GEV and GPD, including the most important form parameter, are also provided. We demonstrate again the absence of robustness of the generally accepted parameter characterizing the tail of the magnitude-frequency law, the maximum possible magnitude Mmax, and study the more stable parameter QT(q), defined as the q-quantile of the distribution of T-maxima on a future interval of duration T.
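
    A hedged sketch of the GPD half of such an analysis (peaks over threshold on magnitudes) using scipy; the threshold and the synthetic Gutenberg-Richter-like catalogue below are illustrative, and none of the paper's estimation refinements or GEV/GPD duality checks are reproduced.

      import numpy as np
      from scipy.stats import genpareto

      def fit_gpd_tail(magnitudes, threshold):
          """Fit a GPD to magnitude exceedances above a threshold.
          Returns (shape, scale, number of exceedances)."""
          m = np.asarray(magnitudes, dtype=float)
          exceed = m[m > threshold] - threshold
          shape, _, scale = genpareto.fit(exceed, floc=0.0)
          return shape, scale, exceed.size

      # Synthetic catalogue with b ~ 1 above magnitude 5 (exponential magnitudes).
      rng = np.random.default_rng(7)
      mags = 5.0 + rng.exponential(scale=1.0 / np.log(10), size=20000)
      print(fit_gpd_tail(mags, threshold=6.0))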

  9. Size Distribution of Chlorinated Polycyclic Aromatic Hydrocarbons in Atmospheric Particles.

    Science.gov (United States)

    Kakimoto, Kensaku; Nagayoshi, Haruna; Konishi, Yoshimasa; Kajimura, Keiji; Ohura, Takeshi; Nakano, Takeshi; Hata, Mitsuhiko; Furuuchi, Masami; Tang, Ning; Hayakawa, Kazuichi; Toriba, Akira

    2017-01-01

    The particle size distribution of chlorinated polycyclic aromatic hydrocarbons (ClPAHs) in particulate matter (PM) in Japan is examined for the first time. PM was collected using a PM0.1 air sampler with a six-stage filter. PM was collected in October 2014 and January 2015 to observe potential seasonal variation in the atmospheric behavior and size of PM, including polycyclic aromatic hydrocarbons (PAHs) and ClPAHs. We found that the concentrations of PAHs and ClPAHs in the 0.5-1.0 μm and 1.0-2.5 μm fractions markedly increase in January (i.e., the winter season). Among the ClPAHs, 1-ClPyrene and 6-ClBenzo[a]Pyrene were the most commonly occurring compounds; further, approximately 15% of ClPAHs were in the nanoparticle phase (<0.1 μm). The relatively high presence of nanoparticles is a potential human health concern because these particles can easily be deposited in the lung periphery. Lastly, we evaluated the aryl hydrocarbon receptor (AhR) ligand activity of PM extracts in each size fraction. The results indicate that PM < 2.5 μm shows strong AhR ligand activity.

  10. Bubble Size Distribution for Waves Propagating over A Submerged Breakwater

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Experiments are carried out to study the characteristics of active bubbles entrained by breaking waves as these propagate over an abrupt topographical change or a submerged breakwater. Underwater sounds generated by the entrained air bubbles are detected by a hydrophone connected to a charge amplifier and a data acquisition system. The size distribution of the bubbles is then determined inversely from the received sound frequencies. The sound signals are converted from the time domain to the time-frequency domain by applying the Gabor transform. The numbers of bubbles of different sizes are counted from the signal peaks in the time-frequency domain. The characteristics of the bubbles are described in terms of bubble size spectra, which account for the variation in bubble probability density with the bubble radius r. The experimental data demonstrate that the bubble probability density function shows a −2.39 power-law scaling with radius for r > 0.8 mm, and a −1.11 power law for r < 0.8 mm.
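
    The abstract does not spell out how a sound frequency is converted into a bubble radius; a common assumption for such acoustic inversions is the Minnaert resonance of a spherical bubble, sketched below. The water properties and the example frequency are assumed values, not measurements from the study.

      import numpy as np

      def minnaert_radius(freq_hz, p0=101325.0, rho=998.0, gamma=1.4):
          """Bubble radius (m) from its resonance frequency f0 via the Minnaert relation
          f0 = (1 / (2*pi*a)) * sqrt(3*gamma*p0/rho)."""
          return np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * freq_hz)

      # A spectral peak near 4.1 kHz corresponds to a radius of roughly 0.8 mm.
      print(round(minnaert_radius(4.1e3) * 1e3, 2), "mm")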

  11. Constraints on the Size of the Smallest Triggering Earthquake from the ETAS Model, Baath's Law, and Observed Aftershock Sequences

    CERN Document Server

    Sornette, D

    2004-01-01

    The physics of earthquake triggering together with simple assumptions of self-similarity imposes the existence of a minimum magnitude m0 below which earthquakes do not trigger other earthquakes. Noting that the magnitude md of completeness of seismic catalogs has no reason to be the same as the magnitude m0 of the smallest triggering earthquake, we use quantitative fits and maximum likelihood inversions of observed aftershock sequences as well as Baath's law, compare them with ETAS model predictions and thereby constrain the value of m0. We show that the branching ratio n (the average number of triggered earthquakes per earthquake, also equal to the fraction of aftershocks in seismic catalogs) is the key parameter controlling the minimum triggering magnitude m0. Conversely, physical upper bounds for m0 derived from state- and velocity-weakening friction indicate that at least 60 to 70 percent of all earthquakes are aftershocks.
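
    Under the standard ETAS assumptions implied here (a Gutenberg-Richter magnitude density and an exponential productivity law above the triggering cut-off m0; the symbols k, α and β below denote those standard parameters, not values from the paper), the branching ratio follows from a one-line integration:

      n = \int_{m_0}^{\infty} k\,e^{\alpha(m-m_0)}\;\beta\,e^{-\beta(m-m_0)}\,\mathrm{d}m
        = \frac{k\,\beta}{\beta-\alpha}, \qquad \beta > \alpha .

    With k calibrated to observed aftershock productivity, lowering m0 brings additional small triggering events into the sum and shifts n, which is how the branching ratio (and the stationarity requirement n < 1) constrains the smallest triggering magnitude.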

  12. Scale effects on the variability of the raindrop size distribution

    Science.gov (United States)

    Raupach, Timothy; Berne, Alexis

    2016-04-01

    The raindrop size distribution (DSD) is of utmost importance to the study of rainfall processes and microphysics. All important rainfall variables can be calculated as weighted moments of the DSD. Quantitative precipitation estimation (QPE) algorithms and numerical weather prediction (NWP) models both use the DSD in order to calculate quantities such as the rain rate. Often these quantities are calculated at a pixel scale: radar reflectivities, for example, are integrated over a volume, so a DSD for the volume must be calculated or assumed. We present results of a study in which we have investigated the change of support problem with respect to the DSD. We have attempted to answer the following two questions. First, if a DSD measured at point scale is used to represent an area, how much error does this introduce? Second, how representative are areal DSDs calculated by QPE and NWP algorithms of the microphysical processes happening inside the pixel of interest? We simulated fields of DSDs at two representative spatial resolutions: at the 2.1x2.1 km2 resolution of a typical NWP pixel, and at the 5x5 km2 resolution of a Global Precipitation Measurement (GPM) satellite-based weather radar pixel. The simulation technique uses disdrometer network data and geostatistics to simulate the non-parametric DSD at 100x100 m2 resolution, conditioned by the measured DSD values. From these simulations, areal DSD measurements were derived and compared to point measurements of the DSD. The results show that the assumption that a point represents an area introduces error that increases with areal size and drop size and decreases with integration time. Further, the results show that current areal DSD estimation algorithms are not always representative of sub-grid DSDs. Idealised simulations of areal DSDs produced representative values for rain rate and radar reflectivity, but estimates of drop concentration and characteristic drop size were often outside the sub-grid value ranges.
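
    The "weighted moments" statement above can be made concrete with a short sketch for a binned DSD; the unit conventions (D in mm, N(D) in m^-3 mm^-1, fall speed in m s^-1) and the example spectrum and fall-speed law are assumptions of the example, not taken from the study.

      import numpy as np

      def dsd_bulk_variables(D, N, v):
          """Bulk rain variables from a binned DSD.
          D: bin-centre diameters (mm); N: number density (m^-3 mm^-1);
          v: fall speed per bin (m s^-1)."""
          dD = np.gradient(D)                                   # bin widths (mm)
          Z = np.sum(N * D**6 * dD)                             # reflectivity, mm^6 m^-3
          R = 6.0e-4 * np.pi * np.sum(N * D**3 * v * dD)        # rain rate, mm h^-1
          lwc = (np.pi / 6.0) * 1.0e-3 * np.sum(N * D**3 * dD)  # liquid water, g m^-3
          return Z, R, lwc

      D = np.linspace(0.2, 5.0, 25)
      N = 8000.0 * np.exp(-2.0 * D)          # Marshall-Palmer-like example spectrum
      v = 3.78 * D**0.67                     # approximate power-law fall-speed relation
      print([round(x, 2) for x in dsd_bulk_variables(D, N, v)])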

  13. Seismic Moment and Slip Distribution of the 1960 and 2010 Chilean Earthquakes as Inferred from Tsunami Waveforms

    Science.gov (United States)

    Satake, K.; Fujii, Y.

    2010-12-01

    The 27 February 2010 Chilean earthquake generated a tsunami that caused significant damage on the Chilean coast. The tsunami was recorded at many tide gauge stations around the Pacific Ocean, as well as by ocean bottom pressure gauges of the DART system. We inverted tsunami waveform data, recorded at 11 tide gauges in Chile and Peru and 4 nearby DART stations, to estimate the slip distribution on the fault. When we assume 36 subfaults (12 along strike by 3 downdip; the size of each subfault is 50 km × 50 km), very large slip is located at the most downdip subfaults beneath the coast and land. Tsunami waveforms recorded at other DART stations also require such deep slip. However, other geodetic and seismic data do not show such deep slip, and tsunami data have limited resolution for such a deep onshore slip. We therefore used coastal uplift and subsidence data at 36 locations reported by Farias et al. (2010). The joint inversion indicates two asperities, one to the north around Constitucion and the other to the south around the Arauco Peninsula. While the largest slip is still located beneath the coast, the offshore slips generally become larger than in the tsunami-only inversion. The total seismic moment is about 1.8 × 10^22 Nm (Mw 8.8), similar to the value estimated from tsunami waveforms only, and the fault length is 450 km. For the 22 May 1960 Chilean earthquake, we first made an inversion of tsunami data recorded at 12 tide gauge stations, mostly in South America. When we assume 27 subfaults (9 along strike by 3 downdip; the size of each subfault is 100 km × 50 km), the total seismic moment is 4.6 × 10^22 Nm (Mw 9.0). Again, the largest slip is estimated at the deepest subfault beneath land near the epicenter, which would produce large coastal uplift where coastal subsidence was reported by Plafker and Savage (1970). The poor station coverage of tide gauges may limit the resolution of the slip distribution, particularly in the southern part of the source area. We therefore made a joint

  14. A Comparative Study of Source Complexity of Two Moderate-Sized Earthquakes in Southern Taiwan: 2016 Mw6.4 and 2010 Mw6.2 Earthquakes

    Science.gov (United States)

    Zhao, Xu; Hao, Jinlai; Liu, Jie; Yao, Zhenxing

    2017-07-01

    We study the complex rupture histories of the 2016 Mw 6.4 Meinong earthquake and the 2010 Mw 6.2 Jiaxian earthquake in southern Taiwan by simultaneously inverting near-field strong-motion records and local and teleseismic broadband waveforms along with long-period surface waves. The focal mechanism results reveal that the two earthquakes may have ruptured two different blind faults. The rupture of the 2016 event is dominated by strike-slip motion with minor thrust faulting, while the 2010 event is a relatively high-angle thrust earthquake. Our preferred finite-fault source models suggest that the coseismic rupture history of the 2016 event is relatively more complex than that of the 2010 event. During the 2016 event, two main patches of slip characterize the coseismic slip history. The cumulative seismic moment within 12 s of rupture is about 5.36 × 10^18 N m, which is 1.9 times the moment of the 2010 event. The slip of the 2016 event is associated with a relatively large peak slip of 0.7 m at shallow depth (~18-19 km), a higher average slip of 0.3 m and a faster slip rate with a maximum value of 3.4 m/s. Our kinematic models can shed some light on the cause of the difference in seismic hazard between these two earthquakes and will be further applied to analyze the deep blind fault structure in southern Taiwan.

  15. Bubble size distribution in surface wave breaking entraining process

    Institute of Scientific and Technical Information of China (English)

    HAN; Lei; YUAN; YeLi

    2007-01-01

    From the similarity theorem, an expression for the bubble population is derived as a function of the air entrainment rate, the turbulent kinetic energy (TKE) spectrum density and the surface tension. The bubble size spectrum that we obtain has an a^(-2.5+nd) dependence on the bubble radius a, in which nd is positive and depends on the form of the TKE spectrum within the viscous dissipation range. To relate the bubble population to wave parameters, an expression for the air entrainment rate is deduced by introducing two statistical relations for wave breaking. The vertical distribution of the bubble population is also derived, based on two assumptions drawn from two typical observation results.

  16. Raindrop size distributions and storm classification in Mexico City

    Science.gov (United States)

    Amaro-Loza, Alejandra; Pedrozo-Acuña, Adrián; Breña-Naranjo, José Agustín

    2017-04-01

    Worldwide, the effects of urbanization and land use change have caused alterations to the hydrological response of urban catchments. This observed phenomenon calls for high-resolution measurements of rainfall patterns. This work provides the first dataset of raindrop size distributions and storm classification, among others, across several locations in Mexico City. Data were derived from a recently established network of laser optical disdrometers (LODs) retrieving measurements of rain rate, reflectivity, number of drops, drop diameter and velocity, and kinetic energy at a 1-minute resolution. Moreover, the comparison of hourly rainfall patterns revealed the origin and classification of storms into three types: stratiform, transition and convective, by means of the corresponding reflectivity-rain rate (Z-R) relationship. Finally, a set of rainfall statistics was applied to evaluate the performance of the LOD and weighing precipitation gauge (WPG) data at different aggregated timescales. It was found that the WPG estimates remain below the precipitation amounts measured by the LOD.

  17. Pore Size Distribution of High Performance Metakaolin Concrete

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The compressive strength, porosity and pore size distribution of high performance metakaolin (MK) concrete were investigated. Concretes containing 0, 5%, 10% and 20% metakaolin were prepared at a water/cementitious material ratio (W/C) of 0.30. In parallel, concrete mixtures with the replacement of cement by 20% fly ash or by 5% and 10% silica fume were prepared for comparison. The specimens were cured in water at 27 °C for 3 to 90 days. The results show that at early ages of curing (3 and 7 days), metakaolin replacement increases the compressive strength, but silica fume replacement slightly reduces it. At and after 28 days, the compressive strength of the concrete with metakaolin and silica fume replacement increases. A strong reduction in the total porosity and average pore diameter was observed in the concrete with 20% and 10% MK within the first 7 days.

  18. Influence of particle size distribution on nanopowder cold compaction processes

    Science.gov (United States)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Uniform and uniaxial cold compaction of nanopowders is simulated with a 2D granular dynamics method. In addition to the well-known contact laws, the particle interaction involves dispersive attraction forces and the possibility of interparticle solid bridge formation, which are of great importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (10 nm) and large (30 nm) particles; and polydisperse systems following a log-normal size distribution law of varying width. A non-monotonic dependence of compact density on powder composition is revealed in the bidisperse systems. The deviations of the compact density of the polydisperse systems from that of the corresponding monosized system are found to be minor, less than 1 percent.

  19. Landslides triggered by the 3 August 2014 Ludian earthquake in China: geological properties, geomorphologic characteristics and spatial distribution analysis

    Directory of Open Access Journals (Sweden)

    Jia-Wen Zhou

    2016-07-01

    On 3 August 2014, an earthquake of Mw 6.5 occurred in Ludian County, Yunnan Province, China. This earthquake triggered hundreds of landslides of various types, dominated by shallow slides, deep-seated slides, rock falls, debris flows and unstable slopes. Using field investigations and remote sensing images, 413 landslides triggered by the Ludian earthquake were statistically analyzed. Statistical analyses show that most of the landslides are shallow slides with a small volume. Most of these landslides are concentrated near the epicentre, at distances ranging from 6 to 12 km, especially on the upper slopes along the river valley. The number of landslides increased with increasing distance from the epicentre (0-9 km) and then decreased with increasing distance from the epicentre (>9 km). The landslide density decreased with increasing distance from the fault rupture. More than 70% of the landslides occurred on the right side of the Xiyuhe-Zhaotong fault when viewed from southwest (SW) to northeast (NE). Slope aspect and gradient had a substantial influence on the landslide distribution, and landslide density increased with increasing slope gradient. Approximately 65% of the landslides happened on the back slope with respect to the earthquake epicentre.

  20. Slip Distribution and Seismic Moment of the 2010 and 1960 Chilean Earthquakes Inferred from Tsunami Waveforms and Coastal Geodetic Data

    Science.gov (United States)

    Fujii, Yushiro; Satake, Kenji

    2013-09-01

    The slip distribution and seismic moment of the 2010 and 1960 Chilean earthquakes were estimated from tsunami and coastal geodetic data. These two earthquakes generated transoceanic tsunamis, and the waveforms were recorded around the Pacific Ocean. In addition, coseismic coastal uplift and subsidence were measured around the source areas. For the 27 February 2010 Maule earthquake, inversion of the tsunami waveforms recorded at nearby coastal tide gauge and Deep Ocean Assessment and Reporting of Tsunamis (DART) stations combined with coastal geodetic data suggests two asperities: a northern one beneath the coast of Constitucion and a southern one around the Arauco Peninsula. The total fault length is approximately 400 km with a seismic moment of 1.7 × 10^22 Nm (Mw 8.8). The offshore DART tsunami waveforms require fault slips beneath the coasts, but the exact locations are better estimated by coastal geodetic data. The 22 May 1960 earthquake produced very large, ~30 m, slip off Valdivia. Joint inversion of tsunami waveforms, at tide gauge stations in South America, with coastal geodetic and leveling data shows a total fault length of ~800 km and a seismic moment of 7.2 × 10^22 Nm (Mw 9.2). The seismic moment estimated from tsunami or joint inversion is similar to previous estimates from geodetic data, but much smaller than the results from seismic data analysis.

  1. Size distribution and seasonal variation of atmospheric cellulose

    Science.gov (United States)

    Puxbaum, Hans; Tenze-Kunit, Monika

    Atmospheric cellulose is a main constituent of the insoluble organic aerosol and a "macrotracer" for plant debris. A time series of the cellulose concentration at a downtown site in Vienna showed a maximum concentration during fall and a secondary maximum during spring. The fall maximum appears to be associated with leaf litter production, the spring maximum with increased biological activity involving repulsion of cellulose-containing particles, e.g. seed production. The grand average of the time series over 9 months was 0.374 μg m^-3 of cellulose, corresponding to 0.75 μg m^-3 of plant debris. Compared to an annual average of 5.7 μg m^-3 of organic carbon as observed at a Vienna downtown site, it becomes clear that plant debris is a major contributor to the organic aerosol and has to be considered in source attribution studies. Simultaneous measurements at the downtown and a suburban site indicated that particulate cellulose is obviously not produced within the city in notable amounts, at least during the campaign in December. Size distribution measurements with impactors showed the unexpected result that "fine aerosol" size particles (0.1-1.6 μm aerodynamic diameter) contained 0.7% "free cellulose" on a mass basis, forming a wettable, but insoluble part of the accumulation mode aerosol.

  2. Passive acoustic inversion to estimate bedload size distribution in rivers

    Science.gov (United States)

    Petrut, Teodor; Geay, Thomas; Belleudy, Philippe; Gervaise, Cédric

    2016-04-01

    The knowledge of the sediment transport rate in rivers is related to issues such as changes in channel form, inundation risk and rivers' ecological functions. The passive acoustic method introduced here measures bedload processes by recording the noise generated by inter-particle collisions. In this research, an acoustic inversion is proposed to estimate the size distribution of mobile particles. The theoretical framework of Hertzian impact between two rigid solids is used to model the sediment-generated noise. This model, combined with the acoustic power spectral density, gives information on the particle sizes. A sensitivity analysis of the method is performed, and finally an experimental validation is done through a series of tests in the laboratory as well as in a natural stream. The limitations of the proposed inversion method are drawn considering wave propagation effects in the channel. It is stated that propagation effects limit the applicability of the method to large rivers, such as fluvial channels, to the detriment of mountain torrents.

  3. Vesicle Size Distribution as a Novel Nuclear Forensics Tool

    Science.gov (United States)

    Simonetti, Antonio

    2016-01-01

    The first nuclear bomb detonation on Earth involved a plutonium implosion-type device exploded at the Trinity test site (33°40′38.28″N, 106°28′31.44″W), White Sands Proving Grounds, near Alamogordo, New Mexico. Melting and subsequent quenching of the local arkosic sand produced glassy material, designated “Trinitite”. In cross section, Trinitite comprises a thin (1–2 mm), primarily glassy surface above a lower zone (1–2 cm) of mixed melt and mineral fragments from the precursor sand. Multiple hypotheses have been put forward to explain these well-documented but heterogeneous textures. This study reports the first quantitative textural analysis of vesicles in Trinitite to constrain their physical and thermal history. Vesicle morphology and size distributions confirm the upper, glassy surface records a distinct processing history from the lower region, that is useful in determining the original sample surface orientation. Specifically, the glassy layer has lower vesicle density, with larger sizes and more rounded population in cross-section. This vertical stratigraphy is attributed to a two-stage evolution of Trinitite glass from quench cooling of the upper layer followed by prolonged heating of the subsurface. Defining the physical regime of post-melting processes constrains the potential for surface mixing and vesicle formation in a post-detonation environment. PMID:27658210

  4. Airborne Measurements of Aerosol Size Distributions During PACDEX

    Science.gov (United States)

    Rogers, D. C.; Gandrud, B.; Campos, T.; Kok, G.; Stith, J.

    2007-12-01

    The Pacific Dust Experiment (PACDEX) is an airborne project that attempts to characterize the indirect aerosol effect by tracing plumes of dust and pollution across the Pacific Ocean. This project occurred during April-May 2007 and used the NSF/NCAR HIAPER research aircraft. When a period of strong generation of dust particles and pollution was detected by ground-based and satellite sensors, the aircraft was launched from Colorado to Alaska, Hawaii, and Japan. Its mission was to intercept and track these plumes from Asia, across the Pacific Ocean, and ultimately to the edges of North America. For more description, see the abstract by Stith and Ramanathan (this conference) and other companion papers on PACDEX. The HIAPER aircraft carried a wide variety of sensors for measuring aerosols, cloud particles, trace gases, and radiation. Sampling was conducted in several weather regimes, including clean "background" air, dust and pollution plumes, and regions with cloud systems. Altitude ranges extended from 100 m above the ocean to 13.4 km. This paper reports on aerosol measurements made with a new Ultra-High Sensitivity Aerosol Spectrometer (UHSAS), a Radial Differential Mobility Analyzer (RDMA), a water-based CN counter, and a Cloud Droplet Probe (CDP). These cover the size range 10 nm to 10 μm diameter. In clear air, dust was detected with the UHSAS and CDP. Polluted air was identified with high concentrations of carbon monoxide, ozone, and CN. Aerosol size distributions will be presented, along with data to define the context of weather regimes.

  5. EARTHQUAKE SCALING PARADOX

    Institute of Scientific and Technical Information of China (English)

    WU ZHONG-LIANG

    2001-01-01

    Two measures of earthquakes, the seismic moment and the broadband radiated energy, show completely different scaling relations. For shallow earthquakes worldwide from January 1987 to December 1998, the frequency distribution of the seismic moment shows a clear kink between moderate and large earthquakes, as revealed by previous works. But the frequency distribution of the broadband radiated energy shows a single power law, a classical Gutenberg-Richter relation. This inconsistency raises a paradox in the self-organized criticality model of earthquakes.
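
    A minimal sketch of how a single Gutenberg-Richter power law is usually quantified, using the Aki (1965) maximum-likelihood b-value estimator on a synthetic catalog; with real data the same estimator would be applied separately to moment-based and energy-based magnitudes to compare their scaling. The catalog below is synthetic, not the data analysed above.

```python
"""Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b-value for
magnitudes above a completeness threshold.  The 'catalog' here is synthetic."""
import numpy as np

rng = np.random.default_rng(0)

def aki_b_value(mags, m_min, dm=0.1):
    """b = log10(e) / (mean(M) - (m_min - dm/2)), with dm the magnitude bin width."""
    m = np.asarray(mags)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))

# Synthetic GR catalog with true b = 1.0 above completeness magnitude Mc = 4.0
b_true, m_c, n = 1.0, 4.0, 5000
mags = m_c + rng.exponential(scale=np.log10(np.e) / b_true, size=n)
mags = np.round(mags, 1)   # 0.1-unit magnitude binning

print(f"estimated b-value: {aki_b_value(mags, m_c):.2f} (true {b_true})")
```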

  6. Rank-size Distributions of Chinese Cities: Macro and Micro Patterns

    Institute of Scientific and Technical Information of China (English)

    LI Shujuan

    2016-01-01

    A large number of studies have been conducted to find a better fit for city rank-size distributions in different countries. Many theoretical curves have been proposed, but no consensus has been reached. This study argues for the importance of examining city rank-size distribution across different city size scales. In addition to focusing on macro patterns, this study examines the micro patterns of city rank-size distributions in China. A moving window method is developed to detect rank-size distributions of cities in different sizes incrementally. The results show that micro patterns of the actual city rank-size distributions in China are much more complex than those suggested by the three theoretical distributions examined (Pareto, quadratic, and q-exponential distributions). City size distributions present persistent discontinuities. Large cities are more evenly distributed than small cities and than that predicted by Zipf's law. In addition, the trend is becoming more pronounced over time. Medium-sized cities became evenly distributed first and then unevenly distributed thereafter. The rank-size distributions of small cities are relatively consistent. While the three theoretical distributions examined in this study all have the ability to detect the overall dynamics of city rank-size distributions, the actual macro distribution may be composed of a combination of the three theoretical distributions.
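
    A minimal sketch of a moving-window rank-size analysis of the kind described above: city sizes are sorted by rank, and in each window of consecutive ranks the slope of log(size) against log(rank) gives a local Zipf/Pareto exponent. The city sizes below are synthetic.

```python
"""Moving-window rank-size analysis: fit log(size) on log(rank) in sliding
windows of consecutive ranks to see how the local Zipf exponent varies across
city-size scales.  City sizes here are synthetic."""
import numpy as np

rng = np.random.default_rng(1)

# Synthetic city sizes drawn from a Pareto distribution (a Zipf-like system)
sizes = np.sort((rng.pareto(a=1.0, size=300) + 1.0) * 1e4)[::-1]
ranks = np.arange(1, sizes.size + 1)

def local_zipf_exponents(sizes, ranks, window=50, step=25):
    """Slope of log(size) vs log(rank) in each window (Zipf's law gives ~ -1)."""
    out = []
    for start in range(0, sizes.size - window + 1, step):
        sl = slice(start, start + window)
        slope, _ = np.polyfit(np.log(ranks[sl]), np.log(sizes[sl]), 1)
        out.append((ranks[sl][0], ranks[sl][-1], slope))
    return out

for r0, r1, slope in local_zipf_exponents(sizes, ranks):
    print(f"ranks {r0:3d}-{r1:3d}: local exponent = {slope:+.2f}")
```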

  7. Crystal Size Distributions in Igneous rocks: Where are we now?

    Science.gov (United States)

    Higgins, M.

    2003-12-01

    Modern Crystal Size Distribution (CSD) studies started in 1988 and have expanded since then, albeit somewhat slowly. We have now measured CSDs in a variety of different compositions and for both plutonic and volcanic rocks. However, the subject still lags far behind chemical petrology and we need many more studies. CSD methodology has advanced considerably, both for 3D and 2D methods, but it is unfortunate that some 2D studies still do not use appropriate stereological conversions or publish their raw data. The nature of the lower size limit (whether real or a measurement artefact) is very important but is not commonly stated. All this is especially important for comparing data with earlier studies. Individual CSDs of minerals are not always very informative. A much better approach is to look at suites of related CSDs: for instance, different minerals within a single sample, ensembles of related whole-rock samples, or comparisons of late and early textures as preserved in oikocrysts, dykes or volcanic rocks. As more data become available it will be possible to usefully compare unrelated suites of rocks. Straight or nearly straight CSDs in volcanic rocks can be produced by steady-state crystallisation. If the growth rate is known, then the residence time can be determined. In some rocks there is good agreement with other chronometric techniques, but others show no such concordance. In the latter case another model may be more appropriate, such as textural coarsening. This model has in some cases been applied in inappropriate situations, which has cast doubt on the whole subject of CSDs. For plutonic rocks, exponentially increasing undercooling can also produce straight CSDs. However, many CSDs are slightly curved and other models are possible, especially if no small crystals are present. Within ensembles of straight CSDs the slope and intercept are commonly correlated. This is mostly accounted for by closure, and hence this correlation is not significant, although the variation
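
    For the steady-state case mentioned above, a straight CSD has the form ln(n) = ln(n0) - L/(G·tau), so the residence time follows from the CSD slope once a growth rate is assumed. The sketch below illustrates this with placeholder CSD data and an assumed growth rate; neither is taken from a specific study.

```python
"""Steady-state CSD calculation: for a straight CSD, ln(n) = ln(n0) - L/(G*tau),
so the slope of ln(n) vs crystal size L gives the residence time
tau = -1/(G*slope) once a growth rate G is assumed.  The CSD data and growth
rate below are illustrative placeholders."""
import numpy as np

# Illustrative CSD: crystal size L [mm] and population density n [mm^-4]
L = np.array([0.1, 0.3, 0.5, 0.8, 1.2, 1.8, 2.5])
n = np.array([3.0e3, 1.6e3, 8.5e2, 3.2e2, 9.0e1, 1.5e1, 2.0e0])

slope, intercept = np.polyfit(L, np.log(n), 1)   # slope = -1/(G*tau)

G = 1e-10  # assumed crystal growth rate [mm/s]; an illustrative order of magnitude only
tau_s = -1.0 / (G * slope)

print(f"CSD slope = {slope:.2f} mm^-1, nucleation density n0 = {np.exp(intercept):.2e} mm^-4")
print(f"residence time ~ {tau_s / 3.15e7:.0f} years (for the assumed G)")
```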

  8. Statistical properties of the normalized ice particle size distribution

    Science.gov (United States)

    Delanoë, Julien; Protat, Alain; Testud, Jacques; Bouniol, Dominique; Heymsfield, A. J.; Bansemer, A.; Brown, P. R. A.; Forbes, R. M.

    2005-05-01

    Testud et al. (2001) have recently developed a formalism, known as the "normalized particle size distribution (PSD)", which consists in scaling the diameter and concentration axes in such a way that the normalized PSDs are independent of water content and mean volume-weighted diameter. In this paper we investigate the statistical properties of the normalized PSD for the particular case of ice clouds, which are known to play a crucial role in the Earth's radiation balance. To do so, an extensive database of airborne in situ microphysical measurements has been constructed. A remarkable stability in shape of the normalized PSD is obtained. The impact of using a single analytical shape to represent all PSDs in the database is estimated through an error analysis on the instrumental (radar reflectivity and attenuation) and cloud (ice water content, effective radius, terminal fall velocity of ice crystals, visible extinction) properties. This resulted in a roughly unbiased estimate of the instrumental and cloud parameters, with small standard deviations ranging from 5 to 12%. This error is found to be roughly independent of the temperature range. This stability in shape and its single analytical approximation implies that two parameters are now sufficient to describe any normalized PSD in ice clouds: the intercept parameter N*0 and the mean volume-weighted diameter Dm. Statistical relationships (parameterizations) between N*0 and Dm have then been evaluated in order to reduce again the number of unknowns. It has been shown that a parameterization of N*0 and Dm by temperature could not be envisaged to retrieve the cloud parameters. Nevertheless, Dm-T and mean maximum dimension diameter -T parameterizations have been derived and compared to the parameterization of Kristjánsson et al. (2000) currently used to characterize particle size in climate models. The new parameterization generally produces larger particle sizes at any temperature than the Kristjánsson et al. (2000
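
    A minimal sketch of the normalization itself: Dm is the ratio of the fourth to the third moment of the PSD, and the intercept is N0* = (4^4/6)·M3/Dm^4 (written here for liquid-equivalent spheres; ice applications use an appropriate mass-diameter relation). The binned PSD below is a synthetic normalized gamma distribution so that the recovered parameters can be checked against the truth.

```python
"""Compute the two normalisation parameters of Testud et al. (2001) from a
binned PSD: Dm = M4/M3 and N0* = (4^4/6) * M3 / Dm^4 (liquid-equivalent form).
The PSD below is a synthetic normalised gamma distribution."""
import numpy as np
from math import gamma

# Synthetic gamma PSD: N(D) = Nw * f(mu) * (D/Dm)^mu * exp(-(4+mu) D/Dm)
D = np.linspace(0.05, 8.0, 400)        # diameter [mm]
dD = D[1] - D[0]
mu, Dm_true, Nw_true = 3.0, 1.2, 8000.0
f_mu = 6.0 / 4.0**4 * (4.0 + mu)**(mu + 4) / gamma(mu + 4)
N = Nw_true * f_mu * (D / Dm_true)**mu * np.exp(-(4.0 + mu) * D / Dm_true)

def moment(N, D, dD, k):
    return np.sum(N * D**k) * dD

M3, M4 = moment(N, D, dD, 3), moment(N, D, dD, 4)
Dm = M4 / M3                            # mean volume-weighted diameter [mm]
N0_star = (4.0**4 / 6.0) * M3 / Dm**4   # normalised intercept [mm^-1 m^-3]

print(f"Dm  = {Dm:.2f} mm (true {Dm_true})")
print(f"N0* = {N0_star:.0f} (true {Nw_true:.0f})")
```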

  9. Carbon-based phytoplankton size classes retrieved via ocean color estimates of the particle size distribution

    Science.gov (United States)

    Kostadinov, Tihomir S.; Milutinović, Svetlana; Marinov, Irina; Cabré, Anna

    2016-04-01

    Owing to their important roles in biogeochemical cycles, phytoplankton functional types (PFTs) have been the aim of an increasing number of ocean color algorithms. Yet, none of the existing methods are based on phytoplankton carbon (C) biomass, which is a fundamental biogeochemical and ecological variable and the "unit of accounting" in Earth system models. We present a novel bio-optical algorithm to retrieve size-partitioned phytoplankton carbon from ocean color satellite data. The algorithm is based on existing methods to estimate particle volume from a power-law particle size distribution (PSD). Volume is converted to carbon concentrations using a compilation of allometric relationships. We quantify absolute and fractional biomass in three PFTs based on size: picophytoplankton (0.5-2 µm in diameter), nanophytoplankton (2-20 µm) and microphytoplankton (20-50 µm). The mean spatial distributions of total phytoplankton C biomass and individual PFTs, derived from global SeaWiFS monthly ocean color data, are consistent with current understanding of oceanic ecosystems, i.e., oligotrophic regions are characterized by low biomass and dominance of picoplankton, whereas eutrophic regions have high biomass to which nanoplankton and microplankton contribute relatively larger fractions. Global climatological, spatially integrated phytoplankton carbon biomass standing stock estimates using our PSD-based approach yield ~0.25 Gt of C, consistent with analogous estimates from two other ocean color algorithms and several state-of-the-art Earth system models. Satisfactory in situ closure observed between PSD and POC measurements lends support to the theoretical basis of the PSD-based algorithm. Uncertainty budget analyses indicate that absolute carbon concentration uncertainties are driven by the PSD parameter No which determines particle number concentration to first order, while uncertainties in PFTs' fractional contributions to total C biomass are mostly due to the
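
    A schematic sketch of the volume-to-carbon partitioning step: integrate particle volume over each size class of an assumed power-law PSD and convert volume to carbon with an assumed allometric relation. The PSD slope, reference concentration and allometric coefficients below are placeholders, not the values used by the algorithm.

```python
"""Carbon partitioning from a power-law particle size distribution
N(D) = N0 * (D / D0)**(-xi) [particles per volume per diameter increment].
Particle volume in each size class is integrated and converted to carbon with
an allometric relation C = a * V**b.  N0, xi, a and b are placeholder values."""
import numpy as np

N0, D0, xi = 1.0e11, 2.0, 4.0   # reference concentration [m^-3 um^-1], ref. diameter [um], PSD slope
a, b = 0.26, 0.86               # assumed allometric C(V) coefficients [pg C per (um^3)**b]

classes = {"pico": (0.5, 2.0), "nano": (2.0, 20.0), "micro": (20.0, 50.0)}

def carbon_in_class(dmin, dmax, n=2000):
    """Integrate C(V(D)) * N(D) dD over the class, per unit water volume."""
    D = np.linspace(dmin, dmax, n)       # um
    N = N0 * (D / D0) ** (-xi)           # m^-3 um^-1
    V = (np.pi / 6.0) * D**3             # um^3 per cell
    C = a * V**b                         # pg C per cell (assumed allometry)
    return np.trapz(C * N, D)            # pg C m^-3

totals = {k: carbon_in_class(*rng) for k, rng in classes.items()}
total = sum(totals.values())
for k, c in totals.items():
    print(f"{k:5s}: {c:10.3e} pg C m^-3  ({100 * c / total:5.1f} %)")
```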

  10. The effect of complex fault rupture on the distribution of landslides triggered by the 12 January 2010, Haiti earthquake

    Science.gov (United States)

    Harp, Edwin L.; Jibson, Randall W.; Dart, Richard L.; Margottini, Claudio; Canuti, Paolo; Sassa, Kyoji

    2013-01-01

    The MW 7.0, 12 January 2010, Haiti earthquake triggered more than 7,000 landslides in the mountainous terrain south of Port-au-Prince over an area that extends approximately 50 km to the east and west from the epicenter and to the southern coast. Most of the triggered landslides were rock and soil slides from 25°–65° slopes within heavily fractured limestone and deeply weathered basalt and basaltic breccia. Landslide volumes ranged from tens of cubic meters to several thousand cubic meters. Rock slides in limestone typically were 2–5 m thick; slides within soils and weathered basalt typically were less than 1 m thick. Twenty to thirty larger landslides having volumes greater than 10,000 m3 were triggered by the earthquake; these included block slides and rotational slumps in limestone bedrock. Only a few landslides larger than 5,000 m3 occurred in the weathered basalt. The distribution of landslides is asymmetric with respect to the fault source and epicenter. Relatively few landslides were triggered north of the fault source on the hanging wall. The densest landslide concentrations lie south of the fault source and the Enriquillo-Plantain-Garden fault zone on the footwall. Numerous landslides also occurred along the south coast west of Jacmél. This asymmetric distribution of landsliding with respect to the fault source is unusual given the modeled displacement of the fault source as mainly thrust motion to the south on a plane dipping to the north at approximately 55°; landslide concentrations in other documented thrust earthquakes generally have been greatest on the hanging wall. This apparent inconsistency of the landslide distribution with respect to the fault model remains poorly understood given the lack of any strong-motion instruments within Haiti during the earthquake.

  11. Determination of Size Distribution of Nano-particles by Capillary Zone Electrophoresis

    Institute of Scientific and Technical Information of China (English)

    Yan XUE; Hai Ying YANG; Yong Tan YANG

    2005-01-01

    A new method was developed for the determination of the size distribution of nano-particles by capillary zone electrophoresis (CZE). The scattering effect of the nanoparticles was studied. The method provides a statistical determination of the size distribution.

  12. Analytical Approach for Loss Minimization in Distribution Systems by Optimum Placement and Sizing of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Bakshi Surbhi

    2016-01-01

    Full Text Available Distributed generation (DG) has drawn the attention of industry and researchers for quite some time due to the advantages it brings: in addition to being cost-effective and environmentally friendly, it also improves the reliability of the power system. A DG unit is placed close to the load rather than increasing the capacity of the main generator. This approach brings many benefits but also raises challenges, the main one being to find the optimal location and size of the DG units. The purpose of this paper is to use distributed generation as an additional means to reduce line losses. The problem of optimal location and sizing is solved with a multi-objective particle swarm optimization technique, after reducing it to a mathematical optimization problem by developing a fitness function that accounts for line losses and the voltage distribution along the line. The optimal values of DG size and location are found by minimizing this fitness function. The IEEE 14-bus system is considered in order to test the proposed algorithm, and the results show improved performance in terms of accuracy and convergence rate.
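
    The sketch below shows the bare optimization machinery only: a standard particle swarm search over DG bus index and size, driven by a toy fitness that stands in for the loss plus voltage-deviation objective. A real study would evaluate the fitness with a load-flow solution of the test feeder; all coefficients and the fitness itself are placeholders.

```python
"""Bare-bones particle swarm optimisation over DG size and bus location.  The
fitness is a toy stand-in (a smooth function penalising loss and voltage
deviation); all coefficients are placeholders."""
import numpy as np

rng = np.random.default_rng(2)
N_BUS = 14                      # e.g. an IEEE 14-bus-sized system

def fitness(x):
    """x = [bus (continuous, rounded), dg_size_mw]; lower is better."""
    bus = int(round(np.clip(x[0], 1, N_BUS)))
    size = np.clip(x[1], 0.0, 10.0)
    loss = 0.05 * (size - 4.0) ** 2 + 0.01 * abs(bus - 9)     # toy loss term
    vdev = 0.02 * (size - 3.5) ** 2 + 0.005 * abs(bus - 10)   # toy voltage term
    return loss + vdev

# Standard PSO with inertia weight and cognitive/social terms
n_part, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
lo, hi = np.array([1.0, 0.0]), np.array([float(N_BUS), 10.0])
pos = rng.uniform(lo, hi, size=(n_part, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"best bus = {int(round(gbest[0]))}, best DG size = {gbest[1]:.2f} MW, fitness = {fitness(gbest):.4f}")
```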

  13. ESTIMATING SOIL PARTICLE-SIZE DISTRIBUTION FOR SICILIAN SOILS

    Directory of Open Access Journals (Sweden)

    Vincenzo Bagarello

    2009-09-01

    Full Text Available The soil particle-size distribution (PSD) is commonly used for soil classification and for estimating soil behavior. An accurate mathematical representation of the PSD is required to estimate soil hydraulic properties and to compare texture measurements from different classification systems. The objective of this study was to evaluate the ability of the Haverkamp and Parlange (HP) and Fredlund et al. (F) PSD models to fit 243 measured PSDs from a wide range of soil textures in Sicily and to test the effect of the number of measured particle diameters on the fitting of the theoretical PSD. For each soil textural class, the best fitting performance, established using three statistical indices (MXE, ME, RMSE), was obtained for the F model with three fitting parameters. In particular, this model performed better in the fine-textured soils than the coarse-textured ones, but a good performance (i.e., RMSE < 0.03) was detected for the majority of the investigated soil textural classes, i.e. clay, silty-clay, silty-clay-loam, silt-loam, clay-loam, loamy-sand, and loam classes. Decreasing the number of measured data pairs from 14 to eight determined a worse fitting of the theoretical distribution to the measured one. It was concluded that the F model with three fitting parameters has a wide applicability for Sicilian soils and that the comparison of different PSD investigations can be affected by the number of measured data pairs.
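
    A minimal sketch of fitting a three-parameter Fredlund-type cumulative PSD model, P(d) = 1/[ln(e + (a/d)^n)]^m, to (diameter, fraction finer) pairs with scipy. The data points are synthetic, and the fines-correction factor of the full Fredlund et al. model is omitted here.

```python
"""Fit a three-parameter Fredlund-type cumulative PSD model,
P(d) = 1 / (ln(e + (a/d)**n))**m, to synthetic (diameter, percent finer) data.
The full Fredlund et al. model also carries a fines-correction factor that is
omitted in this sketch."""
import numpy as np
from scipy.optimize import curve_fit

def fredlund3(d, a, n, m):
    return 1.0 / (np.log(np.e + (a / d) ** n)) ** m

# Synthetic "measured" cumulative PSD (diameters in mm, mass fraction finer)
d_obs = np.array([0.002, 0.005, 0.01, 0.02, 0.05, 0.1, 0.25, 0.5, 1.0, 2.0])
p_obs = fredlund3(d_obs, a=0.05, n=1.8, m=0.9) \
        + 0.01 * np.random.default_rng(3).normal(size=d_obs.size)

popt, _ = curve_fit(fredlund3, d_obs, p_obs, p0=[0.1, 1.0, 1.0], maxfev=10000)
rmse = np.sqrt(np.mean((fredlund3(d_obs, *popt) - p_obs) ** 2))
print(f"fitted a = {popt[0]:.3f} mm, n = {popt[1]:.2f}, m = {popt[2]:.2f}, RMSE = {rmse:.3f}")
```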

  14. Single and Joint Multifractal Analysis of Soil Particle Size Distributions

    Institute of Scientific and Technical Information of China (English)

    LI Yi; LI Min; R.HORTON

    2011-01-01

    It is noted that there has been little research to compare volume-based and number-based soil particle size distributions (PSDs). Our objectives were to characterize the scaling properties and the possible connections between volume-based and number-based PSDs by applying single and joint multifractal analysis. Twelve soil samples were taken from selected sites in Northwest China and their PSDs were analyzed using laser diffractometry. The results indicated that the volume-based PSDs of all 12 samples and the number-based PSDs of 4 samples had multifractal scalings for moment order -6 < q < 6. Some empirical relationships were identified between the extreme probability values, maximum probability (Pmax), minimum probability (Pmin), and Pmax/Pmin, and the multifractal indices, the difference and the ratio of generalized dimensions at q = 0 and 1 (D0 - D1 and D1/D0), maximum and minimum singularity strength (αmax and αmin) and their difference (αmax - αmin, spectrum width), and asymmetric index (RD). An increase in Pmax generally resulted in corresponding increases of D0 - D1, αmax, αmax - αmin, and RD, which indicated that a large Pmax increased the multifractality of a distribution. Joint multifractal analysis showed that there was significant correlation between the scaling indices of volume-based and number-based PSDs. The multifractality indices indicated that for a given soil, the volume-based PSD was more homogeneous than the number-based PSD, and more likely to display monofractal rather than multifractal scaling.
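
    A minimal sketch of the moment (partition-function) method behind the generalized dimensions D_q mentioned above: coarse-grain the mass fractions into boxes of decreasing size, form the sum of p_i^q, and take D_q from the log-log slope against box size. The fine-scale measure below is a synthetic multiplicative cascade, not soil data.

```python
"""Moment method for generalised dimensions D_q of a measure: coarse-grain the
fine-scale fractions into boxes, form sum(p_i**q), and take D_q from the
log-log slope.  The measure here is a synthetic binomial multiplicative
cascade."""
import numpy as np

rng = np.random.default_rng(4)

# Build a synthetic multifractal measure on 2**7 bins via a multiplicative cascade
levels = 7
p = np.array([1.0])
for _ in range(levels):
    w = rng.uniform(0.3, 0.7, size=p.size)   # left-child weights
    children = np.empty(2 * p.size)
    children[0::2] = p * w                   # left children (spatially adjacent)
    children[1::2] = p * (1.0 - w)           # right children
    p = children
p /= p.sum()

def generalized_dimension(p, q):
    """D_q from the slope of the partition function against log(box size)."""
    n = p.size
    eps, chi = [], []
    k = 1
    while k <= n // 2:
        boxes = p.reshape(n // k, k).sum(axis=1)   # coarse-grain into boxes of k bins
        boxes = boxes[boxes > 0]
        eps.append(k / n)
        if abs(q - 1.0) < 1e-9:                    # D_1 uses the entropy form
            chi.append(np.sum(boxes * np.log(boxes)))
        else:
            chi.append(np.log(np.sum(boxes ** q)))
        k *= 2
    slope, _ = np.polyfit(np.log(eps), chi, 1)
    return slope if abs(q - 1.0) < 1e-9 else slope / (q - 1.0)

for q in (-2.0, 0.0, 1.0, 2.0):
    print(f"D_{q:+.0f} = {generalized_dimension(p, q):.3f}")
```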

  15. Co-seismic Slip Distribution of 2010 Darfield Mw 7.1 Earthquake Derived from InSAR Measurements

    Science.gov (United States)

    Luo, X.; Sun, J.; Shen, Z.

    2012-12-01

    The New Zealand islands, located at the boundary between the Pacific and Australian plates, are among the most seismically active regions in the world. However, the 2010 Darfield earthquake occurred on previously unknown faults, which absorb only a minor portion of the relative plate motion there. We attempt to obtain detailed information about the fault geometry and rupture distribution of this event using InSAR data; the result will be useful in understanding the tectonic processes and seismic hazards of the region. We use ALOS PALSAR data from JAXA to derive line-of-sight (LOS) interferograms associated with the coseismic deformation of this earthquake. Multiple lines of evidence reveal that the Greendale fault was not the only fault involved in this earthquake. Combining geological field survey observations with SAR displacement fringes, correlation, and range and azimuth offsets, we identify four faults that slipped during the coseismic rupture, of which seven segments are distinguished with various strikes and dip angles, as shown in the figure. Our inversion result shows that slip is concentrated in the upper 10 km. Slip along the Greendale fault (segments 1-4 in the figure) is predominantly dextral, with a maximum of up to 8 m. Fault segments 5 and 6 slipped reversely, with peaks of ~3 m and 3.8 m respectively. Slip on fault segment 7 is minor, no more than 1.8 m. We also compare the top 1 km of slip along the Greendale fault with the surface rupture distribution, and find very good agreement. The maximum surface slip is about 6 m, located about 26 km east of the west end of the fault surface rupture. The total seismic moment released is equivalent to an Mw = 7.1 event. The main features of the InSAR data are well recovered; the residuals near the epicenter are less than 20 cm, confirming good data fitting of our fault slip model. The main data residuals are at the footwall, possibly due to strong variation of the deformation field revealed by complex interferogram fringes there. We also find that some displacements are not well explained

  16. Optimal Placement and Sizing of Renewable Distributed Generations and Capacitor Banks into Radial Distribution Systems

    Directory of Open Access Journals (Sweden)

    Mahesh Kumar

    2017-06-01

    Full Text Available In recent years, renewable types of distributed generation in the distribution system have been much appreciated due to their enormous technical and environmental advantages. This paper proposes a methodology for optimal placement and sizing of renewable distributed generations (i.e., wind, solar and biomass) and capacitor banks into a radial distribution system. The intermittency of wind speed and solar irradiance is handled with multi-state modeling using suitable probability distribution functions. The three objective functions, i.e., power loss reduction, voltage stability improvement, and voltage deviation minimization, are optimized using an advanced Pareto-front non-dominated sorting multi-objective particle swarm optimization method. First, a set of non-dominated Pareto-front solutions is obtained from the algorithm. Later, a fuzzy decision technique is applied to extract the trade-off solution set. The effectiveness of the proposed methodology is tested on the standard IEEE 33-bus test system. The overall results reveal that the combination of renewable distributed generations and capacitor banks is dominant in power loss reduction, voltage stability and voltage profile improvement.

  17. Aerosol size distribution seasonal characteristics measured in Tiksi, Russian Arctic

    Directory of Open Access Journals (Sweden)

    E. Asmi

    2015-07-01

    Full Text Available Four years of continuous aerosol number size distribution measurements from an Arctic Climate Observatory in Tiksi, Russia, are analyzed. Source region effects on particle modal features, and on number and mass concentrations, are presented for different seasons. The monthly median total aerosol number concentration in Tiksi ranges from 184 cm-3 in November to 724 cm-3 in July, with a local maximum in March of 481 cm-3. The total mass concentration has a distinct maximum in February–March of 1.72–2.38 μg m-3 and two minima, in June of 0.42 μg m-3 and in September–October of 0.36–0.57 μg m-3. These seasonal cycles in number and mass concentrations are related to isolated aerosol sources such as Arctic haze in early spring, which increases accumulation- and coarse-mode numbers, and biogenic emissions in summer, which affect the smaller nucleation- and Aitken-mode particles. The impact of temperature-dependent natural emissions on aerosol and cloud condensation nuclei numbers was significant. Therefore, in addition to the precursor emissions of biogenic volatile organic compounds, the frequent Siberian forest fires, although distant, are suggested to play a role in Arctic aerosol composition during the warmest months. During calm and cold months aerosol concentrations were occasionally increased by nearby aerosol sources under trapping inversions. These results provide valuable information on inter-annual cycles and sources of Arctic aerosols.

  18. Controllable microgels from multifunctional molecules: structure control and size distribution

    Science.gov (United States)

    Gu, Zhenyu; Patterson, Gary; Cao, Rong; Armitage, Bruce

    2004-03-01

    Supramolecular microgels with fractal structures were produced by engineered multifunctional molecules. The combination of static and dynamic light scattering was utilized to characterize the fractal dimension (Df) of the microgels and to analyze their aggregation process. The microgels are assembled from (1) a tetrafunctional protein (avidin), (2) a trifunctional DNA construct known as a three-way junction, and (3) a biotinylated peptide nucleic acid (PNA) that acts as a crosslinker by binding irreversibly to four equivalent binding sites on the protein and thermoreversibly to three identical binding sites on the DNA. The structure of the microgels can be controlled through different aggregation mechanisms. The initial microgels formed by titration have a compact structure with Df ~ 2.6, while the reversible microgels formed from melted aggregates have an open structure with Df ~ 1.8. These values are consistent with the point-cluster and the cluster-cluster aggregation mechanisms, respectively. A narrow size distribution of microgels was observed and explained in terms of the Flory theory of reversible self-assembly.

  19. Bubble Size Distribution in a Vibrating Bubble Column

    Science.gov (United States)

    Mohagheghian, Shahrouz; Wilson, Trevor; Valenzuela, Bret; Hinds, Tyler; Moseni, Kevin; Elbing, Brian

    2016-11-01

    While vibrating bubble columns have increased the mass transfer between phases, a universal scaling law remains elusive. Attempts to predict mass transfer rates in large industrial-scale applications by extrapolating laboratory-scale models have failed. In a stationary bubble column, mass transfer is a function of the phase interfacial area (PIA), while the PIA is determined by the bubble size distribution (BSD). On the other hand, the BSD is influenced by the injection characteristics and the liquid-phase dynamics and properties. Vibration modifies the BSD by impacting the gas and gas-liquid dynamics. This work uses a vibrating cylindrical bubble column to investigate the effect of gas injection and vibration characteristics on the BSD. The bubble column has a 10 cm diameter and was filled with water to a depth of 90 cm above the tip of the orifice tube injector. The BSD was measured using high-speed imaging to determine the projected area of individual bubbles, from which the nominal bubble diameter was then calculated assuming spherical bubbles. The dependence of the BSD on the distance from the injector, injector design (1.6 and 0.8 mm ID), air flow rate (0.5 to 5 L/min), and vibration conditions (stationary and vibrating conditions of varying amplitude and frequency) will be presented. In addition to mean data, higher-order statistics will also be provided.
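
    A minimal sketch of the image-based sizing step described above: each bubble's projected area A is converted to a nominal spherical diameter D = sqrt(4A/π) and the diameters are binned into a BSD. The projected areas here are synthetic, not measured values.

```python
"""Convert projected bubble areas (from high-speed images) into nominal
spherical diameters D = sqrt(4A/pi) and bin them into a bubble size
distribution.  The projected areas are synthetic."""
import numpy as np

rng = np.random.default_rng(5)

# Synthetic projected areas [mm^2] for a few hundred detected bubbles
areas = rng.lognormal(mean=np.log(12.0), sigma=0.5, size=500)

diameters = np.sqrt(4.0 * areas / np.pi)   # nominal spherical diameter [mm]

counts, edges = np.histogram(diameters, bins=np.linspace(1.0, 10.0, 19))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:4.1f}-{hi:4.1f} mm : {c:3d} bubbles")

print(f"Sauter mean diameter d32 = {np.sum(diameters**3) / np.sum(diameters**2):.2f} mm")
```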

  20. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Science.gov (United States)

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  1. The Influence of Crystal Size Distributions (CSD) on the Rheology of Magma: New Insights from Analogue Experiments

    Science.gov (United States)

    Klein, J.; Mueller, S.; Castro, J. M.

    2016-12-01

    Knowing the flow properties, or rheology, of magma is of great importance for volcanological research. It is vital for understanding eruptive and depositional features, modelling magma flow rates and distances, interpreting pre-eruptive volcanic unrest and earthquakes, and ultimately predicting volcanic hazards related to magma motion. Despite its key role in governing volcanic processes, magma rheology is extremely difficult to constrain in time and space within a natural volcanic system, because it is dependent upon so many variables. Therefore, both analogue and experimental studies of permissible yet simplified scenarios are needed to isolate different rheological influences. Despite significant progress in understanding the rheological properties of silicate melts and two-phase mixtures (e.g. melt + crystals), as well as the impact of the volume fraction (e.g. Pinkerton & Stevenson, 1992; Caricchi et al., 2007; Mueller et al., 2010) and shape (Mueller et al., 2011) of crystals on magma rheology, the effect of the crystal size distribution (CSD) is still poorly constrained. A highly disperse CSD (i.e., a great variety of different crystal sizes) leads to a much more efficient packing of crystals in a flowing magma, which predominantly controls the rheological behavior of magma as a sheared particle suspension. Accounting for, or neglecting, the size distribution of crystals can therefore make a considerable difference in magma flow models. We present the results of systematic rheometric experiments using multimodal analogue particle suspensions of well-defined size fractions of micrometer-sized glass beads in silicone oil as magma-analogue material. Starting with simple bimodal distributions (i.e. particles of two distinct sizes), the complexity of the samples' particle size distribution has been successively increased and evaluated towards tetramodal distributions (four distinct size fractions). Statistical values of the given suspensions have been calculated and
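
    The packing effect described above is often captured with a Maron-Pierce-type law, eta_r = (1 - phi/phi_m)^-2, in which a broader (more polydisperse) CSD raises the maximum packing fraction phi_m and thereby lowers the relative viscosity at a given crystal fraction phi. The phi_m values in the sketch below are illustrative, not fitted values from these experiments.

```python
"""Illustrative Maron-Pierce calculation of relative viscosity,
eta_r = (1 - phi/phi_m)**-2, showing how a higher maximum packing fraction
phi_m (broader size distribution) lowers the suspension viscosity at a given
crystal fraction phi.  The phi_m values are illustrative only."""
import numpy as np

def maron_pierce(phi, phi_m):
    return (1.0 - phi / phi_m) ** -2

phi = np.array([0.2, 0.3, 0.4, 0.5])
cases = {"monodisperse (phi_m ~ 0.61)": 0.61,
         "bimodal      (phi_m ~ 0.70)": 0.70,
         "polydisperse (phi_m ~ 0.80)": 0.80}

for label, phi_m in cases.items():
    etas = ", ".join(f"{maron_pierce(p, phi_m):7.2f}" for p in phi)
    print(f"{label}: eta_r at phi={phi.tolist()} -> {etas}")
```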

  2. Tectonics earthquake distribution pattern analysis based focal mechanisms (Case study Sulawesi Island, 1993–2012)

    Energy Technology Data Exchange (ETDEWEB)

    Ismullah M, Muh. Fawzy, E-mail: mallaniung@gmail.com [Master Program Geophysical Engineering, Faculty of Mining and Petroleum Engineering (FTTM), Bandung Institute of Technology (ITB), Jl. Ganesha no. 10, Bandung, 40116, Jawa Barat (Indonesia); Lantu,; Aswad, Sabrianto; Massinai, Muh. Altin [Geophysics Program Study, Faculty of Mathematics and Natural Sciences, Hasanuddin University (UNHAS), Jl. PerintisKemerdekaan Km. 10, Makassar, 90245, Sulawesi Selatan (Indonesia)

    2015-04-24

    Indonesia is the meeting zone of three major world plates: the Eurasian Plate, the Pacific Plate, and the Indo-Australian Plate. Therefore, Indonesia has a high degree of seismicity, and Sulawesi is one of its regions with a high seismicity level. Earthquake centres lie in fault zones, so earthquake data provide a visualization of the tectonics of a given place. The purpose of this research is to identify a tectonic model for Sulawesi by using earthquake data from 1993 to 2012. The data used in this research are earthquake data consisting of the origin time, the epicenter coordinates, the depth, the magnitude and the fault parameters (strike, dip and slip). The results of the research show that there are many active structures responsible for earthquakes in Sulawesi. The active structures are the Walannae Fault, Lawanopo Fault, Matano Fault, Palu-Koro Fault, Batui Fault and the Moluccas Sea Double Subduction. The focal mechanisms also show that the Walannae Fault, Batui Fault and Moluccas Sea Double Subduction are reverse faults, while the Lawanopo Fault, Matano Fault and Palu-Koro Fault are strike-slip faults.

  3. Earthquake statistics, spatiotemporal distribution of foci and source mechanisms as a key to understanding of causes leading to the West Bohemia/Vogtland earthquake swarms

    Science.gov (United States)

    Horalek, Josef; Jakoubkova, Hana

    2017-04-01

    The origin of earthquake swarms is still unclear. The swarms typically occur at plate margins but also in intracontinental areas. West Bohemia-Vogtland represents one of the most active intraplate earthquake-swarm areas in Europe. It is characterised by the frequent recurrence of earthquake swarms, most of which occur in the Nový Kostel (NK) zone, forming a focal belt of about 15 x 6 km; focal depths vary from 6 to 15 km. An exceptional non-swarm activity (mainshock-aftershock sequences) with magnitudes up to ML = 4.5 struck the region from May to August 2014; these events were also located in the NK swarm focal belt. We analysed the geometry of the NK focal zone by applying the double-difference method to the seismicity in the period 1997-2014. The swarms are located close to each other at depths between 6 and 13 km, with the 2014 mainshock-aftershock sequences among them. The 2000 and 2008 swarms were located on the same portion of the NK fault; similarly, the swarms of 1997, 2011 and 2013 also occurred on the same fault segment. Another fault segment hosted the three mainshock-aftershock sequences of 2014. The individual swarms differ considerably in their evolution, mainly in the rate of seismic-moment release and foci migration. The frequency-magnitude distributions of all the swarms show a bimodal-like character: most events obey the b-value = 1.0 distribution, but a group of the largest events (ML > 2.8) departs significantly from it. Furthermore, we find that all the ML > 2.8 swarm events that occurred in the given time span are located in a few dense clusters. This implies that most of the seismic energy in the individual swarms has been released in the step-by-step rupturing of one or a few asperities. The source mechanisms have been retrieved in the full moment-tensor description (MT). The mechanism patterns of the individual swarms indicate their complexity. All the swarms exhibit both oblique-normal and oblique-thrust faulting, but the former prevails. We found several families of mechanisms which fit well the geometry of respective

  4. Focal depths for moderate-sized aftershocks of the Wenchuan M_S8.0 earthquake and their implications

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A sliding-window cross-correlation method is first adopted to identify the sPn phase and to constrain the focal depth from regional seismograms by measuring the time separation between the sPn and Pn phases. We present the focal depths of 17 moderate-sized aftershocks (MS≥5.0) of the Wenchuan MS8.0 earthquake, using data recorded by the regional broadband seismic networks of Shaanxi, Qinghai, Gansu, Yunnan and Sichuan. Our results show that the focal depths of the aftershocks range from 8 to 20 km and tend to cluster at two average depths separated at 32.5°N, i.e., 11 km to the south and 17 km to the north, indicating that these aftershocks originated in the upper-to-middle crust. Combined with other results, we suggest that the Longmenshan fault is not a through-going crustal fault and that the Pingwu-Qingchuan fault may not be the northward extension of the Longmenshan thrust fault.
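
    To first order, the sPn - Pn delay is independent of epicentral distance and depends on focal depth h through the vertical slownesses of P and S at the source: dt = h*(sqrt(1/vs^2 - p^2) + sqrt(1/vp^2 - p^2)), with p = 1/vn the Pn ray parameter. The sketch below evaluates this with assumed crustal velocities and delays, not the values used in the study.

```python
"""Focal depth from the sPn - Pn differential time:
dt = h * (sqrt(1/vs**2 - p**2) + sqrt(1/vp**2 - p**2)), with p = 1/vn the Pn
ray parameter.  Crustal velocities and measured delays are assumed,
illustrative values."""
import numpy as np

vp, vs, vn = 6.2, 3.55, 8.0   # crustal P, crustal S, Pn (uppermost mantle) velocities [km/s]
p = 1.0 / vn                  # Pn ray parameter [s/km]

def depth_from_spn_pn(dt):
    eta_p = np.sqrt(1.0 / vp**2 - p**2)   # vertical P slowness at the source
    eta_s = np.sqrt(1.0 / vs**2 - p**2)   # vertical S slowness at the source
    return dt / (eta_s + eta_p)

for dt in (2.0, 3.0, 4.0, 5.0):           # measured sPn - Pn delays [s]
    print(f"dt = {dt:.1f} s  ->  focal depth ~ {depth_from_spn_pn(dt):.1f} km")
```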

  5. New geological perspectives on earthquake recurrence models

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, D.P. [Geological Survey, Menlo Park, CA (United States)

    1997-02-01

    In most areas of the world the record of historical seismicity is too short or uncertain to accurately characterize the future distribution of earthquakes of different sizes in time and space. Most faults have not ruptured once, let alone repeatedly. Ultimately, the ability to correctly forecast the magnitude, location, and probability of future earthquakes depends on how well one can quantify the past behavior of earthquake sources. Paleoseismological trenching of active faults, historical surface ruptures, liquefaction features, and shaking-induced ground deformation structures provides fundamental information on the past behavior of earthquake sources. These studies quantify (a) the timing of individual past earthquakes and fault slip rates, which lead to estimates of recurrence intervals and the development of recurrence models and (b) the amount of displacement during individual events, which allows estimates of the sizes of past earthquakes on a fault. When timing and slip per event are combined with information on fault zone geometry and structure, models that define individual rupture segments can be developed. Paleoseismicity data, in the form of timing and size of past events, provide a window into the driving mechanism of the earthquake engine--the cycle of stress build-up and release.

  6. Comparisons of Particulate Size Distributions from Multiple Combustion Strategies

    Science.gov (United States)

    Zhang, Yizhou

    In this study, a comparison of particle size distribution (PSD) measurements from eight different combustion strategies was conducted at four different load-speed points. The PSDs were measured using a scanning mobility particle sizer (SMPS) together with a condensation particle counter (CPC). To study the influence of volatile particles, PSD measurements were performed with and without a volatile particle remover (thermodenuder, TD) at both low and high dilution ratios. The common engine platform utilized in the experiment helps to eliminate the influence of background particulates and ensures similarity in dilution conditions. The results show that a large number of volatile particles were present under LDR sampling conditions for most of the operating conditions. The use of a TD, especially when coupled with HDR, was demonstrated to be effective at removing volatile particles and provided consistent measurements across all combustion strategies. The PSD comparison showed that gasoline premixed combustion strategies such as HCCI and GCI generally have low PSD magnitudes for particle sizes greater than the Particle Measurement Programme (PMP) cutoff diameter (23 nm), and the PSDs were highly nuclei-mode particle dominated. The strategies using diesel as the only fuel (DLTC and CDC) generally showed the highest particle number emissions for particles larger than 23 nm and had accumulation-mode particle dominated PSDs. A consistent correlation between the increase of direct-injected diesel fuel and a higher fraction of accumulation-mode particles was observed over all combustion strategies. A DI fuel substitution study and an injector nozzle geometry study were conducted to better understand the correlation between PSD shape and DI fueling. It was found that DI fuel properties have a clear impact on PSD behavior for CDC and NG DPI. Fuel with lower density and lower sooting tendency led to a nuclei-mode particle dominated PSD shape. For NG RCCI, accumulation

  7. Single-peak distribution model of particulate size for welding aerosols

    Institute of Scientific and Technical Information of China (English)

    施雨湘; 李爱农

    2003-01-01

    A large number of particulate size distributions of welding aerosols were measured by means of the DMPS method, and several distribution types are presented. Among them, the single-peak distribution is the basic building block of the particulate size distributions. Research on the mathematical models and distribution functions shows that the single-peak distribution follows a log-normal distribution. The diagram-estimating method (DEM) is a concise approach to dealing with the distribution types and to obtaining distribution functions for the particulate sizes of welding aerosols. It is shown that the particulate size distribution function possesses an extending property, from the number distribution to the volume distribution as well as to higher-order moment distributions, with the Kolmogorov-Smirnov (K-S) method verifying the applicability of the single-peak distribution and of the DEM.
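
    A minimal sketch of fitting a single-peak log-normal number distribution to binned size data, recovering the count median diameter (CMD) and geometric standard deviation (GSD). The binned fractions below are synthetic, not DMPS measurements.

```python
"""Fit a single-peak log-normal number distribution to binned particle size
data and recover the count median diameter (CMD) and geometric standard
deviation (GSD).  The binned fractions are synthetic."""
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(d, cmd, gsd):
    s = np.log(gsd)
    return np.exp(-(np.log(d / cmd)) ** 2 / (2 * s**2)) / (d * s * np.sqrt(2 * np.pi))

# Synthetic binned measurement (e.g. from a DMPS): bin-centre diameters [nm]
d = np.logspace(np.log10(20), np.log10(800), 25)
true_cmd, true_gsd = 150.0, 1.8
frac = lognormal_pdf(d, true_cmd, true_gsd)
frac = frac / frac.sum() + 0.002 * np.random.default_rng(6).normal(size=d.size)

# Fit normalised bin fractions so the model and data share the same scale
model = lambda d, cmd, gsd: lognormal_pdf(d, cmd, gsd) / lognormal_pdf(d, cmd, gsd).sum()
popt, _ = curve_fit(model, d, frac, p0=[100.0, 1.5])
print(f"fitted CMD = {popt[0]:.0f} nm, GSD = {popt[1]:.2f} (true: {true_cmd:.0f} nm, {true_gsd})")
```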

  8. Comment on "Analysis of the Spatial Distribution between Successive Earthquakes" by Davidsen and Paczuski

    CERN Document Server

    Werner, M J

    2006-01-01

    By analyzing a southern California earthquake catalog, Davidsen and Paczuski [Phys. Rev. Lett. 94, 048501 (2005)] claim to have found evidence contradicting the theory of aftershock zone scaling in favor of scale-free statistics. We present four elements showing that Davidsen and Paczuski's results may be insensitive to the existence of physical length scales associated with aftershock zones or mainshock rupture lengths, so that their claim is unsubstantiated. (i) Their exponent smaller than 1 for a pdf implies that the power law statistics they report is at best an intermediate asymptotic; (ii) their power law is not robust to the removal of 6 months of data around the Landers earthquake within a period of 17 years; (iii) the same analysis for Japan and northern California shows no evidence of robust power laws; (iv) a statistical model of earthquake triggering that explicitly obeys aftershock zone scaling can reproduce the observed histogram of Davidsen and Paczuski, demonstrating that their statistic may not ...

  9. Geodetically resolved slip distribution of the 27 August 2012 Mw=7.3 El Salvador earthquake

    Science.gov (United States)

    Geirsson, H.; La Femina, P. C.; DeMets, C.; Hernandez, D. A.; Mattioli, G. S.; Rogers, R.; Rodriguez, M.

    2013-12-01

    On 27 August 2012 a Mw = 7.3 earthquake occurred offshore of Central America, causing a small tsunami in El Salvador and Nicaragua but little damage otherwise. This is the largest magnitude earthquake in this area since 2001. We use co-seismic displacements estimated from episodic and continuous GPS station time series to model the magnitude and spatial variability of slip for this event. The estimated surface displacements in El Salvador are small. Additionally, we observe a deeper region of slip to the east that reaches towards the Gulf of Fonseca between El Salvador and Nicaragua. The observed tsunami additionally indicates near-trench rupture off the coast of El Salvador. The duration of the rupture is estimated from seismic data to be 70 s, which indicates a slow rupture process. Since the geodetic moment we obtain agrees with the seismic moment, this indicates that the earthquake was not associated with aseismic slip.

  10. Disaster waste characteristics and radiation distribution as a result of the Great East Japan Earthquake.

    Science.gov (United States)

    Shibata, Tomoyuki; Solo-Gabriele, Helena; Hata, Toshimitsu

    2012-04-03

    The compounded impacts of the catastrophes that resulted from the Great East Japan Earthquake have emphasized the need to develop strategies to respond to multiple types and sources of contamination. In Japan, earthquake and tsunami-generated waste were found to have elevated levels of metals/metalloids (e.g., mercury, arsenic, and lead) with separation and sorting more difficult for tsunami-generated waste as opposed to earthquake-generated waste. Radiation contamination superimposed on these disaster wastes has made it particularly difficult to manage the ultimate disposal resulting in delays in waste management. Work is needed to develop policies a priori for handling wastes from combined catastrophes such as those recently observed in Japan.

  11. The distribution of earthquakes with depth and stress in subducting slabs

    Science.gov (United States)

    Vassiliou, M. S.; Hager, B. H.; Raefsky, A.

    1984-01-01

    The global variation of Benioff zone seismicity with depth and the orientation of stress axes of deep and intermediate earthquakes is explained using numerical models of subducting slabs. Models that match the seismicity and stress require a barrier to flow at the 670 km seismic discontinuity. The barrier may be a viscosity increase of at least an order of magnitude or a chemical discontinuity. Instantaneous flow is subparallel to the slabs for models with a viscosity increase but contorted for models with a chemical barrier. Log N (number of earthquakes) decreases linearly to 250-300 km depth and increases thereafter. Stress magnitude in the models shows the same pattern, in accord with experiments showing N ∝ exp(kσ), with k a constant and σ the stress magnitude. The models predict downdip compression in the slabs at depths below 300-400 km, as observed for earthquake stress axes.

  12. Simulation of 2D Fields of Raindrop Size Distributions

    Science.gov (United States)

    Berne, A.; Schleiss, M.; Uijlenhoet, R.

    2008-12-01

    The raindrop size distribution (DSD hereafter) is of primary importance for quantitative applications of weather radar measurements. The radar reflectivity Z (directly measured by radar) is related to the power backscattered by the ensemble of hydrometeors within the radar sampling volume. However, the rain rate R (the flux of water to the surface) is the variable of interest for many applications (hydrology, weather forecasting, and air traffic, for example). Usually, radar reflectivity is converted into rain rate using a power law such as Z = aR^b. The coefficients a and b of the Z-R relationship depend on the DSD. The variability of the DSD in space and time has to be taken into account to improve radar rain rate estimates. Therefore, the ability to generate a large number of 2D fields of DSD which are statistically homogeneous provides a very useful simulation framework that nicely complements experimental approaches based on DSD data, in order to investigate radar beam propagation through rain as well as radar retrieval techniques. The proposed approach is based on geostatistics for structural analysis and stochastic simulation. First, the DSD is assumed to follow a gamma distribution. Hence a 2D field of DSDs can be adequately described as a 2D field of a multivariate random function consisting of the three DSD parameters. Such fields are simulated by combining a Gaussian anamorphosis and a multivariate Gaussian random field simulation algorithm. Using the (cross-)variogram models fitted on data guarantees that the spatial structure of the simulated fields is consistent with the observed one. To assess its validity, the proposed method is applied to data collected during intense Mediterranean rainfall. As only time series are available, Taylor's hypothesis is assumed to convert the time series into 1D range profiles. Moreover, DSD fields are assumed to be isotropic so that the 1D structure can be used to simulate 2D fields. A large number of 2D fields of DSD parameters are
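
    A minimal sketch of one building block of such a simulation: a 2D Gaussian random field with an exponential covariance, generated by Cholesky factorization of the covariance matrix on a small grid, then mapped to a lognormal DSD parameter as a crude stand-in for the Gaussian anamorphosis. The grid size, correlation range and marginal parameters are placeholders.

```python
"""Simulate a 2D Gaussian random field with an exponential covariance (range r)
by Cholesky factorisation of the covariance matrix on a small grid, then map
it to a lognormally distributed DSD parameter (a crude stand-in for the
Gaussian anamorphosis).  Grid size, range and marginal parameters are
placeholders."""
import numpy as np

rng = np.random.default_rng(7)

n, dx, r = 25, 1.0, 8.0            # grid is n x n, spacing dx [km], correlation range r [km]
x, y = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
coords = np.column_stack([x.ravel(), y.ravel()])

# Exponential covariance C(h) = exp(-h / r) between all grid-node pairs
h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
C = np.exp(-h / r) + 1e-10 * np.eye(n * n)      # small nugget for numerical stability

L = np.linalg.cholesky(C)
gauss_field = (L @ rng.standard_normal(n * n)).reshape(n, n)

# Back-transform to a lognormal field, e.g. for the DSD intercept parameter Nw
mean_log_nw, sd_log_nw = np.log(8000.0), 0.6    # placeholder marginal parameters
nw_field = np.exp(mean_log_nw + sd_log_nw * gauss_field)

print(f"simulated Nw field: min={nw_field.min():.0f}, "
      f"median={np.median(nw_field):.0f}, max={nw_field.max():.0f}")
```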

  13. Rank-size distribution and primate city characteristics in India--a temporal analysis.

    Science.gov (United States)

    Das, R J; Dutt, A K

    1993-02-01

    "This paper is an analysis of the historical change in city size distribution in India....Rank-size distribution at national level and primate city-size distribution at regional levels are examined....The paper also examines, in the Indian context, the relation between rank-size distribution and an integrated urban system, and the normative nature of the latter as a spatial organization of human society. Finally, we have made a modest attempt to locate the research on city-size distribution...." excerpt

  14. Geological structure of Osaka basin and characteristic distributions of structural damage caused by earthquake; Osaka bonchi kozo to shingai tokusei

    Energy Technology Data Exchange (ETDEWEB)

    Nakagawa, K.; Shiono, K.; Inoue, N.; Senda, S. [Osaka City University, Osaka (JP. Faculty of Science); Ryoki, K. [Osaka Polytechnic Collage, Osaka (Japan); Shichi, R. [Nagoya University, Nagoya (Japan). Faculty of Science

    1996-05-01

    The paper investigates relations between the damage caused by the Hyogo-ken Nanbu earthquake and the deep underground structures. A characteristic of the earthquake damage distribution is that the damage concentrated near faults. Most of the damage was seen on the relatively downthrown side of the faults, slightly slanting toward the sea, rather than right above the faults. A distribution like this seems to be closely related to underground structures. Therefore, a distribution map of the depth of the basement granite in the Osaka sedimentary basin was drawn, referring to the data on basement rock depth obtained from the distribution map of gravity anomalies and the results of a survey using the air-gun reflection method. Moreover, the three-dimensional underground structure was determined by 3-D gravity analysis. The results were concluded as follows: when the M7 zone of the lowland, where the damage was particularly great, is examined from the viewpoint of gravity anomalies, the basement rock below the zone declines near a cliff-like step toward the sea, which indicates a great possibility of its being a fault. There is a high possibility that the damage in this zone was caused mostly by focusing due to refraction and total reflection of seismic rays. 3 refs., 8 figs.

  15. Thermal Properties, Sizes, and Size Distribution of Jupiter-Family Cometary Nuclei

    CERN Document Server

    Fernandez, Y R; Lamy, P L; Toth, I; Groussin, O; Lisse, C M; A'Hearn, M F; Bauer, J M; Campins, H; Fitzsimmons, A; Licandro, J; Lowry, S C; Meech, K J; Pittichova, J; Reach, W T; Snodgrass, C; Weaver, H A

    2013-01-01

    We present results from SEPPCoN, an on-going Survey of the Ensemble Physical Properties of Cometary Nuclei. In this report we discuss mid-infrared measurements of the thermal emission from 89 nuclei of Jupiter-family comets (JFCs). All data were obtained in 2006 and 2007 with the Spitzer Space Telescope. For all 89 comets, we present new effective radii, and for 57 comets we present beaming parameters. Thus our survey provides the largest compilation of radiometrically-derived physical properties of nuclei to date. We conclude the following. (a) The average beaming parameter of the JFC population is 1.03+/-0.11, consistent with unity, and indicating low thermal inertia. (b) The known JFC population is not complete even at 3 km radius, and even for comets with perihelia near ~2 AU. (c) We find that the JFC nuclear cumulative size distribution (CSD) has a power-law slope of around -1.9. (d) This power-law is close to that derived from visible-wavelength observations, suggesting that there is no strong dependenc...

  16. Slip distribution of the 2014 Mw = 8.1 Pisagua, northern Chile, earthquake sequence estimated from coseismic fore-arc surface cracks

    Science.gov (United States)

    Loveless, John P.; Scott, Chelsea P.; Allmendinger, Richard W.; González, Gabriel

    2016-10-01

    The 2014 Mw = 8.1 Iquique (Pisagua), Chile, earthquake sequence ruptured a segment of the Nazca-South America subduction zone that last hosted a great earthquake in 1877. The sequence opened >3700 surface cracks in the fore arc of decameter-scale length and millimeter- to centimeter-scale aperture. We use the strikes of measured cracks, inferred to be perpendicular to coseismically applied tension, to estimate the slip distribution of the main shock and largest aftershock. The slip estimates are compatible with those based on seismic, geodetic, and tsunami data, indicating that geologic observations can also place quantitative constraints on rupture properties. The earthquake sequence ruptured between two asperities inferred from a regional-scale distribution of surface cracks, interpreted to represent a modal or most common rupture scenario for the northern Chile subduction zone. We suggest that past events, including the 1877 earthquake, broke the 2014 Pisagua source area together with adjacent sections in a throughgoing rupture.

  17. Analysis of afterslip distribution following the 2007 September 12 southern Sumatra earthquake using poroelastic and viscoelastic media

    Science.gov (United States)

    Lubis, Ashar Muda; Hashima, Akinori; Sato, Toshinori

    2013-01-01

    Most studies of afterslip distribution consider only elastic media. However, the effects of poroelastic rebound in the upper crust and viscoelastic relaxation in the asthenosphere are part of the observed post-seismic deformation. Therefore, these effects should be removed to give a more reliable and correct afterslip distribution. We developed a method for calculating an afterslip distribution in elastic, poroelastic and viscoelastic media, and we applied this method to the case of the 2007 southern Sumatra earthquake (Mw 8.5). To estimate the coseismic slip and the time evolution of the afterslip distribution, we applied Akaike's Bayesian Information Criterion (ABIC) inversion method to the coseismic displacement, and analysed 15 months of GPS post-seismic deformation data in 3-month observation periods. To calculate the afterslip in each period, we considered not only viscoelastic responses to the coseismic slip but also viscoelastic responses to the afterslip in the preceding periods. We used the viscoelastic model to compute post-seismic deformation every 3 months during the 15 months after the earthquake. The viscosity of the asthenosphere layer is a crucial unknown parameter. To overcome this problem, we used a grid search method to determine the best viscosity value, and we found that the best viscosity for the Sumatra subduction zone was 2.5 × 10^18 Pa·s. After removing the poroelastic and viscoelastic responses, we obtained a maximum afterslip of 0.5 m during the 15-month investigation (the same maximum afterslip as estimated using the elastic medium only), but the poroelastic and viscoelastic responses brought the afterslip distribution to a shallower depth than the main coseismic rupture area. The results showed that the poroelastic and viscoelastic responses add significant corrections to the afterslip distribution. Compared with the traditional method, this method improved the determination of the afterslip distribution. We conclude that consideration of

  18. Evaluating the role of genome downsizing and size thresholds from genome size distributions in angiosperms.

    Science.gov (United States)

    Zenil-Ferguson, Rosana; Ponciano, José M; Burleigh, J Gordon

    2016-07-01

    Whole-genome duplications (WGDs) can rapidly increase genome size in angiosperms. Yet their mean genome size is not correlated with ploidy. We compared three hypotheses to explain the constancy of genome size means across ploidies. The genome downsizing hypothesis suggests that genome size will decrease by a given percentage after a WGD. The genome size threshold hypothesis assumes that taxa with large genomes or large monoploid numbers will fail to undergo or survive WGDs. Finally, the genome downsizing and threshold hypothesis suggests that both genome downsizing and thresholds affect the relationship between genome size means and ploidy. We performed nonparametric bootstrap simulations to compare observed angiosperm genome size means among species or genera against simulated genome sizes under the three different hypotheses. We evaluated the hypotheses using a decision theory approach and estimated the expected percentage of genome downsizing. The threshold hypothesis improves the approximations between mean genome size and simulated genome size. At the species level, the genome downsizing with thresholds hypothesis best explains the genome size means with a 15% genome downsizing percentage. In the genus level simulations, the monoploid number threshold hypothesis best explains the data. Thresholds of genome size and monoploid number added to genome downsizing at species level simulations explain the observed means of angiosperm genome sizes, and monoploid number is important for determining the genome size mean at the genus level. © 2016 Botanical Society of America.

  19. Geographic distribution of blood collections in Haiti before and after the 2010 earthquake.

    Science.gov (United States)

    Bjork, A; Jean Baptiste, A E; Noel, E; Jean Charles, N P D; Polo, E; Pitman, J P

    2017-05-01

    The January 2010 Haiti earthquake destroyed the National Blood Transfusion Center and reduced monthly national blood collections by > 46%. Efforts to rapidly scale-up blood collections outside of the earthquake-affected region were investigated. Blood collection data for 2004-2014 from Haiti's 10 administrative departments were grouped into four regions: Northern, Central, Port-au-Prince and Southern. Analyses compared regional collection totals during the study period. Collections in Port-au-Prince accounted for 52% of Haiti's blood supply in 2009, but fell 96% in February 2010. Haiti subsequently increased blood collections in the North, Central and Southern regions to compensate. By May 2010, national blood collections were only 10·9% lower than in May 2009, with 70% of collections coming from outside of Port-au-Prince. By 2013 national collections (27 478 units) had surpassed 2009 levels by 30%, and Port-au-Prince collections had recovered (from 11 074 units in 2009 to 11 670 units in 2013). Haiti's National Blood Safety Program managed a rapid expansion of collections outside of Port-au-Prince following the earthquake. Annual collections exceeded pre-earthquake levels by 2012 and continued rising annually. Increased regional collections provided a greater share of the national blood supply, reducing dependence on Port-au-Prince for collections.

  20. Growth and change in the analysis of rank - size distributions: empirical findings

    OpenAIRE

    Malecki, E.J.

    1980-01-01

    This paper analyzes the interrelationships of city size and growth in the American Midwest from 1940 to 1970 in an effort to synthesize the study of urban growth rates and of city-size distributions. Changes in the rank - size distribution are related to the differential growth of different-size urban places; some relationship in changes over time is evident, but there is little correspondence in static analyses. The urban system analyzed by various threshold sizes examines the sensitivity of ...

  1. Stress Distribution Near the Seismic Gap Between Wenchuan and Lushan Earthquakes

    Science.gov (United States)

    Yang, Yihai; Liang, Chuntao; Li, Zhongquan; Su, Jinrong; Zhou, Lu; He, Fujun

    2016-08-01

    The Wenchuan Ms 8.0 earthquake and Lushan Ms 7.0 earthquake ruptured unilaterally northeastward and southwestward, respectively, along the Longmenshan fault belt. The aftershock areas of the two earthquakes are separated by a gap nearly 60 km long. We determined the focal mechanisms of 471 earthquakes with magnitude M ≥ 3 from January 2008 to July 2014 near the seismic gap using a full waveform inversion method. Normal, thrust, and strike-slip focal mechanisms are all found in the northern segment; in clear contrast, focal mechanisms in the southern segment are dominated by thrust faulting. Based on the determined source parameters, we further applied a damped linear inversion method to derive the regional stress field. The southern segment is characterized by a clear thrust-faulting stress regime with a nearly horizontal maximum compression oriented SE-NW. The stress environment in the northern segment is considerably more complicated: the maximum compressional stresses appear to rotate around the "asperity" just west of Dujiangyan city. The stress field also shows strong variation with time and depth. Before 2009, seismicity was concentrated on the Pengxian-Guanxian and Yingxiu-Beichuan faults with dominantly strike-slip and normal faulting; after 2009, thrust faulting dominated from north to south, with activity concentrated on the Wenchuan-Maoxian fault in the northern segment and the Pengxian-Guanxian fault in the southern segment. The maximum compressional stresses also vary with depth from north to south, which may imply decoupled movement at shallow and greater depths.

  2. Drop Size Distribution - Based Separation of Stratiform and Convective Rain

    Science.gov (United States)

    Thurai, Merhala; Gatlin, Patrick; Williams, Christopher

    2014-01-01

    For applications in hydrology and meteorology, it is often desirable to separate regions of stratiform and convective rain from meteorological radar observations, both from ground-based polarimetric radars and from space-based dual frequency radars. In a previous study by Bringi et al. (2009), dual frequency profiler and dual polarization radar (C-POL) observations in Darwin, Australia, had shown that stratiform and convective rain could be separated in the log10(Nw) versus Do domain, where Do is the mean volume diameter and Nw is the scaling parameter which is proportional to the ratio of water content to the mass weighted mean diameter. Note that Nw and Do are two of the main drop size distribution (DSD) parameters. In a later study, Thurai et al. (2010) confirmed that both the dual-frequency profiler based stratiform-convective rain separation and the C-POL radar based separation were consistent with each other. In this paper, we test this separation method using DSD measurements from a ground based 2D video disdrometer (2DVD), along with simultaneous observations from a collocated, vertically-pointing, X-band profiling radar (XPR). The measurements were made in Huntsville, Alabama. One-minute DSDs from 2DVD are used as input to an appropriate gamma fitting procedure to determine Nw and Do. The fitted parameters - after averaging over 3-minutes - are plotted against each other and compared with a predefined separation line. An index is used to determine how far the points lie from the separation line (as described in Thurai et al. 2010). Negative index values indicate stratiform rain, positive values indicate convective rain, and points that lie close to the separation line are considered 'mixed' or 'transition' precipitation. The XPR observations are used to evaluate/test the 2DVD data-based classification. A 'bright-band' detection algorithm was used to classify each vertical reflectivity profile as either stratiform or convective
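    A minimal sketch of the separation-index idea described above, assuming a hypothetical separation line log10(Nw) = a*Do + b in the (Do, log10 Nw) plane; the slope, intercept, and mixed-band width below are placeholders, not the coefficients actually used by Bringi et al. (2009) or Thurai et al. (2010).

```python
import numpy as np

# Hypothetical separation line log10(Nw) = A_SEP*Do + B_SEP; substitute the
# published coefficients when applying the method for real.
A_SEP, B_SEP = -1.6, 6.3

def separation_index(d0_mm, log10_nw):
    """Signed distance from the separation line: >0 convective, <0 stratiform."""
    line = A_SEP * np.asarray(d0_mm) + B_SEP
    return np.asarray(log10_nw) - line

def classify(index, mixed_band=0.3):
    """Label points; those within +/- mixed_band are 'mixed'/'transition'."""
    return np.where(index > mixed_band, "convective",
           np.where(index < -mixed_band, "stratiform", "mixed"))

# 3-minute averaged gamma-fit parameters from the 2DVD (toy values)
d0 = np.array([1.1, 1.9, 2.4])
lognw = np.array([4.6, 3.1, 3.9])
idx = separation_index(d0, lognw)
print(list(zip(idx.round(2), classify(idx))))
```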

  3. Raindrop size distribution variability estimated using ensemble statistics

    Directory of Open Access Journals (Sweden)

    C. R. Williams

    2009-02-01

    Before radar estimates of the raindrop size distribution (DSD) can be assimilated into numerical weather prediction models, the DSD estimate must also include an uncertainty estimate. Ensemble statistics are based on using the same observations as inputs into several different models, with the spread in the outputs providing an uncertainty estimate. In this study, Doppler velocity spectra from collocated vertically pointing profiling radars operating at 50 and 920 MHz were the input data for 42 different DSD retrieval models. The DSD retrieval models were perturbations of seven different DSD models (including exponential and gamma functions), two different inverse modeling methodologies (convolution or deconvolution), and three different cost functions (two spectral and one moment cost functions).

    Two rain events near Darwin, Australia, were analyzed in this study, producing 26 725 independent ensembles of mass-weighted mean raindrop diameter Dm and rain rate R. The mean and the standard deviation (indicated by the symbols <x> and σx) of Dm and R were estimated for each ensemble. For small ranges of <Dm> or <R>, histograms of σDm and σR were found to be asymmetric, which prevented Gaussian statistics from being used to describe the uncertainties. Therefore, the 10th, 50th, and 90th percentiles of σDm and σR were used to describe the uncertainties for small intervals of <Dm> or <R>. The smallest Dm uncertainty occurred for <Dm> between 0.8 and 1.8 mm, with the 90th and 50th percentiles being less than 0.15 and 0.11 mm, which correspond to relative errors of less than 20% and 15%, respectively. The uncertainty increased for smaller and larger <Dm> values. The uncertainty of R increased with <R>. While the 90th percentile
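    As a rough illustration of the percentile-based uncertainty description (not the authors' processing code), the sketch below bins ensembles by <Dm> and reports the 10th, 50th, and 90th percentiles of σDm in each bin; the data are synthetic.

```python
import numpy as np

def binned_sigma_percentiles(mean_dm, sigma_dm, bin_edges,
                             percentiles=(10, 50, 90)):
    """Percentiles of the ensemble spread sigma(Dm) within small bins of <Dm>.

    mean_dm, sigma_dm : per-ensemble mean and standard deviation of Dm
    bin_edges         : edges of the <Dm> intervals (mm)
    """
    mean_dm, sigma_dm = np.asarray(mean_dm), np.asarray(sigma_dm)
    out = {}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (mean_dm >= lo) & (mean_dm < hi)
        if sel.any():
            out[(lo, hi)] = np.percentile(sigma_dm[sel], percentiles)
    return out

# toy ensembles: 5000 (mean, spread) pairs with an asymmetric spread
rng = np.random.default_rng(1)
m = rng.uniform(0.5, 2.5, 5000)
s = 0.05 + 0.05 * m + rng.gamma(2.0, 0.02, 5000)
print(binned_sigma_percentiles(m, s, np.arange(0.5, 2.6, 0.5)))
```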

  4. Mega-earthquake vs. small size seismic events: tradeoff and limits of Remote Sensing in the application of source parameters

    OpenAIRE

    S. Stramondo; Bignami, C.; Cannelli, V.; Melini, D.; Moro, M; Polcari, M.; Samsonov, S.; M. Saroli; P. Vannoli

    2014-01-01

    The aim of this work is to provide an overview of the capabilities and limitations of Differential Interferometric SAR (DInSAR) technique to supply reliable information about earthquakes over a very wide range of magnitudes, from mega-earthquakes (of magnitude 8+) up to those reaching the lower limits of detection. The capability of DInSAR to detect surface movements over large areas has been successfully used in seismology, where traditionally the main topic of scientists is to determine t...

  5. Aftershock distribution as a constraint on the geodetic model of coseismic slip for the 2004 Parkfield earthquake

    Science.gov (United States)

    Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt; ,

    2011-01-01

    Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.
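    The curvature constraint described above can be illustrated with a small regularized least-squares sketch: the slip model m is asked to fit the geodetic data G m ≈ d while its discrete Laplacian is encouraged to equal the negative Laplacian of the aftershock density. Everything below (matrix sizes, the crude Laplacian, the weight lam) is illustrative, not the authors' parameterization.

```python
import numpy as np

def curvature_constrained_slip(G, d, L, q, lam=1.0):
    """Least-squares slip model with a curvature constraint (conceptual sketch).

    Minimizes ||G m - d||^2 + lam^2 * ||L m + L q||^2, i.e. the curvature of
    slip (L m) is encouraged to equal the negative curvature of the aftershock
    density q, following the idea described in the abstract.

    G : Green's function matrix (n_data x n_cells)
    d : geodetic data vector
    L : discrete Laplacian operator on the fault mesh (n_cells x n_cells)
    q : aftershock (seismicity) density per model cell
    """
    q = np.asarray(q, dtype=float)
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, -lam * (L @ q)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# toy example: 40 data points, 25 cells on a 5x5 fault patch grid
rng = np.random.default_rng(2)
n_cells = 25
G = rng.normal(size=(40, n_cells))
true_slip = rng.random(n_cells)
d = G @ true_slip + 0.01 * rng.normal(size=40)
L = -4 * np.eye(n_cells) + np.eye(n_cells, k=1) + np.eye(n_cells, k=-1) \
    + np.eye(n_cells, k=5) + np.eye(n_cells, k=-5)   # crude 5x5 grid Laplacian
q = rng.random(n_cells)
slip = curvature_constrained_slip(G, d, L, q, lam=0.1)
print(slip.shape)
```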

  6. Coseismic slip distribution for the Mw 9 2011 Tohoku-Oki earthquake derived from 3-D FE modeling

    Science.gov (United States)

    Kyriakopoulos, C.; Masterlark, T.; Stramondo, S.; Chini, M.; Bignami, C.

    2013-07-01

    The coseismic slip distribution of the Mw 9.0 2011 Tohoku-Oki earthquake has been estimated by inverting near-field onshore and offshore geodetic data, using Green's functions calculated with a 3-D finite element (FE) model. The FE model simulates several geophysical features of the subduction zone that hosted the rupture surface of the event. These features include a 3-D geometric configuration and distribution of material properties of the tectonic system, a precise geometric configuration of the irregular rupture surface, and an irregular free surface according to the topography and bathymetry. A model that simulates rupture along the interface between the relatively weak overriding Okhotsk plate and the stiff subducting slab of the Pacific Plate requires less slip to produce the observed surface deformation, compared to a model having uniform material properties across the rupture interface. Furthermore, the estimated slip of the heterogeneous model is more widely distributed over the shallow portion of the plate boundary, whereas the estimated slip of the homogeneous model is more focused updip of the epicenter. This demonstrates the sensitivity of inverse analyses of geodetic data for the 2011 Tohoku-Oki earthquake to the simulated domain geometry and configuration of material properties.

  7. The Hierarchy Model of the Size Distribution of Centres

    NARCIS (Netherlands)

    J. Tinbergen (Jan)

    1968-01-01

    textabstractWe know that human beings live in centres, that is, cities, towns and villages of different size. Both large and small centres have a number of advantages and disadvantages, different for different people and this is why we have a whole range of sizes. Statistically, we even find that th

  8. Increasing lengths of aftershock zones with depths of moderate-size earthquakes on the San Jacinto Fault suggests triggering of deep creep in the middle crust

    Science.gov (United States)

    Meng, Xiaofeng; Peng, Zhigang

    2016-01-01

    Recent geodetic studies along the San Jacinto Fault (SJF) in southern California revealed a shallower locking depth than the seismogenic depth outlined by microseismicity. This disagreement leads to speculation that creeping episodes drive seismicity in the lower part of the seismogenic zone. Whether deep creep occurs along the SJF holds key information on how the fault slips during the earthquake cycle and the potential seismic hazard posed to southern California. Here we apply a matched filter technique to 10 M > 4 earthquake sequences along the SJF since 2000 and obtain more complete earthquake catalogues. We then systematically investigate the spatio-temporal evolution of these aftershock sequences. We find anomalously large aftershock zones for earthquakes that occurred below the geodetically inferred locking depth (i.e. 11-12 km), while the aftershock zones of shallower main shocks are close to expectations from standard scaling relationships. Although we do not observe clear migration of aftershocks, most aftershock zones do expand systematically with logarithmic time since the main shock. All the evidence suggests that aftershocks near or below the locking depth are likely driven by deep creep following the main shock. The presence of a creeping zone below 11-12 km may have significant implications for the maximum sizes of events in this region.
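    A matched-filter search of the kind mentioned above boils down to sliding a template event over continuous data and keeping windows whose normalized cross-correlation exceeds a threshold. The single-channel sketch below is only conceptual; real catalogue-building stacks correlations over many stations and channels and ties the detection threshold to the noise level (e.g. a multiple of the median absolute deviation).

```python
import numpy as np

def matched_filter_detect(continuous, template, threshold=0.7):
    """Sliding normalized cross-correlation of a template against continuous data.

    Returns sample indices where the correlation coefficient exceeds the
    threshold, plus the full correlation trace.
    """
    continuous = np.asarray(continuous, float)
    template = np.asarray(template, float)
    nt = template.size
    t = (template - template.mean()) / (template.std() * nt)
    cc = np.empty(continuous.size - nt + 1)
    for i in range(cc.size):
        win = continuous[i:i + nt]
        std = win.std()
        cc[i] = 0.0 if std == 0 else np.dot(t, win - win.mean()) / std
    return np.flatnonzero(cc > threshold), cc

# toy data: a tapered sinusoid template buried in noise
rng = np.random.default_rng(3)
tmpl = np.sin(np.linspace(0, 6 * np.pi, 100)) * np.hanning(100)
data = 0.3 * rng.normal(size=2000)
data[500:600] += tmpl
detections, cc = matched_filter_detect(data, tmpl)
print(detections[:5])
```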

  9. Theoretical Study on the Effects of Particle Size Distribution on the Optical Properties of Colloidal Gold

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyo Jeong; Chandra, Saha Leton; Jang, Joon Kyung [Pusan National University, Busan (Korea, Republic of)

    2007-10-15

    Mie theory has been used to calculate the extinction of a gold nanoparticle in water by varying its diameter from 1 to 1000 nm. Utilizing this size-dependent theoretical spectrum, we have calculated the extinction spectrum of colloidal gold by taking into account the particle size distribution. This calculation is in better agreement with experiment than a calculation that neglects the size distribution. A least-squares fitting is used to deduce the size distribution from an experimental extinction spectrum. For particles with diameters ranging from 10 to 28 nm, the fitting gives reasonable agreement with the size distribution obtained from tunneling electron microscope images.
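    A sketch of the size-distribution averaging step: the extinction spectrum of the colloid is the Mie extinction efficiency of each diameter weighted by the normalized size distribution and the geometric cross-section. The snippet assumes the miepython package and its miepython.mie(m, x) helper (returning Qext as the first element); the refractive index, sign convention, and lognormal parameters are illustrative, not fitted values from the paper.

```python
import numpy as np
import miepython   # assumed API: miepython.mie(m, x) returns (qext, qsca, qback, g)

def ensemble_extinction(wavelengths_nm, m_rel, diam_nm, weights):
    """Extinction spectrum of a polydisperse colloid.

    Each diameter contributes its geometric cross-section times the Mie
    extinction efficiency, weighted by the normalized size distribution.
    """
    w = np.asarray(weights, float)
    w = w / w.sum()
    spectra = []
    for lam in wavelengths_nm:            # wavelength in the surrounding medium assumed
        sigma = 0.0
        for d, wi in zip(diam_nm, w):
            x = np.pi * d / lam           # size parameter
            qext = miepython.mie(m_rel, x)[0]
            sigma += wi * qext * np.pi * (d / 2.0) ** 2
        spectra.append(sigma)
    return np.array(spectra)

# toy lognormal number distribution of particle diameters (nm)
diam = np.linspace(10, 28, 40)
pdf = np.exp(-0.5 * ((np.log(diam) - np.log(18.0)) / 0.2) ** 2) / diam
m_rel = 0.35 - 2.5j   # illustrative relative index; Mie routine's sign convention assumed
ext = ensemble_extinction(np.linspace(400, 700, 31), m_rel, diam, pdf)
print(ext.shape)
```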

  10. Calculation method for particle mean diameter and particle size distribution function under dependent model algorithm

    Institute of Scientific and Technical Information of China (English)

    Hong Tang; Xiaogang Sun; Guibin Yuan

    2007-01-01

    In the total light scattering particle sizing technique, the relationship among the Sauter mean diameter D32, the mean extinction efficiency Q, and the particle size distribution function is studied in order to invert for the mean diameter and the particle size distribution in a simple way. We propose a method that utilizes the ratio of the mean extinction efficiencies at only two selected wavelengths to solve for D32 and then to invert the particle size distribution associated with Q and D32. Numerical simulation results show that the particle size distribution is inverted accurately with this method, and the number of wavelengths used is reduced to the greatest extent over the measurement range. The calculation method has the advantages of simplicity and rapidness.

  11. Re-examination of the damage distribution and the source of the 1828 Sanjo Earthquake in central Japan

    Science.gov (United States)

    Nishiyama, A.; Satake, K.; Yata, T.; Urabe, A.

    2010-12-01

    Seismic intensity 5- (XIII on MM scale): 0% collapse ratio of houses. The estimated focal region of the Sanjo Earthquake is neither in Sanjo town, the worst-damaged area, nor in Yoita town. The collapse ratios of houses and the seismic intensity distribution revealed in this study indicate that the focal region of the 1828 Sanjo Earthquake lies in the Higashiyama hill located to the east of the Nagaoka plain, now in the southern part of Mitsuke City. Our study supports the interpretation that the source fault of the 1828 Sanjo Earthquake was not along the western edge of the Nagaoka plain, but possibly along the eastern edge of the plain, which separates the flood plain from the Higashiyama hill.

  12. 2010 Chile Earthquake Aftershock Response

    Science.gov (United States)

    Barientos, Sergio

    2010-05-01

    1906? Since the number of M>7.0 aftershocks has been low, does the distribution of large-magnitude aftershocks differ from previous events of this size? What is the origin of the extensional-type aftershocks at shallow depths within the upper plate? The international seismological community (France, Germany, U.K., U.S.A.), in collaboration with the Chilean seismological community, responded by deploying a total of 140 portable seismic stations to record aftershocks. Combined with the Chilean permanent seismic network in the area, this results in 180 stations now in operation, recording continuously at 100 samples per second. The seismic equipment is a mix of accelerometers, short-period and broadband seismic sensors deployed along the entire length of the aftershock zone that will record the aftershock sequence for three to six months. The collected seismic data will be merged and archived to produce an international data set open to the entire seismological community immediately after archiving. Each international group will submit their data as soon as possible in standard (miniSEED) format with accompanying metadata to the IRIS DMC, where the data will be merged into a combined data set and made available to individuals and other data centers. This will be by far the best-recorded aftershock sequence of a large megathrust earthquake. This outstanding international collaboration will provide an open data set for this important earthquake as well as a model for future aftershock deployments around the world.

  13. A grain size distribution model for non-catalytic gas-solid reactions

    NARCIS (Netherlands)

    Heesink, Albertus B.M.; Prins, W.; van Swaaij, Willibrordus Petrus Maria

    1993-01-01

    A new model to describe the non-catalytic conversion of a solid by a reactant gas is proposed. This so-called grain size distribution (GSD) model presumes the porous particle to be a collection of grains of various sizes. The size distribution of the grains is derived from mercury porosimetry measurements.

  14. Distribution of surface rupture associated the 2016 Kumamoto earthquake and its significance

    Science.gov (United States)

    Goto, H.; Kumahara, Y.; Tsutsumi, H.; Toda, S.; Ishimura, D.; Okada, S.; Nakata, T.; Kagohara, K.; Kaneda, H.; Suzuki, Y.; Watanabe, M.; Tsumura, S.; Matsuta, N.; Ishiyama, T.; Sugito, N.; Hirouchi, D.; Ishiguro, S.; Yoshida, H.; Tanaka, K.; Takenami, D.; Kashihara, S.; Tanaka, T.; Moriki, H.

    2016-12-01

    A Mj 6.5 earthquake hit Kumamoto Prefecture, central Kyushu, southwest Japan, at 21:26 JST on April 14th. About 28 hours later, the Mj 7.3 earthquake occurred at 01:25 JST on April 16 and caused severe shaking in and around the epicentral region. An ENE-to-NE-trending surface rupture zone associated with the earthquakes appeared along the previously mapped 100-km-long active fault called the Futagawa-Hinagu fault zone (FHFZ) (Watanabe et al., 1979; Research Group for Active Tectonics in Kyushu, 1989; Research Group for Active Faults of Japan, 1991; Ikeda et al., 2001; Nakata and Imaizumi ed., 2002). Based on our field survey over three months, we found a 31-km-long surface rupture close to the traces of the northeastern part of the FHFZ and another 5-km-long rupture on part of the Denokuchi fault. The rupture along the FHFZ shows mainly right-lateral strike slip (2.1 m at maximum). The rupture on the Denokuchi fault, about 2 km east of the FHFZ, has a normal component with the northwest side down. These coseismic ruptures of the Mj 7.3 earthquake appear to represent characteristic movement of the northeastern part of the FHFZ. Deformation such as a series of NW-SE-trending open cracks was traceable for a distance of 5.4 km from Kengun to the Shirakawa River in and around downtown Kumamoto city. These features, which follow tectonic landforms formed by the active fault and the fringe anomaly in the InSAR image (Geospatial Information Authority of Japan, 2016), represent small triggered slip. Eyewitness accounts from local residents and our observations revealed that a small coseismic rupture of the Mj 6.5 earthquake had appeared along the southern end of the Mj 7.3 earthquake ruptures. Seismic inversion results (DPRI, Kyoto Univ., 2016) showed that the coseismic rupture propagated toward the NE along the strike of the FHFZ, and an asperity near the surface was recognized about 10 km NE of the epicenter. The area of maximum

  15. [Change in distribution of pathogens and nosocomial antibiotic resistant Gram-negative Bacilli infection in intensive care units one month after an earthquake].

    Science.gov (United States)

    Kong, Qing-quan; Tu, Chong-qi; Pei, Fu-xing; Huang, Fu-guo; Liu, Hao; Song, Yue-ming; Yang, Tian-fu; Kang, Yan; Wang, Guang-lin; Liu, Li-min; Fang, Yue; Zhang, Hui

    2010-03-01

    To investigate the change in the distribution of pathogens and nosocomial antibiotic-resistant Gram-negative bacilli infections in intensive care units one month after an earthquake, a retrospective survey on the distribution of nosocomial Gram-negative bacilli infections in intensive care units before and one month after the Wenchuan earthquake was conducted in the West China Hospital. The MicroScan Walkaway 96SI or PHOENIX 100 automatic system, combined with manual identification, was employed to identify Gram-negative bacilli and their antibiotic resistance. The proportion of wound infections increased from 7.9% to 20.2% one month after the earthquake, but respiratory tract infection remained the most common. The common pathogens before the earthquake included Acinetobacter spp. (36.2%), Pseudomonas aeruginosa (22.7%), and Klebsiella spp. (12.3%). One month after the earthquake, imipenem remained highly active against Escherichia coli and Klebsiella spp., while their resistance to ceftazidime increased. Amikacin became the most active antibiotic against Pseudomonas aeruginosa. Acinetobacter spp. showed increased resistance to imipenem but remained highly sensitive to gatifloxacin and cefoxitin. The prevalence of extended-spectrum beta-lactamases (ESBLs) in Klebsiella spp. and Escherichia coli increased from 52.6% and 48.8% before the earthquake to 55.0% and 87.5% one month after the earthquake, respectively. There was a significant change in the distribution of pathogens and nosocomial antibiotic-resistant Gram-negative bacilli infections in intensive care units one month after the earthquake, which might be associated with the sudden increase in injured patients. It is essential to regularly monitor the resistance rates of bacilli to antibiotics.

  16. A generalized statistical model for the size distribution of wealth

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to that of any other model already known in the literature.
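    For reference, a hedged sketch of the κ-deformed exponential that underlies this family of models, together with the survival function of the κ-generalized income model as I understand it from the earlier paper; the parameter values are illustrative, and the full wealth extension (which also accommodates negative wealth) is not reproduced here.

```python
import numpy as np

def exp_kappa(u, kappa):
    """kappa-exponential: reduces to exp(u) as kappa -> 0."""
    if kappa == 0:
        return np.exp(u)
    return (np.sqrt(1.0 + kappa**2 * u**2) + kappa * u) ** (1.0 / kappa)

def kgen_survival(x, alpha, beta, kappa):
    """Complementary CDF of the kappa-generalized income model, P(X > x)."""
    x = np.asarray(x, float)
    return exp_kappa(-beta * x**alpha, kappa)

# illustrative parameters (not fitted values from the paper)
x = np.linspace(0.1, 10, 50)
print(kgen_survival(x, alpha=2.0, beta=0.5, kappa=0.7)[:5])
```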

  17. Particle size distribution and physico-chemical composition of clay.

    African Journals Online (AJOL)


    Solutions obtained after acid digestion of clay samples were used in determining the elements by Atomic Absorption ... Results including loss on ignition (LOI) reveal a general reduction in composition as particle size decreases.

  18. Simulating the particle size distribution of rockfill materials based on its statistical regularity

    Institute of Scientific and Technical Information of China (English)

    YAN Zongling; QIU Xiande; YU Yongqiang

    2003-01-01

    The particle size distribution of rockfill is studied by using granular mechanics, mesomechanics and probability statistics to reveal the relationship between the distribution of particle size and that of the potential energy intensity before fragmentation. It is found that the potential energy density has a linear relation to the logarithm of particle size, from which it is deduced that the logarithm of particle size follows a normal distribution because the potential energy density does so. Based on this finding, and by invoking the energy principle of rock fragmentation, a logarithmic distribution model of particle size is formulated, which uncovers the natural statistical characteristics of particle sizes. Examination of the average value, the expectation, and the unbiased variance of particle size indicates that the expectation does not equal the average value, but increases with increasing particle size and its non-uniformity, and is always larger than the average value; the unbiased variance increases as the non-uniformity and geometric average value increase. A case study shows that results simulated with the proposed logarithmic distribution model accord with the actual data. It is concluded that the logarithmic distribution model and the Kuz-Ram model can be used to forecast the particle size distribution of natural rockfill, while for blasted rockfill the Kuz-Ram model is an option; in combined applications of the two models, field tests are necessary to adjust some model parameters.
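    A small sketch of the central point, assuming sizes whose logarithm is normally distributed: the expectation exp(mu + sigma^2/2) always exceeds the geometric mean exp(mu), and the gap widens as the spread (non-uniformity) sigma grows. The sieve data below are synthetic.

```python
import numpy as np

def lognormal_stats(sizes_mm):
    """Fit the log-of-size normal model and compare average vs. expectation.

    If ln(d) ~ N(mu, sigma^2), the geometric mean is exp(mu) while the
    expectation is exp(mu + sigma^2/2); the gap grows with the spread of the
    distribution, as the abstract argues.
    """
    logs = np.log(np.asarray(sizes_mm, float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return {"geometric_mean": np.exp(mu),
            "expectation": np.exp(mu + 0.5 * sigma**2),
            "arithmetic_mean": float(np.mean(sizes_mm)),
            "sigma_log": sigma}

# toy sieve data (particle diameters in mm)
rng = np.random.default_rng(4)
d = rng.lognormal(mean=3.0, sigma=0.9, size=2000)
print(lognormal_stats(d))
```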

  19. Study on the Evaluation Method for Fault Displacement: Probabilistic Approach Based on Japanese Earthquake Rupture Data - Distributed fault displacements -

    Science.gov (United States)

    Inoue, N.; Kitada, N.; Tonagi, M.

    2016-12-01

    Distributed fault displacements in Probabilistic Fault Displacement Hazard Analysis (PFDHA) play an important role in the evaluation of critical facilities such as nuclear installations. In Japan, nuclear installations should be constructed where there is no possibility of displacement occurring on active faults during an earthquake. Youngs et al. (2003) defined distributed faulting as displacement on other faults, shears, or fractures in the vicinity of the principal rupture in response to the principal faulting. Other researchers have treated data on distributed faulting around principal faults and modeled it according to their own definitions (e.g. Petersen et al., 2011; Takao et al., 2013). We compiled Japanese fault displacement data and constructed slip-distance relationships depending on fault type. In the case of reverse faults, the slip-distance relationship on the footwall shows a different trend from that on the hanging wall. The process zone or damage zone has been studied as a weak structure around principal faults; its density or number decreases rapidly away from the principal fault. We contrasted the trend of these zones with that of the distributed slip-distance distributions. Subsurface FEM simulations were carried out to investigate the distribution of stress around principal faults; the results indicate a similar trend to the field observations. This research was part of the 2014-2015 research project 'Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  20. Size distribution of rare earth elements in coal ash

    Science.gov (United States)

    Scott, Clinton T.; Deonarine, Amrika; Kolker, Allan; Adams, Monique; Holland, James F.

    2015-01-01

    Rare earth elements (REEs) are utilized in various applications that are vital to the automotive, petrochemical, medical, and information technology industries. As world demand for REEs increases, critical shortages are expected. Because REEs are retained during coal combustion, coal fly ash is increasingly considered a potential resource. Previous studies have demonstrated that coal fly ash is variably enriched in REEs relative to feed coal (e.g., Seredin and Dai, 2012) and that enrichment increases with decreasing size fraction (Blissett et al., 2014). In order to further explore the REE resource potential of coal ash, and to determine the partitioning behavior of REEs as a function of grain size, we studied whole coal and fly ash size fractions collected from three U.S. commercial-scale coal-fired generating stations burning Appalachian or Powder River Basin coal. Whole fly ash was separated into <5 µm, 5 to 10 µm, and 10 to 100 µm particle size fractions by mechanical shaking using trace-metal clean procedures. In these samples, REE enrichment in whole fly ash ranges from 5.6 to 18.5 times that of the feed coals. Partitioning results for size separates relative to whole coal and whole fly ash will also be reported.

  1. Archiving and Distributing Seismic Data at the Southern California Earthquake Data Center (SCEDC)

    Science.gov (United States)

    Appel, V. L.

    2002-12-01

    The Southern California Earthquake Data Center (SCEDC) archives and provides public access to earthquake parametric and waveform data gathered by the Southern California Seismic Network and since January 1, 2001, the TriNet seismic network, southern California's earthquake monitoring network. The parametric data in the archive includes earthquake locations, magnitudes, moment-tensor solutions and phase picks. The SCEDC waveform archive prior to TriNet consists primarily of short-period, 100-samples-per-second waveforms from the SCSN. The addition of the TriNet array added continuous recordings of 155 broadband stations (20 samples per second or less), and triggered seismograms from 200 accelerometers and 200 short-period instruments. Since the Data Center and TriNet use the same Oracle database system, new earthquake data are available to the seismological community in near real-time. Primary access to the database and waveforms is through the Seismogram Transfer Program (STP) interface. The interface enables users to search the database for earthquake information, phase picks, and continuous and triggered waveform data. Output is available in SAC, miniSEED, and other formats. Both the raw counts format (V0) and the gain-corrected format (V1) of COSMOS (Consortium of Organizations for Strong-Motion Observation Systems) are now supported by STP. EQQuest is an interface to prepackaged waveform data sets for select earthquakes in Southern California stored at the SCEDC. Waveform data for large-magnitude events have been prepared and new data sets will be available for download in near real-time following major events. The parametric data from 1981 to present has been loaded into the Oracle 9.2.0.1 database system and the waveforms for that time period have been converted to mSEED format and are accessible through the STP interface. The DISC optical-disk system (the "jukebox") that currently serves as the mass-storage for the SCEDC is in the process of being replaced

  2. Multi-component Erlang distribution of plant seed masses and sizes

    Science.gov (United States)

    Fan, San-Hong; Wei, Hua-Rong

    2012-12-01

    The mass and the size distributions of plant seeds are very similar to the multi-component Erlang distribution of final-state particle multiplicities in high-energy collisions. We study the mass, length, width, and thickness distributions of pumpkin and marrow squash seeds in this paper. The corresponding distribution curves are obtained and fitted by using the multi-component Erlang distribution. In the comparison, the method of χ2-testing is used. The mass and the size distributions of the mentioned seeds are shown to obey approximately the multi-component Erlang distribution with the component number being 1.

  3. Particle size distributions in and exhausted from a poultry house

    Science.gov (United States)

    Here we describe a study looking at the full particulate size range of particles in a poultry house. Agricultural particulates are typically thought of as coarse mode dust. But recent emphasis of PM2.5 regulations on pre-cursors such as ammonia and volatile organic compounds increasingly makes it ne...

  4. Effect of the Size Distribution of Nanoscale Dispersed Particles on the Zener Drag Pressure

    Science.gov (United States)

    Eivani, A. R.; Valipour, S.; Ahmed, H.; Zhou, J.; Duszczyk, J.

    2011-04-01

    In this article, a new relationship for the calculation of the Zener drag pressure is described in which the effect of the size distribution of nanoscale dispersed particles is taken into account, in addition to particle radius and volume fraction, which have been incorporated in the existing relationships. Microstructural observations indicated a clear correlation between the size distribution of dispersed particles and recrystallized grain sizes in the AA7020 aluminum alloy. However, the existing relationship to calculate the Zener drag pressure yielded a negligible difference of 0.016 pct between the two structures homogenized at different conditions resulting in totally different size distributions of nanoscale dispersed particles and, consequently, recrystallized grain sizes. The difference in the Zener drag pressure calculated by the application of the new relationship was 5.1 pct, being in line with the experimental observations of the recrystallized grain sizes. Mathematical investigations showed that the ratio of the Zener drag pressure from the new equation to that from the existing equation is maximized when the number densities of all the particles with different sizes are equal. This finding indicates that in the two structures with identical parameters except the size distribution of nanoscale dispersed particles, the one that possesses a broader size distribution of particles, i.e., the number densities of particles with different sizes being equal, gives rise to a larger Zener drag pressure than that having a narrow size distribution of nanoscale dispersed particles, i.e., most of the particles being in the same size range.
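    The article's new relationship is not reproduced in the abstract, so the sketch below uses the classical Zener form P = 3*gamma*f/(2*r) and simply sums it over discretized size classes to show how a broader size distribution (at fixed total volume fraction) can change the drag pressure; the numbers are illustrative and this is not the authors' equation.

```python
import numpy as np

def zener_pressure_classical(gamma, f_total, r_mean):
    """Classical Zener drag pressure P = 3*gamma*f/(2*r)."""
    return 3.0 * gamma * f_total / (2.0 * r_mean)

def zener_pressure_distributed(gamma, radii, volume_fractions):
    """Size-class version: sum the classical term over each particle size class.

    This is only one plausible way to fold a size distribution into the drag
    pressure; the specific relationship derived in the article is not
    reproduced here.
    """
    radii = np.asarray(radii, float)
    f = np.asarray(volume_fractions, float)
    return 1.5 * gamma * np.sum(f / radii)

gamma = 0.3   # boundary energy in J/m^2 (illustrative)
# same total volume fraction, narrow vs. broad particle size distribution
narrow = zener_pressure_distributed(gamma, [20e-9, 25e-9], [0.005, 0.005])
broad = zener_pressure_distributed(gamma, [10e-9, 60e-9], [0.005, 0.005])
print(narrow, broad)
```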

  5. Degree Distribution, Rank-size Distribution, and Leadership Persistence in Mediation-Driven Attachment Networks

    CERN Document Server

    Hassan, Md Kamrul; Haque, Syed Arefinul

    2016-01-01

    We investigate the growth of a class of networks in which a new node first picks a mediator at random and connects with $m$ randomly chosen neighbors of the mediator at each time step. We show that degree distribution in such a mediation-driven attachment (MDA) network exhibits power-law $P(k)\sim k^{-\gamma(m)}$ with a spectrum of exponents depending on $m$. To appreciate the contrast between MDA and Barabási-Albert (BA) networks, we then discuss their rank-size distribution. To quantify how long a leader, the node with the maximum degree, persists in its leadership as the network evolves, we investigate the leadership persistence probability $F(\tau)$ i.e. the probability that a leader retains its leadership up to time $\tau$. We find that it exhibits a power-law $F(\tau)\sim \tau^{-\theta(m)}$ with persistence exponent $\theta(m) \approx 1.51\ \forall\ m$ in the MDA networks and $\theta(m) \rightarrow 1.53$ exponentially with $m$ in the BA networks.

  6. Degree distribution, rank-size distribution, and leadership persistence in mediation-driven attachment networks

    Science.gov (United States)

    Hassan, Md. Kamrul; Islam, Liana; Haque, Syed Arefinul

    2017-03-01

    We investigate the growth of a class of networks in which a new node first picks a mediator at random and connects with m randomly chosen neighbors of the mediator at each time step. We show that the degree distribution in such a mediation-driven attachment (MDA) network exhibits power-law P(k) ∼k - γ(m) with a spectrum of exponents depending on m. To appreciate the contrast between MDA and Barabási-Albert (BA) networks, we then discuss their rank-size distribution. To quantify how long a leader, the node with the maximum degree, persists in its leadership as the network evolves, we investigate the leadership persistence probability F(τ) i.e. the probability that a leader retains its leadership up to time τ. We find that it exhibits a power-law F(τ) ∼τ - θ(m) with persistence exponent θ(m) ≈ 1.51 ∀ m in MDA networks and θ(m) → 1.53 exponentially with m in BA networks.
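    A direct simulation of the MDA growth rule is straightforward; the sketch below (using networkx, with a small complete seed graph as an assumption since the seed construction is not specified in the abstract) grows a network and reports simple degree statistics, from which the degree distribution P(k) could be histogrammed.

```python
import random
import networkx as nx

def grow_mda_network(n_nodes, m, seed=0):
    """Grow a mediation-driven attachment (MDA) network.

    At each step a new node picks an existing node (the mediator) uniformly at
    random and connects to m randomly chosen neighbors of the mediator (or to
    all of them if the mediator has fewer than m neighbors).
    """
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)      # small seed graph so every mediator has neighbors
    existing = list(G.nodes())
    for new in range(m + 1, n_nodes):
        mediator = rng.choice(existing)
        neigh = list(G.neighbors(mediator))
        targets = neigh if len(neigh) <= m else rng.sample(neigh, m)
        for t in targets:
            G.add_edge(new, t)
        existing.append(new)
    return G

G = grow_mda_network(10000, m=2)
degrees = sorted((d for _, d in G.degree()), reverse=True)
print("max degree:", degrees[0], " mean degree:", sum(degrees) / len(degrees))
```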

  7. City-size distributions and the world urban system in the twentieth century.

    Science.gov (United States)

    Ettlinger, N; Archer, J C

    1987-09-01

    "In this paper we trace and interpret changes in the geographical pattern and city-size distribution of the world's largest cities in the twentieth century. Since 1900 the geographical distribution of these cities has become increasingly dispersed; their city-size distribution by rank was nearly linear in 1900 and 1940, and convex in 1980. We interpret the convex distribution which emerged following World War 2 as reflecting an economically integrated but politically and demographically partitioned global urban system. Our interpretation of changes in size distribution of cities emphasizes demographic considerations, largely neglected in previous investigations, including migration and relative rates of population change."

  8. Tsunami earthquake can occur elsewhere along the Japan Trench—Historical and geological evidence for the 1677 earthquake and tsunami

    Science.gov (United States)

    Yanagisawa, H.; Goto, K.; Sugawara, D.; Kanamaru, K.; Iwamoto, N.; Takamori, Y.

    2016-05-01

    Since the 11 March 2011 Tohoku earthquake, the mechanisms of large earthquakes along the Japan Trench have been intensely investigated. However, the characteristics of tsunami earthquakes, which trigger unusually large tsunamis, remain unknown. The earthquake of 4 November 1677 was a tsunami earthquake striking the southern part of the Japan Trench; its source mechanism remains unclear. This study elucidates the fault slip and moment magnitude of the 1677 earthquake and tsunami based on integrated analyses of historical documents, tsunami deposits, and numerical simulation. Geological survey results, together with analyses of thickness, grain size distributions, and diatoms, revealed that tsunami deposits in a small pond at 11 m elevation were probably formed by the 1677 event. This finding and the historical descriptions provide important constraints for estimating the unusually large fault slip and moment magnitude of the 1677 earthquake. Numerical simulation results reveal that a moment magnitude of 8.34-8.63 with a large 11-16 m slip area is necessary to satisfy the constraints. This fault slip and magnitude are equivalent to those of the 1896 Sanriku earthquake, a well-known tsunami earthquake in the northern part of the Japan Trench. We therefore conclude that a tsunami earthquake of moment magnitude 8.3-8.6 with unusually large slip can occur elsewhere along the Japan Trench. This point should be considered in future tsunami risk assessment along the Japan Trench and along any trench with tectonic settings similar to those of the Japan Trench.

  9. Size distribution of particle systems analyzed with organic photodetectors

    CERN Document Server

    Sentis, Matthias

    2015-01-01

    As part of a consortium between academia and industry, this PhD work investigates the interest and capabilities of organic photo-sensors (OPS) for the optical characterization of suspensions and two-phase flows. The principle of new optical particle sizing instruments is proposed to characterize particle systems confined in a glass cylinder (a standard configuration for Process Analytical Technologies). To evaluate and optimize the performance of these systems, a Monte Carlo model has been specifically developed. This model accounts for the numerous parameters of the system: laser beam profile, mirrors, lenses, sample cell, particle medium properties (concentration, mean and standard deviation, refractive indices), OPS shape and positions, etc. Light scattering by particles is treated using either Lorenz-Mie theory, Debye theory, or a hybrid model (which takes into account the geometrical and physical contributions). For diluted media (single scattering), particle size analysis is based on the inversion of scatter...

  10. Body size distributions of the pale grass blue butterfly in Japan: Size rules and the status of the Fukushima population

    Science.gov (United States)

    Taira, Wataru; Iwasaki, Mayo; Otaki, Joji M.

    2015-01-01

    The body size of the pale grass blue butterfly, Zizeeria maha, has been used as an environmental indicator of radioactive pollution caused by the Fukushima nuclear accident. However, geographical and temporal size distributions in Japan and temperature effects on size have not been established in this species. Here, we examined the geographical, temporal, and temperature-dependent changes of the forewing size of Z. maha argia in Japan. Butterflies collected in 2012 and 2013 from multiple prefectures throughout Japan demonstrated an inverse relationship of latitude and forewing size, which is the reverse of Bergmann’s cline. The Fukushima population was significantly larger than the Aomori and Miyagi populations and exhibited no difference from most of the other prefectural populations. When monitored at a single geographic locality every other month, forewing sizes were the largest in April and the smallest in August. Rearing larvae at a constant temperature demonstrated that forewing size followed the temperature-size rule. Therefore, the converse Bergmann’s rule and the temperature-size rule coexist in this multivoltine species. Our study establishes this species as a useful environmental indicator and supports the idea that the size reduction observed only in Fukushima Prefecture in 2011 was caused by the environmental stress of radioactive pollution. PMID:26197998

  11. Spatial Distribution of Ground water Level Changes Induced by the 2006 Hengchun Earthquake Doublet

    OpenAIRE

    Yeeping Chia; Jessie J. Chiu; Po-Yu Chung; Ya-Lan Chang; Wen-Chi Lai; Yen-Chun Kuan

    2009-01-01

    Water-level changes were observed in 107 wells at 67 monitoring stations in the southern coastal plain of Taiwan during the 2006 Mw 7.1 Hengchun earthquake doublet. Two consecutive coseismic changes induced by the earthquake doublet can be observed in high-frequency data. Observations from multiple-well stations indicate that the magnitude and direction of coseismic change may vary in wells of different depths. Coseismic rises were dominant on the southeast side of the coastal plain; wher...

  12. Fault geometry inversion and slip distribution of the 2010 Mw 7.2 El Mayor-Cucapah earthquake from geodetic data

    Science.gov (United States)

    Huang, Mong-Han; Fielding, Eric J.; Dickinson, Haylee; Sun, Jianbao; Gonzalez-Ortega, J. Alejandro; Freed, Andrew M.; Bürgmann, Roland

    2017-01-01

    The 4 April 2010 Mw 7.2 El Mayor-Cucapah (EMC) earthquake in Baja California and Sonora, Mexico, had primarily right-lateral strike-slip motion and a minor normal-slip component. The surface rupture extended about 120 km in a NW-SE direction, west of the Cerro Prieto fault. Here we use geodetic measurements including near- to far-field GPS, interferometric synthetic aperture radar (InSAR), and subpixel offset measurements of radar and optical images to characterize the fault slip during the EMC event. We use dislocation inversion methods and determine an optimal nine-segment fault geometry, as well as a subfault slip distribution from the geodetic measurements. With systematic perturbation of the fault dip angles, randomly removing one geodetic data constraint, or different data combinations, we are able to explore the robustness of the inferred slip distribution along fault strike and depth. The model fitting residuals imply contributions of early postseismic deformation to the InSAR measurements as well as lateral heterogeneity in the crustal elastic structure between the Peninsular Ranges and the Salton Trough. We also find that with incorporation of near-field geodetic data and finer fault patch size, the shallow slip deficit is reduced in the EMC event by reductions in the level of smoothing. These results show that the outcomes of coseismic inversions can vary greatly depending on model parameterization and methodology.

  13. Number size distributions and seasonality of submicron particles in Europe 2008-2009

    NARCIS (Netherlands)

    Asmi, A.; Wiedensohler, A.; Laj, P.; Fjaeraa, A.-M.; Sellegri, K.; Birmili, W.; Weingartner, E.; Baltensperger, U.; Zdimal, V.; Zikova, N.; Putaud, J.-P.; Marinoni, A.; Tunved, P.; Hansson, H.-C.; Fiebig, M.; Kivekäs, N.; Lihavainen, H.; Asmi, E.; Ulevicius, V.; Aalto, P.P.; Swietlicki, E.; Kristensson, A.; Mihalopoulos, N.; Kalivitis, N.; Kalapov, I.; Kiss, G.; Leeuw, G. de; Henzing, B.; Harrison, R.M.; Beddows, D.; O'Dowd, C.; Jennings, S.G.; Flentje, H.; Weinhold, K.; Meinhardt, F.; Ries, L.; Kulmala, M.

    2011-01-01

    Two years of harmonized aerosol number size distribution data from 24 European field monitoring sites have been analysed. The results give a comprehensive overview of the European near surface aerosol particle number concentrations and number size distributions between 30 and 500 nm of dry particle

  14. An analysis of the size distribution of Italian firms by age

    Science.gov (United States)

    Cirillo, Pasquale

    2010-02-01

    In this paper we analyze the size distribution of Italian firms by age. In other words, we want to establish whether the way that the size of firms is distributed varies as firms become old. As a proxy of size we use capital. In [L.M.B. Cabral, J. Mata, On the evolution of the firm size distribution: Facts and theory, American Economic Review 93 (2003) 1075-1090], the authors study the distribution of Portuguese firms and they find out that, while the size distribution of all firms is fairly stable over time, the distributions of firms by age groups are appreciably different. In particular, as the age of the firms increases, their size distribution on the log scale shifts to the right, the left tail becomes thinner and the right tail thicker, with a clear decrease of the skewness. In this paper, we perform a similar analysis with Italian firms using the CEBI database, also considering firms’ growth rates. Although there are several papers dealing with Italian firms and their size distribution, to our knowledge a similar study concerning size and age has not been performed yet for Italy, especially with such a big panel.

  15. A model study of the size and composition distribution of aerosols in an aircraft exhaust

    Energy Technology Data Exchange (ETDEWEB)

    Sorokin, A.A. [SRC `ECOLEN`, Moscow (Russian Federation)

    1997-12-31

    A two-dimensional, axisymmetric flow field model which includes water and sulphate aerosol formation represented by moments of the size and composition distribution function is used to calculate the effect of radial turbulent jet mixing on the aerosol size distribution and mean modal composition. (author) 6 refs.

  16. The Effects of Mergers and Acquisitions on the Firm Size Distribution

    NARCIS (Netherlands)

    Cefis, E.; Marsili, O.; Schenk, E.J.J

    2006-01-01

    This paper provides new empirical evidence on the effects of mergers and acquisitions on the shape of the firm size distribution (FSD), by using data of the population of manufacturing firms in the Netherlands. Our analysis shows that M&As do not affect the size distribution when we consider the

  17. The effects of mergers and acquisitions on the firm size distribution

    NARCIS (Netherlands)

    Cefis, E.; Marsili, Orietta; Schenk, E.J.J.

    2008-01-01

    This paper provides new empirical evidence on the effects of mergers and acquisitions (M&As) on the shape of the firm size distribution, by using data of the population of manufacturing firms in the Netherlands. Our analysis shows that M&As do not affect the size distribution when we consider the

  18. The effects of mergers and acquisitions on the firm size distribution

    NARCIS (Netherlands)

    E. Cefis (Elena); O. Marsili (Orietta); H. Schenk (Hans)

    2009-01-01

    textabstractThis paper provides new empirical evidence on the effects of mergers and acquisitions (M&As) on the shape of the firm size distribution, by using data of the population of manufacturing firms in the Netherlands. Our analysis shows that M&As do not affect the size distribution when we

  19. [Mathematical processing of human platelet distribution according to size for determination of cell heterogeneity].

    Science.gov (United States)

    Kosmovskiĭ, S Iu; Vasin, S L; Rozanova, I B; Sevast'ianov, V I

    1999-01-01

    The paper proposes a method for mathematical treatment of the distribution of human platelets by sizes to detect the heterogeneity of cell populations. Its use allowed the authors to identify three platelet populations that have different parameters of size distribution. The proposed method opens additional vistas for analyzing the heterogeneity of platelet populations without sophisticating experimental techniques.

  20. Control over Particle Size Distribution by Autoclaving Poloxamer-Stabilized Trimyristin Nanodispersions.

    Science.gov (United States)

    Göke, Katrin; Roese, Elin; Arnold, Andreas; Kuntsche, Judith; Bunjes, Heike

    2016-09-06

    Lipid nanoparticles are under investigation as delivery systems for poorly water-soluble drugs. The particle size in these dispersions strongly influences important pharmaceutical properties like biodistribution and drug loading capacity; it should be below 500 nm for direct injection into the bloodstream. Consequently, small particles with a narrow particle size distribution are desired. Hitherto, there are, however, only limited possibilities for the preparation of monodisperse, pharmaceutically relevant dispersions. In this work, the effect of autoclaving at 121 °C on the particle size distribution of lipid nanoemulsions and -suspensions consisting of the pharmaceutically relevant components trimyristin and poloxamer 188 was studied. Additionally, the amount of emulsifier needed to stabilize both untreated and autoclaved particles was assessed. In our study, four dispersions of mean particle sizes from 45 to 150 nm were prepared by high-pressure melt homogenization. The particle size distribution before and after autoclaving was characterized using static and dynamic light scattering, differential scanning calorimetry, and transmission electron microscopy. Asymmetrical flow field-flow fractionation was used for particle size distribution analyses and for the determination of free poloxamer 188. Upon autoclaving, the mean particle size increased to up to 200 nm, but not proportionally to the initial size. At the same time, the particle size distribution width decreased remarkably. Heat treatment thus seems to be a promising approach to achieve the desired narrow particle size distribution of such dispersions. Related to the lipid content, suspension particles needed more emulsifier for stabilization than emulsion droplets, and smaller particles more than larger ones.

  1. 3D Hail Size Distribution Interpolation/Extrapolation Algorithm

    Science.gov (United States)

    Lane, John

    2013-01-01

    Radar data can usually detect hail; however, it is difficult for present day radar to accurately discriminate between hail and rain. Local ground-based hail sensors are much better at detecting hail against a rain background, and when incorporated with radar data, provide a much better local picture of a severe rain or hail event. The previous disdrometer interpolation/extrapolation algorithm described a method to interpolate horizontally between multiple ground sensors (a minimum of three) and extrapolate vertically. This work is a modification to that approach that generates a purely extrapolated 3D spatial distribution when using a single sensor.
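    The abstract does not spell out the interpolation scheme, so the sketch below uses plain inverse-distance weighting as a stand-in for the horizontal interpolation between at least three ground sensors; the sensor layout and hail-bin counts are invented for illustration.

```python
import numpy as np

def idw_interpolate(xy_sensors, values, xy_query, power=2.0):
    """Inverse-distance-weighted horizontal interpolation between ground sensors.

    values can be any hail-size-distribution parameter measured at each sensor
    (e.g. a bin count or a fitted slope); IDW is used here purely as an
    illustration, not as the algorithm referenced in the abstract.
    """
    xy_sensors = np.asarray(xy_sensors, float)
    values = np.asarray(values, float)
    d = np.linalg.norm(xy_sensors - np.asarray(xy_query, float), axis=1)
    if np.any(d == 0):
        return float(values[np.argmin(d)])   # query point coincides with a sensor
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# three hypothetical sensors (x, y in km) reporting a 10 mm-bin hail count
sensors = [(0.0, 0.0), (4.0, 1.0), (1.5, 3.5)]
counts = [12.0, 3.0, 7.0]
print(idw_interpolate(sensors, counts, (2.0, 2.0)))
```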

  2. New algorithm and system for measuring size distribution of blood cells

    Institute of Scientific and Technical Information of China (English)

    Cuiping Yao(姚翠萍); Zheng Li(李政); Zhenxi Zhang(张镇西)

    2004-01-01

    In optical scattering particle sizing, a numerical transform is sought so that a particle size distribution can be determined from angular measurements of near-forward scattering, an approach that has been adopted for the measurement of blood cells. In this paper, a new method for counting and classifying blood cells, based on laser light scattering from stationary suspensions, is presented. A genetic algorithm combined with a nonnegative least squares algorithm is employed to invert the size distribution of blood cells. Numerical tests show that these techniques can be successfully applied to measuring the size distribution of blood cells with high stability.
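    The nonnegative least-squares step can be sketched with scipy.optimize.nnls: given a kernel matrix of modelled single-size angular scattering patterns, it recovers a non-negative size distribution from the measured intensities. The genetic-algorithm wrapper described in the paper is omitted, and the kernel here is random toy data rather than a Mie or Fraunhofer model.

```python
import numpy as np
from scipy.optimize import nnls

def invert_size_distribution(kernel, intensities):
    """Recover a non-negative size distribution from angular scattering data.

    kernel[i, j] holds the modelled scattered intensity at angle i produced by
    one particle of size class j; nnls solves kernel @ n ~= intensities with
    n >= 0.
    """
    n, residual = nnls(kernel, intensities)
    return n, residual

# toy forward problem: 30 angles, 8 size classes
rng = np.random.default_rng(5)
K = np.abs(rng.normal(size=(30, 8)))
true_n = np.array([0, 0.2, 0.6, 1.0, 0.7, 0.3, 0.1, 0])
I = K @ true_n + 0.01 * rng.normal(size=30)
n_est, res = invert_size_distribution(K, I)
print(np.round(n_est, 2))
```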

  3. Evidence of bimodal crystallite size distribution in µc-Si:H films

    Energy Technology Data Exchange (ETDEWEB)

    Ram, Sanjay K. [Laboratoire de Physique des Interfaces et des Couches Minces (UMR 7647 du CNRS), Ecole Polytechnique, 91128 Palaiseau Cedex (France); Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016 (India)], E-mail: sanjayk.ram@gmail.com; Islam, Md. Nazrul [QAED-SRG, Space Application Centre (ISRO), Ahmedabad 380015 (India); Kumar, Satyendra [Department of Physics, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Roca i Cabarrocas, P. [Laboratoire de Physique des Interfaces et des Couches Minces (UMR 7647 du CNRS), Ecole Polytechnique, 91128 Palaiseau Cedex (France)

    2009-03-15

    We report on the microstructural characterization studies carried out on plasma deposited highly crystalline undoped microcrystalline silicon films to explore the crystallite size distribution present in this material. The modeling of results of spectroscopic ellipsometry using two different sized crystallites is corroborated by the deconvolution of experimental Raman profiles using a modeling method that incorporates a bimodal size distribution of crystallites. The presence of a bimodal size distribution of crystallites is demonstrated as well by the results of atomic force microscopy and X-ray diffraction studies. The qualitative agreement between the results of different studies is discussed.

  4. Experimental study on bubble size distributions in a direct-contact evaporator

    Directory of Open Access Journals (Sweden)

    Ribeiro Jr. C. P.

    2004-01-01

    Experimental bubble size distributions and bubble mean diameters were obtained by means of a photographic technique for a direct-contact evaporator operating in the quasi-steady-state regime. Four gas superficial velocities and three different spargers were analysed for the air-water system. In order to assure the statistical significance of the determined size distributions, a minimum number of 450 bubbles was analysed for each experimental condition. Some runs were also conducted with an aqueous solution of sucrose to study the solute effect on bubble size distribution. For the lowest gas superficial velocity considered, at which the homogeneous bubbling regime is observed, the size distribution was log-normal and depended on the orifice diameter in the sparger. As the gas superficial velocity was increased, the size distribution progressively acquired a bimodal shape, regardless of the sparger employed. The presence of sucrose in the continuous phase led to coalescence hindrance.

  5. ON ESTIMATION AND HYPOTHESIS TESTING OF THE GRAIN SIZE DISTRIBUTION BY THE SALTYKOV METHOD

    Directory of Open Access Journals (Sweden)

    Yuri Gulbin

    2011-05-01

    The paper considers the problem of validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing an expected grain size distribution on the basis of intersection size histogram data. In order to review these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in estimating and testing procedures enable grain size distributions to be unfolded more efficiently.

  6. Iteration method for the inversion of simulated multiwavelength lidar signals to determine aerosol size distribution

    Institute of Scientific and Technical Information of China (English)

    Tao Zong-Ming; Zhang Yin-Chao; Liu Xiao-Qin; Tan Kun; Shao Shi-Sheng; Hu Huan-Ling; Zhang Gai-Xia; Lü Yong-Hui

    2004-01-01

    A new method is proposed to derive the size distribution of aerosols from simulated multiwavelength lidar extinction coefficients. The basis for this iteration is to consider the extinction efficiency factors of the particles as a set of weighting functions covering the entire radius range of the distribution. The weighting functions are calculated exactly from Mie theory. This method extends the inversion region by subtracting part of the extinction coefficient. The radius range of the simulated size distribution is 0.1-10.0 μm and the inversion radius range is 0.1-2.0 μm, yet the inverted size distributions are in good agreement with the simulated one.

  7. Porosity and pore size distribution in a sedimentary rock: Implications for the distribution of chlorinated solvents

    Science.gov (United States)

    Shapiro, Allen M.; Evans, Christopher E.; Hayes, Erin C.

    2017-01-01

    Characterizing properties of the rock matrix that control retention and release of chlorinated solvents is essential in evaluating the extent of contamination and the application of remediation technologies in fractured rock. Core samples from seven closely spaced boreholes in a mudstone subject to trichloroethene (TCE) contamination were analyzed using Mercury Intrusion Porosimetry to investigate porosity and pore size distribution as a function of mudstone characteristics, and depth and lateral extent in the aquifer; organic carbon content was also evaluated to identify the potential for adsorption. Porosity and retardation factor varied over two orders of magnitude, with the largest porosities and largest retardation factors associated with carbon-rich mudstone layers. Larger porosities were also measured in the shallow rock that has been subject to enhanced groundwater flow. Porosity also varied over more than an order of magnitude in spatially continuous mudstone layers. The analyses of the rock cores indicated that the largest pore diameters may be accessible to entry of the nonaqueous form of TCE. Although the porosity associated with the largest pore diameters is small (~ 0.1%), that volume of TCE can significantly affect the total TCE that is retained in the rock matrix. The dimensions of the largest pore diameters may also be accessible to microbes responsible for reductive dechlorination; however, the small percentage of the pore space that can accommodate microbes may limit the extent of reductive dechlorination in the rock matrix.

  8. Porosity and pore size distribution in a sedimentary rock: Implications for the distribution of chlorinated solvents.

    Science.gov (United States)

    Shapiro, Allen M; Evans, Christopher E; Hayes, Erin C

    2017-08-01

    Characterizing properties of the rock matrix that control retention and release of chlorinated solvents is essential in evaluating the extent of contamination and the application of remediation technologies in fractured rock. Core samples from seven closely spaced boreholes in a mudstone subject to trichloroethene (TCE) contamination were analyzed using Mercury Intrusion Porosimetry to investigate porosity and pore size distribution as a function of mudstone characteristics, and depth and lateral extent in the aquifer; organic carbon content was also evaluated to identify the potential for adsorption. Porosity and retardation factor varied over two orders of magnitude, with the largest porosities and largest retardation factors associated with carbon-rich mudstone layers. Larger porosities were also measured in the shallow rock that has been subject to enhanced groundwater flow. Porosity also varied over more than an order of magnitude in spatially continuous mudstone layers. The analyses of the rock cores indicated that the largest pore diameters may be accessible to entry of the nonaqueous form of TCE. Although the porosity associated with the largest pore diameters is small (~0.1%), that volume of TCE can significantly affect the total TCE that is retained in the rock matrix. The dimensions of the largest pore diameters may also be accessible to microbes responsible for reductive dechlorination; however, the small percentage of the pore space that can accommodate microbes may limit the extent of reductive dechlorination in the rock matrix. Published by Elsevier B.V.

  9. Estimation of 1-D velocity models beneath strong-motion observation sites in the Kathmandu Valley using strong-motion records from moderate-sized earthquakes

    Science.gov (United States)

    Bijukchhen, Subeg M.; Takai, Nobuo; Shigefuji, Michiko; Ichiyanagi, Masayoshi; Sasatani, Tsutomu; Sugimura, Yokito

    2017-07-01

    The Himalayan collision zone experiences many seismic activities with large earthquakes occurring at certain time intervals. The damming of the proto-Bagmati River as a result of rapid mountain-building processes created a lake in the Kathmandu Valley that eventually dried out, leaving thick unconsolidated lacustrine deposits. Previous studies have shown that the sediments are 600 m thick in the center. A location in a seismically active region, and the possible amplification of seismic waves due to thick sediments, have made Kathmandu Valley seismically vulnerable. It has suffered devastation due to earthquakes several times in the past. The development of the Kathmandu Valley into the largest urban agglomerate in Nepal has exposed a large population to seismic hazards. This vulnerability was apparent during the Gorkha Earthquake (Mw7.8) on April 25, 2015, when the main shock and ensuing aftershocks claimed more than 1700 lives and nearly 13% of buildings inside the valley were completely damaged. Preparing safe and up-to-date building codes to reduce seismic risk requires a thorough study of ground motion amplification. Characterizing subsurface velocity structure is a step toward achieving that goal. We used the records from an array of strong-motion accelerometers installed by Hokkaido University and Tribhuvan University to construct 1-D velocity models of station sites by forward modeling of low-frequency S-waves. Filtered records (0.1-0.5 Hz) from one of the accelerometers installed at a rock site during a moderate-sized (mb4.9) earthquake on August 30, 2013, and three moderate-sized (Mw5.1, Mw5.1, and Mw5.5) aftershocks of the 2015 Gorkha Earthquake were used as input motion for modeling of low-frequency S-waves. We consulted available geological maps, cross-sections, and borehole data as the basis for initial models for the sediment sites. This study shows that the basin has an undulating topography and sediment sites have deposits of varying thicknesses

  10. Why liquid displacement methods are sometimes wrong in estimating the pore-size distribution

    NARCIS (Netherlands)

    Gijsbertsen-Abrahamse, A.J.; Boom, R.M.; Padt, van der A.

    2004-01-01

    The liquid displacement method is a commonly used method to determine the pore size distribution of micro- and ultrafiltration membranes. One of the assumptions for the calculation of the pore sizes is that the pores are parallel and thus are not interconnected. To show that the estimated pore size

  11. A facile synthesis of Te nanoparticles with binary size distribution by green chemistry.

    Science.gov (United States)

    He, Weidong; Krejci, Alex; Lin, Junhao; Osmulski, Max E; Dickerson, James H

    2011-04-01

    Our work reports a facile route to colloidal Te nanocrystals with binary uniform size distributions at room temperature. The binary-sized Te nanocrystals were well separated into two size regimes and assembled into films by electrophoretic deposition. The research provides a new platform for nanomaterials to be efficiently synthesized and manipulated.

  12. Bipartite Producer-Consumer Networks and the Size Distribution of Firms

    CERN Document Server

    Dahui, Wang; Li, Zhou; Zengru, Di

    2005-01-01

    A bipartite producer-consumer network is constructed to describe the industrial structure. The edges from consumers to producers represent the choices of the consumers for the final products, so the degree of a producer can represent its market share and the size distribution of firms can be characterized by the producers' degree distribution. The probability of a producer receiving a new consumption is determined by its competency, described by an initial attractiveness, and by the self-reinforcing mechanism in the competition, described by preferential attachment. The cases with constant total consumption and with a growing market are studied. The following results are obtained: (1) without market growth and with a uniform initial attractiveness $a$, the final distribution of firm sizes is a Gamma distribution for $a>1$ and is exponential for $a=1$; if $a<1$, the distribution is a power law at small sizes and exponential in the upper tail; (2) for a growing market, the size distribution of firms obeys a power law. The exponent is affected b...
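
    A minimal simulation sketch of the growing-market case described above is given below; it fixes the initial attractiveness at a = 1 so that preferential attachment can be realised with a simple "ticket list", and the entry rate of new producers is an arbitrary illustrative parameter, not a value from the paper.

```python
# Sketch of the growing-market case: each new consumer chooses a producer with
# probability proportional to (initial attractiveness a + current degree).
# Fixing a = 1 lets us keep one "base ticket" per producer plus one ticket per
# received consumer, so uniform ticket sampling realises the attachment rule.
import random
from collections import Counter

def simulate(n_consumers=100_000, producer_entry_prob=0.01, seed=1):
    rng = random.Random(seed)
    degrees = [0]            # one initial producer with no consumers yet
    tickets = [0]            # base ticket for producer 0
    for _ in range(n_consumers):
        if rng.random() < producer_entry_prob:
            degrees.append(0)
            tickets.append(len(degrees) - 1)   # base ticket for the new producer
        chosen = rng.choice(tickets)           # prob proportional to (1 + degree)
        degrees[chosen] += 1
        tickets.append(chosen)                 # extra ticket per consumer gained
    return degrees

firm_sizes = simulate()
counts = Counter(firm_sizes)
# crude look at the heavy tail of the firm-size (degree) distribution
for s in (1, 2, 4, 8, 16, 32, 64, 128):
    print(f"firms of size {s}: {counts.get(s, 0)}")
```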

  13. RESUSPENSION METHOD FOR ROAD SURFACE DUST COLLECTION AND AERODYNAMIC SIZE DISTRIBUTION CHARACTERIZATION

    Institute of Scientific and Technical Information of China (English)

    Jianhua Chen; Hongfeng Zheng; Wei Wang; Hongjie Liu; Ling Lu; Linfa Bao; Lihong Ren

    2006-01-01

    Traffic-generated fugitive dust is a source of urban atmospheric particulate pollution in Beijing. This paper introduces the resuspension method, recommended by the US EPA in AP-42 documents, for collecting Beijing road-surface dust. Analysis shows a single-peak distribution in the number size distribution and a double-peak mode for mass size distribution of the road surface dust. The median diameter of the mass concentration distribution of the road dust on a high-grade road was higher than that on a low-grade road. The ratio of PM2.5 to PM10 was consistent with that obtained in a similar study for Hong Kong. For the two selected road samples, the average relative deviation of the size distribution was 10.9% and 11.9%. All results indicate that the method introduced in this paper can effectively determine the size distribution of fugitive dust from traffic.

  14. On bimodal size distribution of spin clusters in the one dimensional Ising model

    OpenAIRE

    Ivanytskyi, A. I.; Chelnokov, V. O.

    2015-01-01

    The size distribution of geometrical spin clusters is exactly found for the one-dimensional Ising model of finite extent. For values of the lattice constant $\beta$ above some "critical value" $\beta_c$ the found size distribution demonstrates non-monotonic behavior, with a peak corresponding to the size of the largest available cluster. In other words, at high values of the lattice constant there are two ways to fill the lattice: either to form a single largest cluster or to create many cluster...
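
    The record above concerns an exact finite-size result; purely as an illustration, the sketch below estimates the geometrical spin-cluster size distribution of a short 1-D Ising chain by a Metropolis simulation (chain length, coupling beta*J and sweep counts are arbitrary choices, not values from the paper).

```python
# Metropolis sampling of a 1-D Ising chain with periodic boundaries, followed by
# a histogram of geometrical cluster sizes (maximal runs of aligned spins).
import math
import random
from collections import Counter

def cluster_sizes(spins):
    n = len(spins)
    # rotate the chain so that index 0 starts a cluster (handles the periodic wrap)
    start = next((i for i in range(n) if spins[i] != spins[i - 1]), 0)
    s = spins[start:] + spins[:start]
    sizes, run = [], 1
    for i in range(1, n):
        if s[i] == s[i - 1]:
            run += 1
        else:
            sizes.append(run)
            run = 1
    sizes.append(run)
    return sizes

def simulate(n=64, beta_j=1.5, sweeps=20_000, burn_in=1_000, seed=2):
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(n)]
    hist = Counter()
    for sweep in range(sweeps):
        for _ in range(n):
            i = rng.randrange(n)
            # energy change (in units of J) of flipping spin i
            d_e = 2.0 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            if d_e <= 0 or rng.random() < math.exp(-beta_j * d_e):
                spins[i] = -spins[i]
        if sweep >= burn_in and sweep % 10 == 0:   # discard burn-in, thin samples
            hist.update(cluster_sizes(spins))
    return hist

hist = simulate()
for size in sorted(hist):
    print(size, hist[size])
```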

  15. Effect of the size distribution of nanoscale dispersed particles on the Zener drag pressure

    OpenAIRE

    Eivani, A.R.; Valipour, S.; Ahmed, H.; Zhou, J; Duszczyk, J.

    2010-01-01

    In this article, a new relationship for the calculation of the Zener drag pressure is described in which the effect of the size distribution of nanoscale dispersed particles is taken into account, in addition to particle radius and volume fraction, which have been incorporated in the existing relationships. Microstructural observations indicated a clear correlation between the size distribution of dispersed particles and recrystallized grain sizes in the AA7020 aluminum alloy. However, the ex...

  16. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    NARCIS (Netherlands)

    van Rijssel, Jozef; Kuipers, Bonny W M; Erne, Ben

    2015-01-01

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal d

  17. Bimodal distribution of the magnetic dipole moment in nanoparticles with a monomodal distribution of the physical size

    Energy Technology Data Exchange (ETDEWEB)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H., E-mail: B.H.Erne@uu.nl

    2015-04-15

    High-frequency applications of magnetic nanoparticles, such as therapeutic hyperthermia and magnetic particle imaging, are sensitive to nanoparticle size and dipole moment. Usually, it is assumed that magnetic nanoparticles with a log-normal distribution of the physical size also have a log-normal distribution of the magnetic dipole moment. Here, we test this assumption for different types of superparamagnetic iron oxide nanoparticles in the 5–20 nm range, by multimodal fitting of magnetization curves using the MINORIM inversion method. The particles are studied while in dilute colloidal dispersion in a liquid, thereby preventing hysteresis and diminishing the effects of magnetic anisotropy on the interpretation of the magnetization curves. For two different types of well crystallized particles, the magnetic distribution is indeed log-normal, as expected from the physical size distribution. However, two other types of particles, with twinning defects or inhomogeneous oxide phases, are found to have a bimodal magnetic distribution. Our qualitative explanation is that relatively low fields are sufficient to begin aligning the particles in the liquid on the basis of their net dipole moment, whereas higher fields are required to align the smaller domains or less magnetic phases inside the particles. - Highlights: • Multimodal fits of dilute ferrofluids reveal when the particles are multidomain. • No a priori shape of the distribution is assumed by the MINORIM inversion method. • Well crystallized particles have log-normal TEM and magnetic size distributions. • Defective particles can combine a monomodal size and a bimodal dipole moment.

  18. Comment on Pisarenko et al. "Characterization of the Tail of the Distribution of Earthquake Magnitudes by Combining the GEV and GPD Descriptions of Extreme Value Theory"

    CERN Document Server

    Raschke, Mathias

    2015-01-01

    In this short note, I comment on the research of Pisarenko et al. (2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Pisarenko et al. (2014) have inappropriately neglected that the approximations by the GEVD and GPD work only asymptotically in most cases. This applies particularly to the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions, and consequently should not be relied upon for the estimation of the upper bound magnitude and corresponding parameters. Furthermore, different issues of s...

  19. Scaling behavior of the earthquake intertime distribution: influence of large shocks and time scales in the Omori law.

    Science.gov (United States)

    Lippiello, Eugenio; Corral, Alvaro; Bottiglieri, Milena; Godano, Cataldo; de Arcangelis, Lucilla

    2012-12-01

    We present a study of the earthquake intertime distribution D(Δt) for a California catalog in temporal periods of short duration T. We compare experimental results with theoretical predictions and analytical approximate solutions. For the majority of intervals, rescaling intertimes by the average rate leads to collapse of the distributions D(Δt) on a universal curve, whose functional form is well fitted by a Gamma distribution. The remaining intervals, exhibiting a more complex D(Δt), are all characterized by the presence of large shocks. These results can be understood in terms of the relevance of the ratio between the characteristic time c in the Omori law and T: Intervals with Gamma-like behavior are indeed characterized by a vanishing c/T. The above features are also investigated by means of numerical simulations of the Epidemic Type Aftershock Sequence (ETAS) model. This study shows that collapse of D(Δt) is also observed in numerical catalogs; however, the fit with a Gamma distribution is possible only assuming that c depends on the main-shock magnitude m. This result confirms that the dependence of c on m, previously observed for m>6 main shocks, extends also to small m>2.
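
    The rescaling-and-collapse procedure described above can be illustrated in a few lines of Python: inter-event times are normalised by the mean rate of the period and a Gamma distribution is fitted. The synthetic event times and the use of scipy.stats.gamma.fit with the location fixed at zero are assumptions for illustration, not the authors' catalog or fitting code.

```python
# Sketch: rescale earthquake inter-event times by the mean rate of the period
# and fit a Gamma distribution to the rescaled values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
event_times = np.sort(rng.uniform(0.0, 365.0, size=400))   # days, hypothetical period T

dt = np.diff(event_times)
rate = len(dt) / (event_times[-1] - event_times[0])
theta = dt * rate                                           # dimensionless rescaled intertimes

# Gamma fit with location fixed at zero; a shape near 1 indicates Poisson-like behaviour
shape, loc, scale = stats.gamma.fit(theta, floc=0)
print(f"gamma shape = {shape:.2f}, scale = {scale:.2f}, mean = {shape * scale:.2f}")
```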

  20. Geospatial modeling of fire-size distributions in historical low-severity fire regimes

    Science.gov (United States)

    McKenzie, D.; Kellogg, L. B.; Larkin, N. K.

    2006-12-01

    Low-severity fires are recorded by fire-scarred trees. These records can provide temporal depth for reconstructing fire history because one tree may record dozens of separate fires over time, thereby providing adequate sample size for estimating fire frequency. Estimates of actual fire perimeters from these point-based records are uncertain, however, because fire boundaries can only be located approximately. We indirectly estimate fire-size distributions without attempting to establish individual fire perimeters. The slope and intercept of the interval-area function, a power-law relationship between sample area and mean fire-free intervals for that area, provide surrogates for the moments of a fire-size distribution, given a distribution of fire-free intervals. Analogously, by deconstructing variograms that use a binary distance measure (Sorensen's index) for the similarity of the time-series of fires recorded by pairs of recorder trees, we provide estimates of modal fire size. We link both variograms and interval-area functions to fire size distributions by simulating fire size distributions on neutral landscapes with and without right-censoring to represent topographic controls on maximum fire size. From parameters of the two functions produced by simulations we can back-estimate means and variances of fire sizes on real landscapes. This scale-based modeling provides a robust alternative to empirical and heuristic methods and a means to extrapolate estimates of fire-size distributions to unsampled landscapes.

  1. Vertical profile and aerosol size distribution measurements in Iceland (LOAC)

    Science.gov (United States)

    Dagsson Waldhauserova, Pavla; Olafsson, Haraldur; Arnalds, Olafur; Renard, Jean-Baptiste; Vignelles, Damien; Verdier, Nicolas

    2014-05-01

    Cold-climate and high-latitude regions contain important dust sources where dust is frequently emitted, foremost from glacially-derived sediments of riverbeds or ice-proximal areas (Arnalds, 2010; Bullard, 2013). Iceland is probably the most active dust source in the arctic/sub-arctic region (Dagsson-Waldhauserova, 2013). The frequency of days with suspended dust exceeds 34 dust days annually. Icelandic dust is of volcanic origin; it is very dark in colour and contains sharp-tipped shards with bubbles. Such properties allow even large particles to be easily transported long distances. Thus, there is a need to better understand the spatial and temporal variability of these dusts. Two launch campaigns of the Light Optical Aerosols Counter (LOAC) were conducted in Iceland with meteorological balloons. LOAC uses a new optical design that retrieves size-resolved concentrations in 19 size classes between 0.2 and 100 microm and provides an estimate of the main nature of the aerosols. The vertical stratification and aerosol composition of the subarctic atmosphere were studied in detail. The July 2011 launch represented the clean, non-dusty season with low winds, while the November 2013 launch was conducted during high winds after a dusty period. For the winter flight (performed from Reykjavik), the nature of the aerosols changed strongly with altitude. In particular, a thin layer of volcanic dust was observed at an altitude of 1 km. Further LOAC measurements are needed to understand the implications of Icelandic dust for Arctic warming and climate change. A new campaign of LOAC launches is planned for May 2014. References: Arnalds, O., 2010. Dust sources and deposition of aeolian materials in Iceland. Icelandic Agricultural Sciences 23, 3-21. Bullard, J.E., 2013. Contemporary glacigenic inputs to the dust cycle. Earth Surface Processes and Landforms 38, 71-89. Dagsson-Waldhauserova, P., Arnalds O., Olafsson H. 2013. Long-term frequency and characteristics of dust storm events in

  2. Size-selected genomic libraries: the distribution and size-fractionation of restricted genomic DNA fragments by gel electrophoresis.

    Science.gov (United States)

    Gondo, Y

    1995-02-01

    By using one-dimensional genome scanning, it is possible to directly identify the restricted genomic DNA fragment that reflects the site of genetic change. The subsequent strategies to obtain the molecular clones of the corresponding restriction fragment are usually as follows: (i) the restriction of a mass quantity of an appropriate genomic DNA, (ii) the size-fractionation of the restricted DNA on a preparative electrophoresis gel in order to enrich the corresponding restriction fragment, (iii) the construction of the size-selected libraries from the fractionated genomic DNA, and (iv) the screening of the library to obtain an objective clone which is identified on the analytical genome scanning gel. A knowledge of the size distribution pattern of restriction fragments of the genomic DNA makes it possible to calculate the heterogeneity or complexity of the restriction fragment in each size-fraction. This manuscript first describes the distribution of the restriction fragments with respect to their length. Some examples of the practical application of this theory to genome scanning are then discussed using presumptive genome scanning gels. The way to calculate such DNA complexities in the prepared size-fractionated samples is also demonstrated. Such information should greatly facilitate the design of experimental strategies for the cloning of a certain size of genomic DNA after digestion with restriction enzyme(s) as is the case with genome scanning.

  3. Uncertainty in volcanic ash particle size distribution and implications for infrared remote sensing and airspace management

    Science.gov (United States)

    Western, L.; Watson, M.; Francis, P. N.

    2014-12-01

    Volcanic ash particle size distributions are critical in determining the fate of airborne ash in drifting clouds. A significant amount of global airspace is managed using dispersion models that rely on a single ash particle size distribution, derived from a single source - Hobbs et al., 1991. This is clearly wholly inadequate given the range of magmatic compositions and eruptive styles that volcanoes present. Available measurements of airborne ash lognormal particle size distributions show geometric standard deviation values that range from 1.0 - 2.5, with others showing mainly polymodal distributions. This paucity of data pertaining to airborne sampling of volcanic ash results in large uncertainties both when using an assumed distribution to retrieve mass loadings from satellite observations and when prescribing particle size distributions of ash in dispersion models. Uncertainty in the particle size distribution can yield order of magnitude differences to mass loading retrievals of an ash cloud from satellite observations, a result that can easily reclassify zones of airspace closure. The uncertainty arises from the assumptions made when defining both the geometric particle size and particle single scattering properties in terms of an effective radius. This has significant implications for airspace management and emphasises the need for an improved quantification of airborne volcanic ash particle size distributions.

  4. Fissure formation in coke. 3: Coke size distribution and statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    D.R. Jenkins; D.E. Shaw; M.R. Mahoney [CSIRO, North Ryde, NSW (Australia). Mathematical and Information Sciences

    2010-07-15

    A model of coke stabilization, based on a fundamental model of fissuring during carbonisation is used to demonstrate the applicability of the fissuring model to actual coke size distributions. The results indicate that the degree of stabilization is important in determining the size distribution. A modified form of the Weibull distribution is shown to provide a better representation of the whole coke size distribution compared to the Rosin-Rammler distribution, which is generally only fitted to the lump coke. A statistical analysis of a large number of experiments in a pilot scale coke oven shows reasonably good prediction of the coke mean size, based on parameters related to blend rank, amount of low rank coal, fluidity and ash. However, the prediction of measures of the spread of the size distribution is more problematic. The fissuring model, the size distribution representation and the statistical analysis together provide a comprehensive capability for understanding and predicting the mean size and distribution of coke lumps produced during carbonisation. 12 refs., 16 figs., 4 tabs.
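
    Since the Rosin-Rammler law is mathematically a two-parameter Weibull distribution written as an oversize fraction, the baseline fit mentioned above can be sketched as follows; the lump sizes are hypothetical, and the paper's modified Weibull form (with additional parameters) is not reproduced here.

```python
# Sketch: fit the Rosin-Rammler/Weibull form R(d) = exp[-(d/d63)^n] to lump
# coke sizes and predict an oversize fraction at a chosen screen size.
import numpy as np
from scipy import stats

sizes_mm = np.array([18, 25, 31, 36, 42, 47, 53, 58, 64, 71, 79, 88], dtype=float)

n, loc, d63 = stats.weibull_min.fit(sizes_mm, floc=0)
print(f"Weibull fit: spread exponent n = {n:.2f}, characteristic size d63 = {d63:.1f} mm")

d_screen = 50.0                                   # hypothetical screen size
oversize = np.exp(-(d_screen / d63) ** n)         # closed-form oversize fraction
oversize_cdf = 1.0 - stats.weibull_min.cdf(d_screen, n, scale=d63)
print(f"predicted oversize fraction at {d_screen:.0f} mm: {oversize:.2f} "
      f"(check via CDF: {oversize_cdf:.2f})")
```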

  5. Estimating Functions of Distributions Defined over Spaces of Unknown Size

    Directory of Open Access Journals (Sweden)

    David H. Wolpert

    2013-10-01

    Full Text Available We consider Bayesian estimation of information-theoretic quantities from data, using a Dirichlet prior. Acknowledging the uncertainty of the event space size m and the Dirichlet prior’s concentration parameter c, we treat both as random variables set by a hyperprior. We show that the associated hyperprior, P(c, m), obeys a simple “Irrelevance of Unseen Variables” (IUV) desideratum iff P(c, m) = P(c)P(m). Thus, requiring IUV greatly reduces the number of degrees of freedom of the hyperprior. Some information-theoretic quantities can be expressed multiple ways, in terms of different event spaces, e.g., mutual information. With all hyperpriors (implicitly) used in earlier work, different choices of this event space lead to different posterior expected values of these information-theoretic quantities. We show that there is no such dependence on the choice of event space for a hyperprior that obeys IUV. We also derive a result that allows us to exploit IUV to greatly simplify calculations, like the posterior expected mutual information or posterior expected multi-information. We also use computer experiments to favorably compare an IUV-based estimator of entropy to three alternative methods in common use. We end by discussing how seemingly innocuous changes to the formalization of an estimation problem can substantially affect the resultant estimates of posterior expectations.

  6. Spatial Distribution of the Coefficient of Variation and Bayesian Forecast for the Paleo-Earthquakes in Japan

    Science.gov (United States)

    Nomura, Shunichi; Ogata, Yosihiko

    2016-04-01

    We propose a Bayesian method of probability forecasting for recurrent earthquakes of inland active faults in Japan. Renewal processes with the Brownian Passage Time (BPT) distribution are applied to over half of the active faults in Japan by the Headquarters for Earthquake Research Promotion (HERP) of Japan. Long-term forecasting with the BPT distribution needs two parameters: the mean and the coefficient of variation (COV) of the recurrence intervals. The HERP applies a common COV parameter to all of these faults because most of them have very few specified paleoseismic events, which is not enough to estimate reliable COV values for the respective faults. However, different COV estimates have been proposed for the same paleoseismic catalog by related works. Applying different COV estimates can make a critical difference in the forecast, so the COV should be carefully selected for individual faults. Recurrence intervals on a fault are, on average, determined by the long-term slip rate caused by the tectonic motion but fluctuate owing to nearby seismicity that perturbs the surrounding stress field. The COVs of recurrence intervals depend on such stress perturbations and so have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus we introduce a spatial structure on the COV parameter by Bayesian modeling with a Gaussian process prior. The COVs on active faults are correlated and take similar values for closely located faults. It is found that the spatial trends in the estimated COV values coincide with the density of active faults in Japan. We also show Bayesian forecasts by the proposed model using the Markov chain Monte Carlo method. Our forecasts differ from HERP's, especially on active faults where HERP's forecasts are very high or very low.
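
    To make the role of the two BPT parameters concrete, here is a minimal sketch of a conditional rupture-probability calculation from a BPT renewal model; the mean recurrence interval, elapsed time, forecast horizon and COV values are illustrative assumptions, not HERP or study values.

```python
# Sketch: 30-year conditional rupture probability from a Brownian Passage Time
# (BPT) renewal model with mean recurrence interval `mean` and aperiodicity `cov`.
import numpy as np
from scipy import integrate

def bpt_pdf(t, mean, cov):
    # standard BPT density: sqrt(mean / (2*pi*cov^2*t^3)) * exp(-(t-mean)^2 / (2*mean*cov^2*t))
    return np.sqrt(mean / (2.0 * np.pi * cov**2 * t**3)) * \
        np.exp(-(t - mean) ** 2 / (2.0 * mean * cov**2 * t))

def conditional_probability(mean, cov, elapsed, horizon):
    """P(rupture within `horizon` years | no rupture in the last `elapsed` years)."""
    num, _ = integrate.quad(bpt_pdf, elapsed, elapsed + horizon, args=(mean, cov))
    den, _ = integrate.quad(bpt_pdf, elapsed, np.inf, args=(mean, cov))
    return num / den

mean_ri, elapsed, horizon = 1000.0, 800.0, 30.0     # years, illustrative
for cov in (0.24, 0.4, 0.6):
    p = conditional_probability(mean_ri, cov, elapsed, horizon)
    print(f"COV = {cov:.2f}: {horizon:.0f}-yr conditional probability = {p:.3f}")
```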

  7. Can the Size Distributions of Talus Particles be Predicted from Fracture Spacing Distributions on Adjacent Bedrock Cliffs?

    Science.gov (United States)

    Verdian, J. P.; Sklar, L. S.; Moore, J. R.; Rosenberg, D. J.

    2016-12-01

    What controls the size of sediments produced on hillslopes and supplied to river channels? This is an important but unanswered question in geomorphology and sedimentology. One hypothesis is that the initial size distribution of rock fragments eroded from bedrock is related to the distribution of spacing between pre-existing fractures in the bedrock. Slopes of talus that accumulate below eroding cliffs provide a simple natural experiment to test this hypothesis. We studied talus slopes and cliff faces at more than 20 locations in California, USA, where cliff retreat rates were previously measured by Moore et al., 2009. Rock types included andesite, basalt, granodiorite and meta-sediment. To quantify fracture spacing we measured fracture frequency and orientation along scan lines at the base of the cliff. We also used scaled photographs of the cliff face to characterize the shape, size and surface area of discrete blocks. We measured talus particle size distributions using surface point counts along transects oriented downslope from the cliff face, and mapped facies of distinct size distributions. To explore the effect of chemical weathering on talus size we sampled cliff faces and talus particles for x-ray fluorescence analysis to test for depletion of labile cations relative to source rock. Preliminary results suggest that talus size distributions are strongly correlated with bedrock fracture spacing, although systematic differences do occur. In some cases, talus sizes are larger than the spacing between fractures because the detached particles still retain truncated fractures. In other cases, talus is smaller than cliff fracture spacing, presumably because particle size is reduced by fragmentation on impact and weathering during transport down the talus slope. Further analysis will explore whether cliff retreat rate and extent of chemical weathering, as well as rock type and local climate, can explain between-site differences in the size of particles produced.

  8. Are range-size distributions consistent with species-level heritability?

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe; Gotelli, Nicholas; Rahbek, Carsten

    2012-01-01

    The concept of species-level heritability is widely contested. Because it is most likely to apply to emergent, species-level traits, one of the central discussions has focused on the potential heritability of geographic range size. However, a central argument against range-size heritability has been that it is not compatible with the observed shape of present-day species range-size distributions (SRDs), a claim that has never been tested. To assess this claim, we used forward simulation of range-size evolution in clades with varying degrees of range-size heritability, and compared the output of three different models to the range-size distribution of the South American avifauna. Although there were differences among the models, a moderate-to-high degree of range-size heritability consistently leads to SRDs that were similar to empirical data. These results suggest that range-size heritability...

  9. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.

  10. Magnetic pattern at supergranulation scale: the Void Size Distribution

    CERN Document Server

    Berrilli, Francesco; Del Moro, Dario

    2014-01-01

    The large-scale magnetic pattern of the quiet sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits voids in magnetic organization. Such voids include internetwork fields, a mixed-polarity sparse field that populates the inner part of network cells. To single out voids and to quantify their intrinsic pattern, a fast circle-packing-based algorithm is applied to 511 SOHO/MDI high-resolution magnetograms acquired during the outstanding solar activity minimum between cycles 23 and 24. The computed Void Distribution Function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids reveal a departure from a simple exponential decay around 35 Mm.

  11. Methods for determining particle size distribution and growth rates between 1 and 3 nm using the Particle Size Magnifier

    CERN Document Server

    Lehtipalo, Katrianne; Kontkanen, Jenni; Kangasluoma, Juha; Franchin, Alessandro; Wimmer, Daniela; Schobesberger, Siegfried; Junninen, Heikki; Petäjä, Tuukka; Sipilä, Mikko; Mikkilä, Jyri; Vanhanen, Joonas; Worsnop, Douglas R; Kulmala, Markku

    2014-01-01

    The most important parameters describing the atmospheric new particle formation process are the particle formation and growth rates. These together determine the amount of cloud condensation nuclei attributed to secondary particle formation. Due to difficulties in detecting small neutral particles, it has previously not been possible to derive these directly from measurements in the size range below about 3 nm. The Airmodus Particle Size Magnifier has been used at the SMEAR II station in Hyytiälä, southern Finland, and during nucleation experiments in the CLOUD chamber at CERN for measuring particles as small as about 1 nm in mobility diameter. We developed several methods to determine the particle size distribution and growth rates in the size range of 1–3 nm from these data sets. Here we introduce the appearance-time method for calculating initial growth rates. The validity of the method was tested by simulations with the Ion-UHMA aerosol dynamic model.
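
    A stripped-down illustration of the appearance-time idea mentioned above follows: the appearance time of each size channel is taken here as the time at which its concentration first reaches half of the channel maximum, and the growth rate is the slope of size versus appearance time. The 50% criterion and the synthetic data are simplifying assumptions, not the authors' exact definition or measurements.

```python
# Sketch of the appearance-time method for sub-3 nm growth rates.
import numpy as np

times = np.arange(0.0, 4.0, 0.05)                 # hours
diams = np.array([1.2, 1.5, 1.9, 2.4, 3.0])       # nm, channel midpoints

# synthetic concentrations: each channel rises later, mimicking a growing mode
true_gr = 1.0                                     # nm per hour, used only to fabricate data
conc = np.array([1.0 / (1.0 + np.exp(-(times - d / true_gr) / 0.1)) for d in diams])

# appearance time = first time a channel reaches 50% of its maximum concentration
appearance = np.array([times[np.argmax(c >= 0.5 * c.max())] for c in conc])

gr, intercept = np.polyfit(appearance, diams, 1)  # slope of size vs appearance time
print(f"estimated growth rate ~ {gr:.2f} nm/h")
```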

  12. The 1170 and 1202 CE Dead Sea Rift earthquakes and long-term magnitude distribution of the Dead Sea Fault zone

    Science.gov (United States)

    Hough, S.E.; Avni, R.

    2009-01-01

    In combination with the historical record, paleoseismic investigations have provided a record of large earthquakes in the Dead Sea Rift that extends back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of the relationships developed in other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision. The large uncertainties illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historic record and show that it is consistent with a Gutenberg-Richter distribution with a b-value of 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results reveal that an earthquake of M7.8 is expected within the zone on average every 1000 years. © 2011 Science From Israel/LPP Ltd.
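
    The recurrence statement above follows from simple Gutenberg-Richter bookkeeping with b = 1; the sketch below makes the arithmetic explicit, using an assumed anchor rate (one M >= 5.8 event per decade across the zone) chosen purely so that the M >= 7.8 recurrence lands near the ~1000-year figure quoted in the record, not taken from the study itself.

```python
# Worked sketch: with b = 1, the rate of events above magnitude M drops by a
# factor of 10 per magnitude unit, N(>=M) = N0 * 10**(-b * (M - M0)).
b = 1.0
anchor_mag, anchor_rate_per_yr = 5.8, 0.1      # assumed: one M>=5.8 event per 10 years

def rate_above(m):
    return anchor_rate_per_yr * 10.0 ** (-b * (m - anchor_mag))

for m in (6.6, 7.6, 7.8):
    r = rate_above(m)
    print(f"M >= {m}: rate = {r:.4f}/yr, mean recurrence ~ {1.0 / r:.0f} yr")
```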

  13. Sifting attacks in finite-size quantum key distribution

    Science.gov (United States)

    Pfister, Corsin; Lütkenhaus, Norbert; Wehner, Stephanie; Coles, Patrick J.

    2016-05-01

    A central assumption in quantum key distribution (QKD) is that Eve has no knowledge about which rounds will be used for parameter estimation or key distillation. Here we show that this assumption is violated for iterative sifting, a sifting procedure that has been employed in some (but not all) of the recently suggested QKD protocols in order to increase their efficiency. We show that iterative sifting leads to two security issues: (1) some rounds are more likely to be key rounds than others, (2) the public communication of past measurement choices changes this bias round by round. We analyze these two previously unnoticed problems, present eavesdropping strategies that exploit them, and find that the two problems are independent. We discuss some sifting protocols in the literature that are immune to these problems. While some of these would be inefficient replacements for iterative sifting, we find that the sifting subroutine of an asymptotically secure protocol suggested by Lo et al (2005 J. Cryptol. 18 133-65), which we call LCA sifting, has an efficiency on par with that of iterative sifting. One of our main results is to show that LCA sifting can be adapted to achieve secure sifting in the finite-key regime. More precisely, we combine LCA sifting with a certain parameter estimation protocol, and we prove the finite-key security of this combination. Hence we propose that LCA sifting should replace iterative sifting in future QKD implementations. More generally, we present two formal criteria for a sifting protocol that guarantee its finite-key security. Our criteria may guide the design of future protocols and inspire a more rigorous QKD analysis, which has neglected sifting-related attacks so far.

  14. Electrostatic Barrier Against Dust Growth in Protoplanetary Disks. I. Classifying the Evolution of Size Distribution

    CERN Document Server

    Okuzumi, Satoshi; Takeuchi, Taku; Sakagami, Masa-aki

    2010-01-01

    Collisional growth of submicron-sized dust grains into macroscopic aggregates is the first step of planet formation in protoplanetary disks. These aggregates are considered to carry nonzero negative charges in the weakly ionized gas disks, but the effect of this charging on their collisional growth has not been fully understood so far. In this paper, we investigate how the charging of dust aggregates affects the evolution of their size distribution, properly taking into account the charging mechanism in a weakly ionized gas. To clarify the role of the size distribution, we divide our analysis into two steps. First, we analyze the collisional growth of charged aggregates assuming a monodisperse (i.e., narrow) size distribution. We show that the monodisperse growth stalls due to the electrostatic repulsion when a certain condition is met, as is already expected from previous work. Second, we numerically simulate dust coagulation using Smoluchowski's method to see how the outcome changes when the size distribution is allowed to...

  15. Does the size distribution of mineral dust aerosols depend on the wind speed at emission?

    Directory of Open Access Journals (Sweden)

    J. F. Kok

    2011-07-01

    Full Text Available The size distribution of mineral dust aerosols greatly affects their interactions with clouds, radiation, ecosystems, and other components of the Earth system. Several theoretical dust emission models predict that the dust size distribution depends on the wind speed at emission, with larger wind speeds predicted to produce smaller aerosols. The present study investigates this prediction using a compilation of published measurements of the size-resolved vertical dust flux emitted by eroding soils. Surprisingly, these measurements indicate that the size distribution of naturally emitted dust aerosols is independent of the wind speed. This finding is consistent with the recently formulated brittle fragmentation theory of dust emission, but inconsistent with other theoretical dust emission models. The independence of the emitted dust size distribution with wind speed simplifies both the parameterization of dust emission in atmospheric circulation models as well as the interpretation of geological records of dust deposition.

  16. Does the size distribution of mineral dust aerosols depend on the wind speed at emission?

    CERN Document Server

    Kok, Jasper F

    2011-01-01

    The size distribution of mineral dust aerosols partially determines their interactions with clouds, radiation, ecosystems, and other components of the Earth system. Several theoretical models predict that the dust size distribution depends on the wind speed at emission, with larger wind speeds predicted to produce smaller aerosols. The present study investigates this prediction using a compilation of published measurements of the size-resolved vertical dust flux emitted by eroding soils. Surprisingly, these measurements indicate that the size distribution of naturally emitted dust aerosols is independent of the wind speed. The recently formulated brittle fragmentation theory of dust emission is consistent with this finding, whereas other theoretical dust emission models are not. The independence of the emitted dust size distribution with wind speed simplifies both the interpretation of geological records of dust deposition and the parameterization of dust emission in atmospheric circulation models.

  17. Distribution and Size of Pyroxenite Bodies in the Mantle

    Science.gov (United States)

    Herzberg, C.

    2006-12-01

    lower in pyroxenite-source lavas owing to higher melt fractions. Peridotite-source lavas for the above-mentioned OIB from the Atlantic, Cook-Austral in the Pacific, and Turkana in East Africa have HIMU and FOZO isotopic characteristics, and have low Y/Nb and Zr/Nb. In contrast, peridotite-source lavas from the Caribbean, Ontong Java and North Atlantic display greater isotopic and trace element variability, indicating variable mixing and degradation of subducted crust. Pyroxenite is likely to range in size from grain boundary films to shield volcanoes.

  18. Two-size approximation: a simple way of treating the evolution of grain size distribution in galaxies

    CERN Document Server

    Hirashita, Hiroyuki

    2014-01-01

    Full calculations of the evolution of grain size distribution in galaxies are in general computationally heavy. In this paper, we propose a simple model of dust enrichment in a galaxy with a simplified treatment of grain size distribution by imposing a `two-size approximation'; that is, all the grain population is represented by small (grain radius a < 0.03 micron) and large (a > 0.03 micron) grains. We include in the model dust supply from stellar ejecta, destruction in supernova shocks, dust growth by accretion, grain growth by coagulation and grain disruption by shattering, considering how these processes work on the small and large grains. We show that this simple framework reproduces the main features found in full calculations of grain size distributions as follows. The dust enrichment starts with the supply of large grains from stars. At a metallicity level referred to as the critical metallicity of accretion, the abundance of the small grains formed by shattering becomes large enough to rapidly increase the grain abundance by acc...

  19. Intensity and degree of segregation in bimodal and multimodal grain size distributions

    Science.gov (United States)

    Katra, Itzhak; Yizhaq, Hezi

    2017-08-01

    The commonly used grain size analysis technique that applies moments (sorting, skewness and kurtosis) is less useful in the case of sediments with bimodal size distributions. Herein we suggest a new, simple method for analyzing the degree of grain size segregation in sand-sized sediment that has a clear bimodal size distribution. Two main features are used to characterize the bimodal distribution: grain diameter segregation, which is the normalized difference between the coarse and fine modal grain diameters, and frequency segregation, which is the normalized difference in frequencies between the two modes. The newly defined indices can be calculated from frequency plot curves and can be graphically represented on a two-dimensional coordinate system showing the dynamical aspects of the size distribution. The results enable comparison between granular samples from different locations and/or times to shed new light on the dynamic processes involved in grain size segregation of sediments. We demonstrate here the use of this method to analyze bimodal distributions of aeolian granular samples, mostly from aeolian megaripples. Six different aeolian cases were analyzed to highlight the method's applicability, which is relevant to a wide range of research themes in the Earth and environmental sciences and can furthermore be easily adapted to analyze polymodal grain size distributions.
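
    Because the abstract states only that both indices are "normalized differences", the sketch below uses simple symmetric normalisations as an assumption of convenience; the modal diameters and frequencies are hypothetical numbers, not data from the study.

```python
# Sketch: two bimodality indices for a sand sample with coarse and fine modes.
import numpy as np

# modal diameters (mm) and modal frequencies (fraction of sample) of the two modes
d_coarse, f_coarse = 0.90, 0.35
d_fine,   f_fine   = 0.15, 0.55

# grain-diameter segregation: normalised difference between coarse and fine modal diameters
diameter_segregation = (d_coarse - d_fine) / (d_coarse + d_fine)

# frequency segregation: normalised difference between the two modal frequencies
frequency_segregation = abs(f_coarse - f_fine) / (f_coarse + f_fine)

print(f"diameter segregation = {diameter_segregation:.2f}")
print(f"frequency segregation = {frequency_segregation:.2f}")
```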

  20. Influence of stress-path on pore size distribution in granular materials

    Directory of Open Access Journals (Sweden)

    Das Arghya

    2017-01-01

    Full Text Available Pore size distribution is an important feature of granular materials in the context of filtration and erosion in soil hydraulic structures. The present study focuses on the evolution characteristics of the pore size distribution of numerically simulated granular assemblies subjected to various compression boundary constraints, namely conventional drained triaxial compression, one-dimensional (oedometric) compression and isotropic compression. We consider the effects of the initial packing of the granular assembly, in a loose or dense state. A simplified algorithm based on Delaunay tessellation is used for the estimation of the pore size distribution of the deforming granular assemblies at various stress states. The analyses show that the evolution of pore size is predominantly governed by the current porosity of the granular assembly, while the stress path or loading process has minimal influence. Furthermore, it is observed that the pore volume distribution approaches a critical distribution at the critical porosity during the shear-enhanced loading process, irrespective of whether the deformation mechanism is compaction or dilation.

  1. Influence of stress-path on pore size distribution in granular materials

    Science.gov (United States)

    Das, Arghya; Kumar, Abhinav

    2017-06-01

    Pore size distribution is an important feature of granular materials in the context of filtration and erosion in soil hydraulic structures. The present study focuses on the evolution characteristics of the pore size distribution of numerically simulated granular assemblies subjected to various compression boundary constraints, namely conventional drained triaxial compression, one-dimensional (oedometric) compression and isotropic compression. We consider the effects of the initial packing of the granular assembly, in a loose or dense state. A simplified algorithm based on Delaunay tessellation is used for the estimation of the pore size distribution of the deforming granular assemblies at various stress states. The analyses show that the evolution of pore size is predominantly governed by the current porosity of the granular assembly, while the stress path or loading process has minimal influence. Furthermore, it is observed that the pore volume distribution approaches a critical distribution at the critical porosity during the shear-enhanced loading process, irrespective of whether the deformation mechanism is compaction or dilation.
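
    The two records above rely on a Delaunay-based pore-size estimate; a minimal sketch of one such estimate for a monodisperse packing is given below, in which each Delaunay tetrahedron of grain centres contributes a pore radius approximated as its circumradius minus the grain radius. The random (non-equilibrated) packing and this particular pore-radius proxy are assumptions for illustration, not the simplified algorithm used in the study.

```python
# Sketch: pore-size distribution of a monodisperse "packing" via Delaunay
# tessellation of grain centres (boundary sliver tetrahedra are not filtered,
# so the tail of the histogram is somewhat inflated).
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
grain_radius = 0.05
centers = rng.uniform(0.0, 1.0, size=(400, 3))

def circumradius(p):
    """Circumradius of a tetrahedron with vertices p (4x3 array)."""
    a = 2.0 * (p[1:] - p[0])                       # 3x3 linear system for the circumcentre
    b = np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    center = np.linalg.solve(a, b)
    return np.linalg.norm(center - p[0])

tri = Delaunay(centers)
pore_radii = []
for simplex in tri.simplices:
    r = circumradius(centers[simplex]) - grain_radius   # crude pore-radius proxy
    if r > 0:
        pore_radii.append(r)

pore_radii = np.array(pore_radii)
print(f"{len(pore_radii)} pores, median pore radius = {np.median(pore_radii):.3f}")
hist, edges = np.histogram(pore_radii, bins=10)
print("pore-size histogram counts:", hist)
```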

  2. Have the tsunami and nuclear accident following the Great East Japan Earthquake affected the local distribution of hospital physicians?

    Science.gov (United States)

    Kashima, Saori; Inoue, Kazuo; Matsumoto, Masatoshi

    2017-01-01

    The Great East Japan Earthquake occurred on 11 March 2011 near the northeast coast of the main island, 'Honshu', of Japan. It wreaked enormous damage in two main ways: a giant tsunami and an accident at the Fukushima Daiichi Nuclear Power Plant (FDNPP). This disaster may have affected the distribution of physicians in the region. Here, we evaluate the effect of the disaster on the distribution of hospital physicians in the three most severely affected prefectures (Iwate, Miyagi, and Fukushima). We obtained individual information about physicians from the Physician Census in 2010 (pre-disaster) and 2012 (post-disaster). We examined geographical distributions of physicians in two ways: (1) municipality-based analysis for demographic evaluation; and (2) hospital-based analysis for geographic evaluation. In each analysis, we calculated the rate of change in physician distributions between pre- and post-disaster years at various distances from the tsunami-affected coast, and from the restricted area due to the FDNPP accident. The changes in all, hospital, and clinic physicians were 0.2%, 0.7%, and -0.7%, respectively. In the municipality-based analysis, after taking account of the decreased population, physician numbers only decreased within the restricted area. In the hospital-based analysis, hospital physician numbers did not decrease at any distance from the tsunami-affected coast. In contrast, there were 3.3% and 2.3% decreases in hospital physicians 0-25 km and 25-50 km from the restricted area surrounding the FDNPP, respectively. Additionally, decreases were larger and increases were smaller in areas close to the FDNPP than in areas further away. Our results suggest that the tsunami did not affect the distribution of physicians in the affected regions. However, the FDNPP accident changed physician distribution in areas close to the power plant.

  3. Distribution and features of landslides induced by the 2008 Wenchuan Earthquake, Sichuan, China

    Science.gov (United States)

    Chigira, M.; Xiyong, W.; Inokuchi, T.; Gonghui, W.

    2009-04-01

    The 2008 Sichuan earthquake (Mw 7.9) induced numerous mass movements around the fault surface ruptures, for which the maximum separations we observed were 3.6 m vertical and 1.5 m horizontal (right-lateral). The affected area is mountainous, with elevations from 1000 m to 4500 m, west of the Sichuan Basin. The NE-trending Longmenshan fault zone runs along the boundary between the mountains on the west and the Sichuan Basin (He and Tsukuda, 2003), of which the Yinghsiuwan-Beichuan fault was the main fault that generated the 2008 earthquake (Xu, 2008). The basement rocks of the mountainous areas range from Precambrian to Cretaceous in age; they include basaltic rocks, granite, phyllite, dolostone, limestone, alternating beds of sandstone and shale, etc. Several types of landslides occurred, ranging from small, shallow rockslides, rockfalls, and debris slides to deep rockslides and debris flows. Shallow rockslides, rockfalls, and debris slides were the most common and occurred on convex slopes or ridge tops. As we approached the epicentral area, the first landslides to appear were of this type, and the most conspicuous was the failure of isolated ridge tops, where earthquake shaking would be amplified. As for rock types, slopes of granitic rocks, hornfels, and carbonate rocks failed most extensively. These rocks are generally hard, and their fragments apparently collided with and rebounded off each other as they detached from the slopes. Alternating beds of sandstone and mudstone failed on many slopes near the fault ruptures, including at Yinghsiuwan near the epicenter. Many rockfalls occurred on cliffs that had taluses at their feet. The fallen rocks tumbled down and mostly stopped within the talus surfaces, which is quite reasonable because taluses generally develop by this kind of process. Many rockslides occurred on slopes of carbonate rocks in which dolostone or dolomitic limestone prevails. Deep-seated rockslides occurred on outfacing slopes, and shallow rockslides and rockfall

  4. [Size distributions of organic carbon (OC) and elemental carbon (EC) in Shanghai atmospheric particles].

    Science.gov (United States)

    Wang, Guang-Hua; Wei, Nan-Nan; Liu, Wei; Lin, Jun; Fan, Xue-Bo; Yao, Jian; Geng, Yan-Hong; Li, Yu-Lan; Li, Yan

    2010-09-01

    Size distributions of organic carbon (OC), elemental carbon (EC) and secondary organic carbon (SOC) were determined in size-segregated atmospheric particles (size classes up to 7.20 microm) collected in Jiading District, Shanghai. For estimating the size distribution of SOC in these atmospheric particles, a method of determining (OC/EC)(pri) in atmospheric particles of different sizes was discussed and developed, with which SOC was estimated. According to the correlation between OC and EC, the main sources of the particles were also estimated roughly. The size distributions of OC and SOC were bimodal, with one peak in the fine mode and the other in the coarse mode (> 3.0 microm). EC showed both bimodal and trimodal distributions. Compared with OC, EC was preferentially enriched in finer particles, and both OC and EC were preferentially enriched in fine particles. SOC in particles of different sizes accounted for 15.7%-79.1% of the OC in particles of the corresponding size. Concentrations of SOC in fine aerosols (< 3.00 microm) and coarse aerosols (> 3.00 microm) accounted for 41.4% and 43.5% of the corresponding OC, respectively. The size distributions of OC, EC and SOC were time-dependent. The correlation between OC and EC indicated that the main contribution to atmospheric particles in Jiading District derived from light petrol vehicle exhaust.

  5. Nanomaterial size distribution analysis via liquid nebulization coupled with ion mobility spectrometry (LN-IMS).

    Science.gov (United States)

    Jeon, Seongho; Oberreit, Derek R; Van Schooneveld, Gary; Hogan, Christopher J

    2016-02-21

    We apply liquid nebulization (LN) in series with ion mobility spectrometry (IMS, using a differential mobility analyzer coupled to a condensation particle counter) to measure the size distribution functions (the number concentration per unit log diameter) of gold nanospheres in the 5-30 nm range, 70 nm × 11.7 nm gold nanorods, and albumin proteins originally in aqueous suspensions. In prior studies, IMS measurements have only been carried out for colloidal nanoparticles in this size range using electrosprays for aerosolization, as traditional nebulizers produce supermicrometer droplets which leave residue particles from non-volatile species. Residue particles mask the size distribution of the particles of interest. Uniquely, the LN employed in this study uses both online dilution (with dilution factors of up to 10^4) with ultra-high purity water and a ball-impactor to remove droplets larger than 500 nm in diameter. This combination enables hydrosol-to-aerosol conversion preserving the size and morphology of particles, and also enables higher non-volatile residue tolerance than electrospray-based aerosolization. Through LN-IMS measurements we show that the size distribution functions of narrowly distributed but similarly sized particles can be distinguished from one another, which is not possible with Nanoparticle Tracking Analysis in the sub-30 nm size range. Through comparison to electron microscopy measurements, we find that the size distribution functions inferred via LN-IMS measurements correspond to the particle sizes coated by surfactants, i.e. as they persist in colloidal suspensions. Finally, we show that the gas phase particle concentrations inferred from IMS size distribution functions are functions only of the liquid phase particle concentration, and are independent of particle size, shape, and chemical composition. Therefore LN-IMS enables characterization of the size, yield, and polydispersity of sub-30 nm particles.

  6. Classification of Earthquake-triggered Landslide Events - Review of Classical and Particular Cases

    Science.gov (United States)

    Braun, A.; Havenith, H. B.; Schlögel, R.

    2016-12-01

    Seismically induced landslides often contribute to a significant degree to the losses related to earthquakes. The identification of possible extents of landslide-affected areas can help to target emergency measures when an earthquake occurs or to improve the resilience of inhabited areas and critical infrastructure in zones of high seismic hazard. Moreover, landslide event sizes are an important proxy for the estimation of the intensity and magnitude of past earthquakes in paleoseismic studies, allowing us to improve seismic hazard assessment over longer terms. Not only earthquake intensity, but also factors such as the fault characteristics, topography, climatic conditions and the geological environment have a major impact on the intensity and spatial distribution of earthquake-induced landslides. Inspired by classical reviews of earthquake-induced landslides, e.g. by Keefer or Jibson, we present here a review of factors contributing to earthquake-triggered slope failures based on an `event-by-event' classification approach. The objective of this analysis is to enable the short-term prediction of earthquake-triggered landslide event sizes, in terms of the number of landslides and the size of the affected area, right after an earthquake occurs. Five main factors, `Intensity', `Fault', `Topographic energy', `Climatic conditions' and `Surface geology', were used to establish a relationship to the number and spatial extent of landslides triggered by an earthquake. Based on well-documented recent earthquakes (e.g. Haiti 2010, Wenchuan 2008) and on older events for which reliable extensive information was available (e.g. Northridge 1994, Loma Prieta 1989, Guatemala 1976, Peru 1970), the combination and relative weights of the factors were calibrated. The calibrated factor combination was then applied to more than 20 earthquake events for which landslide distribution characteristics could be crosschecked. We present cases where our prediction model performs well and discuss particular cases
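
    To show the shape of such an 'event-by-event' scheme without claiming the authors' calibration, the sketch below combines the five factors as a weighted score and maps it to a coarse event-size class; the weights, factor scalings and class thresholds are hypothetical placeholders, not the calibrated values of the study.

```python
# Sketch: weighted five-factor score for earthquake-triggered landslide event size.
WEIGHTS = {
    "intensity": 0.35,
    "fault": 0.15,
    "topographic_energy": 0.25,
    "climatic_conditions": 0.10,
    "surface_geology": 0.15,
}

def landslide_event_score(factors: dict) -> float:
    """Weighted sum of normalised factor scores (each expected in [0, 1])."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

def affected_area_class(score: float) -> str:
    if score < 0.35:
        return "limited (tens of landslides)"
    if score < 0.65:
        return "moderate (hundreds of landslides)"
    return "extensive (thousands of landslides, regional extent)"

example_event = {
    "intensity": 0.9,              # e.g. high macroseismic intensity near the fault
    "fault": 0.7,                  # e.g. thrust rupture reaching the surface
    "topographic_energy": 0.8,     # steep, high-relief terrain
    "climatic_conditions": 0.5,    # moderate antecedent rainfall
    "surface_geology": 0.6,        # weak or fractured near-surface units
}

score = landslide_event_score(example_event)
print(f"score = {score:.2f} -> predicted event size: {affected_area_class(score)}")
```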

  7. Fault geometry and slip distribution of the 1891 Nobi great earthquake (M = 8.0) with the oldest survey data sets in Japan

    Science.gov (United States)

    Takano, K.; Kimata, F.

    2010-12-01

    This study reexamines the ground deformation and fault slip model of the 1891 Nobi great earthquake (M = 8.0), central Japan. During the earthquake, three faults, Nukumi, Neodani and Umehara, ruptured the ground surface with a maximum of 8 m in the horizontal direction and 6 m in the vertical direction along an 80 km length [Koto, 1893; Matsuda, 1974]. Additionally, the Gifu-Ichinomiya line stretching southward from Gifu has been discussed as a buried fault of the Nobi earthquake, because of the vertical deformation, the high collapse rates along the line and wave propagation [Mikumo and Ando, 1976; Nakano et al., 2007]. We reevaluate two geodetic data sets of triangulation and leveling around the Umehara fault in 1885-1890 and 1894-1908 that were obtained by the Japanese Imperial Land Survey in the General Staff Office of the Imperial Army (the present Geospatial Information Authority of Japan); these data sets consist of displacements calculated from the net adjustment of triangulation and leveling surveys carried out before and after the Nobi earthquake. Co-seismic displacements are detected as southeastward displacements, and uplifts are detected in the southwest block of the Umehara fault. The maximum displacements and uplifts are up to 1.7 m and 0.74 m, respectively. We estimated the coseismic slip distribution of the faults by analyzing our data set. The geometry of the fault planes was adopted from the earthquake fault of this area. The remaining parameters are determined using a quasi-Newton nonlinear optimization algorithm. The best fit to the data is obtained from seven segments of the faults along the sections running through the Nukumi, Neodani and Umehara faults. The estimated uniform-slip elastic dislocation model consists of seven adjacent planes. The fault slips are up to 3.8 m. Because this model suitably explains the coseismic deformation with seven earthquake source faults, no earthquake source fault is required beneath the Gifu-Ichinomiya line.
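
    The quasi-Newton fit of uniform slip on fixed fault planes to the geodetic displacements can be sketched as below. The Green's-function matrix and the "observed" displacements are synthetic placeholders standing in for the elastic dislocation forward model and the triangulation/leveling data; only the overall inversion structure is illustrated.

        # Sketch: quasi-Newton (L-BFGS-B) estimation of uniform slip on seven fault segments.
        # G is a placeholder Green's-function matrix; d_obs are placeholder displacements.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_obs, n_segments = 60, 7
        G = rng.normal(size=(n_obs, n_segments))              # displacement per unit slip (placeholder)
        true_slip = rng.uniform(0.5, 3.8, n_segments)         # m
        d_obs = G @ true_slip + rng.normal(scale=0.05, size=n_obs)

        def misfit(slip):
            r = G @ slip - d_obs
            return 0.5 * float(r @ r)

        res = minimize(misfit, x0=np.ones(n_segments), method="L-BFGS-B",
                       bounds=[(0.0, 10.0)] * n_segments)     # non-negative slip on each segment
        print("estimated slip per segment (m):", np.round(res.x, 2))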

  8. Linear and nonlinear excitations in complex plasmas with nonadiabatic dust charge fluctuation and dust size distribution

    Institute of Scientific and Technical Information of China (English)

    Zhang Li-Ping; Xue Ju-Kui; Li Yan-Long

    2011-01-01

    Both linear and nonlinear excitations in dusty plasmas have been investigated, including the nonadiabatic dust charge fluctuation and a Gaussian size distribution of dust particles. A linear dispersion relation and a Korteweg-de Vries-Burgers equation governing the dust acoustic shock waves are obtained. The dependence of the wave instability and the wave evolution on the dust size distribution and the nonadiabatic dust charge fluctuation is illustrated both analytically and numerically. The numerical results show that the Gaussian size distribution of dust particles and the nonadiabatic dust charge fluctuation have a strong joint influence on the propagation of both linear and nonlinear excitations.
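
    For reference, the generic form of a Korteweg-de Vries-Burgers equation of this kind can be written as below; the coefficients A, B and C absorb the dependence on the dust size distribution and the nonadiabatic charge fluctuation, and their explicit expressions are those derived in the paper rather than anything reproduced here.

        \frac{\partial u}{\partial \tau} + A\,u\,\frac{\partial u}{\partial \xi} + B\,\frac{\partial^{3} u}{\partial \xi^{3}} = C\,\frac{\partial^{2} u}{\partial \xi^{2}}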

  9. Determination of Size Distributions in Nanocrystalline Powders by TEM, XRD and SAXS

    DEFF Research Database (Denmark)

    Jensen, Henrik; Pedersen, Jørgen Houe; Jørgensen, Jens Erik

    2006-01-01

    Crystallite size distributions and particle size distributions were determined by TEM, XRD, and SAXS for three commercially available TiO2 samples and one homemade sample. The theoretical Guinier model was fitted to the experimental data and compared to analytical expressions. Modeling of the XRD spectra ... The commercially available powders showed different morphologies. The SSEC78 powder showed the narrowest size distribution, while UV100 and TiO2_5nm consisted of the smallest primary particles. SSEC78, UV100, and TiO2_5nm consisted of both primary particles as well as a secondary structure comprised of nanosized primary ...

  10. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study.

    Science.gov (United States)

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2013-11-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin-Rammler-Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a
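
    The lognormal reference-model fit and the relative standard errors discussed above can be sketched as follows, using synthetic diameters in place of the RM8012 data and simple normal-theory standard errors on the log-diameters rather than the nonlinear-regression methods compared in the paper.

        # Fit a lognormal reference model to particle diameters and report relative standard errors.
        import numpy as np

        rng = np.random.default_rng(1)
        diam = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)   # synthetic ~28 nm diameters

        log_d = np.log(diam)
        mu_hat, sigma_hat = log_d.mean(), log_d.std(ddof=1)
        n = len(log_d)
        se_mu = sigma_hat / np.sqrt(n)                 # standard error of the fitted mean
        se_sigma = sigma_hat / np.sqrt(2 * (n - 1))    # approximate standard error of the fitted sigma

        print(f"geometric mean diameter : {np.exp(mu_hat):.1f} nm (RSE of mu: {se_mu / abs(mu_hat):.2%})")
        print(f"geometric std deviation : {np.exp(sigma_hat):.3f} (RSE of sigma: {se_sigma / sigma_hat:.2%})")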

  11. Grain size distribution of quartz isolated from Chinese loess/paleosol

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The grain size distributions of bulk loess-paleosol samples and of quartz chemically extracted from the loess/paleosol show that the mean size of the bulk samples is always finer than that of the quartz. The original aeolian deposits have been modified to various degrees by post-depositional weathering and pedogenic processes. The grain size distribution of the isolated quartz should be close to that of the primary aeolian sediment because the chemical pretreatment excludes secondarily produced minerals. Therefore, the grain size of the quartz may be considered to more clearly reflect the variations of winter monsoon intensity.

  12. Earthquake Source and Ground Motion Characteristics of Great Kanto Earthquakes

    Science.gov (United States)

    Somerville, P. G.; Sato, T.; Wald, D. J.; Graves, R. W.; Dan, K.

    2003-12-01

    This paper describes the derivation of a rupture model of the 1923 Kanto earthquake, and the estimation of ground motions that occurred during that earthquake and that might occur during future great Kanto earthquakes. The rupture model was derived from the joint inversion of geodetic and teleseismic data. The leveling and triangulation data place strong constraints on the distribution and orientation of slip on the fault. The most concentrated slip is in the shallow central and western part of the fault. The location of the hypocenter on the western part of the fault gives rise to strong near-fault rupture directivity effects, which are largest toward the east in the Boso Peninsula. To estimate the ground motions caused by this earthquake, we first calibrated 1D and 3D wave propagation path effects using the Odawara earthquake of 5 August 1990 (M 5.1), the first earthquake larger than M 5 in the last 60 years near the hypocenter of the 1923 Kanto earthquake. The simulation of the moderate-sized Odawara earthquake demonstrates that the 3D velocity model works quite well at reproducing the recorded long-period (T > 3.33 sec) strong motions, including basin-generated surface waves, for a number of sites located throughout the Kanto basin region. Using this validated 3D model along with the rupture model described above, we simulated the long-period (T > 4 sec) ground motions in this region for the 1923 Kanto earthquake. The largest ground motions occur east of the epicenter along the central and southern part of the Boso Peninsula. These large motions arise from strong rupture directivity effects and are comprised of relatively simple, source-controlled pulses with a dominant period of about 10 sec. Other rupture models and hypocenter locations generally produce smaller long-period ground motion levels in this region than those of the 1923 event. North of the epicentral region, in the Tokyo area, 3D basin-generated phases are quite significant, and these phases

  13. Intensity distribution and effect of the Nov. 9, 1996 earthquake of MS=6.1 in offshore outside the Yangtze River Mouth on the South Korea

    Institute of Scientific and Technical Information of China (English)

    李裕澈; 李德基; 吴锡薰; 尹龙勋

    2003-01-01

    On Nov. 9, 1996 at 21h56min (Beijing Time), an earthquake of MS=6.1 occurred offshore outside the Yangtze River Mouth (31°43′N, 123°04′E). The shock mainly affected Shanghai City and Jiangsu and Zhejiang Provinces in China. It was felt more strongly in the Yangtze River Mouth and Hangzhou Bay area than elsewhere, particularly in high buildings of Shanghai City. In addition, the earthquake was felt in South Korea, again more strongly in apartments and high buildings. LIU, JIN (1998) and LIU et al. (1999) described the effect of the shock on eastern China. The present paper describes the effect of the earthquake on South Korea and the whole intensity distribution in South Korea and eastern China.

  14. Mass size distributions and size resolved chemical composition of fine particulate matter at the Pittsburgh supersite

    Science.gov (United States)

    Cabada, Juan C.; Rees, Sarah; Takahama, Satoshi; Khlystov, Andrey; Pandis, Spyros N.; Davidson, Cliff I.; Robinson, Allen L.

    Size-resolved aerosol mass and chemical composition were measured during the Pittsburgh Air Quality Study. Daily samples were collected for 12 months from July 2001 to June 2002. Micro-orifice uniform deposit impactors (MOUDIs) were used to collect aerosol samples of fine particulate matter smaller than 10 μm. Measurements of PM 0.056, PM 0.10, PM 0.18, PM 0.32, PM 0.56, PM 1.0, PM 1.8 and PM 2.5 with the MOUDI are available for the full study period. Seasonal variations in the concentrations are observed for all size cuts. Higher concentrations are observed during the summer and lower during the winter. Comparison between the PM 2.5 measurements by the MOUDI and other integrated PM samplers reveals good agreement. Good correlation is observed for PM 10 between the MOUDI and an integrated sampler but the MOUDI underestimates PM 10 by 20%. Bouncing of particles from higher stages of the MOUDI (>PM 2.5) is not a major problem because of the low concentrations of coarse particles in the area. The main cause of coarse particle losses appears to be losses to the wall of the MOUDI. Samples were collected on aluminum foils for analysis of carbonaceous material and on Teflon filters for analysis of particle mass and inorganic anions and cations. Daily samples were analyzed during the summer (July 2001) and the winter intensives (January 2002). During the summer around 50% of the organic material is lost from the aluminum foils as compared to a filter-based sampler. These losses are due to volatilization and bounce-off from the MOUDI stages. High nitrate losses from the MOUDI are also observed during the summer (above 70%). Good agreement between the gravimetrically determined mass and the sum of the masses of the individual compounds is obtained, if the lost mass from organics and the aerosol water content are included for the summer. For the winter no significant losses of material are detected and there exists reasonable agreement between the gravimetrical mass and the

  15. Subdiffusion of volcanic earthquakes

    CERN Document Server

    Abe, Sumiyoshi

    2016-01-01

    A comparative study is performed on volcanic seismicities at Mt. Eyjafjallajökull in Iceland and Mt. Etna in Sicily, Italy, from the viewpoint of the science of complex systems, and the discovery of remarkable similarities between them regarding their exotic spatio-temporal properties is reported. In both of the volcanic seismicities as point processes, the jump probability distributions of earthquakes are found to obey the exponential law, whereas the waiting-time distributions follow the power law. In particular, a careful analysis is made of the finite size effects on the waiting-time distributions, and accordingly, the previously reported results for Mt. Etna [S. Abe and N. Suzuki, EPL 110, 59001 (2015)] are reinterpreted. It is shown that spreads of the volcanic earthquakes are subdiffusive at both of the volcanoes. The aging phenomenon is observed in the "event-time-averaged" mean-squared displacements of the hypocenters. A comment is also made on presence/absence of long term memories in the context of t...
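
    The "event-time-averaged" mean-squared displacement used to diagnose subdiffusion can be computed as in the following sketch; the hypocenter coordinates are synthetic placeholders, and a subdiffusive spread would show up as MSD(n) growing more slowly than linearly with the event lag n.

        # Event-time-averaged mean-squared displacement of hypocenters versus event lag.
        import numpy as np

        rng = np.random.default_rng(2)
        xyz = np.cumsum(rng.normal(scale=0.1, size=(2000, 3)), axis=0)   # synthetic hypocenter sequence (km)

        def event_msd(positions, max_lag=200):
            """MSD averaged over event index, for lags of 1..max_lag events."""
            lags = np.arange(1, max_lag + 1)
            msd = np.array([np.mean(np.sum((positions[n:] - positions[:-n]) ** 2, axis=1)) for n in lags])
            return lags, msd

        lags, msd = event_msd(xyz)
        alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
        print(f"fitted exponent alpha ~ {alpha:.2f} (alpha < 1 would indicate subdiffusion)")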

  16. Influence of particle size on the distributions of liposomes to atherosclerotic lesions in mice.

    Science.gov (United States)

    Chono, Sumio; Tauchi, Yoshihiko; Morimoto, Kazuhiro

    2006-01-01

    In order to confirm the efficacy of liposomes as a drug carrier for atherosclerotic therapy, the influence of particle size on the distribution of liposomes to atherosclerotic lesions in mice was investigated. In brief, liposomes of three different particle sizes (500, 200, and 70 nm) were prepared, and the uptake of liposomes by the macrophages and foam cells in vitro and the biodistributions of liposomes administered intravenously to atherogenic mice in vivo were examined. The uptake by the macrophages and foam cells increased with the increase in particle size. Although the elimination rate from the blood circulation and the hepatic and splenic distribution increased with the increase in particle size in atherogenic mice, the aortic distribution was independent of the particle size. The aortic distribution of 200 nm liposomes was the highest in comparison with the other sizes. Surprisingly, the aortic distribution of liposomes in vivo did not correspond with the uptake by macrophages and foam cells in vitro. These results suggest that there is an optimal size for the distribution of liposomes to atherosclerotic lesions.

  17. Effects of grain size distribution on the packing fraction and shear strength of frictionless disk packings

    Science.gov (United States)

    Estrada, Nicolas

    2016-12-01

    Using discrete element methods, the effects of the grain size distribution on the density and the shear strength of frictionless disk packings are analyzed. Specifically, two recent findings on the relationship between the system's grain size distribution and its rheology are revisited, and their validity is tested across a broader range of distributions than what has been used in previous studies. First, the effects of the distribution on the solid fraction are explored. It is found that the distribution that produces the densest packing is not the uniform distribution by volume fractions as suggested in a recent publication. In fact, the maximal packing fraction is obtained when the grading curve follows a power law with an exponent close to 0.5 as suggested by Fuller and Thompson in 1907 and 1919 [Trans Am. Soc. Civ. Eng. 59, 1 (1907) and A Treatise on Concrete, Plain and Reinforced (1919), respectively] while studying mixtures of cement and stone aggregates. Second, the effects of the distribution on the shear strength are analyzed. It is confirmed that these systems exhibit a small shear strength, even if composed of frictionless particles as has been shown recently in several works. It is also found that this shear strength is independent of the grain size distribution. This counterintuitive result has previously been shown for the uniform distribution by volume fractions. In this paper, it is shown that this observation keeps true for different shapes of the grain size distribution.
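
    The grading curve referred to above is the Fuller-Thompson power law; in cumulative form (fraction of solid volume finer than grain size d) it reads

        P(d) = \left(\frac{d}{d_{\max}}\right)^{n}, \qquad n \approx 0.5,

    with the exponent near 0.5 giving the densest packings in the simulations discussed here.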

  18. Measuring coral size-frequency distribution using stereo video technology, a comparison with in situ measurements.

    Science.gov (United States)

    Turner, Joseph A; Polunin, Nicholas V C; Field, Stuart N; Wilson, Shaun K

    2015-05-01

    Coral colony size-frequency distribution data offer valuable information about the ecological status of coral reefs. Such data are usually collected by divers in situ, but stereo video is being increasingly used for monitoring benthic marine communities and may be used to collect size information for coral colonies. This study compared the size-frequency distributions of coral colonies obtained by divers measuring colonies 'in situ' with digital video imagery collected using stereo video and later processed using computer software. The size-frequency distributions of the two methods were similar for corymbose colonies, although distributions were different for massive, branching and all colonies combined. The differences are mainly driven by a greater abundance of colonies >50 cm and fewer colonies <5 cm recorded by stereo video, which was able to record measurements on 87% of the colonies detected. However, stereo video only detected 57% of marked coral recruits. Estimates of colony size made with the stereo video were smaller than those from the in situ technique for all growth forms, particularly for massive morphologies. Despite differences in size distributions, community assessments, which incorporated genera, growth forms and size, were similar between the two techniques. Stereo video is suitable for monitoring coral community demographics and provided data similar to in situ measures for corymbose corals, but the ability to accurately measure massive and branching coral morphologies appeared to decline with increasing colony size.

  19. Cloud particle size distributions measured with an airborne digital in-line holographic instrument

    Directory of Open Access Journals (Sweden)

    J. P. Fugal

    2009-03-01

    Holographic data from the prototype airborne digital holographic instrument HOLODEC (Holographic Detector for Clouds), taken during test flights, are digitally reconstructed to obtain the size (equivalent diameters in the range 23 to 1000 μm), three-dimensional position, and two-dimensional profile of ice particles; ice particle size distributions and number densities are then calculated using an automated algorithm with minimal user intervention. The holographic method offers the advantages of a well-defined sample volume size that is not dependent on particle size or airspeed, and offers a unique method of detecting shattered particles. The holographic method also allows the volume sample rate to be increased beyond that of the prototype HOLODEC instrument, limited solely by camera technology.

    HOLODEC size distributions taken in mixed-phase regions of cloud compare well to size distributions from a PMS FSSP probe also onboard the aircraft during the test flights. A conservative algorithm for detecting shattered particles, utilizing the particles' depth position along the optical axis, eliminates the obvious ice particle shattering events from the data set. In this particular case, the size distributions of non-shattered particles are reduced by approximately a factor of two for particles 15 to 70 μm in equivalent diameter, compared to the size distributions of all particles.

  20. Ultrafine particle size distributions near freeways: Effects of differing wind directions on exposure

    Science.gov (United States)

    Kozawa, Kathleen H.; Winer, Arthur M.; Fruin, Scott A.

    2012-12-01

    High ambient ultrafine particle (UFP) concentrations may play an important role in the adverse health effects associated with living near busy roadways. However, UFP size distributions change rapidly as vehicle emissions dilute and age. These size changes can influence UFP lung deposition rates and dose because deposition in the respiratory system is a strong function of particle size. Few studies to date have measured and characterized changes in near-road UFP size distributions in real-time, thus missing transient variations in size distribution due to short-term fluctuations in wind speed, direction, or particle dynamics. In this study we measured important wind direction effects on near-freeway UFP size distributions and gradients using a mobile platform with 5-s time resolution. Compared to more commonly measured perpendicular (downwind) conditions, parallel wind conditions appeared to promote formation of broader and larger size distributions of roughly one-half the particle concentration. Particles during more parallel wind conditions also changed less in size with downwind distance and the fraction of lung-deposited particle number was calculated to be 15% lower than for downwind conditions, giving a combined decrease of about 60%. In addition, a multivariate analysis of several variables found meteorology, particularly wind direction and temperature, to be important in predicting UFP concentrations within 150 m of a freeway (R2 = 0.46, p = 0.014).

  1. Spatial Distribution and Temporal Trend of Anthropogenic Organic Compounds Derived from the 2011 East Japan Earthquake.

    Science.gov (United States)

    Mizukawa, Kaoruko; Hirai, Yasuko; Sakakibara, Hiroyuki; Endo, Satoshi; Okuda, Keiji; Takada, Hideshige; Murakami-Sugihara, Naoko; Shirai, Kotaro; Ogawa, Hiroshi

    2017-08-01

    The tsunami caused by the Great East Japan Earthquake on March 11, 2011 disturbed coastal environments in the eastern Tohoku region in Japan. Numerous terrestrial materials, including anthropogenic organic compounds, were deposited in the coastal zone. To evaluate the impacts of the disaster, we analyzed PCBs, LABs, PAHs, and hopanes in mussels collected from 12 locations in the east of Tohoku during 2011-2015 (series A) by GC-ECD or GC-MS and compared them with results from mussels collected from 22 locations around Japan during 2001-2004 (series B). Early LAB concentrations in series A at some locations were higher than the maximum concentrations in series B but decreased during the 5 years. Because LABs are molecular markers for sewage, these decreases are consistent with the recovery of sewage treatment plants in these areas. Early PAH concentrations at several locations were higher than the maximum concentrations in series B but also decreased. These high concentrations would have been derived from oil spills. The decreases of both LABs and PAHs indicate that these locations were affected by the tsunami but recovered. In contrast, later high concentrations of target compounds were detected sporadically at several locations. This pattern suggests that environmental pollution was caused by human activities, such as reconstruction. To understand the long-term trend of environmental pollution induced by the disaster, continuous monitoring along the Tohoku coast is required.

  2. Spatial Distribution of Ground water Level Changes Induced by the 2006 Hengchun Earthquake Doublet

    Directory of Open Access Journals (Sweden)

    Yeeping Chia

    2009-01-01

    Water-level changes were observed in 107 wells at 67 monitoring stations in the southern coastal plain of Taiwan during the 2006 Mw 7.1 Hengchun earthquake doublet. Two consecutive coseismic changes induced by the earthquake doublet can be observed from high-frequency data. Observations from multiple-well stations indicate that the magnitude and direction of coseismic change may vary in wells of different depths. Coseismic rises were dominant on the southeast side of the coastal plain, whereas coseismic falls prevailed on the northwest side. In the transition zone, rises appeared in shallow wells whilst falls were evident in deep wells. As coseismic groundwater level changes can reflect the tectonic strain field, tectonic extension likely dominates the deep subsurface in the transition area, and possibly in the entire southern coastal plain. The coseismic rises in water level showed a tendency to decrease with distance from the hypocenter, but no clear trend was found for the coseismic falls.

  3. Particle Size Distributions Measured in the Stratospheric Plumes of Three Rockets During the ACCENT Missions

    Science.gov (United States)

    Wiedinmyer, C.; Brock, C. A.; Reeves, J. M.; Ross, M. N.; Schmid, O.; Toohey, D.; Wilson, J. C.

    2001-12-01

    The global impact of particles emitted by rocket engines on stratospheric ozone is not well understood, mainly due to the lack of comprehensive in situ measurements of the size distributions of these emitted particles. During the Atmospheric Chemistry of Combustion Emissions Near the Tropopause (ACCENT) missions in 1999, the NASA WB-57F aircraft carried the University of Denver N-MASS and FCAS instruments into the stratospheric plumes from three rockets. Size distributions of particles with diameters from 4 to approximately 2000 nm were calculated from the instrument measurements using numerical inversion techniques. The data have been averaged over 30-second intervals. The particle size distributions observed in all of the rocket plumes included a dominant mode near 60 nm diameter, probably composed of alumina particles. A smaller mode at approximately 25 nm, possibly composed of soot particles, was seen in only the plumes of rockets that used liquid oxygen and kerosene as a propellant. Aircraft exhaust emitted by the WB-57F was also sampled; the size distributions within these plumes are consistent with prior measurements in aircraft plumes. The size distributions for all rocket intercepts have been fitted to bimodal, lognormal distributions to provide input for global models of the stratosphere. Our data suggest that previous estimates of the solid rocket motor alumina size distributions may underestimate the alumina surface area emission index, and so underestimate the particle surface area available for heterogeneous chlorine activation reactions in the global stratosphere.
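
    The bimodal lognormal parameterization used to summarize these plume size distributions for modelers can be written as the sum of two lognormal modes in the number distribution,

        \frac{dN}{d\log D_p} = \sum_{i=1}^{2} \frac{N_i}{\sqrt{2\pi}\,\log\sigma_{g,i}} \exp\!\left[-\frac{\left(\log D_p - \log D_{g,i}\right)^{2}}{2\log^{2}\sigma_{g,i}}\right],

    where N_i, D_{g,i} and \sigma_{g,i} are the number concentration, geometric mean diameter and geometric standard deviation of mode i; the fitted values themselves are those reported by the authors and are not reproduced here.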

  4. Species sensitivity distribution for chlorpyrifos to aquatic organisms: Model choice and sample size.

    Science.gov (United States)

    Zhao, Jinsong; Chen, Boyu

    2016-03-01

    Species sensitivity distribution (SSD) is a widely used model that extrapolates the ecological risk to ecosystem levels from the ecotoxicity of a chemical to individual organisms. However, model choice and sample size significantly affect the development of the SSD model and the estimation of hazardous concentrations at the 5th centile (HC5). To interpret their effects, the SSD model for chlorpyrifos, a widely used organophosphate pesticide, to aquatic organisms is presented with emphases on model choice and sample size. Three subsets of median effective concentration (EC50) data with different sample sizes were obtained from ECOTOX and used to build SSD models based on parametric distributions (normal, logistic, and triangle distribution) and the nonparametric bootstrap. The SSD models based on the triangle distribution are superior to the normal and logistic distributions according to several goodness-of-fit techniques. Among all parametric SSD models, the one with the largest sample size based on the triangle distribution gives the strictest HC5, 0.141 μmol L⁻¹. The HC5 derived from the nonparametric bootstrap is 0.159 μmol L⁻¹. The minimum sample size required to build a stable SSD model is 11 based on parametric distributions and 23 based on the nonparametric bootstrap. The study suggests that model choice and sample size are important sources of uncertainty for application of the SSD model. Copyright © 2015 Elsevier Inc. All rights reserved.
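
    A minimal sketch of the two HC5 estimates described above, assuming a small hypothetical array of EC50 values (μmol/L) rather than the ECOTOX chlorpyrifos data; scipy's triangular distribution is fitted to the log10-transformed values for the parametric estimate, and the bootstrap estimate is summarized by the median of resampled 5th percentiles.

        # Parametric (triangular-distribution) and nonparametric bootstrap HC5 for an SSD.
        import numpy as np
        from scipy import stats

        ec50 = np.array([0.05, 0.12, 0.3, 0.6, 1.1, 2.5, 4.0, 8.5, 15.0, 33.0, 70.0])  # hypothetical, umol/L
        log_ec50 = np.log10(ec50)

        # Parametric SSD: triangular distribution fitted to log10(EC50), HC5 = 5th percentile
        c, loc, scale = stats.triang.fit(log_ec50)
        hc5_param = 10 ** stats.triang.ppf(0.05, c, loc=loc, scale=scale)

        # Nonparametric bootstrap: median of the 5th percentiles of resampled EC50 sets
        rng = np.random.default_rng(3)
        boot = [np.percentile(rng.choice(ec50, size=ec50.size, replace=True), 5) for _ in range(2000)]
        hc5_boot = np.median(boot)

        print(f"parametric HC5 ~ {hc5_param:.3f} umol/L, bootstrap HC5 ~ {hc5_boot:.3f} umol/L")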

  5. Rupture geometry and slip distribution of the 2016 January 21st Ms6.4 Menyuan, China earthquake inferred from Sentinel-1A InSAR measurements

    Science.gov (United States)

    Zhou, Y.

    2016-12-01

    On 21 January 2016, an Ms6.4 earthquake struck Menyuan County, Qinghai Province, China. The epicenter of the main shock and the locations of its aftershocks indicate that the Menyuan earthquake occurred near the left-lateral Lenglongling fault. However, the focal mechanism suggests that the earthquake took place on a thrust fault. In addition, field investigation indicates that the earthquake did not rupture the ground surface. Therefore, the rupture geometry is unclear, as is the coseismic slip distribution. We processed two pairs of InSAR images acquired by the ESA Sentinel-1A satellite with the ISCE software, including both ascending and descending orbits. After subsampling the coseismic InSAR images into about 800 pixels, the coseismic displacement data along the LOS direction were inverted for earthquake source parameters. We employ an improved mixed linear-nonlinear Bayesian inversion method to infer the fault geometric parameters, the slip distribution, and the Laplacian smoothing factor simultaneously. This method incorporates a hybrid differential evolution algorithm, which is an efficient global optimization algorithm. The inversion results show that the Menyuan earthquake ruptured a blind thrust fault with a strike of 124° and a dip angle of 41°. This blind fault had not been investigated before and intersects the left-lateral Lenglongling fault, but their strikes are nearly parallel. The slip sense is almost pure thrusting, and there is no significant slip within 4 km depth. The maximum slip is up to 0.3 m, and the estimated moment magnitude is Mw5.93, in agreement with the seismic inversion result. The standard error of the residuals between the InSAR data and the model prediction is as small as 0.5 cm, verifying the correctness of the inversion results.

  6. Rumen Contents and Ruminal Digesta Particle Size Distribution in Buffalo Steers Fed Three Different Size of Alfalfa

    Directory of Open Access Journals (Sweden)

    A. Teimouri Yansari

    2010-02-01

    This study was conducted to investigate the effects of three sizes of alfalfa and of time post-feeding on rumen contents and on the particle size distribution of ruminal digesta. Three ruminally fistulated buffalo steers received a diet consisting of just alfalfa that was harvested at 15% of flowering and chopped to three sizes. Individual small rectangular bales were chopped with a forage field harvester at theoretical cut lengths of 19 and 10 mm for preparation of the long and medium particle sizes, and the fine particles were prepared by milling. The geometric means and their standard deviations were 8.5, 5.5 and 2.5 mm, and 1.24, 1.16 and 1.06 mm, in the coarse, medium and fine treatments, respectively. The experimental design was a repeated 3×3 Latin square with 21-day periods. The diets were offered twice daily at 09:00 and 21:00 h at ad libitum level. The rumens were evacuated manually at 3, 7.5 and 12 h post-feeding and the total ruminal contents were separated into mat and bailable liquids. The dry matter weight distribution of the total recovered particles was determined by a wet-sieving procedure and used to partition the ruminal mat and bailable liquids among percentages of large (≥4.0 mm), medium (<4.0 mm and ≥1.18 mm), and fine (<1.18 mm and ≥0.05 mm) particles. Intake did not markedly influence the distribution of the different particle fractions, whereas particle size and time post-feeding had a pronounced effect. With increasing time after feeding, the percentage of large and medium particles significantly decreased, whereas the percentage of fine particles significantly increased. The ruminal digesta particle distributions illustrated intensive particle breakdown in the reticulo-rumen, more so for coarse particles than for the others. Dry matter contents and the proportion of particulate dry matter in the rumen increased as intake increased, i.e. the ruminal mat increased at the expense of bailable liquids. It can be concluded that reduction of forage particle size for buffaloes at maintenance level

  7. Modeling earthquake dynamics

    Science.gov (United States)

    Charpentier, Arthur; Durand, Marilou

    2015-07-01

    In this paper, we investigate questions arising in Parsons and Geist (Bull Seismol Soc Am 102:1-11, 2012). Pseudo-causal models connecting magnitudes and waiting times are considered, through generalized regression. We use conditional models (magnitude given previous waiting time, and conversely) as an extension of the joint distribution model described in Nikoloulopoulos and Karlis (Environmetrics 19: 251-269, 2008). On the one hand, we fit a Pareto distribution for earthquake magnitudes, where the tail index is a function of the waiting time following the previous earthquake; on the other hand, waiting times are modeled using a Gamma or a Weibull distribution, where the parameters are functions of the magnitude of the previous earthquake. We use those two models, alternately, to generate the dynamics of earthquake occurrence, and to estimate the probability of occurrence of several earthquakes within a year or a decade.
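
    The alternating conditional scheme can be sketched as follows; the link functions tying the Pareto tail index to the preceding waiting time and the Weibull scale to the preceding magnitude are purely illustrative placeholders, not the regressions fitted by the authors.

        # Sketch: alternately simulating magnitudes and waiting times with conditional models.
        # Link functions and parameter values are illustrative placeholders only.
        import numpy as np

        rng = np.random.default_rng(4)
        m_min = 4.0

        def pareto_tail_index(waiting_time):
            # placeholder link: slightly heavier magnitude tail after long quiescence
            return max(5.0, 10.0 - 0.5 * np.log1p(waiting_time))

        def weibull_scale(magnitude):
            # placeholder link: larger events followed on average by longer waits (days)
            return 5.0 * np.exp(0.3 * (magnitude - m_min))

        mags, waits = [m_min * (1 + rng.pareto(10.0))], []
        for _ in range(1000):
            w = weibull_scale(mags[-1]) * rng.weibull(1.2)           # waiting time | last magnitude
            m = m_min * (1 + rng.pareto(pareto_tail_index(w)))       # magnitude | waiting time
            waits.append(w)
            mags.append(m)

        print(f"mean waiting time {np.mean(waits):.1f} days, largest magnitude {max(mags):.1f}")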

  8. Nanoparticles and metrology: a comparison of methods for the determination of particle size distributions

    Science.gov (United States)

    Coleman, Victoria A.; Jämting, Åsa K.; Catchpoole, Heather J.; Roy, Maitreyee; Herrmann, Jan

    2011-10-01

    Nanoparticles and products incorporating nanoparticles are a growing branch of nanotechnology industry. They have found a broad market, including the cosmetic, health care and energy sectors. Accurate and representative determination of particle size distributions in such products is critical at all stages of the product lifecycle, extending from quality control at point of manufacture to environmental fate at the point of disposal. Determination of particle size distributions is non-trivial, and is complicated by the fact that different techniques measure different quantities, leading to differences in the measured size distributions. In this study we use both mono- and multi-modal dispersions of nanoparticle reference materials to compare and contrast traditional and novel methods for particle size distribution determination. The methods investigated include ensemble techniques such as dynamic light scattering (DLS) and differential centrifugal sedimentation (DCS), as well as single particle techniques such as transmission electron microscopy (TEM) and microchannel resonator (ultra high-resolution mass sensor).

  9. PARTICLE SIZE DISTRIBUTIONS FROM SELECT RESIDENCES PARTICIPATING IN THE NERL RTP PM PANEL STUDY

    Science.gov (United States)

    Particle Size Distributions from Select Residences Participating in the NERL RTP PM Panel Study. Alan Vette, Ronald Williams, and Michael Riediker, U.S. Environmental Protection Agency, National Exposure Research Laboratory, Research Triangle Park, NC 27711; Jonathan Thornburg...

  10. Geostatistical modeling of regionalized grain-size distributions using Min/Max Autocorrelation Factors

    National Research Council Canada - National Science Library

    Desbarats, A J

    2001-01-01

    .... Since the number of classes may be large and abundances in adjacent classes may be highly cross-correlated, practical simulation of regionalized grain-size distributions requires an efficient method...

  11. A SURFACTANT-ASSISTED APPROACH FOR PREPARING COLLOIDAL AZO POLYMER SPHERES WITH NARROW SIZE DISTRIBUTION

    Institute of Scientific and Technical Information of China (English)

    Xiao-lan Tong; Yao-bang Li; Ya-ning He; Xiao-gong Wang

    2006-01-01

    A surfactant-assisted method for preparing colloidal spheres with a narrow size distribution from a polydispersed azo polymer has been developed in this work. The colloidal spheres were formed through gradual hydrophobic aggregation of the polymeric chains in THF-H2O dispersion media, which was induced by a steady increase in the water content. Results showed that the addition of a small amount of surfactant (SDBS) could significantly narrow the size distribution of the colloidal spheres. The size distribution of the colloidal spheres was determined by the concentration of the azo polymer and the amount of surfactant in the system. When the polymer concentration and the surfactant amount were in a proper range, colloidal spheres with a narrow size distribution could be obtained. The colloidal spheres formed by this method could be elongated along the polarization direction of the laser beams upon Ar+ laser irradiation. The colloidal spheres are considered to be a new type of colloid-based functional material.

  12. MetaZipf. A dynamic meta-analysis of city size distributions

    National Research Council Canada - National Science Library

    Clémentine Cottineau

    2017-01-01

    .... However, little evidence exists as to the factors which influence the level of urban unevenness, as expressed by the slope of the rank-size distribution, partly because the diversity of results found...

  13. Solitary waves in a dusty plasma with charge fluctuation and dust size distribution and vortex like ion distribution

    Energy Technology Data Exchange (ETDEWEB)

    Roy Chowdhury, K. [Department of Physics, J.C.C. College, Kolkata 700 033 (India); Mishra, Amar P. [High Energy Physics Division, Department of Physics Jadavpur University, Kolkata 700 032 (India); Roy Chowdhury, A. [High Energy Physics Division, Department of Physics Jadavpur University, Kolkata 700 032 (India)

    2006-07-15

    A modified KdV equation is derived for the propagation of nonlinear waves in a dusty plasma containing N different dust grains with a size distribution and charge fluctuation, with electrons in the background. The ions are assumed to obey a vortex-like distribution due to their non-isothermal nature. The standard distribution for the dust size is a power law. The variation of the soliton width is studied with respect to the normalized size of the dust grains. A numerical solution of the equation is obtained by taking the soliton solution of the modified KdV equation as the initial pulse; it shows considerable broadening of the pulse, and the variation of the width with β₁ is shown.

  14. Effect of Lithium Ions on Copper Nanoparticle Size, Shape, and Distribution

    Directory of Open Access Journals (Sweden)

    Kyung-Deok Jang

    2012-01-01

    Copper nanoparticles were synthesized using lithium ions to increase the aqueous electrical conductivity of the solution and to precisely control the size, shape, and size distribution of the particles. In this study, the conventional approach of increasing particle size through the concentration of copper ions and PGPPE in a copper chloride solution was compared to increasing the concentration of lithium chloride while the copper chloride concentration was held constant. Particle size and shape were characterized by TEM, and the size distribution of the particles at different concentrations was obtained by particle size analysis. Increasing the concentration of copper ions in the solution greatly increased the aqueous electric conductivity and the size of the particles, but led to a wide size distribution ranging from 150 nm to 400 nm and rough particle morphology. The addition of lithium ions increased the size of the particles but maintained them within a range around 250 nm. In addition, the particles exhibited a spherical shape as determined by TEM. The addition of lithium ions to the solution has the potential to yield nanoparticles with optimal characteristics for printing applications by maintaining a narrow size range and spherical shape.

  15. Calibration of the passive cavity aerosol spectrometer probe for airborne determination of the size distribution

    Directory of Open Access Journals (Sweden)

    Y. Cai

    2013-09-01

    Full Text Available This work describes calibration methods for the particle sizing and particle concentration systems of the passive cavity aerosol spectrometer probe (PCASP. Laboratory calibrations conducted over six years, in support of the deployment of a PCASP on a cloud physics research aircraft, are analyzed. Instead of using the many calibration sizes recommended by the PCASP manufacturer, a relationship between particle diameter and scattered light intensity is established using three sizes of mobility-selected polystyrene latex particles, one for each amplifier gain stage. In addition, studies of two factors influencing the PCASP's determination of the particle size distribution – amplifier baseline and particle shape – are conducted. It is shown that the PCASP-derived size distribution is sensitive to adjustments of the sizing system's baseline voltage, and that for aggregates of spheres, a PCASP-derived particle size and a sphere-equivalent particle size agree within uncertainty dictated by the PCASP's sizing resolution. Robust determinations of aerosol concentration, and size distribution, also require calibration of the PCASP's aerosol flowrate sensor. Sensor calibrations, calibration drift, and the sensor's non-linear response are documented.

  16. Quality of the log-geometric distribution extrapolation for smaller undiscovered oil and gas pool size

    Science.gov (United States)

    Chenglin, L.; Charpentier, R.R.

    2010-01-01

    The U.S. Geological Survey procedure for the estimation of the general form of the parent distribution requires that the parameters of the log-geometric distribution be calculated and analyzed for their sensitivity to different conditions. In this study, we derive the shape factor of a log-geometric distribution from the ratio of frequencies between adjacent bins. The shape factor has a log straight-line relationship with the ratio of frequencies. Additionally, equations for the ratio of the mean size to the lower size-class boundary are derived. For a specific log-geometric distribution, we find that the ratio of the mean size to the lower size-class boundary is constant. We apply our analysis to simulations based on oil and gas pool distributions from four petroleum systems of Alberta, Canada and four generated distributions. Each petroleum system in Alberta has a different shape factor. Generally, the shape factors in the four petroleum systems stabilize as the number of discovered pools increases. For a log-geometric distribution, the shape factor becomes stable when the number of discovered pools exceeds 50, and the shape factor is influenced by the exploration efficiency when the exploration efficiency is less than 1. The simulation results show that calculated shape factors increase with those of the parent distributions, and undiscovered oil and gas resources estimated through the log-geometric distribution extrapolation are smaller than the actual values. © 2010 International Association for Mathematical Geology.
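
    In generic terms (not the specific USGS parameterization), a log-geometric size-frequency distribution assigns pool counts n_i to logarithmically defined size classes i such that the ratio of adjacent-bin frequencies is constant,

        \frac{n_{i+1}}{n_i} = q, \qquad n_i = n_1\,q^{\,i-1}, \qquad \log n_i = \log n_1 + (i-1)\log q,

    so the distribution plots as a straight line of log frequency against size class, which is the sense in which the shape factor follows from the adjacent-bin frequency ratio.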

  17. FRACTAL SCALING OF PARTICLE AND PORE SIZE DISTRIBUTIONS AND ITS RELATION TO SOIL HYDRAULIC CONDUCTIVITY

    Directory of Open Access Journals (Sweden)

    BACCHI O.O.S.

    1996-01-01

    Full Text Available Fractal scaling has been applied to soils, both for void and solid phases, as an approach to characterize the porous arrangement, attempting to relate particle-size distribution to soil water retention and soil water dynamic properties. One important point of such an analysis is the assumption that the void space geometry of soils reflects its solid phase geometry, taking into account that soil pores are lined by the full range of particles, and that their fractal dimension, which expresses their tortuosity, could be evaluated by the fractal scaling of particle-size distribution. Other authors already concluded that although fractal scaling plays an important role in soil water retention and porosity, particle-size distribution alone is not sufficient to evaluate the fractal structure of porosity. It is also recommended to examine the relationship between fractal properties of solids and of voids, and in some special cases, look for an equivalence of both fractal dimensions. In the present paper data of 42 soil samples were analyzed in order to compare fractal dimensions of pore-size distribution, evaluated by soil water retention curves (SWRC of soils, with fractal dimensions of soil particle-size distributions (PSD, taking the hydraulic conductivity as a standard variable for the comparison, due to its relation to tortuosity. A new procedure is proposed to evaluate the fractal dimension of pore-size distribution. Results indicate a better correlation between fractal dimensions of pore-size distribution and the hydraulic conductivity for this set of soils, showing that for most of the soils analyzed there is no equivalence of both fractal dimensions. For most of these soils the fractal dimension of particle-size distribution does not indicate properly the pore trace tortuosity. A better equivalence of both fractal dimensions was found for sandy soils.

  18. Aged boreal biomass-burning aerosol size distributions from BORTAS 2011

    OpenAIRE

    K. M. Sakamoto; Allan, J.D.; Coe, H.; Taylor, J. W.; T. J. Duck; Pierce, J. R.

    2015-01-01

    Biomass-burning aerosols contribute to aerosol radiative forcing on the climate system. The magnitude of this effect is partially determined by aerosol size distributions, which are functions of source fire characteristics (e.g. fuel type, MCE) and in-plume microphysical processing. The uncertainties in biomass-burning emission number–size distributions in climate model inventories lead to uncertainties in the CCN (cloud condensation nuclei) concentrations and forcing estima...

  19. Placement and Sizing of DG Using PSO&HBMO Algorithms in Radial Distribution Networks

    Directory of Open Access Journals (Sweden)

    M. A.Taghikhani

    2012-09-01

    Optimal placement and sizing of DG in a distribution network is an optimization problem with continuous and discrete variables. Many researchers have used evolutionary methods for finding the optimal DG placement and sizing. This paper proposes a hybrid PSO&HBMO algorithm for optimal placement and sizing of distributed generation (DG) in a radial distribution system to minimize the total power loss and improve the voltage profile. The proposed method is tested on a standard 13-bus radial distribution system and simulations are carried out using MATLAB software. The simulation results indicate that the PSO&HBMO method can obtain better results than the simple heuristic search method and the PSO algorithm. The method has the potential to be a tool for identifying the best location and rating of a DG to be installed for improving the voltage profile and reducing line losses in an electrical power system. Moreover, a current reduction is obtained in the distribution system.

  20. Modeling Size-number Distributions of Seeds for Use in Soil Bank Studies

    Institute of Scientific and Technical Information of China (English)

    Hugo Casco; Alexandra Soveral Dias; Luís Silva Dias

    2008-01-01

    Knowledge of soil seed banks is essential to understand the dynamics of plant populations and communities and would greatly benefit from the integration of existing knowledge on ecological correlations of seed size and shape. The present study aims to establish a feasible and meaningful method to describe size-number distributions of seeds in multi-species situations. For that purpose, size-number distributions of seeds with known length, width and thickness were determined by sequential sieving. The most appropriate combination of sieves and seed dimensions was established, and the adequacy of the power function and the Weibull model to describe size-number distributions of spherical, non-spherical, and all seeds was investigated. We found that the geometric mean of seed length, width and thickness was the most adequate size estimator, providing shape-independent measures of seed volume directly related to the sieve mesh side, and that both the power function and the Weibull model provide high quality descriptions of size-number distributions of spherical, non-spherical, and all seeds. We also found that, in spite of its slightly lower accuracy, the power function is, at this stage, a more trustworthy model to characterize size-number distributions of seeds in soil banks because in some Weibull equations the estimates of the scale parameter were not acceptable.

  1. Narrow size distributed Ag nanoparticles grown by spin coating and thermal reduction: effect of processing parameters

    Science.gov (United States)

    Ansari, A. A.; Sartale, S. D.

    2016-08-01

    A simple method to grow uniform-sized Ag nanoparticles with a narrow size distribution on flat supports (glass and Si substrates) via spin coating of a Ag+ ion (AgNO3) solution followed by thermal reduction in H2 is presented. These grown nanoparticles can be used as a model catalytic system to study size-dependent oxygen reduction reaction (ORR) activity. Ag nanoparticle formation was confirmed by local surface plasmon resonance and x-ray photoelectron spectroscopy measurements. The influences of the process parameters (revolutions per minute (rpm), ramp and salt concentration) on the size, density and size uniformity of the grown Ag nanoparticles are studied. With increasing rpm and ramp the size decreases and the particle number density increases, whereas the size dispersion improves. The catalytic activity of the grown Ag particles for the ORR is studied and it is found that the catalytic performance depends on the size as well as the number density of the grown Ag nanoparticles.

  2. A Statistical Model of Chinese Earthquake Loss Distribution

    Institute of Scientific and Technical Information of China (English)

    蔡铨; 林正炎

    2012-01-01

    The estimation of loss distributions has always been an important problem for insurance companies. Several parametric and nonparametric methods exist for fitting loss distributions. In this paper, we propose a method combining parametric and nonparametric approaches. We first determine the threshold between large and small losses from the mean excess plot, then fit the losses above the threshold with a generalized Pareto distribution (the parametric part) and the losses below the threshold with a transformed kernel density estimate (the nonparametric part). Finally, using a data set of annual Chinese earthquake losses, we compare the proposed method with other existing methods through an error analysis and obtain satisfactory results.
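
    A minimal sketch of the composite fit described above, assuming a hypothetical array of annual losses and a threshold already chosen from the mean excess plot; scipy's genpareto fit stands in for the generalized Pareto step, and a plain (untransformed) gaussian_kde stands in for the kernel density step.

        # Sketch: generalized Pareto above a threshold, kernel density estimate below it.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        losses = rng.lognormal(mean=2.0, sigma=1.2, size=400)      # hypothetical annual losses
        threshold = np.quantile(losses, 0.9)                       # stand-in for the mean-excess choice

        tail = losses[losses > threshold] - threshold              # excesses over the threshold
        xi, loc, beta = stats.genpareto.fit(tail, floc=0.0)        # GPD fit to the excesses
        body_kde = stats.gaussian_kde(losses[losses <= threshold]) # kernel density for the body

        def loss_sf(x):
            """Survival function P(Loss > x) of the composite body/tail model."""
            p_tail = np.mean(losses > threshold)
            if x > threshold:
                return p_tail * stats.genpareto.sf(x - threshold, xi, loc=0.0, scale=beta)
            body_cdf = body_kde.integrate_box_1d(0.0, x) / body_kde.integrate_box_1d(0.0, threshold)
            return 1.0 - (1.0 - p_tail) * body_cdf

        print("P(loss exceeds 100):", round(loss_sf(100.0), 4))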

  3. A facile synthesis of Te nanoparticles with binary size distribution by green chemistry

    Science.gov (United States)

    He, Weidong; Krejci, Alex; Lin, Junhao; Osmulski, Max E.; Dickerson, James H.

    2011-04-01

    Our work reports a facile route to colloidal Te nanocrystals with binary uniform size distributions at room temperature. The binary-sized Te nanocrystals were well separated into two size regimes and assembled into films by electrophoretic deposition. The research provides a new platform for nanomaterials to be efficiently synthesized and manipulated. Electronic supplementary information (ESI) available: Synthetic procedures, FTIR analysis, ED pattern, AFM image, and EPD current curve. See DOI: 10.1039/c1nr10025d

  4. Maximum size distributions in tropical forest communities: relationships with rainfall and disturbance

    NARCIS (Netherlands)

    Poorter, L.; Hawthorne, W.D.; Sheil, D.; Bongers, F.J.J.M.

    2008-01-01

    The diversity and structure of communities are partly determined by how species partition resource gradients. Plant size is an important indicator of species position along the vertical light gradient in the vegetation. 2. Here, we compared the size distribution of tree species in 44 Ghanaian

  5. Evaluation of 1H NMR relaxometry for the assessment of pore size distribution in soil samples

    NARCIS (Netherlands)

    Jaeger, F.; Bowe, S.; As, van H.; Schaumann, G.E.

    2009-01-01

    1H NMR relaxometry is used in earth science as a non-destructive and time-saving method to determine pore size distributions (PSD) in porous media with pore sizes ranging from nm to mm. This is a broader range than generally reported for results from X-ray computed tomography (X-ray CT) scanning, wh

  6. Aerosol size distribution and classification. (Latest citations from the NTIS bibliographic database). Published Search

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    The bibliography contains citations concerning aerosol particle size distribution and classification pertaining to air pollution detection and health studies. Aerosol size measuring methods, devices, and apparatus are discussed. Studies of atmospheric, industrial, radioactive, and marine aerosols are presented. (Contains 50-250 citations and includes a subject term index and title list.) (Copyright NERAC, Inc. 1995)

  7. How dispersal limitation shapes species-body size distributions in local communities

    NARCIS (Netherlands)

    Etienne, R.S.; Olff, H.

    2004-01-01

    A critical but poorly understood pattern in macroecology is the often unimodal species - body size distribution ( also known as body size - diversity relationship) in a local community ( embedded in a much larger regional species pool). Purely neutral community models that assume functional

  8. A simple algorithm for measuring particle size distributions on an uneven background from TEM images

    DEFF Research Database (Denmark)

    Gontard, Lionel Cervera; Ozkaya, Dogan; Dunin-Borkowski, Rafal E.

    2011-01-01

    Nanoparticles have a wide range of applications in science and technology. Their sizes are often measured using transmission electron microscopy (TEM) or X-ray diffraction. Here, we describe a simple computer algorithm for measuring particle size distributions from TEM images in the presence of a...

  9. Model independent determination of colloidal silica size distributions via analytical ultracentrifugation

    NARCIS (Netherlands)

    Planken, K.L.; Kuipers, B.W.M.; Philipse, A.P.

    2008-01-01

    We report a method to determine the particle size distribution of small colloidal silica spheres via analytical ultracentrifugation and show that the average particle size, variance, standard deviation, and relative polydispersity can be obtained from a single sedimentation velocity (SV) analytical

  10. The dune size distribution and scaling relations of barchan dune fields

    NARCIS (Netherlands)

    Durán, O.; Schwämmle, V.; Lind, P.G.; Herrmann, H.J.

    2009-01-01

    Barchan dunes emerge as a collective phenomenon involving the generation of thousands of them in so-called barchan dune fields. By measuring the size and position of dunes in Moroccan barchan dune fields, we find that these dunes tend to distribute uniformly in space and follow a unique size distrib

  11. Measurement of particle size distribution of soil and selected aggregate sizes using the hydrometer method and laser diffractometry

    Science.gov (United States)

    Guzmán, G.; Gómez, J. A.; Giráldez, J. V.

    2010-05-01

    Soil particle size distribution has traditionally been determined by the hydrometer or the sieve-pipette methods, both of them time consuming and requiring a relatively large soil sample. This might be a limitation in situations, such as the analysis of suspended sediment, where the sample is small. A possible alternative to these methods are optical techniques such as laser diffractometry. However, the literature indicates that the use of this technique as an alternative to traditional methods is still limited, because of the difficulty in replicating the results obtained with the standard methods. In this study we present the percentages of soil grain sizes determined using laser diffractometry within ranges set between 0.04 and 2000 μm. A Beckman-Coulter® LS-230 with a 750 nm laser beam and software version 3.2 was used on five soils representative of southern Spain: Alameda, Benacazón, Conchuela, Lanjarón and Pedrera. In three of the studied soils (Alameda, Benacazón and Conchuela) the particle size distribution of each aggregate size class was also determined. Aggregate size classes were obtained by dry sieve analysis using a Retsch AS 200 basic®. Two hundred grams of air-dried soil were sieved for 150 s at an amplitude of 2 mm, yielding nine different size classes between 2000 μm and 10 μm. Analyses were performed in triplicate. The soil sample preparation was also adapted to our conditions. A small amount of each soil sample (less than 1 g) was transferred to the fluid module filled with running water and disaggregated by ultrasonication at energy level 4 with 80 ml of sodium hexametaphosphate solution for 580 seconds. Two replicates of each sample were measured. Each measurement was made with a 90 second reading at a pump speed of 62. After the laser diffractometry analysis, each soil and its aggregate classes were processed by calibrating its own optical model, fitting the optical parameters that mainly depend on the color and the shape of the analyzed particles. As a

  12. Except in Highly Idealized Cases, Repeating Earthquakes and Laboratory Earthquakes are Neither Time- nor Slip-Predictable

    Science.gov (United States)

    Rubinstein, J. L.; Ellsworth, W. L.; Beeler, N. M.; Chen, K. H.; Lockner, D. A.; Uchida, N.

    2010-12-01

    Sequences of repeating earthquakes in California, Taiwan and Japan are characterized by interevent times that are more regular than expected from a Poisson process, and are better described by a 2-parameter renewal model (mean rate and variability) of independent and identically distributed intervals that depends only on the time of the last event. Using precise measurements of the relative size of earthquakes in each repeating earthquake family, we examine the additional predictive power of the time- and slip-predictable models. We find that neither model offers statistically significant predictive power over a renewal model. In a highly idealized laboratory system, we find that earthquakes are both time- and slip-predictable, but with the addition of a small amount of complexity (e.g., an uneven fault surface) the time- and slip-predictable models offer little or no advantage over a much simpler renewal model that has constant slip or constant recurrence intervals. Given that repeating natural and laboratory earthquakes are not well explained by either time- or slip-predictability, we conclude that these models are too idealized to explain the recurrence behavior of natural earthquakes. These models likely fail because their key assumptions ((1) constant loading rate, (2) constant failure threshold or constant final stress, and (3) a fault that is locked throughout the loading cycle) are too idealized to apply in a complex, natural system. While the time- and slip-predictable models do not appear to work for natural earthquakes, we note that moment (slip) scales with recurrence time according to the mean magnitude of each repeating earthquake family in Parkfield, CA, but not in the other locations. While earthquake size and recurrence time are related in Parkfield, the simplest slip-predictable model still does not work, because fitting a linear trend to the data predicts a non-zero earthquake size at instantaneous recurrence time. This scaling, its presence
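
    As a minimal illustration of the intercept test mentioned above (not the authors' statistical analysis), the sketch below fits a linear trend of seismic moment against recurrence interval for a hypothetical repeating-earthquake family; under the simplest slip-predictable model the fit should pass close to the origin.

```python
import numpy as np

# Hypothetical repeating-earthquake family: recurrence intervals (years) and
# seismic moments (N*m); a real catalog would replace these arrays.
recurrence_yr = np.array([1.8, 2.1, 2.4, 2.9, 3.3, 3.6])
moment_nm = np.array([2.1e13, 2.3e13, 2.6e13, 3.0e13, 3.4e13, 3.6e13])

# Least-squares linear trend: moment = slope * recurrence + intercept.
slope, intercept = np.polyfit(recurrence_yr, moment_nm, 1)

# Under the simplest slip-predictable model the trend should pass through the
# origin; a clearly non-zero intercept at zero recurrence time argues against it.
print(f"slope = {slope:.3e} N*m/yr, intercept = {intercept:.3e} N*m")
```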

  13. Synthesis of iron oxide nanoparticles of narrow size distribution on polysaccharide templates

    Indian Academy of Sciences (India)

    M Nidhin; R Indumathy; K J Sreeram; Balachandran Unni Nair

    2008-02-01

    We report here the preparation of nanoparticles of iron oxide in the presence of polysaccharide templates. The interaction between iron(II) sulfate and the template has been carried out in aqueous phase, followed by the selective and controlled removal of the template to achieve a narrow distribution of particle size. The iron oxide particles obtained have been characterized for their stability in solvent media, size, size distribution and crystallinity; it was found that as the negative value of the zeta potential increases, the particle size decreases. A narrow particle size distribution at about 275 nm was obtained with chitosan and starch templates. SEM measurements further confirm the particle size measurement. Diffuse reflectance UV–vis spectra show that the template is completely removed from the final iron oxide particles, and powder XRD measurements show that the peaks of the diffractogram are in agreement with the theoretical data for hematite. The salient observation of our study is that there is a direct correlation between zeta potential, polydispersity index, bandgap energy and particle size. The crystallite size of the particles was found to be 30–35 nm. A large negative zeta potential was found to be advantageous for achieving lower particle sizes, owing to the particles remaining discrete without agglomeration.

  14. A Possible Divot in the Size Distribution of the Kuiper Belt's Scattering Objects

    Science.gov (United States)

    Shankman, C.; Gladman, B. J.; Kaib, N.; Kavelaars, J. J.; Petit, J. M.

    2013-02-01

    Via joint analysis of a calibrated telescopic survey, which found scattering Kuiper Belt objects, and models of their expected orbital distribution, we explore the scattering-object (SO) size distribution. Although for D > 100 km the number of objects quickly rises as diameters decrease, we find a relative lack of smaller objects, ruling out a single power law at greater than 99% confidence. After studying traditional "knees" in the size distribution, we explore other formulations and find that, surprisingly, our analysis is consistent with a very sudden decrease (a divot) in the number distribution as diameters decrease below 100 km, which then rises again as a power law. Motivated by other dynamically hot populations and the Centaurs, we argue for a divot size distribution in which the number of smaller objects rises again as expected via collisional equilibrium. Extrapolation yields enough kilometer-scale SOs to supply the nearby Jupiter-family comets. Our interpretation is that this divot feature is a preserved relic of the size distribution made by planetesimal formation, now "frozen in" to portions of the Kuiper Belt sharing a "hot" orbital inclination distribution, explaining several puzzles in Kuiper Belt science. Additionally, we show that to match today's SO inclination distribution, the supply source that was scattered outward must already have been vertically heated to of order 10°.
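
    The sketch below is only an illustrative parameterization of a "divot" differential size distribution as described above: a steep power law above a break diameter, a sudden drop by a contrast factor, and a shallower power law below it. All exponents, the break diameter, and the contrast factor are hypothetical, not the values inferred by the authors.

```python
import numpy as np

# Illustrative "divot" differential size distribution: slope q1 above the break
# diameter Db, a sudden drop by a contrast factor at Db, then slope q2 below it.
q1, q2, Db_km, contrast = 8.0, 2.5, 100.0, 6.0

def divot_number(D_km):
    D = np.atleast_1d(D_km).astype(float)
    return np.where(D >= Db_km,
                    (D / Db_km) ** (-q1),
                    (1.0 / contrast) * (D / Db_km) ** (-q2))

# Relative differential number density across the break.
print(divot_number([50.0, 90.0, 100.0, 150.0, 200.0]))
```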

  15. Evaluation of eruptive energy of a pyroclastic deposit applying fractal geometry to fragment size distributions

    Science.gov (United States)

    Paredes Marino, Joali; Morgavi, Daniele; Di Vito, Mauro; de Vita, Sandro; Sansivero, Fabio; Perugini, Diego

    2016-04-01

    Fractal fragmentation theory has been applied to characterize the particle size distribution of pyroclastic deposits generated by volcanic explosions. Recent works have demonstrated that the fractal dimension of grain size distributions can be used as a proxy for estimating the energy associated with volcanic eruptions. In this work we seek to establish a preliminary analytical protocol that can be applied to better characterize volcanic fall deposits and derive the potential energy for fragmentation that was stored in the magma prior to or during an explosive eruption. The methodology is based on two different techniques for determining the grain-size distribution of the pyroclastic samples: 1) dry manual sieving (particles larger than 297 μm), and 2) automatic grain size analysis via a CamSizer-P4® device; the latter measures the projected-area distribution, yielding a cumulative distribution based on volume fraction for particles up to 30 mm. Size distribution data have been analyzed by applying the fractal fragmentation theory, estimating the value of Df, i.e., the fractal dimension of fragmentation. In order to test our protocol we studied the Cretaio eruption, Ischia island, Italy. Results indicate that size distributions of pyroclastic fall deposits follow a fractal law, indicating that the fragmentation process of these deposits reflects a scale-invariant fragmentation mechanism. Matching the results from the manual and automated techniques allows us to obtain a value of the "fragmentation energy" for the explosive eruptive events that generated the Cretaio deposits. We highlight the importance of these results, based on fractal statistics, as an additional volcanological tool for addressing volcanic risk based on the analysis of grain size distributions of natural pyroclastic deposits. Keywords: eruptive energy, fractal dimension of fragmentation, pyroclastic fallout.
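
    As a minimal sketch of how a fractal dimension of fragmentation can be extracted from grain-size data (one common mass-based form, not necessarily the exact procedure used by the authors), the code below uses the relation M(<r)/M_T ~ r^(3 - Df), so that Df follows from the log-log slope of the cumulative mass fraction finer than r. The sieve data are hypothetical.

```python
import numpy as np

# Hypothetical sieve/CamSizer data: particle size (mm) and cumulative mass
# fraction finer than that size.
size_mm = np.array([0.3, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 30.0])
mass_finer = np.array([0.02, 0.05, 0.11, 0.22, 0.40, 0.63, 0.85, 1.00])

# Mass-size fractal relation: M(<r)/M_T ~ r**(3 - Df), so the log-log slope b
# of the cumulative curve gives Df = 3 - b.
b, _ = np.polyfit(np.log(size_mm), np.log(mass_finer), 1)
Df = 3.0 - b
print(f"fractal dimension of fragmentation Df ~ {Df:.2f}")
```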

  16. The 3-D aftershock distribution of three recent M5~5.5 earthquakes in the Anza region, California

    Science.gov (United States)

    Zhang, Q.; Wdowinski, S.; Lin, G.

    2011-12-01

    The San Jacinto fault zone (SJFZ) exhibits the highest level of seismicity of any region in southern California. On average, it produces four earthquakes per day, most of them at depths of 10-17 km. Over the past decade, increasing seismic activity has occurred in the Anza region, including three M5~5.5 events and their aftershock sequences. These events occurred in 2001, 2005, and 2010. In this research we map the 3-D distribution of the aftershocks of these three events to evaluate their rupture geometry and better understand the unusual deep seismic pattern along the SJFZ, which has been termed "deep creep" (Wdowinski, 2009). We relocated 97,562 events from 1981 to 2011 in the Anza region by applying the Source-Specific Station Term (SSST) method (Lin et al., 2006), using an accurate 1-D velocity model derived from the 3-D model of Lin et al. (2007). In order to separate the aftershock sequences from background seismicity, we characterized each of the three aftershock sequences using Omori's law. Preliminary results show that all three sequences had a similar geometry of deep, elongated aftershock distributions. Most aftershocks occurred at depths of 10-17 km and extended over 70-km-long segments of the SJFZ, centered on the mainshock hypocenters. A comparative study of other M5~5.5 mainshocks and their aftershock sequences in southern California reveals a very different geometrical pattern, suggesting that the three Anza M5~5.5 events are unique and may be indicative of "deep creep" deformation processes. References: 1. Lin, G. and Shearer, P. M., 2006, The COMPLOC earthquake location package, Seism. Res. Lett. 77, pp. 440-444. 2. Lin, G., Shearer, P. M., Hauksson, E., and Thurber, C. H., 2007, A three-dimensional crustal seismic velocity model for southern California from a composite event method, J. Geophys. Res. 112, B12306, doi:10.1029/2007JB004977. 3. Wdowinski, S., 2009, Deep creep as a cause for the excess seismicity along the San Jacinto fault, Nat. Geosci., doi:10.1038/NGEO684.
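
    As a minimal sketch of the Omori-law characterization mentioned above (a simple least-squares fit to binned daily counts rather than the maximum-likelihood approach often used in practice), the code below fits the modified Omori law n(t) = K/(t + c)^p to hypothetical aftershock counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def omori_rate(t, K, c, p):
    """Modified Omori law: aftershock rate n(t) = K / (t + c)**p."""
    return K / (t + c) ** p

# Hypothetical aftershock counts per day after a mainshock (days 1..30).
days = np.arange(1, 31, dtype=float)
counts = 120.0 / (days + 0.5) ** 1.1 + np.random.default_rng(0).poisson(1.0, days.size)

popt, _ = curve_fit(omori_rate, days, counts, p0=(100.0, 1.0, 1.0))
K, c, p = popt
print(f"K = {K:.1f}, c = {c:.2f} days, p = {p:.2f}")
```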

  17. Subcritical, Critical and Supercritical Size Distributions in Random Coagulation-Fragmentation Processes

    Institute of Scientific and Technical Information of China (English)

    Dong HAN; Xin Sheng ZHANG; Wei An ZHENG

    2008-01-01

    We consider the asymptotic probability distribution of the size of a reversible random coagulation-fragmentation process in the thermodynamic limit. We prove that the distributions of small, medium and the largest clusters converge to Gaussian, Poisson and 0-1 distributions in the supercritical stage (post-gelation), respectively. We show also that the mutually dependent distributions of clusters will become independent after the occurrence of a gelation transition. Furthermore, it is proved that all the number distributions of clusters are mutually independent at the critical stage (gelation), but the distributions of medium and the largest clusters are mutually dependent with positive correlation coefficient in the supercritical stage. When the fragmentation strength goes to zero, there will exist only two types of clusters in the process: one type consists of the smallest clusters, the other is the largest one, which has a size nearly equal to the volume (total number of units).

  18. Effects of transverse electron beam size on transition radiation angular distribution

    Energy Technology Data Exchange (ETDEWEB)

    Chiadroni, E., E-mail: enrica.chiadroni@lnf.infn.it [Laboratori Nazionali di Frascati-INFN, via E. Fermi, 40, 00044 Frascati (Italy); Castellano, M. [Laboratori Nazionali di Frascati-INFN, via E. Fermi, 40, 00044 Frascati (Italy); Cianchi, A. [University of Rome ' Tor Vergata' and INFN-Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome (Italy); Honkavaara, K.; Kube, G. [Deutsches Elektronen-Synchrotron, Notkestrasse 85, 22607 Hamburg (Germany)

    2012-05-01

    In this paper we consider the effect of the transverse electron beam size on the Optical Transition Radiation (OTR) angular distribution for both incoherent and coherent emission. Our results refute the theoretical arguments first presented in Optics Communications 211, 109 (2002), which predict a dependence of the incoherent OTR angular distribution on the beam size and emission wavelength. We present theoretical and experimental data here not only to validate the well-established Ginzburg-Frank theory, but also to show the impact of the transverse beam size in the case of coherent emission.

  19. Inference of stratospheric aerosol composition and size distribution from SAGE II satellite measurements

    Science.gov (United States)

    Wang, Pi-Huan; Mccormick, M. P.; Fuller, W. H.; Yue, G. K.; Swissler, T. J.; Osborn, M. T.

    1989-01-01

    A method is presented for inferring stratospheric aerosol composition and size distribution from the water vapor concentration and aerosol extinction measurements obtained in the Stratospheric Aerosol and Gas Experiment (SAGE) II and the associated temperatures from the NMC. The aerosols are assumed to be sulfuric acid-water droplets. A modified Levenberg-Marquardt algorithm is used to determine model size distribution parameters based on the SAGE II multiwavelength aerosol extinctions. It is found that the best aerosol size information is contained in the aerosol radius range between about 0.25 and 0.80 micron.
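
    A minimal sketch of this kind of retrieval is given below: a single-mode lognormal size distribution is adjusted by Levenberg-Marquardt least squares so that modeled extinctions match multiwavelength measurements. The extinction-efficiency function is a crude placeholder standing in for a proper Mie calculation, and the channel wavelengths, measured extinctions, and starting parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

wavelengths_um = np.array([0.385, 0.453, 0.525, 1.020])    # SAGE II channels
measured_ext = np.array([2.0e-3, 1.6e-3, 1.3e-3, 5.0e-4])  # hypothetical km^-1
radii_um = np.logspace(-2, 0.5, 200)                       # integration grid

def qext_placeholder(r_um, wl_um):
    # Crude stand-in for the Mie extinction efficiency of sulfuric acid-water
    # droplets; a real retrieval would call a Mie code here.
    x = 2.0 * np.pi * r_um / wl_um
    return 2.0 * (1.0 - np.exp(-x / 3.0))

def model_extinction(theta):
    # Parameters are fit in log space to keep them positive.
    n0, rg_um, sigma_g = np.exp(theta)
    dndlnr = n0 * np.exp(-0.5 * (np.log(radii_um / rg_um) / np.log(sigma_g)) ** 2)
    return np.array([
        np.trapz(dndlnr * qext_placeholder(radii_um, wl) * np.pi * radii_um ** 2,
                 np.log(radii_um))
        for wl in wavelengths_um
    ])

fit = least_squares(lambda th: model_extinction(th) - measured_ext,
                    x0=np.log([10.0, 0.2, 1.6]), method="lm")
print("retrieved (N0, rg [um], sigma_g):", np.exp(fit.x))
```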

  20. Size distribution of aerosol particles: comparison between agricultural and industrial areas in Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Tadros, M.T.Y.; Madkour, M. [Mansoura Univ., Physics Dept., Mansoura (Egypt); Elmetwally, M. [Egyptian Meteorological Authority, Abbasyia-Cairo (Egypt)

    1999-07-01

    Mie theory has been used in this work to obtain a theoretical calculation of the size distribution of aerosol particles, using the tabulated mean of the Angstrom wavelength exponent {alpha}{sub o}. A comparison was made between an industrial polluted area (Helwan, neighbouring Cairo) and a relatively unpolluted agricultural area (Mansoura, about 140 km from Cairo). The results show that the size distribution obeys the Junge power law. The size of particles in the polluted area is larger than that in the unpolluted area. (Author)
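
    As a small illustration of the Junge power law mentioned above, the sketch below builds a Junge-type size distribution from an Angstrom wavelength exponent using the commonly cited approximation alpha ≈ nu* - 2 (valid only over a limited range of exponents); the numerical values are assumptions, not the paper's results.

```python
import numpy as np

alpha = 1.3                 # hypothetical Angstrom wavelength exponent
nu_star = alpha + 2.0       # approximate Junge exponent (alpha ~ nu* - 2)

r_um = np.logspace(-2, 1, 100)             # radii from 0.01 to 10 um
c0 = 1.0                                   # arbitrary normalization constant
dN_dlogr = c0 * r_um ** (-nu_star)         # Junge power law dN/dlog10(r) ~ r^-nu*
dN_dr = dN_dlogr / (r_um * np.log(10.0))   # equivalent dN/dr ~ r^-(nu*+1)
print(nu_star, dN_dr[:3])
```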

  1. Size distribution of a metallic polydispersion through capacitive measurements in a sedimentation experiment

    Science.gov (United States)

    Salazar-Neumann, E.; Nahmad-Molinari, Y.; Ruiz-Suárez, J. C.; Ardisson, P.-L.; Arancibia-Bulnes, C. A.; Rechtman, R.

    2001-07-01

    We present a simple experimental technique to determine the size distributions of metallic polydispersions. The particles are first suspended in a viscous fluid such as glycerol, and their sedimentation is then followed by measuring the effective dielectric constant in a cylindrical cell at a fixed frequency. Thereafter, an inversion procedure of the data, based on the Maxwell-Garnett effective medium theory and Stokes' law, is used to obtain the size distribution directly. The technique is applied to three different stainless steel dispersions and compares very well with a traditional sizing method based on microphotography.
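
    A minimal sketch of the Stokes-law part of such an inversion is given below: the elapsed settling time is mapped to the radius of particles that have just cleared an assumed sensing height, so that the measured change in effective dielectric constant at each time can be assigned to a size bin. The fluid and particle properties are assumed values, and the Maxwell-Garnett step is not shown.

```python
import numpy as np

# Assumed experiment parameters (stainless steel spheres settling in glycerol).
rho_p = 7800.0   # particle density, kg/m^3
rho_f = 1260.0   # glycerol density, kg/m^3
eta = 1.4        # glycerol viscosity, Pa*s
g = 9.81         # m/s^2
h = 0.05         # settling height above the sensing region, m

def stokes_radius_um(t_s):
    """Radius (um) of particles that have just settled the height h after t_s
    seconds, assuming Stokes drag: v = 2 g r**2 (rho_p - rho_f) / (9 eta)."""
    r_m = np.sqrt(9.0 * eta * h / (2.0 * g * (rho_p - rho_f) * np.asarray(t_s, float)))
    return r_m * 1e6

# Each capacitance reading time maps to a size bin; the drop in effective
# dielectric constant between readings weights that bin.
print(stokes_radius_um([60.0, 120.0, 300.0, 600.0, 1800.0]))
```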

  2. Exact Solution of the Cluster Size Distribution for Multi-polymer Coagulation Process

    Institute of Scientific and Technical Information of China (English)

    KE Jian-Hong; LIN Zhen-Quan; WANG Xiang-Hong

    2003-01-01

    We propose a simple irreversible multi-polymer coagulation model in which m polymers consisting of multiple components bond spontaneously to form a larger cluster. We solve the generalized Smoluchowski rate equation with constant reaction rates to obtain the exact solution of the cluster size distribution. The results indicate that the evolution behaviour of the system depends crucially on the polymer number m of the coagulation reaction. The cluster concentrations decay as t^(-m/(m-1)), and the typical size S(t) of the m-polymer coagulation system grows as t^(1/(m-1)). On the other hand, the cluster size distribution may approach an unusual scaling form in some cases.

  3. Effect of particle size distributions on absorbance spectra of gold nanoparticles

    Science.gov (United States)

    Doak, J.; Gupta, R. K.; Manivannan, K.; Ghosh, K.; Kahol, P. K.

    2010-03-01

    In this paper, a method is developed to calculate the absorbance spectra of nanoparticle solutions containing a size distribution of particles using Mie theory. Standard gold nanoparticle solutions were purchased and characterized with UV-visible absorption spectroscopy and dynamic light scattering size measurements. Model size distributions were fit to the experimental absorbance spectra using the method described herein. Good semi-quantitative fits were found, which elucidate qualitative differences between “small” and “large” gold nanoparticles.

  4. The temperature and size distribution of large water clusters from a non-equilibrium model

    Energy Technology Data Exchange (ETDEWEB)

    Gimelshein, N. [Gimel, Inc., San Jose, California 95124 (United States); Gimelshein, S., E-mail: gimelshe@usc.edu [University of Southern California, Los Angeles, California 90089 (United States); Pradzynski, C. C.; Zeuch, T., E-mail: tzeuch1@gwdg.de [Institut für Physikalische Chemie, Universität Göttingen, Tammanstr. 6, D-37077 Göttingen (Germany); Buck, U., E-mail: ubuck@gwdg.de [Max-Planck-Institut für Dynamik und Selbstorganisation, Am Faßberg 17, D-37077 Göttingen (Germany)

    2015-06-28

    A hybrid Lagrangian-Eulerian approach is used to examine the properties of water clusters formed in neon-water vapor mixtures expanding through microscale conical nozzles. Experimental size distributions were reliably determined by the sodium doping technique in a molecular beam machine. The comparison of computed size distributions and experimental data shows satisfactory agreement, especially for (H{sub 2}O){sub n} clusters with n larger than 50. The simulations, validated in this way, provide size-selected cluster temperature profiles inside and outside the nozzle. This information is used for an in-depth analysis of the crystallization and water cluster aggregation dynamics of recently reported supersonic jet expansion experiments.

  5. Valuation of Indonesian catastrophic earthquake bonds with generalized extreme value (GEV) distribution and Cox-Ingersoll-Ross (CIR) interest rate model

    Science.gov (United States)

    Gunardi, Setiawan, Ezra Putranda

    2015-12-01

    Indonesia is a country with a high risk of earthquakes because of its position at the boundaries of the earth's tectonic plates. An earthquake can cause very large amounts of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk from the government or the (re)insurance company, so that enough money can be collected to implement rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, an 'act-of-God bond', or simply a CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is combined with the money (premium) from the sponsor company and invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is reduced or stopped, and the cash flow is instead paid to the sponsor company to compensate its loss from the catastrophe event. When only earthquakes are considered, the amount by which the cash flow is reduced can be determined from the earthquake's magnitude. A case study with Indonesian earthquake magnitude data shows that the probability distribution of the maximum magnitude can be modeled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, 'coupon only at risk' bonds, and 'principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model's parameters, the GEV parameters, the coupon percentage, and the discounted cash flow rule is then explored via Monte Carlo simulation.
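
    A minimal Monte Carlo sketch of pricing the zero-coupon, principal-at-risk variant is shown below, assuming illustrative CIR and GEV parameters and a simple magnitude trigger (all parameter values are hypothetical, not those estimated in the paper): short rates are simulated with an Euler scheme, discount factors are accumulated path-wise, and the principal is lost if any simulated annual maximum magnitude exceeds the trigger.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Hypothetical CIR parameters (dr = kappa*(theta - r)*dt + sigma*sqrt(r)*dW) and
# GEV parameters for the annual maximum magnitude; all values are illustrative.
kappa, theta, sigma, r0 = 0.5, 0.06, 0.08, 0.05
gev_shape, gev_loc, gev_scale = -0.1, 6.2, 0.4
T_years, steps, n_paths, face = 3, 36, 5000, 100.0
trigger_mag = 7.0
dt = T_years / steps

prices = np.zeros(n_paths)
for i in range(n_paths):
    r, discount = r0, 1.0
    for _ in range(steps):                      # Euler scheme for the CIR rate
        dW = rng.normal(0.0, np.sqrt(dt))
        r = max(r + kappa * (theta - r) * dt + sigma * np.sqrt(max(r, 0.0)) * dW, 0.0)
        discount *= np.exp(-r * dt)
    # Annual maximum magnitudes over the bond's life; the principal is lost if
    # any of them reaches the trigger (principal-at-risk, zero-coupon payoff).
    annual_max = genextreme.rvs(gev_shape, loc=gev_loc, scale=gev_scale,
                                size=T_years, random_state=rng)
    prices[i] = discount * (0.0 if annual_max.max() >= trigger_mag else face)

print(f"zero-coupon CAT bond price ~ {prices.mean():.2f}")
```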

  6. Quantification of the evolution of firm size distributions due to mergers and acquisitions

    Science.gov (United States)

    Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company’s own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes. PMID:28841683

  7. Quantification of the evolution of firm size distributions due to mergers and acquisitions.

    Science.gov (United States)

    Lera, Sandro Claudio; Sornette, Didier

    2017-01-01

    The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on the firm size (external growth) is often not taken into consideration, notwithstanding its potential large impact. In this article, we make a first step into accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions develop a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes.

  8. Size distributions of major elements in residual ash particles from coal combustion

    Institute of Scientific and Technical Information of China (English)

    YU DunXi; XU MingHou; YAO Hong; LIU XiaoWei

    2009-01-01

    Combustion experiments for three coals of different ranks were conducted in an electrically-heated drop tube furnace. The size distributions of major elements in the residual ash particles (>0.4 μm), namely Al, Si, S, P, Na, Mg, K, Ca and Fe, were investigated. The experimental results showed that the concentrations of Al and Si in the residual ash particles decreased with decreasing particle size, while the concentrations of S and P increased with decreasing particle size. No consistent size distributions were obtained for Na, Mg, K, Ca and Fe. The established deposition model accounting for trace element distributions was demonstrated to be applicable to some major elements as well. The modeling results indicated that the size distributions of the refractory elements, Al and Si, were mainly influenced by the deposition of vaporized elements on particle surfaces. A dominant fraction of S and P vaporized during coal combustion. Their size distributions were determined by surface condensation, reaction or adsorption. The partitioning mechanisms of Na, Mg, K, Ca and Fe were more complex.

  9. Some comments on the characterization of drop-size distribution in sprays

    Science.gov (United States)

    Chin, J. S.; Lefebvre, A. H.

    An attempt is made to explain and clarify some of the anomalies and misconceptions that are encountered in the literature on drop-size distributions in sprays. The key features and relative merits of the various parameters that have been put forward to describe drop-size distribution, such as the Rosin-Rammler equation, Droplet Uniformity Index, Relative Span Factor, Dispersion Index, and MMD/SMD ratio, are discussed. It is shown that although any suitable diameter may be used as the representative diameter in the Rosin-Rammler distribution function, the Sauter mean diameter (SMD) provides the best indication of the atomization quality of a spray.
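
    As a small worked example of the Rosin-Rammler description discussed above, the sketch below evaluates Q(d) = 1 - exp(-(d/X)^q), computes the Sauter mean diameter (SMD) from the volume-weighted density, and uses the closed-form median for the mass median diameter (MMD); the characteristic diameter and spread parameter are assumed values.

```python
import numpy as np

def rosin_rammler_cdf(d, X, q):
    """Cumulative volume fraction of drops smaller than d:
    Q(d) = 1 - exp(-(d / X)**q)."""
    return 1.0 - np.exp(-(d / X) ** q)

# Assumed spray parameters: characteristic diameter X and spread parameter q.
X, q = 80.0, 2.5                                    # um, dimensionless
d = np.linspace(1.0, 400.0, 4000)                   # um
f_v = np.gradient(rosin_rammler_cdf(d, X, q), d)    # volume-weighted density

# Sauter mean diameter from the volume-weighted density: D32 = 1 / int(f_v/d dd).
smd = 1.0 / np.trapz(f_v / d, d)
mmd = X * np.log(2.0) ** (1.0 / q)                  # closed-form median of Q(d)
print(f"SMD ~ {smd:.1f} um, MMD ~ {mmd:.1f} um, MMD/SMD ~ {mmd / smd:.2f}")
```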

  10. A Stochastic Theory for Deep Bed Filtration Accounting for Dispersion and Size Distributions

    DEFF Research Database (Denmark)

    Shapiro, Alexander; Bedrikovetsky, P. G.

    2010-01-01

    We develop a stochastic theory for the filtration of suspensions in porous media. The theory takes into account particle and pore size distributions, as well as the random character of the particle motion, which is described in the framework of the theory of continuous-time random walks (CTRW). In the limit of infinitely many small walk steps we derive a system of governing equations for the evolution of the particle and pore size distributions. We consider the case of concentrated suspensions, where plugging of the pores by particles may change the porosity and other parameters of the porous medium. A procedure for averaging the derived system of equations is developed for polydisperse suspensions with several distinctive particle sizes. A numerical method for solution of the flow equations is proposed. Sample calculations are applied to compare the roles of the particle size distribution

  11. Influence of pore size distribution on the adsorption of phenol on PET-based activated carbons.

    Science.gov (United States)

    Lorenc-Grabowska, Ewa; Diez, María A; Gryglewicz, Grazyna

    2016-05-01

    The role of pore size distribution in the adsorption of phenol from aqueous solutions on polyethylene terephthalate (PET)-based activated carbons (ACs) has been analyzed. The ACs were prepared from PET and from mixtures of PET with coal-tar pitch (CTP) by means of carbonization and subsequent steam and carbon dioxide activation at 850 and 950 °C, respectively. The resultant ACs were characterized on the basis of similarities in their surface chemical features and differences in their micropore size distributions. The adsorption of phenol was carried out under static conditions at ambient temperature. The pseudo-second-order kinetic model and the Langmuir model were found to fit the experimental data very well. The different adsorption capacities of the ACs towards phenol were attributed to differences in their micropore size distributions. Adsorption capacity was favoured by the volume of pores with sizes smaller than 1.4 nm, but restricted by pores smaller than 0.8 nm.
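
    A minimal sketch of fitting the two models named above is given below: the pseudo-second-order kinetic form qt = k qe^2 t / (1 + k qe t) and the Langmuir isotherm qe = qm KL Ce / (1 + KL Ce) are fit by non-linear least squares to hypothetical uptake data (the arrays are placeholders, not the paper's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Pseudo-second-order kinetics: qt = k*qe**2*t / (1 + k*qe*t)."""
    return k * qe ** 2 * t / (1.0 + k * qe * t)

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# Placeholder phenol uptake data: qt (mg/g) vs time (min), and equilibrium
# uptake (mg/g) vs equilibrium concentration (mg/L).
t = np.array([5, 15, 30, 60, 120, 240, 480], dtype=float)
qt = np.array([18, 38, 55, 72, 85, 92, 95], dtype=float)
Ce = np.array([5, 15, 40, 80, 150, 250], dtype=float)
qe_iso = np.array([35, 70, 110, 135, 152, 160], dtype=float)

(qe_fit, k_fit), _ = curve_fit(pso, t, qt, p0=(100.0, 1e-3))
(qm_fit, KL_fit), _ = curve_fit(langmuir, Ce, qe_iso, p0=(180.0, 0.01))
print(f"pseudo-second-order: qe = {qe_fit:.1f} mg/g, k = {k_fit:.2e} g/(mg*min)")
print(f"Langmuir: qm = {qm_fit:.1f} mg/g, KL = {KL_fit:.3f} L/mg")
```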

  12. Control over particle size distribution by autoclaving poloxamer-stabilized trimyristin nanodispersions

    DEFF Research Database (Denmark)

    Göke, Katrin; Roese, Elin; Arnold, Andreas

    2016-01-01

    into the bloodstream. Consequently, small particles with a narrow particle size distribution are desired. Hitherto, there are, however, only limited possibilities for the preparation of monodisperse, pharmaceutically relevant dispersions. In this work, the effect of autoclaving at 121 °C on the particle size distribution of lipid nanoemulsions and -suspensions consisting of the pharmaceutically relevant components trimyristin and poloxamer 188 was studied. Additionally, the amount of emulsifier needed to stabilize both untreated and autoclaved particles was assessed. In our study, four dispersions of mean particle sizes from 45 to 150 nm were prepared by high-pressure melt homogenization. The particle size distribution before and after autoclaving was characterized using static and dynamic light scattering, differential scanning calorimetry, and transmission electron microscopy. Asymmetrical flow field

  13. Planar dust-acoustic waves in electron-positron-ion-dust plasmas with dust size distribution

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hong-Yan; Zhang, Kai-Biao [Sichuan University of Science and Engineering, Zigong (China)

    2014-06-15

    Nonlinear dust-acoustic solitary waves, described by a Korteweg-de Vries (KdV) equation obtained using the reductive perturbation method, are investigated in a planar unmagnetized dusty plasma consisting of electrons, positrons, ions and negatively-charged dust particles of different sizes and masses. The effects of the power-law distribution of dust and other plasma parameters on the dust-acoustic solitary waves are studied. Numerical results show that the dust size distribution has a significant influence on the propagation properties of dust-acoustic solitons. The amplitudes of solitary waves in the case of a power-law distribution are observed to be smaller, but the soliton velocity and width are observed to be larger, than those for mono-sized dust grains with an average dust size. Our results indicate that only compressive solitary waves exist in dusty plasma with different dust species. The relevance of the present investigation to interstellar clouds is discussed.

  14. New method to estimate the sample size for calculation of a proportion assuming binomial distribution.

    Science.gov (United States)

    Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio

    2013-10-01

    Nowadays the formula used to calculate the sample size for estimating a proportion (such as a prevalence) is based on the Normal distribution; however, it could instead be based on a Binomial distribution, whose confidence interval can be calculated using the Wilson score method. Comparing the two formulae (Normal and Binomial distributions), the variation in the width of the confidence intervals is relevant at the tails and the center of the curves. In order to calculate the required sample size, we simulated an iterative sampling procedure, which shows an underestimation of the sample size for prevalence values close to 0 or 1, and an overestimation for values close to 0.5. On the basis of these results we propose an algorithm based on the Wilson score method that provides sample size values similar to those obtained empirically by simulation.
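
    As a minimal sketch of the idea (not necessarily the authors' exact algorithm), the code below computes the Wilson score interval and then searches for the smallest sample size whose interval half-width meets a desired absolute precision at a given expected prevalence.

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """Wilson score confidence interval for a proportion."""
    denom = 1.0 + z ** 2 / n
    center = (p_hat + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

def sample_size_wilson(p_expected, precision, z=1.96, n_max=1_000_000):
    """Smallest n whose Wilson interval half-width is <= the desired precision."""
    for n in range(1, n_max + 1):
        lo, hi = wilson_interval(p_expected, n, z)
        if (hi - lo) / 2.0 <= precision:
            return n
    return None

# Example: expected prevalence 5%, absolute precision +/- 2%, 95% confidence.
print(sample_size_wilson(0.05, 0.02))
```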

  15. Comparison of outdoor activity size distributions of {sup 220}Rn and {sup 222}Rn progeny

    Energy Technology Data Exchange (ETDEWEB)

    Mohamed, A. [Physics Department, Faculty of Science, El-Minia University (Egypt)]. E-mail: amermohamed6@hotmail.com; El-Hussein, A. [Physics Department, Faculty of Science, El-Minia University (Egypt)

    2005-06-01

    Inhalation of {sup 222}Rn and {sup 220}Rn progeny from the domestic environment contributes the greatest fraction of the natural radiation exposure to the public. Dosimetric models are most often used in the assessment of human lung doses due to inhaled radioactivity because of the difficulty in making direct measurements. These models require information about the parameters of activity size distributions of thoron and radon progeny. The present study presents measured data on the attached and unattached activity size distributions of thoron and radon progeny in outdoor air in El-Minia, Egypt. The attached fraction was collected using a low-pressure Berner cascade impactor technique. A screen diffusion battery was used for collecting the unattached fraction. Most of the attached activities for {sup 222}Rn and {sup 220}Rn progeny were associated with aerosol particles of the accumulation mode. The activity size distribution of thoron progeny was found to be shifted to slightly smaller particle size compared to radon progeny.

  16. Detailed mass size distributions of atmospheric aerosol species in the Negev desert, Israel, during ARACHNE-96

    Science.gov (United States)

    Maenhaut, Willy; Ptasinski, Jacek; Cafmeyer, Jan

    1999-04-01

    As part of the 1996 summer intensive of the Aerosol, RAdiation and CHemistry Experiment (ARACHNE-96), the mass size distribution of various airborne particulate elements was studied at a remote site in the Negev Desert, Israel. Aerosol collections were made with 8-stage PIXE International cascade impactors (PCIs) and 12-stage small deposit area low pressure impactors (SDIs) and the samples were analyzed by PIXE for about 20 elements. The mineral elements (Al, Si, Ca, Ti, Fe) exhibited a unimodal size distribution which peaked at about 6 μm, but the contribution of particles larger than 10 μm was clearly more pronounced during the day than during night. Sulphur and Br had a tendency to exhibit two modes in the submicrometer size range, with diameters at about 0.3 and 0.6 μm, respectively. The elements V and Ni, which are indicators of residual fuel burning, showed essentially one fine mode (at 0.3 μm) in addition to a coarse mode which represented the mineral dust contribution. Overall, good agreement was observed between the mass size distributions from the PCI and SDI devices. The PCI was superior to the SDI for studying the size distribution in the coarse size range, but the SDI was clearly superior for unravelling the various modes in the submicrometer size range.

  17. Detailed mass size distributions of atmospheric aerosol species in the Negev desert, Israel, during ARACHNE-96

    Energy Technology Data Exchange (ETDEWEB)

    Maenhaut, Willy E-mail: maenhaut@inwchem.rug.ac.be; Ptasinski, Jacek; Cafmeyer, Jan

    1999-04-02

    As part of the 1996 summer intensive of the Aerosol, RAdiation and CHemistry Experiment (ARACHNE-96), the mass size distribution of various airborne particulate elements was studied at a remote site in the Negev Desert, Israel. Aerosol collections were made with 8-stage PIXE International cascade impactors (PCIs) and 12-stage small deposit area low pressure impactors (SDIs) and the samples were analyzed by PIXE for about 20 elements. The mineral elements (Al, Si, Ca, Ti, Fe) exhibited a unimodal size distribution which peaked at about 6 {mu}m, but the contribution of particles larger than 10 {mu}m was clearly more pronounced during the day than during night. Sulphur and Br had a tendency to exhibit two modes in the submicrometer size range, with diameters at about 0.3 and 0.6 {mu}m, respectively. The elements V and Ni, which are indicators of residual fuel burning, showed essentially one fine mode (at 0.3 {mu}m) in addition to a coarse mode which represented the mineral dust contribution. Overall, good agreement was observed between the mass size distributions from the PCI and SDI devices. The PCI was superior to the SDI for studying the size distribution in the coarse size range, but the SDI was clearly superior for unravelling the various modes in the submicrometer size range.

  18. Deconvolution of the particle size distribution of ProRoot MTA and MTA Angelus.

    Science.gov (United States)

    Ha, William Nguyen; Shakibaie, Fardad; Kahler, Bill; Walsh, Laurence James

    2016-01-01

    Objective Mineral trioxide aggregate (MTA) cements contain two types of particles, namely Portland cement (PC) (nominally 80% w/w) and bismuth oxide (BO) (20%). This study aims to determine the particle size distribution (PSD) of PC and BO found in MTA. Materials and methods The PSDs of ProRoot MTA (MTA-P) and MTA Angelus (MTA-A) powder were determined using laser diffraction, and compared to samples of PC (at three different particle sizes) and BO. The non-linear least squares method was used to deconvolute the PSDs into the constituents. MTA-P and MTA-A powders were also assessed with scanning electron microscopy. Results BO showed a near Gaussian distribution for particle size, with a mode distribution peak at 10.48 μm. PC samples milled to differing degrees of fineness had mode distribution peaks from 19.31 down to 4.88 μm. MTA-P had a complex PSD composed of both fine and large PC particles, with BO at an intermediate size, whereas MTA-A had only small BO particles and large PC particles. Conclusions The PSD of MTA cement products is bimodal or more complex, which has implications for understanding how particle size influences the overall properties of the material. Smaller particles may be reactive PC or unreactive radiopaque agent. Manufacturers should disclose particle size information for PC and radiopaque agents to prevent simplistic conclusions being drawn from statements of average particle size for MTA materials.
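
    A minimal sketch of this kind of deconvolution is given below: a sum of two lognormal volume-density modes is fit to a measured PSD by non-linear least squares, with one mode nominally representing the radiopacifier and the other the cement fraction. The synthetic data and starting values are assumptions, not the measured MTA distributions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(d, weight, median_um, sigma_g):
    """One lognormal volume-density mode as a function of diameter d (um)."""
    return (weight / (np.log(sigma_g) * np.sqrt(2.0 * np.pi) * d)
            * np.exp(-0.5 * (np.log(d / median_um) / np.log(sigma_g)) ** 2))

def two_modes(d, w1, m1, s1, w2, m2, s2):
    # Sum of a fine mode (nominally the radiopacifier) and a coarse mode
    # (nominally the cement fraction).
    return lognormal_mode(d, w1, m1, s1) + lognormal_mode(d, w2, m2, s2)

# Synthetic laser-diffraction PSD standing in for a measured one.
d_um = np.logspace(-0.3, 2.0, 40)
measured = two_modes(d_um, 0.25, 5.0, 1.6, 0.75, 18.0, 1.8)
measured += 0.0005 * np.random.default_rng(1).normal(size=d_um.size)

p0 = (0.3, 4.0, 1.5, 0.7, 20.0, 1.7)
popt, _ = curve_fit(two_modes, d_um, measured, p0=p0, maxfev=20000)
print("fine mode:   weight %.2f, median %.1f um, sigma_g %.2f" % tuple(popt[:3]))
print("coarse mode: weight %.2f, median %.1f um, sigma_g %.2f" % tuple(popt[3:]))
```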

  19. The Role of Dynamic Strain in the Near-Source Aftershock Distribution of the 2014, Mw 6.0, Napa (California) Earthquake

    Science.gov (United States)

    Emolo, A.; De Matteis, R.; Convertito, V.

    2015-12-01

    The 2014 Napa earthquake occurred on a right-lateral strike-slip fault. About 400 aftershocks occurred, mainly in the near-source range, in the two months after the earthquake. They mostly occurred between 8 and 11 km depth, within an area of about 10 km2 trending north-northwest with respect to the mainshock hypocenter. However, the aftershock distribution was not able to constrain the mainshock fault plane. Since Parsons et al. (2014) have shown that Coulomb static stress change does not completely explain the near-source aftershock distribution, we explore whether dynamic strain transfer, enhanced by source directivity, contributed to triggering the aftershock sequence. Indeed, dynamic strain transfer triggering attributes enhanced failure probabilities to increased shear stresses or strains, to permeability changes and possibly to fault weakening. In this respect, we observe that a single inverse power law fits the decay of aftershock density as a function of distance from the fault plane, suggesting that dynamic stress/strain might have played a role in the aftershock triggering. To test this hypothesis, we used Peak Ground Velocities (PGVs) as a proxy for the peak dynamic strain/stress field, accounting for both fault finiteness and source directivity. We first use a point source to retrieve the best parameters of the directivity function from the inversion of the PGVs. Next, the same PGVs are used to jointly infer the surface fault projection and the dominant horizontal rupture direction. Finally, we map the peak dynamic strain/stress, modified by source geometry an
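
    As a minimal sketch of the single inverse power-law fit to aftershock density versus fault distance mentioned above (not the authors' full analysis), the code below estimates the decay exponent from a log-log linear fit to hypothetical binned densities.

```python
import numpy as np

# Hypothetical aftershock densities binned by distance from the fault plane;
# with a real catalog, density = count / annulus area in each distance bin.
dist_km = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
density = np.array([120.0, 55.0, 24.0, 10.0, 4.2, 1.8])   # events per km^2

# Single inverse power law rho(r) = A * r**(-n), fit in log-log space.
slope, logA = np.polyfit(np.log(dist_km), np.log(density), 1)
print(f"decay exponent n ~ {-slope:.2f}, prefactor A ~ {np.exp(logA):.1f}")
```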