WorldWideScience

Sample records for random peak model

  1. Random walkers with extreme value memory: modelling the peak-end rule

    Science.gov (United States)

    Harris, Rosemary J.

    2015-05-01

    Motivated by the psychological literature on the ‘peak-end rule’ for remembered experience, we perform an analysis within a random walk framework of a discrete choice model where agents’ future choices depend on the peak memory of their past experiences. In particular, we use this approach to investigate whether increased noise/disruption always leads to more switching between decisions. Here extreme value theory illuminates different classes of dynamics indicating that the long-time behaviour is dependent on the scale used for reflection; this could have implications, for example, in questionnaire design.
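
    A minimal sketch of the kind of dynamics involved (our toy construction, not the paper's exact model): two options with equal mean payoff, an agent that remembers only the best payoff each option ever delivered, and a logit choice rule on that peak memory. The temperature parameter and the payoff distribution below are illustrative assumptions.

        import math, random

        def simulate(steps=20_000, noise=1.0, temp=0.5, seed=1):
            """Toy peak-memory walker: counts switches between two options
            whose remembered value is the maximum payoff seen so far."""
            rng = random.Random(seed)
            peak = [0.0, 0.0]            # extreme-value (peak) memory
            last, switches = 0, 0
            for _ in range(steps):
                # logit choice on the difference of remembered peaks
                p0 = 1.0 / (1.0 + math.exp(-(peak[0] - peak[1]) / temp))
                choice = 0 if rng.random() < p0 else 1
                switches += choice != last
                last = choice
                payoff = rng.gauss(0.0, noise)            # noisy experience
                peak[choice] = max(peak[choice], payoff)  # keep only the peak
            return switches

        for noise in (0.25, 1.0, 4.0):
            print(noise, simulate(noise=noise))

    Running the sketch shows that more noise need not mean more switching: larger fluctuations also inflate the remembered peaks, which can lock the agent in.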

  2. Statistics of peaks of Gaussian random fields

    International Nuclear Information System (INIS)

    Bardeen, J.M.; Bond, J.R.; Kaiser, N.; Szalay, A.S. (Stanford Univ., CA; California Univ., Berkeley; Cambridge Univ., England; Fermi National Accelerator Lab., Batavia, IL)

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of upcrossing points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima. 67 references
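
    For orientation, the one-dimensional analogue of these peak-density results is Rice's classical formula, which the paper's three-dimensional calculations generalize. For a stationary Gaussian process with spectral moments λ0 (variance) and λ2 (variance of the derivative), the mean rate of upcrossings of a level u is (notation assumed here):

        \[
          \nu^{+}(u) \;=\; \frac{1}{2\pi}\sqrt{\frac{\lambda_2}{\lambda_0}}\;
          \exp\!\left(-\frac{u^{2}}{2\lambda_{0}}\right).
        \]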

  3. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time-domain analysis of electroencephalogram (EEG) signals. In general, a peak model in time-domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that yields the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52%, while showing no significant difference from the Dumpala model.
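
    For concreteness, a sketch of the kind of time-domain features such peak models are built from (the definitions below are illustrative; each named model fixes its own precise feature set):

        import numpy as np

        def peak_features(x, fs=256.0):
            """Amplitude, width and slope of the largest peak, measured
            against the nearest surrounding valleys."""
            x = np.asarray(x, dtype=float)
            p = int(np.argmax(x))
            l = p                      # walk downhill to the left valley
            while l > 0 and x[l - 1] <= x[l]:
                l -= 1
            r = p                      # walk downhill to the right valley
            while r < len(x) - 1 and x[r + 1] <= x[r]:
                r += 1
            amp = x[p] - 0.5 * (x[l] + x[r])   # height above mean valley level
            width = (r - l) / fs               # valley-to-valley width, seconds
            slope = (x[p] - x[l]) * fs / (p - l) if p > l else 0.0
            return amp, width, slope

        t = np.linspace(0, 1, 256)
        sig = np.exp(-((t - 0.5) ** 2) / 0.001) + 0.05 * np.random.randn(t.size)
        print(peak_features(sig))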

  4. Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal

    Science.gov (United States)

    Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling

    2018-05-01

    When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited because of the notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and on cancelling power-line harmonic noise. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, performed after traditional de-spiking and power-line harmonic removal. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.
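
    A minimal sketch of the time-frequency peak filtering idea under stated assumptions (the encoding scale mu, the window length and the use of a plain STFT as the time-frequency representation are our illustrative choices; the authors' implementation may differ):

        import numpy as np
        from scipy.signal import stft

        def tfpf(x, mu=0.4, nperseg=128):
            """Encode a noisy trace as a frequency-modulated analytic signal,
            then read the instantaneous frequency off the spectrogram peak.
            Returns a frame-rate estimate of the normalised trace."""
            x = np.asarray(x, dtype=float)
            x = (x - x.min()) / (x.max() - x.min())         # normalise to [0, 1]
            z = np.exp(1j * 2 * np.pi * mu * np.cumsum(x))  # FM encoding
            f, t, Z = stft(z, fs=1.0, nperseg=nperseg, return_onesided=False)
            f_peak = f[np.abs(Z).argmax(axis=0)] % 1.0      # peak frequency per frame
            return f_peak / mu                              # demodulated estimate

        t = np.arange(0, 1, 1e-3)
        clean = np.exp(-t / 0.3) * np.abs(np.sin(2 * np.pi * 2 * t))
        noisy = clean + 0.3 * np.random.randn(t.size)
        estimate = tfpf(noisy)   # one "stack", smoothed by the TF peak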

  5. Peak Shaving Considering Streamflow Uncertainties | Iwuagwu ...

    African Journals Online (AJOL)

    The main thrust of this paper is peak shaving with a stochastic hydro model. In peak shaving, the amount of hydro energy scheduled may be a minimum but it serves to replace less efficient thermal units. The sample system is the Kainji hydro plant and the thermal units of the National Electric Power Authority. The random ...

  6. Coupling hydrologic and hydraulic models to take into consideration retention effects on extreme peak discharges in Switzerland

    Science.gov (United States)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2015-04-01

    Estimating peak discharges with very low probabilities is still accompanied by large uncertainties. Common estimation methods are usually based on extreme value statistics applied to observed time series or to hydrological model outputs. However, such methods assume the system to be stationary and do not specifically consider non-stationary effects. Observed time series may exclude events where peak discharge is damped by retention effects, as this process does not occur until specific thresholds, possibly beyond those of the highest measured event, are exceeded. Hydrological models can be complemented and parameterized with non-linear functions. However, in such cases calibration depends on observed data and non-stationary behaviour is not deterministically calculated. Our study discusses the option of considering retention effects on extreme peak discharges by coupling hydrological and hydraulic models. This possibility is tested by forcing the semi-distributed deterministic hydrological model PREVAH with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). The procedure ensures that the estimated extreme peak discharge does not exceed the physical limit given by the riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered.

  7. System dynamics model of Hubbert Peak for China's oil

    International Nuclear Information System (INIS)

    Tao Zaipu; Li Mingyu

    2007-01-01

    American geophysicist M. King Hubbert in 1956 first introduced a logistic equation to estimate the peak and lifetime production of US oil. Since then, a fierce debate has ensued on the so-called Hubbert Peak, including its methodology. This paper proposes to use the generic STELLA model to simulate the Hubbert Peak, particularly for Chinese oil production. The model is demonstrated to be robust. We used three scenarios to estimate the Chinese oil peak: according to scenario 1 of this model, the Hubbert Peak for China's crude oil production appears in 2019 with a value of 199.5 million tonnes, which is about 1.1 times the 2005 output. Before the peak comes, Chinese oil output will grow by about 1-2% annually; after the peak, however, the output will fall. By 2040, the annual production of Chinese crude oil would be equivalent to the level of 1990. During the coming 20 years, the crude oil demand of China will probably grow at the rate of 2-3% annually, and the gap between domestic supply and total demand may be more than half of this demand.
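
    The logistic machinery behind the Hubbert curve is compact enough to state (symbols assumed here: Q∞ the ultimately recoverable resource, k the growth rate, t_m the peak year):

        \[
          Q(t) = \frac{Q_\infty}{1 + e^{-k\,(t - t_m)}},
          \qquad
          P(t) = \frac{dQ}{dt} = k\,Q(t)\left(1 - \frac{Q(t)}{Q_\infty}\right),
        \]

    so production P peaks at t = t_m, where half the resource has been produced and P_max = k·Q∞/4. A system-dynamics tool such as STELLA integrates the same feedback structure numerically.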

  8. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. RO (Robust Optimization) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for multi-elevator systems is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is realized without knowing the exact number of waiting passengers at each calling floor. Specifically, an energy-saving-oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the RO uncertain model cannot be solved directly, it is transformed into an RO certain model by means of elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm is proposed to solve the RO certain model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly and group elevators are scheduled accordingly. Simulation results show that the method effectively improves scheduling performance in the peak pattern, realizing efficient operation of the group elevators.

  9. PEAK COVARIANCE STABILITY OF A RANDOM RICCATI EQUATION ARISING FROM KALMAN FILTERING WITH OBSERVATION LOSSES

    Institute of Scientific and Technical Information of China (English)

    Li XIE; Lihua XIE

    2007-01-01

    We consider the stability of a random Riccati equation with a Markovian binary jump coefficient. More specifically, we are concerned with the boundedness of the solution of a random Riccati difference equation arising from Kalman filtering with measurement losses. A sufficient condition for the peak covariance stability is obtained which has a simpler form and is shown to be less conservative in some cases than a very recent result in the existing literature. Furthermore, we show that a known sufficient condition is also necessary when the observability index equals one.
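
    The recursion in question is commonly written as follows (a sketch in standard Kalman filtering notation, which we assume here; γ_k is the binary observation-arrival process, Markovian in this paper):

        \[
          P_{k+1} \;=\; A P_k A^{\mathsf T} + Q
          \;-\; \gamma_k\, A P_k C^{\mathsf T}
          \left(C P_k C^{\mathsf T} + R\right)^{-1} C P_k A^{\mathsf T},
          \qquad \gamma_k \in \{0,1\}.
        \]

    Loosely, peak covariance stability asks whether the expected covariance, sampled at the instants when observation-loss bursts end, stays bounded.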

  10. A fuzzy-stochastic simulation-optimization model for planning electric power systems with considering peak-electricity demand: A case study of Qingdao, China

    International Nuclear Information System (INIS)

    Yu, L.; Li, Y.P.; Huang, G.H.

    2016-01-01

    In this study, a FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) considering peak demand under uncertainty. FSSOM integrates the techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, the SVR-coupled Monte Carlo technique is used for predicting peak-electricity demand. The FSSOM is applied to planning the EPS for the City of Qingdao, China. Solutions for the electricity generation pattern to satisfy the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (only 8.3% of total electricity generation). Compared with the energy model without considering peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the system failure risk. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, and constraint-violation risk. - Highlights: • FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy-boundary intervals and random variables. • FSSOM can satisfy peak-electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results reveal the tradeoff between system cost and peak-electricity demand violation risk.

  11. OccuPeak: ChIP-Seq peak calling based on internal background modelling

    NARCIS (Netherlands)

    de Boer, Bouke A.; van Duijvenboden, Karel; van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.

    2014-01-01

    ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads to account for local sequencing and GC bias. However, the

  12. Modeling the probability distribution of peak discharge for infiltrating hillslopes

    Science.gov (United States)

    Baiamonte, Giorgio; Singh, Vijay P.

    2017-07-01

    Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the expected effect of ASMC in increasing the maximum discharge diminishes. Only for low probabilities is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small for any probability. For a set of parameters, the derived probability distribution of peak discharge is fitted well by the gamma distribution. Finally, an application to a small watershed, aimed at testing whether rational runoff coefficient tables for the rational method can be arranged in advance, and a comparison between peak discharges obtained by the GABS model and those measured in an experimental flume for a loamy-sand soil, were carried out.
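
    Of the three coupled components, the Green-Ampt step admits a compact statement (standard notation assumed: K_s saturated hydraulic conductivity, ψ wetting-front suction head, Δθ the moisture deficit through which ASMC enters, F cumulative infiltration):

        \[
          f(t) \;=\; K_s \left( 1 + \frac{\psi\,\Delta\theta}{F(t)} \right),
        \]

    so a wetter antecedent state (smaller Δθ) lowers the infiltration capacity f and increases the Hortonian runoff that the kinematic wave routes to the outlet.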

  13. Nominal Range Sensitivity Analysis of peak radionuclide concentrations in randomly heterogeneous aquifers

    International Nuclear Information System (INIS)

    Cadini, F.; De Sanctis, J.; Cherubini, A.; Zio, E.; Riva, M.; Guadagnini, A.

    2012-01-01

    Highlights: ► Uncertainty quantification problem associated with the radionuclide migration. ► Groundwater transport processes simulated within a randomly heterogeneous aquifer. ► Development of an automatic sensitivity analysis for flow and transport parameters. ► Proposal of a Nominal Range Sensitivity Analysis approach. ► Analysis applied to the performance assessment of a nuclear waste repository. - Abstract: We consider the problem of quantification of uncertainty associated with radionuclide transport processes within a randomly heterogeneous aquifer system in the context of performance assessment of a near-surface radioactive waste repository. Radionuclide migration is simulated at the repository scale through a Monte Carlo scheme. The saturated groundwater flow and transport equations are then solved at the aquifer scale for the assessment of the expected radionuclide peak concentration at a location of interest. A procedure is presented to perform the sensitivity analysis of this target environmental variable to key parameters that characterize flow and transport processes in the subsurface. The proposed procedure is exemplified through an application to a realistic case study.
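
    A minimal sketch of the one-at-a-time idea behind Nominal Range Sensitivity Analysis (the toy model, parameter names and ranges below are hypothetical, not those of the repository study):

        def nominal_range_sensitivity(model, nominal, ranges):
            """Vary one parameter at a time over its full range while the
            others stay at nominal; report the induced output swing."""
            base = model(**nominal)
            swing = {}
            for name, (lo, hi) in ranges.items():
                outputs = [model(**dict(nominal, **{name: v})) for v in (lo, hi)]
                swing[name] = max(outputs + [base]) - min(outputs + [base])
            return swing

        # hypothetical two-parameter stand-in for the flow/transport model
        peak_conc = lambda K, n: 1.0 / (K * n)
        print(nominal_range_sensitivity(peak_conc,
                                        nominal={"K": 1e-5, "n": 0.3},
                                        ranges={"K": (1e-6, 1e-4), "n": (0.2, 0.4)}))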

  14. Modeling of Lightning Strokes Using Two-Peaked Channel-Base Currents

    Directory of Open Access Journals (Sweden)

    V. Javor

    2012-01-01

    Lightning electromagnetic fields obtained by using "engineering" models of lightning return strokes with new channel-base current functions are presented in this paper. Experimentally measured channel-base currents are approximated not only with functions having two-peaked waveshapes but also with the one-peaked function usually used in the literature. These functions are simple to apply in any "engineering" or electromagnetic model. For three "engineering" models - the transmission line model (without peak current decay), the transmission line model with linear decay, and the transmission line model with exponential decay with height - the comparison of electric and magnetic field components at different distances from the lightning channel base is presented for the case of a perfectly conducting ground. Different heights of lightning channels are also considered. These results enable analysis of the advantages/shortcomings of the used return stroke models with respect to the electromagnetic field features to be achieved, as obtained by measurements.

  15. Peak-counts blood flow model-errors and limitations

    International Nuclear Information System (INIS)

    Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.

    1984-01-01

    The peak-counts model has several advantages, but its use may be limited due to the condition that the venous egress may not be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride in the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds in the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4 to 6 seconds FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 seconds. A computer simulation has been carried out in which the different parameters of delay time, extraction fraction, and bolus duration can be changed to assess the errors in the peak-counts model. The results of the simulations show that the error will be greatest for short transit time delays and for low extraction fractions

  16. Hubbert's Oil Peak Revisited by a Simulation Model

    International Nuclear Information System (INIS)

    Giraud, P.N.; Sutter, A.; Denis, T.; Leonard, C.

    2010-01-01

    As conventional oil reserves are declining, the debate on the oil production peak has become a burning issue. An increasing number of papers refer to Hubbert's peak oil theory to forecast the date of the production peak, both at regional and world levels. However, in our view, this theory lacks micro-economic foundations. Notably, it does not assume that exploration and production decisions in the oil industry depend on market prices. In an attempt to overcome these shortcomings, we have built an adaptive model, accounting for the behavior of one agent, standing for the competitive exploration-production industry, subjected to incomplete but improving information on the remaining reserves. Our work yields challenging results on the reasons for a Hubbert-type oil peak, lying mainly 'above the ground', both at regional and world levels, and on the shape of the production and marginal cost trajectories. (authors)

  17. Assessment of end-use electricity consumption and peak demand by Townsville's housing stock

    International Nuclear Information System (INIS)

    Ren, Zhengen; Paevere, Phillip; Grozev, George; Egan, Stephen; Anticev, Julia

    2013-01-01

    We have developed a comprehensive model to estimate annual end-use electricity consumption and peak demand of housing stock, considering occupants' use of air conditioning systems and major appliances. The model was applied to analyse private dwellings in Townsville, Australia's largest tropical city. For the financial year (FY) 2010–11 the predicted results agreed with the actual electricity consumption with an error less than 10% for cooling thermostat settings at the standard setting temperature of 26.5 °C and at 1.0 °C higher than the standard setting. The greatest difference in monthly electricity consumption in the summer season between the model and the actual data decreased from 21% to 2% when the thermostat setting was changed from 26.5 °C to 27.5 °C. Our findings also showed that installation of solar panels in Townsville houses could reduce electricity demand from the grid and would have a minor impact on the yearly peak demand. A key new feature of the model is that it can be used to predict the probability distribution of energy demand considering (a) that appliances may be used randomly and (b) the way people use thermostats. The peak demand for the FY estimated from the probability distribution tracked the actual peak demand at the 97% confidence level. - Highlights: • We developed a model to estimate housing stock energy consumption and peak demand. • Appliances used randomly and thermostat settings for space cooling were considered. • On-site installation of solar panels was also considered. • Its results agree well with the actual electricity consumption and peak demand. • It shows the model could provide the probability distribution of electricity demand

  18. Prediction of peak response values of structures with and without TMD subjected to random pedestrian flows

    Science.gov (United States)

    Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter

    2016-09-01

    In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges, which are generally highly sensitive to human-induced excitation. Due to the inherent random character of the human-induced walking load, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range. A large number of time windows are therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method to evaluate the statistics is based on the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge. Small differences in the instantaneous peak value were found with the method using second-order statistics. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure. The comparison between both methods is made and the accuracy is verified. It is found that the TMD parameters are tuned sufficiently and that the two methods agree well in the estimation of the instantaneous peak response for a strongly damped structure.
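
    The wind-engineering shortcut alluded to is usually the Davenport peak-factor estimate for a zero-mean Gaussian response (a sketch; σ is the response standard deviation, ν0 the characteristic frequency, T the window length):

        \[
          \mathbb{E}\!\left[\max_{0 \le t \le T} x(t)\right] \;\approx\; g\,\sigma,
          \qquad
          g = \sqrt{2\ln(\nu_0 T)} \;+\; \frac{0.5772}{\sqrt{2\ln(\nu_0 T)}},
        \]

    which replaces the averaging over many simulated windows with second-order statistics of a single response history.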

  19. Modeling superhydrophobic surfaces comprised of random roughness

    Science.gov (United States)

    Samaha, M. A.; Tafreshi, H. Vahedi; Gad-El-Hak, M.

    2011-11-01

    We model the performance of superhydrophobic surfaces comprised of randomly distributed roughness that resembles natural surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than ordered-microstructured fabrication. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with other theoretical and experimental studies. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. Financial support from DARPA, contract number W91CRB-10-1-0003, is acknowledged.

  20. Comparisons of methods for calculating retention and separation of chromatographic peaks

    International Nuclear Information System (INIS)

    Pauls, R.E.; Rogers, L.B.

    1976-09-01

    The accuracy and precision of calculating retention times from means and peak maxima have been examined using an exponentially modified Gaussian as a model for tailed chromatographic peaks. At different levels of random noise, retention times could be determined with nearly the same precision using either the mean or the maximum. However, the accuracies and precisions of the maxima were affected by the number of points used in the digital smooth and by the number of points recorded per unit of standard deviation. For two peaks of similar shape, consistency in the selection of points should usually permit differences in retention to be determined accurately and with approximately the same precision using maxima, means, or half-heights on the leading side of the peak
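
    A small sketch of the comparison described, using SciPy's exponentially modified Gaussian (exponnorm, with shape K = τ/σ) as the tailed-peak model; the parameter and noise values are illustrative:

        import numpy as np
        from scipy.stats import exponnorm

        t = np.linspace(0, 20, 2000)
        t_R, sigma, tau = 8.0, 0.4, 1.2
        peak = exponnorm.pdf(t, tau / sigma, loc=t_R, scale=sigma)

        rng = np.random.default_rng(1)
        noisy = peak + rng.normal(0.0, 0.005, t.size)   # added random noise

        t_max = t[np.argmax(noisy)]                  # retention from peak maximum
        t_mean = (t * noisy).sum() / noisy.sum()     # retention from first moment
        print(t_max, t_mean)   # for a tailed peak the mean lags the maximum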

  1. Dispersion-convolution model for simulating peaks in a flow injection system.

    Science.gov (United States)

    Pai, Su-Cheng; Lai, Yee-Hwong; Chiao, Ling-Yun; Yu, Tiing

    2007-01-12

    A dispersion-convolution model is proposed for simulating peak shapes in a single-line flow injection system. It is based on the assumption that an injected sample plug is expanded due to a "bulk" dispersion mechanism along the length coordinate, and that after traveling over a distance or a period of time, the sample zone will develop into a Gaussian-like distribution. This spatial pattern is further transformed to a temporal coordinate by a convolution process, and finally a temporal peak image is generated. The feasibility of the proposed model has been examined by experiments with various coil lengths, sample sizes and pumping rates. An empirical dispersion coefficient (D*) can be estimated by using the observed peak position, height and area (t_p*, h* and A_t*) from a recorder. An empirical temporal shift (Φ*) can be further approximated by Φ* = D*/u², which becomes an important parameter in the restoration of experimental peaks. Also, the dispersion coefficient can be expressed as a second-order polynomial function of the pumping rate Q, for which D*(Q) = δ0 + δ1·Q + δ2·Q². The optimal dispersion occurs at a pumping rate of Q_opt = √(δ0/δ2). This explains the interesting "Nike-swoosh" relationship between the peak height and pumping rate. The excellent coherence of theoretical and experimental peak shapes confirms that the temporal distortion effect is the dominating reason for the peak asymmetry in flow injection analysis.
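
    One way to recover the quoted optimum (our reading: treating the dispersion per unit pumping rate as the quantity being minimized) is:

        \[
          \frac{d}{dQ}\,\frac{D^{*}(Q)}{Q}
          = \frac{d}{dQ}\!\left(\frac{\delta_0}{Q} + \delta_1 + \delta_2 Q\right)
          = -\frac{\delta_0}{Q^{2}} + \delta_2 = 0
          \quad\Longrightarrow\quad
          Q_{\mathrm{opt}} = \sqrt{\delta_0/\delta_2}.
        \]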

  2. Modelling of peak temperature during friction stir processing of magnesium alloy AZ91

    Science.gov (United States)

    Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir processing (FSP) is a solid-state processing technique with the potential to modify the properties of a material through microstructural modification. The study of heat transfer in FSP aids in the identification of defects like flash, inadequate heat input, and poor material flow and mixing. In this paper, the transient temperature distribution during FSP of magnesium alloy AZ91 was simulated using finite element modelling. The numerical model results were validated using experimental results from the published literature. The model was used to predict the peak temperature obtained during FSP for various process parameter combinations. The simulated peak temperature results were used to develop a statistical model. The effect of the process parameters, namely tool rotation speed, tool traverse speed and shoulder diameter of the tool, on the peak temperature was investigated using the developed statistical model. It was found that the peak temperature was directly proportional to tool rotation speed and shoulder diameter and inversely proportional to tool traverse speed.

  3. Amorphous chalcogenides as random octahedrally bonded solids: I. Implications for the first sharp diffraction peak, photodarkening, and Boson peak

    Science.gov (United States)

    Lukyanov, Alexey; Lubchenko, Vassiliy

    2017-09-01

    We develop a computationally efficient algorithm for generating high-quality structures for amorphous materials exhibiting distorted octahedral coordination. The computationally costly step of equilibrating the simulated melt is relegated to a much more efficient procedure, viz., generation of a random close-packed structure, which is subsequently used to generate parent structures for octahedrally bonded amorphous solids. The sites of the so-obtained lattice are populated by atoms and vacancies according to the desired stoichiometry while allowing one to control the number of homo-nuclear and hetero-nuclear bonds and, hence, effects of the mixing entropy. The resulting parent structure is geometrically optimized using quantum-chemical force fields; by varying the extent of geometric optimization of the parent structure, one can partially control the degree of octahedrality in local coordination and the strength of secondary bonding. The present methodology is applied to the archetypal chalcogenide alloys As_xSe_{1-x}. We find that local coordination in these alloys interpolates between octahedral and tetrahedral bonding but in a non-obvious way; it exhibits bonding motifs that are not characteristic of either extreme. We consistently recover the first sharp diffraction peak (FSDP) in our structures and argue that the corresponding mid-range order stems from the charge density wave formed by regions housing covalent and weak, secondary interactions. The number of secondary interactions is determined by a delicate interplay between octahedrality and tetrahedrality in the covalent bonding; many of these interactions are homonuclear. The present results are consistent with the experimentally observed dependence of the FSDP on arsenic content, pressure, and temperature and its correlation with photodarkening and the Boson peak. They also suggest that the position of the FSDP can be used to infer the effective particle size relevant for the configurational equilibration in

  4. Phase diagrams of a spin-1/2 transverse Ising model with three-peak random field distribution

    International Nuclear Information System (INIS)

    Bassir, A.; Bassir, C.E.; Benyoussef, A.; Ez-Zahraouy, H.

    1996-07-01

    The effect of the transverse magnetic field on the phase diagram structures of the Ising model in a random longitudinal magnetic field with a trimodal symmetric distribution is investigated within a finite cluster approximation. We find that a small-magnetization ordered phase ('small ordered phase') disappears completely for a sufficiently large value of the transverse field and/or of the concentration of the magnetic-field disorder. Multicritical behaviour and reentrant phenomena are discussed. The regions where tricritical behaviour, reentrant phenomena and the small ordered phase persist are delimited as functions of the transverse field and the concentration p. Longitudinal magnetizations are also presented. (author). 33 refs, 6 figs
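
    The trimodal symmetric distribution referred to is conventionally written as (notation assumed: p the concentration of field-free sites, ±h0 the field strengths):

        \[
          P(h_i) \;=\; p\,\delta(h_i)
          \;+\; \frac{1-p}{2}\left[\delta(h_i - h_0) + \delta(h_i + h_0)\right].
        \]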

  5. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.

  6. Statistics of peaks in cosmological nonlinear density fields

    International Nuclear Information System (INIS)

    Suginohara, Tatsushi; Suto, Yasushi.

    1990-06-01

    Distribution of the high-density peaks in the universe is examined using N-body simulations. Nonlinear evolution of the underlying density field significantly changes the statistical properties of the peaks, compared with the analytic results valid for the random Gaussian field. In particular, the abundances and correlations of the initial density peaks are discussed in the context of biased galaxy formation theory. (author)

  7. Random Intercept and Random Slope 2-Level Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2012-11-01

    Random intercept models and random intercept & random slope models carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teacher influence. The variation at level-1 can be controlled by introducing higher levels of hierarchy in the model. The fanning movement of the fitted lines illustrates the variation of student grades around teachers.
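
    In the usual notation (student i within teacher j), the two-level random intercept and random slope model reads as follows; this is the generic textbook form, not necessarily the authors' exact parameterization:

        \[
          y_{ij} = \beta_0 + u_{0j} + (\beta_1 + u_{1j})\,x_{ij} + \varepsilon_{ij},
          \qquad
          (u_{0j}, u_{1j}) \sim N(\mathbf{0}, \Sigma_u),\quad
          \varepsilon_{ij} \sim N(0, \sigma^2),
        \]

    with the random intercept model recovered as the special case u_{1j} = 0.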

  8. Distribution of peak expiratory flow variability by age, gender and smoking habits in a random population sample aged 20-70 yrs

    NARCIS (Netherlands)

    Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B

    1994-01-01

    Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),

  9. Tests of peak flow scaling in simulated self-similar river networks

    Science.gov (United States)

    Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.

    2001-01-01

    The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions. © 2001 Elsevier Science Ltd. All rights reserved.

  10. Modeling drag reduction and meniscus stability of superhydrophobic surfaces comprised of random roughness

    Science.gov (United States)

    Samaha, Mohamed A.; Tafreshi, Hooman Vahedi; Gad-el-Hak, Mohamed

    2011-01-01

    Previous studies dedicated to modeling drag reduction and stability of the air-water interface on superhydrophobic surfaces were conducted for microfabricated coatings produced by placing hydrophobic microposts/microridges arranged on a flat surface in aligned or staggered configurations. In this paper, we model the performance of superhydrophobic surfaces comprised of randomly distributed roughness (e.g., particles or microposts) that resembles natural superhydrophobic surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than microfabrication, making the technology more practical for large submerged bodies such as submarines and ships. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with theoretical and experimental studies reported in the literature. In particular, our simulation results are compared with the work of Sbragaglia and Prosperetti, and good agreement has been observed for gas fractions up to about 0.9. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. This effect peaks at about 30% as the gas fraction increases to 0.98. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. It was found that at a given maximum allowable pressure, surfaces with random post distribution produce less drag reduction than those made up of

  11. A new peak detection algorithm for MALDI mass spectrometry data based on a modified Asymmetric Pseudo-Voigt model.

    Science.gov (United States)

    Wijetunge, Chalini D; Saeed, Isaam; Boughton, Berin A; Roessner, Ute; Halgamuge, Saman K

    2015-01-01

    Mass Spectrometry (MS) is a ubiquitous analytical tool in biological research and is used to measure the mass-to-charge ratio of bio-molecules. Peak detection is the essential first step in MS data analysis. Precise estimation of peak parameters such as peak summit location and peak area is critical to identify underlying bio-molecules and to estimate their abundances accurately. We propose a new method to detect and quantify peaks in mass spectra. It uses dual-tree complex wavelet transformation along with Stein's unbiased risk estimator for spectrum smoothing. Then, a new method, based on the modified Asymmetric Pseudo-Voigt (mAPV) model and hierarchical particle swarm optimization, is used for peak parameter estimation. Using simulated data, we demonstrated the benefit of using the mAPV model over Gaussian, Lorentz and Bi-Gaussian functions for MS peak modelling. The proposed mAPV model achieved the best fitting accuracy for asymmetric peaks, with percentage errors in peak summit location estimation that were 0.17% to 4.46% lower than those of the other models. It also outperformed the other models in peak area estimation, delivering percentage errors about 0.7% lower than its closest competitor - the Bi-Gaussian model. In addition, using data generated from a MALDI-TOF computer model, we showed that the proposed overall algorithm outperformed the existing methods mainly in terms of sensitivity. It achieved a sensitivity of 85%, compared to 77% and 71% for the two benchmark algorithms, the continuous wavelet transformation-based method and Cromwell, respectively. The proposed algorithm is particularly useful for peak detection and parameter estimation in MS data with overlapping peak distributions and asymmetric peaks. The algorithm is implemented using MATLAB and the source code is freely available at http://mapv.sourceforge.net.
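
    For concreteness, a pseudo-Voigt profile with side-dependent widths is one simple way to realize the asymmetry being modelled (a sketch; the published mAPV parameterization is not reproduced here):

        import numpy as np

        def pseudo_voigt(x, x0, w, eta):
            """Weighted sum of a Gaussian and a Lorentzian of common FWHM w;
            eta is the Lorentzian fraction."""
            g = np.exp(-4.0 * np.log(2.0) * ((x - x0) / w) ** 2)
            l = 1.0 / (1.0 + 4.0 * ((x - x0) / w) ** 2)
            return eta * l + (1.0 - eta) * g

        def asymmetric_pseudo_voigt(x, x0, w_left, w_right, eta):
            """Asymmetry via a different width on each side of the summit."""
            w = np.where(x < x0, w_left, w_right)
            return pseudo_voigt(x, x0, w, eta)

        x = np.linspace(-5, 5, 1001)
        y = asymmetric_pseudo_voigt(x, 0.0, 1.0, 2.5, 0.3)  # right-tailed peak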

  12. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    Science.gov (United States)

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection of EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.

  13. Generalization of Random Intercept Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2013-10-01

    The concept of random intercept models in a multilevel model developed by Goldstein (1986) has been extended to k levels. The random variation in intercepts at the individual level is marginally split into components by incorporating higher levels of hierarchy in the single-level model. Thus, one can control the random variation in intercepts by incorporating the higher levels in the model.

  14. Feature selection using angle modulated simulated Kalman filter for peak classification of EEG signals.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Mubin, Marizan; Saad, Ismail

    2016-01-01

    In existing electroencephalogram (EEG) signal peak classification research, the existing models, such as the Dumpala, Acir, Liu, and Dingle peak models, employ different sets of features. However, all these models may not offer good performance across applications, as performance is found to be problem dependent. Therefore, the objective of this study is to combine all the associated features from the existing models and then select the best combination of features. A new optimization algorithm, namely the angle modulated simulated Kalman filter (AMSKF), is employed as the feature selector. The neural network random weights method is also utilized in the proposed AMSKF technique as a classifier. In the conducted experiment, 11,781 peak-candidate samples are employed for validation. The samples are collected from three different peak event-related EEG signals of 30 healthy subjects: (1) single eye blink, (2) double eye blink, and (3) eye movement signals. The experimental results show that the proposed AMSKF feature selector is able to find the best combination of features and performs on par with the existing related studies of epileptic EEG event classification.

  15. 'Peak oil' or 'peak demand'?

    International Nuclear Information System (INIS)

    Chevallier, Bruno; Moncomble, Jean-Eudes; Sigonney, Pierre; Vially, Rolland; Bosseboeuf, Didier; Chateau, Bertrand

    2012-01-01

    This article reports on a workshop which addressed several energy issues: the objectives and constraints of energy mix scenarios, the differences between the approaches of different countries, the cost of the new technologies implemented for these purposes, how these technologies will be developed and marketed, and what the environmental and societal acceptability of these technical choices will be. Several aspects and issues were presented and discussed in more detail: peak oil, the development of shale gases and their cost (will non-conventional hydrocarbons shift the oil peak, and will they be socially accepted?), energy efficiency (its benefits, its reality in France and other countries, its place in the challenge of the energy transition), and strategies in the transport sector (challenges for mobility, evolution towards a model of sustainable mobility)

  16. Effect of Aerobic Training on Peak Oxygen Uptake Among Seniors Aged 70 or Older: A Meta-Analysis of Randomized Controlled Trials.

    Science.gov (United States)

    Bouaziz, Walid; Kanagaratnam, Lukshe; Vogel, Thomas; Schmitt, Elise; Dramé, Moustapha; Kaltenbach, Georges; Geny, Bernard; Lang, Pierre Olivier

    2018-01-02

    Older adults undergo a progressive decline in cardiorespiratory fitness and functional capacity. This lower peak oxygen uptake (VO2peak) level is associated with increased risk of frailty, dependency, loss of autonomy, and mortality from all causes. Regular physical activity, and particularly aerobic training (AT), has been shown to contribute to better and healthy aging. We conducted a meta-analysis to measure the exact benefit of AT on VO2peak in seniors aged 70 years or older. A comprehensive, systematic database search for articles was performed in Embase, Medline, PubMed Central, Science Direct, Scopus, and Web of Science using key words. Two reviewers independently assessed interventional studies for potential inclusion. Ten randomized controlled trials (RCTs) were included, totaling 348 seniors aged 70 years or older. Across the trials, no high risk of bias was measured, and all used open-label control arms. With significant heterogeneity between the RCTs, the pooled effects on VO2peak in healthy and unhealthy seniors were, respectively, 1.72 (95% CI: 0.34-3.10) and 1.47 (95% CI: 0.60-2.34). This meta-analysis confirms the AT-associated benefits on VO2peak in healthy and unhealthy seniors.

  17. The Theory of Random Laser Systems

    International Nuclear Information System (INIS)

    Xunya Jiang

    2002-01-01

    Studies of random laser systems are a new direction with promising potential applications and theoretical interest. The research is based on the theories of localization and laser physics. So far, the research shows that there are random lasing modes inside the systems, which are quite different from those of common laser systems. From the properties of the random lasing modes, one can understand the phenomena observed in the experiments, such as multi-peak and anisotropic spectra, lasing-mode number saturation, mode competition and dynamic processes. To summarize, this dissertation has contributed the following to the study of random laser systems: (1) by comparing the Lamb theory with the Letokhov theory, general formulas for the threshold length or gain of random laser systems were obtained; (2) the vital weakness of previous time-independent methods in random laser research was pointed out; (3) a new model combining the FDTD method and semi-classical laser theory was developed, whose solutions explain the experimental results of multi-peak and anisotropic emission spectra and predict the saturation of the lasing-mode number and the length of localized lasing modes; (4) theoretical (Lamb theory) and numerical (FDTD and transfer-matrix calculation) studies of the origin of localized lasing modes in random laser systems were carried out; and (5) random lasing modes were proposed as a new path to study wave localization in random systems, with a prediction of the lasing threshold discontinuity at the mobility edge

  18. Statistics of Microstructure, Peak Stress and Interface Damage in Fiber Reinforced Composites

    DEFF Research Database (Denmark)

    Kushch, Volodymyr I.; Shmegera, Sergii V.; Mishnaevsky, Leon

    2009-01-01

    This paper addresses an effect of the fiber arrangement and interactions on the peak interface stress statistics in a fiber reinforced composite material (FRC). The method we apply combines the multipole expansion technique with the representative unit cell model of composite bulk, which is able to simulate both the uniform and clustered random fiber arrangements. By averaging over a number of numerical tests, the empirical probability functions have been obtained for the nearest neighbor distance and the peak interface stress. It is shown that the considered statistical parameters are rather sensitive to the fiber arrangement, particularly cluster formation. An explicit correspondence between them has been established and an analytical formula linking the microstructure and peak stress statistics in FRCs has been suggested. Application of the statistical theory of extreme values to the local...

  19. PolyaPeak: Detecting Transcription Factor Binding Sites from ChIP-seq Using Peak Shape Information

    Science.gov (United States)

    Wu, Hao; Ji, Hongkai

    2014-01-01

    ChIP-seq is a powerful technology for detecting genomic regions where a protein of interest interacts with DNA. ChIP-seq data for mapping transcription factor binding sites (TFBSs) have a characteristic pattern: around each binding site, sequence reads aligned to the forward and reverse strands of the reference genome form two separate peaks shifted away from each other, and the true binding site is located in between these two peaks. While it has been shown previously that the accuracy and resolution of binding site detection can be improved by modeling the pattern, efficient methods are unavailable to fully utilize that information in the TFBS detection procedure. We present PolyaPeak, a new method to improve TFBS detection by incorporating the peak shape information. PolyaPeak describes peak shapes using a flexible Pólya model. The shapes are automatically learnt from the data using the Minorization-Maximization (MM) algorithm, then integrated with the read count information via a hierarchical model to distinguish true binding sites from background noise. Extensive real data analyses show that PolyaPeak is capable of robustly improving TFBS detection compared with existing methods. An R package is freely available. PMID:24608116

  20. Multi-model comparison of CO2 emissions peaking in China: Lessons from CEMF01 study

    Directory of Open Access Journals (Sweden)

    Oleg Lugovoy

    2018-03-01

    The paper summarizes results of the China Energy Modeling Forum's (CEMF) first study. Carbon emissions peaking scenarios, consistent with China's Paris commitment, have been simulated with seven national and industry-level energy models and compared. The CO2 emission trends in the considered scenarios peak from 2015 to 2030 at the level of 9–11 Gt. Sector-level analysis suggests that total emissions pathways before 2030 will be determined mainly by the dynamics of emissions in the electric power industry and the transportation sector. Both sectors will experience a significant increase in demand but have low-carbon alternative options for development. Based on a side-by-side comparison of modeling inputs and results, conclusions have been drawn regarding the sources of differences in emissions projections, which include data, views on economic perspectives, and model structure and theoretical framework. Some suggestions have been made regarding energy model development priorities for further research. Keywords: Carbon emissions projections, Climate change, CO2 emissions peak, China's Paris commitment, Top-Down energy models, Bottom-Up energy models, Multi model comparative study, China Energy Modeling Forum (CEMF)

  1. An adaptive model for vanadium redox flow battery and its application for online peak power estimation

    Science.gov (United States)

    Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria

    2017-03-01

    An accurate battery model is a prerequisite for reliable state estimation of a vanadium redox battery (VRB). As the battery model parameters are time-varying with operating condition variation and battery aging, common methods in which model parameters are empirical or prescribed offline lack accuracy and robustness. To address this issue, this paper proposes to use an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both recursive least squares (RLS) and the extended Kalman filter (EKF). A performance comparison shows that RLS is superior with respect to modeling accuracy, convergence properties, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator which incorporates the constraints of the voltage limit, SOC limit and design limit of the current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of the prediction time horizon selection on the peak power is also analyzed.
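
    A minimal sketch of the RLS identification loop at the heart of such an adaptive model (the first-order regression below is a stand-in for a VRB equivalent-circuit model; coefficients and the forgetting factor are illustrative):

        import numpy as np

        def rls_step(theta, P, phi, y, lam=0.99):
            """One recursive least squares update with forgetting factor lam."""
            e = y - phi @ theta                    # prediction error
            k = P @ phi / (lam + phi @ P @ phi)    # gain vector
            theta = theta + k * e
            P = (P - np.outer(k, phi @ P)) / lam
            return theta, P

        rng = np.random.default_rng(0)
        a_true, b_true = 0.95, 0.4                # y_t = a*y_{t-1} + b*u_t
        theta, P, y_prev = np.zeros(2), 100.0 * np.eye(2), 0.0
        for _ in range(500):
            u = rng.normal()
            y = a_true * y_prev + b_true * u + 0.01 * rng.normal()
            theta, P = rls_step(theta, P, np.array([y_prev, u]), y)
            y_prev = y
        print(theta)   # converges towards [0.95, 0.4]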

  2. Multiscale peak detection in wavelet space.

    Science.gov (United States)

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-07

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, as it can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD), which takes full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It can achieve a high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset, as well as the Romanian database of Raman spectra, and is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than the MassSpecWavelet and MALDIquant methods. Superior results on Raman spectra suggest that MSPD is a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython, and is available as an open source package.
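
    SciPy ships a generic version of the CWT ridge-line idea that MSPD builds on; a minimal usage sketch on a synthetic spectrum (this is SciPy's detector, not the MSPD package itself, which additionally thresholds each peak with the maximum of its ridge):

        import numpy as np
        from scipy.signal import find_peaks_cwt

        x = np.linspace(0, 10, 2000)
        spectrum = (np.exp(-((x - 3.0) ** 2) / 0.01)
                    + 0.6 * np.exp(-((x - 3.3) ** 2) / 0.005)   # overlapping peak
                    + 0.05 * np.random.randn(x.size))           # random noise
        peak_idx = find_peaks_cwt(spectrum, widths=np.arange(1, 40))
        print(x[peak_idx])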

  3. Modeling Ontario regional electricity system demand using a mixed fixed and random coefficients approach

    Energy Technology Data Exchange (ETDEWEB)

    Hsiao, C.; Mountain, D.C.; Chan, M.W.L.; Tsui, K.Y. (University of Southern California, Los Angeles (USA); McMaster Univ., Hamilton, ON (Canada); Chinese Univ. of Hong Kong, Shatin)

    1989-12-01

    In examining the municipal peak and kilowatt-hour demand for electricity in Ontario, the issue of homogeneity across geographic regions is explored. A common model across municipalities and geographic regions cannot be supported by the data. Various procedures are considered which deal with this heterogeneity and yet reduce the multicollinearity problems associated with region-specific demand formulations. The recommended model controls for regional differences by assuming that the coefficients of regional-seasonal specific factors are fixed and different, while the coefficients of economic and weather variables are random draws from a common population for any one municipality; information on all municipalities is combined through a Bayes procedure. 8 tabs., 41 refs.
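
    A modern analogue of such a mixed fixed-and-random coefficients specification can be sketched with statsmodels; the variable names and data below (demand, income, temperature, region) are hypothetical placeholders, not the study's data set.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n, n_regions = 400, 8
    df = pd.DataFrame({
        "region": rng.integers(0, n_regions, n),
        "income": rng.normal(10, 1, n),
        "temperature": rng.normal(0, 1, n),
    })
    # Region-specific income slopes drawn from a common population
    slope = 0.5 + 0.1 * rng.standard_normal(n_regions)
    df["log_demand"] = (2.0 + slope[df["region"]] * df["income"]
                        + 0.3 * df["temperature"] + rng.normal(0, 0.1, n))

    # Fixed effect for temperature; random intercept and income slope by region
    model = smf.mixedlm("log_demand ~ income + temperature", df,
                        groups=df["region"], re_formula="~income")
    print(model.fit().summary())
    ```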

  4. Modeling for the management of peak loads on a radiology image management network

    International Nuclear Information System (INIS)

    Dwyer, S.J.; Cox, G.G.; Templeton, A.W.; Cook, L.T.; Anderson, W.H.; Hensley, K.S.

    1987-01-01

    The design of a radiology image management network for a radiology department can now be assisted by a queueing model. The queueing model requires that the designers specify the following parameters: the number of tasks to be accomplished (acquisition of image data, transmission of data, archiving of data, display and manipulation of data, and generation of hard copies); the average times to complete each task; the patient scheduled arrival times; and the number/type of computer nodes interfaced to the network (acquisition nodes, interactive diagnostic display stations, archiving nodes, hard copy nodes, and gateways to hospital systems). The outcomes from the queueing model include mean and peak throughput data rates and the bottlenecks identified at each. This exhibit presents the queueing model and illustrates its use in managing peak loads on an image management network
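
    To make the flavor of such a model concrete, here is a minimal sketch (not the authors' model) that treats each node as an M/M/1 queue and flags as the bottleneck the node with the highest utilization; all rates are made up.

    ```python
    # Hypothetical service rates (studies/min) for network nodes
    nodes = {"acquisition": 12.0, "archive": 9.0, "display": 15.0, "hardcopy": 6.0}
    arrival_rate = 5.0  # studies per minute offered to each queue

    for name, mu in nodes.items():
        rho = arrival_rate / mu                              # utilization
        # M/M/1 mean time in system = 1/(mu - lambda), valid for rho < 1
        wait = 1.0 / (mu - arrival_rate) if rho < 1 else float("inf")
        print(f"{name:12s} utilization={rho:.2f}  time in system={wait:.2f} min")

    print("bottleneck:", max(nodes, key=lambda n: arrival_rate / nodes[n]))
    ```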

  5. Hydroclimatology of Dual Peak Cholera Incidence in Bengal Region: Inferences from a Spatial Explicit Model

    Science.gov (United States)

    Bertuzzo, E.; Mari, L.; Righetto, L.; Casagrandi, R.; Gatto, M.; Rodriguez-Iturbe, I.; Rinaldo, A.

    2010-12-01

    The seasonality of cholera and its relation with environmental drivers are receiving increasing interest and research effort, yet they remain unsatisfactorily understood. A striking example is the observed annual cycle of cholera incidence in the Bengal region, which exhibits two peaks even though the main environmental drivers that have been linked to the disease (air and sea surface temperature, zooplankton density, river discharge) follow a synchronous single-peak annual pattern. A first outbreak, mainly affecting the coastal regions, occurs in spring; it is followed, after a period of low incidence during summer, by a second, usually larger, peak in autumn that also involves regions situated farther inland. A hydroclimatological explanation for this unique seasonal cycle has recently been proposed: the low spring river flows favor the intrusion of brackish water (the natural environment of the causative agent of the disease), which, in turn, triggers the first outbreak. The rising summer river discharges have a temporary dilution effect and flush out contaminated water, which lowers the disease incidence. However, the monsoon flooding, together with the induced crowding of the population and the failure of sanitation systems, can facilitate the spatial transmission of the disease and promote the autumn outbreak. We test this hypothesis using a mechanistic, spatially explicit model of cholera epidemics. The framework directly accounts for the role of the river network in transporting and redistributing cholera bacteria among human communities, as well as for the annual fluctuation of the river flow. The model is forced with the actual environmental drivers of the region, namely river flow and temperature. Our results show that these two drivers, both having a single peak in the summer, can generate a double-peak cholera incidence pattern. Besides temporal patterns, the model is also able to qualitatively reproduce spatial patterns characterized by the inland progression of the autumn outbreak.

  6. Diffraction peaks in x-ray spectroscopy: Friend or foe?

    International Nuclear Information System (INIS)

    Tissot, R.G.; Goehner, R.P.

    1992-01-01

    Diffraction peaks can occur as unidentifiable peaks in the energy spectrum of an x-ray spectrometric analysis. Recently, there has been increased interest in oriented polycrystalline films and epitaxial films on single crystal substrates for electronic applications. Since these materials diffract x-rays more efficiently than randomly oriented polycrystalline materials, diffraction peaks are being observed more frequently in x-ray fluorescent spectra. In addition, micro x-ray spectrometric analysis utilizes a small, intense, collimated x-ray beam that can yield well defined diffraction peaks. In some cases these diffraction peaks can occur at the same position as elemental peaks. These diffraction peaks, although a possible problem in qualitative and quantitative elemental analysis, can give very useful information about the crystallographic structure and orientation of the material being analyzed. The observed diffraction peaks are dependent on the geometry of the x-ray spectrometer, the degree of collimation and the distribution of wavelengths (energies) originating from the x-ray tube and striking the sample
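
    The energies at which diffraction peaks land follow directly from the Bragg condition; a short sketch (illustrative geometry, not the authors' instrument) shows where they can masquerade as elemental lines:

    ```python
    import math

    # Bragg: n * lambda = 2 * d * sin(theta); with lambda(A) = 12.398 / E(keV),
    # a diffraction peak appears at E = 6.199 * n / (d * sin(theta)).
    def diffraction_energies_keV(d_angstrom, two_theta_deg, orders=(1, 2, 3)):
        theta = math.radians(two_theta_deg / 2)
        return [6.199 * n / (d_angstrom * math.sin(theta)) for n in orders]

    # Hypothetical case: Si(111) planes, d = 3.136 A, spectrometer angle 30 deg
    for n, E in zip((1, 2, 3), diffraction_energies_keV(3.136, 30.0)):
        print(f"order {n}: {E:.2f} keV")
    ```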

  7. Local properties of the large-scale peaks of the CMB temperature

    Energy Technology Data Exchange (ETDEWEB)

    Marcos-Caballero, A.; Martínez-González, E.; Vielva, P., E-mail: marcos@ifca.unican.es, E-mail: martinez@ifca.unican.es, E-mail: vielva@ifca.unican.es [Instituto de Física de Cantabria, CSIC-Universidad de Cantabria, Avda. de los Castros s/n, 39005 Santander (Spain)

    2017-05-01

    In the present work, we study the largest structures of the CMB temperature measured by Planck in terms of the most prominent peaks on the sky, which, in particular, are located in the southern galactic hemisphere. Besides these large-scale features, the well-known Cold Spot anomaly is included in the analysis. All these peaks would contribute significantly to some of the CMB large-scale anomalies, such as the parity and hemispherical asymmetries, the dipole modulation, the alignment between the quadrupole and the octopole, or, in the case of the Cold Spot, the non-Gaussianity of the field. The analysis of the peaks is performed by using their multipolar profiles, which characterize the local shape of the peaks in terms of the discrete Fourier transform of the azimuthal angle. In order to quantify the local anisotropy of the peaks, the distribution of the phases of the multipolar profiles is studied by using the Rayleigh random walk methodology. Finally, a direct analysis of the 2-dimensional field around the peaks is performed in order to take into account the effect of the galactic mask. The analysis concludes that, once the peak amplitude and its first- and second-order derivatives at the centre are conditioned upon, the rest of the field is compatible with the standard model. In particular, it is observed that the Cold Spot anomaly is caused by the large value of curvature at the centre.
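
    The Rayleigh random walk methodology mentioned here can be sketched compactly: sum unit vectors with the measured phases and compare the resultant length against the uniform-phase expectation. The snippet below is a generic illustration, not the paper's pipeline.

    ```python
    import numpy as np

    def rayleigh_test(phases):
        """Resultant length statistic for a walk of N unit steps.

        Under uniform phases, z = R**2 / N is approximately exponential,
        so p = exp(-z) for moderately large N."""
        phases = np.asarray(phases)
        z = abs(np.exp(1j * phases).sum()) ** 2 / len(phases)
        return z, np.exp(-z)

    rng = np.random.default_rng(3)
    print(rayleigh_test(rng.uniform(0, 2 * np.pi, 100)))  # isotropic: large p
    print(rayleigh_test(rng.normal(0.0, 0.3, 100)))       # clustered: tiny p
    ```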

  8. Modelling and short-term forecasting of daily peak power demand in Victoria using two-dimensional wavelet based SDP models

    International Nuclear Information System (INIS)

    Truong, Nguyen-Vu; Wang, Liuping; Wong, Peter K.C.

    2008-01-01

    Power demand forecasting is of vital importance to the management and planning of power system operations, which include generation, transmission, and distribution, as well as security analysis and economic pricing processes. This paper concerns the modeling and short-term forecasting of daily peak power demand in the state of Victoria, Australia. In this study, a two-dimensional wavelet based state dependent parameter (SDP) modelling approach is used to produce a compact mathematical model for this complex nonlinear dynamic system. In this approach, a nonlinear system is expressed by a set of linear regressive input and output terms (state variables) multiplied by the respective state dependent parameters that carry the nonlinearities in the form of 2-D wavelet series expansions. The model is identified from historical data and descriptively represents the relationship and interaction between the various components which affect the peak power demand of a certain day. The identified model has been used to forecast daily peak power demand in Victoria from 9 August 2007 to 24 August 2007. A MAPE (mean absolute prediction error) of 1.9% clearly implies the effectiveness of the identified model. (author)
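
    For reference, the reported error metric is a one-liner; a minimal sketch with made-up numbers:

    ```python
    import numpy as np

    def mape(actual, forecast):
        """Mean absolute prediction (percentage) error, in percent."""
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        return 100.0 * np.mean(np.abs((actual - forecast) / actual))

    # Hypothetical daily peak demands (MW) vs. forecasts
    print(mape([8200, 8450, 8100], [8350, 8400, 8000]))  # about 1.2
    ```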

  9. How would peak rainfall intensity affect runoff predictions using conceptual water balance models?

    Directory of Open Access Journals (Sweden)

    B. Yu

    2015-06-01

    Most hydrological models use continuous daily precipitation and potential evapotranspiration for streamflow estimation. With the projected increase in mean surface temperature, hydrological processes are set to intensify irrespective of the underlying changes to mean precipitation. The effect of an increase in rainfall intensity on the long-term water balance is, however, not adequately accounted for in the commonly used hydrological models. This study follows from a previous comparative analysis of a non-stationary daily series of streamflow of a forested watershed (River Rimbaud in the French Alps; area = 1.478 km2; 1966–2006). Non-stationarity in the recorded streamflow occurred as a result of a severe wildfire in 1990. Two daily models (AWBM and SimHyd) were initially calibrated for each of three distinct phases in relation to the well-documented land disturbance. At the daily and monthly time scales, both models performed satisfactorily, with the Nash–Sutcliffe coefficient of efficiency (NSE) varying from 0.77 to 0.92. When aggregated to the annual time scale, both models underestimated the flow by about 22%, with a reduced NSE of about 0.71. Exploratory data analysis was undertaken to relate daily peak hourly rainfall intensity to the discrepancy between the observed and modelled daily runoff amounts. Preliminary results show that the effect of peak hourly rainfall intensity on runoff prediction is insignificant, and model performance is unlikely to improve when peak daily precipitation is included. Trend analysis indicated that the large decrease of precipitation on days when the daily amount exceeded 10–20 mm may have contributed greatly to the decrease in streamflow of this forested watershed.
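
    The Nash–Sutcliffe efficiency used to judge the two models is one line of algebra; a sketch with toy flows (not the Rimbaud data):

    ```python
    import numpy as np

    def nse(observed, modelled):
        """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than
        predicting the mean of the observations."""
        observed, modelled = np.asarray(observed), np.asarray(modelled)
        return 1.0 - (((observed - modelled) ** 2).sum()
                      / ((observed - observed.mean()) ** 2).sum())

    obs = np.array([1.0, 3.2, 8.5, 4.1, 2.0, 1.2])   # hypothetical daily flows
    sim = np.array([1.1, 2.9, 7.8, 4.6, 2.3, 1.0])
    print(round(nse(obs, sim), 3))
    ```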

  10. Individual vision and peak distribution in collective actions

    Science.gov (United States)

    Lu, Peng

    2017-06-01

    People make decisions on whether they should participate as participants or stay out as free riders in collective actions, with heterogeneous visions. Besides utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. the revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, while the relative variance of peaks is stable. Under normal distributions of vision heterogeneity and other factors, the peaks of participants are normally distributed as well. Therefore, it is necessary to predict the distribution traits of peaks based on the distribution traits of related factors such as vision heterogeneity. We predict the distribution of peaks with parameters of both mean and standard deviation, which provides confidence intervals and robust predictions of peaks. Besides, we validate the peak model via the Yuyuan Incident, a real case in China (2014); the model works well in explaining the dynamics and predicting the peak of the real case.

  11. Peaks, plateaus, canyons, and craters: The complex geometry of simple mid-domain effect models

    DEFF Research Database (Denmark)

    Colwell, Robert K.; Gotelli, Nicholas J.; Rahbek, Carsten

    2009-01-01

    We used a spreading dye algorithm to place assemblages of species of uniform range size in one-dimensional or two-dimensional bounded domains. In some models, we allowed dispersal to introduce range discontinuity. Results: as uniform range size increases from small to medium, a flat pattern of species richness is replaced by a pair of peripheral peaks, separated by a valley (one-dimensional models), or by a cratered ring (two-dimensional models) of species richness. Mixtures of ranges of different uniform sizes generate more complex patterns, including peaks, plateaus, canyons, and craters of species richness.
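
    A one-dimensional spreading dye model is easy to sketch: each range is seeded at a random cell and grown to a fixed size within the domain bounds, and per-cell richness is tallied. This is a generic illustration of the algorithm class, with made-up parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def spreading_dye_1d(domain=100, n_species=500, range_size=30):
        """Grow each species' range from a random seed cell, one cell at a
        time in a random direction, until it spans range_size cells."""
        richness = np.zeros(domain, dtype=int)
        for _ in range(n_species):
            lo = hi = rng.integers(domain)       # seed cell
            while hi - lo + 1 < range_size:
                if rng.random() < 0.5 and hi < domain - 1:
                    hi += 1
                elif lo > 0:
                    lo -= 1
            richness[lo:hi + 1] += 1
        return richness

    r = spreading_dye_1d()
    print("edges:", r[0], r[-1], "| mid-domain:", r[50])  # peak in the middle
    ```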

  12. Average current is better than peak current as therapeutic dosage for biphasic waveforms in a ventricular fibrillation pig model of cardiac arrest.

    Science.gov (United States)

    Chen, Bihua; Yu, Tao; Ristagno, Giuseppe; Quan, Weilun; Li, Yongqin

    2014-10-01

    Defibrillation current has been shown to be a clinically more relevant dosing unit than energy. However, the roles of average and peak current in determining shock outcome are still undetermined. The aim of this study was to investigate the relationship between average current, peak current, and defibrillation success when different biphasic waveforms were employed. Ventricular fibrillation (VF) was electrically induced in 22 domestic male pigs. Animals were then randomized to receive defibrillation using one of two different biphasic waveforms. A grouped up-and-down defibrillation threshold-testing protocol was used to maintain the average success rate in the neighborhood of 50%. In 14 animals (Study A), defibrillations were accomplished with either biphasic truncated exponential (BTE) or rectilinear biphasic waveforms. In eight animals (Study B), shocks were delivered using two BTE waveforms that had identical peak current but different waveform durations. Both average and peak currents were associated with defibrillation success when the BTE and rectilinear waveforms were investigated. However, when pathway impedance was less than 90 Ω for the BTE waveform, the bivariate correlation coefficient was 0.36 (p=0.001) for the average current, but only 0.21 (p=0.06) for the peak current in Study A. In Study B, a higher defibrillation success rate (67.9% vs. 38.8%) was obtained with the waveform that delivered the higher average current (14.9±2.1 A vs. 13.5±1.7 A), the peak current being unchanged. In this porcine model of VF, average current was better than peak current as a parameter to describe the therapeutic dosage when biphasic defibrillation waveforms were used. The institutional protocol number: P0805. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  13. Peak regulation right

    International Nuclear Information System (INIS)

    Gao, Z. |; Ren, Z.; Li, Z.; Zhu, R.

    2005-01-01

    A peak regulation right concept and corresponding transaction mechanism for an electricity market was presented. The market was based on a power pool and independent system operator (ISO) model. Peak regulation right (PRR) was defined as a downward regulation capacity purchase option which allowed PRR owners to buy certain quantities of peak regulation capacity (PRC) at a specific price during a specified period from suppliers. The PRR owner also had the right to decide whether or not they would buy PRC from suppliers. It was the power pool's responsibility to provide competitive and fair peak regulation trading markets to participants. The introduction of PRR allowed for unit capacity regulation. The PRR and PRC were rated by the supplier, and transactions proceeded through a bidding process. PRR suppliers obtained profits by selling PRR and PRC, and obtained downward regulation fees regardless of whether purchases are made. It was concluded that the peak regulation mechanism reduced the total cost of the generating system and increased the social surplus. 6 refs., 1 tab., 3 figs

  14. Statistical analysis of the low-temperature dislocation peak of internal friction (Bordoni peak) in nanostructured copper

    International Nuclear Information System (INIS)

    Vatazhuk, E.N.; Natsik, V.D.

    2011-01-01

    The temperature-frequency dependence of internal friction in nanostructured samples of Cu and the fibred composite Cu–32 vol.% Nb, with structure fragment sizes of approx. 200 nm, is analyzed; earlier experiments serve as the input for the analysis. The Bordoni peak characteristic of heavily deformed copper, located near 90 K, was recorded in the temperature dependence of the vibration decrement (frequencies 73-350 kHz) in previous experiments. The peak is due to the resonance interaction of sound with a system of thermally activated relaxators, and its width is considerably greater than the width of a standard internal friction peak with a single relaxation time. A statistical analysis of the peak is made under the assumption that the broadening is caused by a random dispersion of relaxator activation energies resulting from the intense distortion of the copper crystal structure. Good agreement was established between the experimental data and the Seeger theory, which treats thermally activated paired kinks on linear dislocation segments, lying in the valleys of the Peierls potential relief, as the relaxators of the Bordoni peak. It is shown that the peak height registered in experiment corresponds to the presence, on average, of one dislocation segment in the interior of a crystalline grain of size 200 nm. Empirical estimates of the critical Peierls stress σp ∼ 2×10⁷ Pa and the integrated density of interior grain dislocations ρd ∼ 10¹³ m⁻² are made. Nb fibers in the Cu–Nb composite facilitate the formation of nanostructured copper but have no evident influence on the Bordoni peak.

  15. Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling

    Science.gov (United States)

    Xing, Yafei; Macq, Benoit

    2017-11-01

    With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most use of prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was retained for representing the testing input from the remaining 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying 1000 realizations of Poisson noise to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.
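
    The core of such a PCA-plus-linear-model pipeline can be sketched with scikit-learn; the profile shapes and shift targets below are synthetic stand-ins, not the paper's simulated camera data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)
    n_train, n_test, n_bins = 500, 3615, 120

    def profiles(n):
        """Toy 'prompt gamma profiles': a noisy sigmoid edge whose position
        encodes the Bragg peak shift (mm)."""
        shifts = rng.uniform(-10, 10, n)
        grid = np.arange(n_bins)
        X = 1 / (1 + np.exp((grid - 60 - shifts[:, None]) / 3.0))
        return X + rng.normal(0, 0.02, X.shape), shifts

    X_train, y_train = profiles(n_train)
    X_test, y_test = profiles(n_test)

    # Keep components explaining 99.95% of training variance, then regress
    model = make_pipeline(PCA(n_components=0.9995), LinearRegression())
    model.fit(X_train, y_train)
    err = model.predict(X_test) - y_test
    print(f"mean abs shift error: {np.abs(err).mean():.2f} mm")
    ```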

  16. A fluid dynamical flow model for the central peak in the rotation curve of disk galaxies

    International Nuclear Information System (INIS)

    Bhattacharyya, T.; Basu, B.

    1980-01-01

    The rotation curve of the central region in some disk galaxies shows a linear rise, terminating at a peak (the primary peak) which is then followed by a deep minimum. The curve then rises again to another peak roughly half-way across the galactic radius. This latter peak is considered the peak of the rotation curve in all large-scale analyses of galactic structure; the primary peak is usually ignored for this purpose. In this work an attempt has been made to interpret the primary peak as the manifestation of the post-explosion flow pattern of gas in the deep central region of galaxies. Solving the hydrodynamical equations of motion, a flow model has been derived which imitates very closely the observed linear rise of the rotational velocity, followed by the falling branch of the curve to the minimum. The theoretical flow model has been compared with observed results for nine galaxies. The agreement obtained is extremely encouraging. The distance of the primary peak from the galactic centre has been shown to be correlated with the angular velocity in the linear part of the rotation curve. Here also, agreement is very good between theoretical and observed results. It is concluded that the distance of the primary peak from the centre not only speaks of the time that has elapsed since the explosion occurred in the nucleus, it also speaks of the potential capability of the nucleus for repeated explosions through some efficient process of mass replenishment at the core. (orig.)

  17. Similarities in the dynamical behavior across the classical peak effect and the second magnetization peak in single crystals of 2H-NbSe2

    International Nuclear Information System (INIS)

    Thakur, A.D.; Ramakrishnan, S.; Grover, A.K.; Chandrasekhar Rao, T.V.; Uji, S.; Terashima, T.; Higgins, M.J.

    2005-01-01

    The classical peak effect (CPE) and the second magnetization peak (SMP) are two distinct anomalies in the critical current of superconductors. A nascent pinned single crystal sample of 2H-NbSe2 (Tc(0) ∼ 7.2 K) shows only the sharp CPE. In a moderately pinned sample (Tc(0) ∼ 6 K), the sharp CPE broadens with the addition of characteristic structure (stepwise amorphization) between the onset and the peak positions of the CPE. Also, another anomalous peak akin to the SMP emerges prior to the CPE. We have looked at samples of 2H-NbSe2 with intermediate levels of quenched random pinning (Tc(0) ∼ 7.1 K) and successfully explored the two peaks down to 50 mK. (author)

  18. Conditional Monte Carlo randomization tests for regression models.

    Science.gov (United States)

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
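
    A design-based Monte Carlo randomization test can be sketched in a few lines: recompute the test statistic under freshly generated randomization sequences from the same design, and compare with the observed value. The snippet uses complete randomization and a difference of means as a generic stand-in for the model-based statistics discussed.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical trial: outcome y, binary treatment t, complete randomization
    n = 50
    t = rng.permutation(np.r_[np.ones(n), np.zeros(n)])
    y = 0.4 * t + rng.normal(0, 1, 2 * n)            # true effect 0.4

    def stat(y, t):
        return y[t == 1].mean() - y[t == 0].mean()

    obs = stat(y, t)
    B = 10000
    # Monte Carlo re-randomization under the same assignment design
    null = np.array([stat(y, rng.permutation(t)) for _ in range(B)])
    p = (np.sum(np.abs(null) >= abs(obs)) + 1) / (B + 1)
    print(f"observed diff = {obs:.3f}, randomization p = {p:.4f}")
    ```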

  19. Infinite Random Graphs as Statistical Mechanical Models

    DEFF Research Database (Denmark)

    Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

    2011-01-01

    We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...

  20. Redshift space correlations and scale-dependent stochastic biasing of density peaks

    Science.gov (United States)

    Desjacques, Vincent; Sheth, Ravi K.

    2010-01-01

    We calculate the redshift space correlation function and the power spectrum of density peaks of a Gaussian random field. Our derivation, which is valid on linear scales k ≲ 0.1 h Mpc⁻¹, is based on the peak biasing relation given by Desjacques [Phys. Rev. D 78, 103503 (2008)]. In linear theory, the redshift space power spectrum is Ppkˢ(k,μ) = exp(−f²σvel²k²μ²) [bpk(k) + bvel(k)fμ²]² Pδ(k), where μ is the cosine of the angle with respect to the line of sight, σvel is the one-dimensional velocity dispersion, f is the growth rate, and bpk(k) and bvel(k) are k-dependent linear spatial and velocity bias factors. For peaks, the value of σvel depends upon the functional form of bvel. When the k dependence is absent from the square brackets and bvel is set to unity, the resulting expression is assumed to describe models where the bias is linear and deterministic, but the velocities are unbiased. The peak model is remarkable because it has unbiased velocities in this same sense—peak motions are driven by dark matter flows—but, in order to achieve this, bvel must be k dependent. We speculate that this is true in general: k dependence of the spatial bias will lead to k dependence of bvel even if the biased tracers flow with the dark matter. Because of the k dependence of the linear bias parameters, standard manipulations applied to the peak model will lead to k-dependent estimates of the growth factor that could erroneously be interpreted as a signature of modified dark energy or gravity. We use the Fisher formalism to show that the constraint on the growth rate f is degraded by a factor of 2 if one allows for a k-dependent velocity bias of the peak type. Our analysis also demonstrates that the Gaussian smoothing term is part and parcel of linear theory. We discuss a simple estimate of nonlinear evolution and illustrate the effect of the peak bias on the redshift space multipoles. For k ≲ 0.1 h Mpc⁻¹, the peak bias is deterministic but k dependent.

  1. RMBNToolbox: random models for biochemical networks

    Directory of Open Access Journals (Sweden)

    Niemi Jari

    2007-05-01

    Abstract Background There is increasing interest in modelling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information on real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of the Systems Biology Markup Language. Conclusion While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.

  2. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    Science.gov (United States)

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  3. Modeling the peak of emergence in systems: Design and katachi.

    Science.gov (United States)

    Cardier, Beth; Goranson, H T; Casas, Niccolo; Lundberg, Patric; Erioli, Alessio; Takaki, Ryuji; Nagy, Dénes; Ciavarra, Richard; Sanford, Larry D

    2017-12-01

    It is difficult to model emergence in biological systems using reductionist paradigms. A requirement for computational modeling is that individual entities can be recorded parametrically and related logically, but their transformation into whole systems cannot be captured this way. The problem stems from an inability to formally represent the implicit influences that inform emergent organization, such as context, shifts in causal agency or scale, and self-reference. This lack hampers biological systems modeling and its computational counterpart, indicating a need for new fundamental abstraction frameworks that support system-level characteristics. We develop an approach that formally captures these characteristics, focusing on the way they come together to enable transformation at the 'peak' of the emergent process. An example from virology is presented, in which two seemingly antagonistic systems - the herpes cold sore virus and its host - are capable of altering their basic biological objectives to achieve a new equilibrium. The usual barriers to modeling this process are overcome by incorporating mechanisms from practices centered on its emergent peak: design and katachi. In the Japanese science of form, katachi refers to the emergence of intrinsic structure from real situations, where an optimal balance between implicit influences is achieved. Design indicates how such optimization is guided by principles of flow. These practices leverage qualities of situated abstraction, which we understand through the intuitive method of physicist Kôdi Husimi. Early results indicate that this approach can capture the functional transformations of biological emergence, whilst being reasonably computable. Due to its geometric foundations and narrative-based extension to logic, the method will also generate speculative predictions. This research forms the foundations of a new biomedical modeling platform, which is discussed. Copyright © 2017. Published by Elsevier Ltd.

  4. Gigahertz-peaked Spectra Pulsars and Thermal Absorption Model

    Energy Technology Data Exchange (ETDEWEB)

    Kijak, J.; Basu, R.; Lewandowski, W.; Rożko, K. [Janusz Gil Institute of Astronomy, University of Zielona Góra, ul. Z. Szafrana 2, PL-65-516 Zielona Góra (Poland); Dembska, M., E-mail: jkijak@astro.ia.uz.zgora.pl [DLR Institute of Space Systems, Robert-Hooke-Str. 7 D-28359 Bremen (Germany)

    2017-05-10

    We present the results of our radio interferometric observations of pulsars at 325 and 610 MHz using the Giant Metrewave Radio Telescope. We used the imaging method to estimate the flux densities of several pulsars at these radio frequencies. The analysis of the shapes of the pulsar spectra allowed us to identify five new gigahertz-peaked spectra (GPS) pulsars. Using the hypothesis that the spectral turnovers are caused by thermal free–free absorption in the interstellar medium, we modeled the spectra of all known objects of this kind. Using the model, we were able to put some observational constraints on the physical parameters of the absorbing matter, which allows us to distinguish between the possible sources of absorption. We also discuss the possible effects of the existence of GPS pulsars on future search surveys, showing that the optimal frequency range for finding such objects would be from a few GHz (for regular GPS sources) to possibly 10 GHz for pulsars and radio magnetars exhibiting very strong absorption.
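
    The thermal free-free absorption hypothesis is commonly fitted with a spectrum of the form S(ν) = A ν^α exp(−τ ν^−2.1); the sketch below fits this form to made-up flux densities (the −2.1 exponent is the standard free-free opacity scaling; the data are not from the paper).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def absorbed_power_law(nu_GHz, A, alpha, tau):
        """Power law with external thermal free-free absorption."""
        return A * nu_GHz ** alpha * np.exp(-tau * nu_GHz ** -2.1)

    # Hypothetical flux densities (GHz, mJy) showing a spectral turnover
    nu = np.array([0.325, 0.61, 1.4, 2.7, 4.9])
    S = np.array([8.0, 20.0, 18.0, 9.0, 4.0])

    (A, alpha, tau), _ = curve_fit(absorbed_power_law, nu, S, p0=[20.0, -1.5, 0.3])
    nu_peak = (2.1 * tau / -alpha) ** (1 / 2.1)  # where d(log S)/d(log nu) = 0
    print(f"alpha = {alpha:.2f}, tau = {tau:.2f}, turnover near {nu_peak:.2f} GHz")
    ```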

  5. How does economic theory explain the Hubbert peak oil model?

    International Nuclear Information System (INIS)

    Reynes, F.; Okullo, S.; Hofkes, M.

    2010-01-01

    The aim of this paper is to provide an economic foundation for bell-shaped oil extraction trajectories, consistent with Hubbert's peak oil model. There are several reasons why it is important to get insight into the economic foundations of peak oil. As production decisions are expected to depend on economic factors, a better comprehension of the economic foundations of oil extraction behaviour is fundamental to predicting production and price over the coming years. The investigation made in this paper helps us to better understand the different mechanisms that may be at work in the case of OPEC and non-OPEC producers. We show that profitability is the main driver behind production plans. Changes in profitability due to divergent trajectories between costs and the oil price may give rise to a Hubbert production curve. For this result we do not need to introduce a demand or an exploration effect, as is generally assumed in the literature.
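
    For reference, Hubbert's bell-shaped trajectory is the time derivative of a logistic cumulative-extraction curve; a minimal sketch with illustrative parameters:

    ```python
    import numpy as np

    def hubbert_production(t, URR=2000.0, k=0.05, t_peak=2005.0):
        """Annual production dQ/dt for logistic Q(t) = URR / (1 + e),
        with e = exp(-k * (t - t_peak)). URR in Gb, k in 1/yr (both made up)."""
        e = np.exp(-k * (t - t_peak))
        return URR * k * e / (1.0 + e) ** 2

    years = np.arange(1950, 2061)
    q = hubbert_production(years)
    print("peak year:", years[q.argmax()], f"| peak rate: {q.max():.1f} Gb/yr")
    ```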

  6. Displaced spectra techniques as a tool for peak identification in PSD-analysis

    International Nuclear Information System (INIS)

    Pineyro, J.; Behringer, K.

    1987-10-01

    Sharp peaks in the power spectral density function can be due to periodic components in the noise signal or to narrowband random contributions. A novel method based on Fourier transform techniques is presented which, under certain limitations, allows the peak type to be identified. (author)

  7. An Ensemble Model for Co-Seismic Landslide Susceptibility Using GIS and Random Forest Method

    Directory of Open Access Journals (Sweden)

    Suchita Shrestha

    2017-11-01

    The Mw 7.8 Gorkha earthquake of 25 April 2015 triggered thousands of landslides in the central part of the Nepal Himalayas. The main goal of this study was to generate an ensemble-based map of co-seismic landslide susceptibility in Sindhupalchowk District using model comparison and combination strategies. A total of 2194 co-seismic landslides were identified and randomly split into 1536 (~70%) to train the model and the remaining 658 (~30%) to validate it. Frequency ratio, evidential belief function, and weight of evidence methods were applied and compared using 11 different causative factors (peak ground acceleration, epicenter proximity, fault proximity, geology, elevation, slope, plan curvature, internal relief, drainage proximity, stream power index, and topographic wetness index) to prepare the landslide susceptibility map. An ensemble of random forest was then used to overcome the various prediction limitations of the individual models. The success rates and prediction capabilities were critically compared using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. By synthesizing the results of the various models into a single score, the ensemble model improved accuracy and provided considerably more realistic prediction capacities (91%) than the frequency ratio (81.2%), evidential belief function (83.5%), and weight of evidence (80.1%) methods.
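
    The ensemble step can be caricatured with scikit-learn: stack the susceptibility scores of the individual models as features for a random forest and validate with ROC AUC. The feature columns and data below are synthetic placeholders, not the Sindhupalchowk inventory.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 2194
    landslide = rng.integers(0, 2, n)               # 1 = slide cell, 0 = stable
    # Stand-ins for per-cell scores from the FR, EBF, and WoE models
    scores = np.column_stack([
        landslide * 0.50 + rng.normal(0.30, 0.25, n),
        landslide * 0.60 + rng.normal(0.25, 0.25, n),
        landslide * 0.45 + rng.normal(0.35, 0.25, n),
    ])

    X_tr, X_te, y_tr, y_te = train_test_split(
        scores, landslide, test_size=0.3, random_state=0)   # ~70/30 split
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"ensemble ROC AUC: {auc:.3f}")
    ```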

  8. The bias of weighted dark matter halos from peak theory

    CERN Document Server

    Verde, Licia; Simpson, Fergus; Alvarez-Gaume, Luis; Heavens, Alan; Matarrese, Sabino

    2014-01-01

    We give an analytical form for the weighted correlation function of peaks in a Gaussian random field. In a cosmological context, this approach strictly describes the formation bias and is the main result here. Nevertheless, we show its validity and applicability to the evolved cosmological density field and halo field, using Gaussian random field realisations and dark matter N-body numerical simulations. Using this result from peak theory we compute the bias of peaks (and dark matter halos) and show that it reproduces results from the simulations at the O(10%) level. Our analytical formula for the bias predicts a scale-dependent bias with two characteristics: a broad band shape which, however, is most affected by the choice of weighting scheme and evolution bias, and a more robust, narrow feature localised at the BAO scale, an effect that is confirmed in simulations. This scale-dependent bias smooths the BAO feature but, conveniently, does not move it. We provide a simple analytic formula to des...

  9. Numerical Model and Analysis of Peak Temperature Reduction in LiFePO4 Battery Packs Using Phase Change Materials

    DEFF Research Database (Denmark)

    Coman, Paul Tiberiu; Veje, Christian

    2013-01-01

    Numerical model and analysis of peak temperature reduction in LiFePO4 battery packs using phase change materials.

  10. A new approach for modeling the peak utility impacts from a proposed CUAC standard

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina Hamachi; Gumerman, Etan; Marnay, Chris; Chan, Peter; Coughlin, Katie

    2004-08-01

    This report describes a new Berkeley Lab approach for modeling the likely peak electricity load reductions from proposed energy efficiency programs in the National Energy Modeling System (NEMS). This method is presented in the context of the commercial unitary air conditioning (CUAC) energy efficiency standards. A previous report investigating the residential central air conditioning (RCAC) load shapes in NEMS revealed that the peak reduction results were lower than expected. This effect was believed to be due in part to the presence of the squelch, a program algorithm designed to ensure changes in the system load over time are consistent with the input historic trend. The squelch applies a system load-scaling factor that scales any differences between the end-use bottom-up and system loads to maintain consistency with historic trends. To obtain more accurate peak reduction estimates, a new approach for modeling the impact of peaky end uses in NEMS-BT has been developed. The new approach decrements the system load directly, reducing the impact of the squelch on the final results. This report also discusses a number of additional factors, in particular non-coincidence between end-use loads and system loads as represented within NEMS, and their impacts on the peak reductions calculated by NEMS. Using Berkeley Lab's new double-decrement approach reduces the conservation load factor (CLF) on an input load decrement from 25% down to 19% for a SEER 13 CUAC trial standard level, as seen in NEMS-BT output. About 4 GW more in peak capacity reduction results from this new approach as compared to Berkeley Lab's traditional end-use decrement approach, which relied solely on lowering end use energy consumption. The new method has been fully implemented and tested in the Annual Energy Outlook 2003 (AEO2003) version of NEMS and will routinely be applied to future versions. This capability is now available for use in future end-use efficiency or other policy analysis

  11. Entropy Characterization of Random Network Models

    Directory of Open Access Journals (Sweden)

    Pedro J. Zufiria

    2017-06-01

    This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. Such a framework allows the analysis of the relationship between several network-characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.

  12. The peak in anomalous magnetic viscosity

    International Nuclear Information System (INIS)

    Collocott, S.J.; Watterson, P.A.; Tan, X.H.; Xu, H.

    2014-01-01

    Anomalous magnetic viscosity, where the magnetization as a function of time exhibits non-monotonic behaviour (it is seen to increase, reach a peak, and then decrease), is observed on recoil lines in bulk amorphous ferromagnets for certain magnetic prehistories. A simple geometrical approach based on the motion of the state line on the Preisach plane gives a theoretical framework for interpreting non-monotonic behaviour and explains the origin of the peak. This approach gives an expression for the time taken to reach the peak as a function of the applied (or holding) field. The theory is applied to experimental data for bulk amorphous ferromagnet alloys of composition Nd60−xFe30Al10Dyx, x = 0, 1, 2, 3 and 4, and it gives a reasonable description of the observed behaviour. The role played by other key magnetic parameters, such as the intrinsic coercivity and the fluctuation field, is also discussed. When the non-monotonic behaviour of the magnetization of a number of alloys is viewed in the context of the model, features of universal behaviour emerge that are independent of alloy composition. - Highlights: • Development of a simple geometrical model based on the Preisach model which gives a complete explanation of the peak in the magnetic viscosity. • The geometrical approach is extended by considering equations that govern the motion of the state line. • The model is used to deduce the relationship between the holding field and the time it takes to reach the peak. • The model is tested with experimental results for a range of Nd–Fe–Al–Dy bulk amorphous ferromagnets. • There is good agreement between the model and the experimental data

  13. Relationships between electroencephalographic spectral peaks across frequency bands

    Directory of Open Access Journals (Sweden)

    Sacha Jennifer Van Albada

    2013-03-01

    The degree to which electroencephalographic (EEG) spectral peaks are independent, and the relationships between their frequencies, have been debated. A novel fitting method was used to determine peak parameters in the range 2–35 Hz from a large sample of eyes-closed spectra, and their interrelationships were investigated. Findings were compared with a mean-field model of thalamocortical activity, which predicts near-harmonic relationships between peaks. The subject set consisted of 1424 healthy subjects from the Brain Resource International Database. Peaks in the theta range occurred on average near half the alpha peak frequency, while peaks in the beta range tended to occur near twice and three times the alpha peak frequency on an individual-subject basis. Moreover, for the majority of subjects, alpha peak frequencies were significantly positively correlated with frequencies of peaks in the theta and low and high beta ranges. Such a harmonic progression agrees semiquantitatively with theoretical predictions from the mean-field model. These findings indicate a common or analogous source for different rhythms, and help to define appropriate individual frequency bands for peak identification.

  14. Relationships between Electroencephalographic Spectral Peaks Across Frequency Bands

    Science.gov (United States)

    van Albada, S. J.; Robinson, P. A.

    2013-01-01

    The degree to which electroencephalographic spectral peaks are independent, and the relationships between their frequencies have been debated. A novel fitting method was used to determine peak parameters in the range 2–35 Hz from a large sample of eyes-closed spectra, and their interrelationships were investigated. Findings were compared with a mean-field model of thalamocortical activity, which predicts near-harmonic relationships between peaks. The subject set consisted of 1424 healthy subjects from the Brain Resource International Database. Peaks in the theta range occurred on average near half the alpha peak frequency, while peaks in the beta range tended to occur near twice and three times the alpha peak frequency on an individual-subject basis. Moreover, for the majority of subjects, alpha peak frequencies were significantly positively correlated with frequencies of peaks in the theta and low and high beta ranges. Such a harmonic progression agrees semiquantitatively with theoretical predictions from the mean-field model. These findings indicate a common or analogous source for different rhythms, and help to define appropriate individual frequency bands for peak identification. PMID:23483663

  15. Gamma processes and peaks-over-threshold distributions for time-dependent reliability

    International Nuclear Information System (INIS)

    Noortwijk, J.M. van; Weide, J.A.M. van der; Kallen, M.J.; Pandey, M.D.

    2007-01-01

    In the evaluation of structural reliability, a failure is defined as the event in which stress exceeds a resistance that is liable to deterioration. This paper presents a method to combine the two stochastic processes of deteriorating resistance and fluctuating load for computing the time-dependent reliability of a structural component. The deterioration process is modelled as a gamma process, which is a stochastic process with independent non-negative increments having a gamma distribution with identical scale parameter. The stochastic process of loads is generated by a Poisson process. The variability of the random loads is modelled by a peaks-over-threshold distribution (such as the generalised Pareto distribution). These stochastic processes of deterioration and load are combined to evaluate the time-dependent reliability
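
    A Monte Carlo rendering of this combination is straightforward: simulate gamma-process deterioration of the resistance, with Poisson load arrivals whose magnitudes follow a generalized Pareto peaks-over-threshold distribution, and count the runs with no exceedance. All parameter values below are illustrative.

    ```python
    import numpy as np
    from scipy.stats import gamma, genpareto

    rng = np.random.default_rng(8)

    def survival_probability(T=50, n_sim=20000, r0=100.0,
                             a=0.8, b=1.5,          # gamma increments per year
                             lam=2.0, u=40.0, xi=0.1, sigma=8.0):
        """P(no load exceeds the deteriorating resistance up to year T)."""
        alive = np.ones(n_sim, dtype=bool)
        R = np.full(n_sim, r0)
        for _ in range(T):
            R -= gamma.rvs(a, scale=b, size=n_sim, random_state=rng)
            n_loads = rng.poisson(lam, size=n_sim)
            # Yearly maximum of n iid GPD exceedances via F^-1(U^(1/n))
            q = rng.random(n_sim) ** (1.0 / np.maximum(n_loads, 1))
            max_load = np.where(n_loads > 0,
                                u + genpareto.ppf(q, xi, scale=sigma),
                                -np.inf)
            alive &= max_load < R
        return alive.mean()

    print(f"50-year survival probability: {survival_probability():.3f}")
    ```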

  16. Prediction on the Peak of the CO2 Emissions in China Using the STIRPAT Model

    Directory of Open Access Journals (Sweden)

    Li Li

    2016-01-01

    Climate change seriously threatens our economic, environmental, and social sustainability. The world has taken active measures to deal with climate change and mitigate carbon emissions. Predicting the carbon emissions peak has become a global focus, as well as a leading target for China's low carbon development. China has promised its carbon emissions will have peaked by around 2030, with the intention of peaking earlier. Scholars have generally studied the influencing factors of carbon emissions, but research on carbon emissions peaks is not extensive. Therefore, by setting a low scenario, a middle scenario, and a high scenario, this paper predicts China's carbon emissions peak for 2015 to 2035, based on data from 1998 to 2014, using the Stochastic Impacts by Regression on Population, Affluence, and Technology (STIRPAT) model. The results show that in the low, middle, and high scenarios China will reach its carbon emissions peak in 2024, 2027, and 2030, respectively. Thus, this paper puts forward the large-scale application of technological innovation to improve energy efficiency and to optimize the energy structure and its supply and demand. China should use industrial policy and human capital investment to stimulate the rapid development of low carbon industries, modern agriculture, and service industries, helping China to reach its carbon emissions peak by around 2030 or earlier.
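
    STIRPAT is estimated as a log-linear regression, I = a P^b A^c T^d, i.e. ln I = ln a + b ln P + c ln A + d ln T. A minimal fitting sketch on synthetic series (the coefficients and data are illustrative, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    years = 17                                  # mimicking 1998-2014
    P = 1.25e9 * 1.005 ** np.arange(years)      # population
    A = 3.0e3 * 1.08 ** np.arange(years)        # affluence (GDP per capita)
    T = 0.98 ** np.arange(years)                # technology (energy intensity)
    lnI = (1.0 + 1.1 * np.log(P) + 0.6 * np.log(A) + 0.4 * np.log(T)
           + rng.normal(0, 0.01, years))

    # Ordinary least squares for ln I = ln a + b ln P + c ln A + d ln T
    X = np.column_stack([np.ones(years), np.log(P), np.log(A), np.log(T)])
    coef, *_ = np.linalg.lstsq(X, lnI, rcond=None)
    print("ln a, b, c, d =", np.round(coef, 3))
    ```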

  17. Head-to-head comparison of peak supine bicycle exercise echocardiography and treadmill exercise echocardiography at peak and at post-exercise for the detection of coronary artery disease.

    Science.gov (United States)

    Peteiro, Jesús; Bouzas-Mosquera, Alberto; Estevez, Rodrigo; Pazos, Pablo; Piñeiro, Miriam; Castro-Beiras, Alfonso

    2012-03-01

    Supine bicycle exercise (SBE) echocardiography and treadmill exercise (TME) echocardiography have been used for the evaluation of coronary artery disease (CAD). Although peak imaging acquisition has been considered unfeasible with TME, higher sensitivity for the detection of CAD has recently been found with this method compared with post-TME echocardiography. However, peak TME echocardiography has not previously been compared with the more standardized peak SBE echocardiography. The aim of this study was to compare peak TME echocardiography, peak SBE echocardiography, and post-TME echocardiography for the detection of CAD. A series of 116 patients (mean age, 61 ± 10 years) referred for evaluation of CAD underwent SBE (starting at 25 W, with 25-W increments every 2-3 min) and TME with peak and postexercise imaging acquisition, in a random sequence. Digitized images at baseline, at peak TME, after TME, and at peak SBE were interpreted in a random and blinded fashion. All patients underwent coronary angiography. Maximal heart rate was higher during TME, whereas systolic blood pressure was higher during SBE, resulting in similar rate-pressure products. On quantitative angiography, 75 patients had coronary stenosis (≥50%). In these patients, wall motion score indexes at maximal exercise were higher at peak TME (median, 1.45; interquartile range [IQR], 1.13-1.75) than at peak SBE (median, 1.25; IQR, 1.0-1.56) or after TME (median, 1.13; IQR, 1.0-1.38) (P = .002 between peak TME and peak SBE imaging). More ischemic segments were detected at peak TME (median, 5; IQR, 2-12) than at peak SBE (median, 3; IQR, 0-8) or after TME (median, 2; IQR, 0-4). Sensitivity for CAD was 84% for peak TME, 75% for peak SBE, and 60% for post-TME echocardiography (P = .001 between post-TME and peak TME echocardiography, P = .055 between post-TME and peak SBE echocardiography), with specificities of 63%, 80%, and 78%, respectively (P = NS) and accuracies of 77%, 77%, and 66%, respectively (P = NS). Peak TME imaging was thus the most sensitive of the three approaches.

  18. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    Science.gov (United States)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed, but these solutions allow only partial bias corrections. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images, and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ∼7, to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
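
    Peak-locking is easy to reproduce numerically: estimate the centre of a sampled, thresholded Gaussian spot with a centre-of-gravity estimator and look at the error versus the true sub-pixel position; the error is antisymmetric about pixel centres, which is the property the proposed correction exploits. A self-contained toy demonstration:

    ```python
    import numpy as np

    def cog_centroid_x(img):
        """Centre-of-gravity estimate of the spot centre along x."""
        x = np.arange(img.shape[1])
        return (img.sum(axis=0) * x).sum() / img.sum()

    def spot(true_x, n=16, sigma=1.2):
        """Sampled Gaussian spot; the background cut induces peak-locking."""
        xx, yy = np.meshgrid(np.arange(n), np.arange(n))
        img = np.exp(-((xx - true_x) ** 2 + (yy - n / 2) ** 2) / (2 * sigma ** 2))
        return np.where(img > 0.05, img, 0.0)

    for frac in np.linspace(-0.4, 0.4, 5):
        err = cog_centroid_x(spot(8.0 + frac)) - (8.0 + frac)
        print(f"sub-pixel offset {frac:+.1f}: bias {err:+.4f} px")
    ```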

  19. Influence of the impurity-scattering on zero-bias conductance peak in ferromagnet/insulator/d-wave superconductor junctions

    CERN Document Server

    Yoshida, N; Itoh, H; Tanaka, Y; Inoue, J I; Kashiwaya, S

    2003-01-01

    The effects of impurity scattering on the zero-bias conductance peak in ferromagnet/insulator/d-wave superconductor junctions are theoretically studied. The impurities are introduced through a random potential in the ferromagnet near the junction interface. As in the case of normal-metal/insulator/d-wave superconductor junctions, the magnitude of the zero-bias conductance peak decreases with increasing disorder. However, when the magnitude of the exchange potential in the ferromagnet is sufficiently large, the random potential can enhance the zero-bias conductance peak in ferromagnetic junctions. (author)

  20. The random walk model of intrafraction movement

    International Nuclear Information System (INIS)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-01-01

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)
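
    The cited stylized fact, displacement spread growing with the square root of time, drops out of a few lines of simulation (toy step size and fraction length, not clinical data):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    n_fractions, n_steps, step_mm = 2000, 600, 0.08
    # 3-D random walk per fraction: cumulative sum of isotropic Gaussian steps
    paths = rng.normal(0, step_mm, (n_fractions, n_steps, 3)).cumsum(axis=1)

    for t in (100, 200, 400):
        sd = paths[:, t - 1, 0].std()    # spread along one principal axis
        print(f"t={t:3d}: sd={sd:.3f} mm, sd/sqrt(t)={sd / np.sqrt(t):.4f}")
    # sd/sqrt(t) stays near the per-step sigma of 0.08 mm
    ```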

  1. The random walk model of intrafraction movement.

    Science.gov (United States)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-04-07

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model.

  2. flowPeaks: a fast unsupervised clustering for flow cytometry data via K-means and density peak finding.

    Science.gov (United States)

    Ge, Yongchao; Sealfon, Stuart C

    2012-08-01

    For flow cytometry data, there are two common approaches to the unsupervised clustering problem: one is based on the finite mixture model and the other on spatial exploration of the histograms. The former is computationally slow and has difficulty identifying clusters of irregular shapes. The latter approach cannot be applied directly to high-dimensional data, as the computational time and memory become unmanageable and the estimated histogram is unreliable. An algorithm without these two problems would be very useful. In this article, we combine ideas from the finite mixture model and histogram spatial exploration. This new algorithm, which we call flowPeaks, can be applied directly to high-dimensional data and can identify irregularly shaped clusters. The algorithm first uses the K-means algorithm with a large K to partition the cell population into many small clusters. These partitioned data allow the generation of a smoothed density function using the finite mixture model. All local peaks are exhaustively searched by exploring the density function, and the cells are clustered by the associated local peak. The algorithm flowPeaks is automatic, fast, reliable, and robust to cluster shape and outliers. The algorithm has been applied to flow cytometry data and compared with state-of-the-art algorithms, including Misty Mountain, FLOCK, flowMeans, flowMerge and FLAME. The R package flowPeaks is available at https://github.com/yongchao/flowPeaks. yongchao.ge@mssm.edu Supplementary data are available at Bioinformatics online.
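
    The flowPeaks idea (over-partition with K-means, then merge partitions whose centres climb to the same local density peak) can be caricatured in a few lines, using a kernel density estimate in place of the paper's mixture-based smoothing; this substitution and all parameters are assumptions of the sketch, not the published algorithm.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(11)
    data = np.vstack([rng.normal([0, 0], [1.0, 0.3], (800, 2)),
                      rng.normal([4, 2], [0.4, 1.2], (600, 2))])

    # Step 1: deliberately over-partition with a large K
    km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(data)
    kde = gaussian_kde(data.T)

    # Step 2: gradient-ascend each centre to its local density peak
    def climb(p, step=0.05, iters=300, h=1e-3):
        for _ in range(iters):
            g = np.array([(kde(p + h * e)[0] - kde(p - h * e)[0]) / (2 * h)
                          for e in np.eye(2)])
            if np.linalg.norm(g) < 1e-8:
                break
            p = p + step * g / np.linalg.norm(g)
        return np.round(p * 2) / 2    # quantize so centres on one peak coincide

    peaks = np.array([climb(c.copy()) for c in km.cluster_centers_])
    _, merged = np.unique(peaks, axis=0, return_inverse=True)
    labels = merged[km.labels_]       # final cluster label per cell
    print("final number of clusters:", len(np.unique(labels)))
    ```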

  3. Theoretical Derivation of Simplified Evaluation Models for the First Peak of a Criticality Accident in Nuclear Fuel Solution

    International Nuclear Information System (INIS)

    Nomura, Yasushi

    2000-01-01

    In a reprocessing facility where nuclear fuel solutions are processed, one could observe a series of power peaks, with the highest peak right after a criticality accident. The criticality alarm system (CAS) is designed to detect the first power peak and warn workers near the reacting material by sounding alarms immediately. Consequently, exposure of the workers would be minimized by an immediate and effective evacuation. Therefore, in the design and installation of a CAS, it is necessary to estimate the magnitude of the first power peak and to set up the threshold point where the CAS initiates the alarm. Furthermore, it is necessary to estimate the level of potential exposure of workers in the case of accidents so as to decide the appropriateness of installing a CAS for a given compartment. A simplified evaluation model to estimate the minimum scale of the first power peak during a criticality accident is derived by theoretical considerations only, for use in the design of a CAS to set up the threshold point triggering the alarm signal. Another simplified evaluation model is derived in the same way to estimate the maximum scale of the first power peak, for use in judging the appropriateness of installing a CAS. Both models are shown to have adequate margin in predicting the minimum and maximum scale of criticality accidents by comparing their results with French CRiticality occurring ACcidentally (CRAC) experimental data

  4. A random regret minimization model of travel choice

    NARCIS (Netherlands)

    Chorus, C.G.; Arentze, T.A.; Timmermans, H.J.P.

    2008-01-01

    This paper presents an alternative to Random Utility-Maximization models of travel choice. Our Random Regret-Minimization model is rooted in Regret Theory and provides several useful features for travel demand analysis. Firstly, it allows for the possibility that choices between travel
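
    The record is truncated above, but the regret-minimization mechanics it refers to can be sketched; the pairwise attribute-comparison regret below follows the commonly cited RRM formulation, with invented attribute values and taste parameters:

```python
import numpy as np

def rrm_probabilities(X, beta):
    """Random Regret Minimization choice probabilities.

    X    : (n_alternatives, n_attributes) attribute matrix
    beta : (n_attributes,) taste parameters
    Regret of alternative i accrues from pairwise comparisons:
    R_i = sum over competitors j and attributes m of
          ln(1 + exp(beta_m * (x_jm - x_im))).
    """
    n = X.shape[0]
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                regret[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    expneg = np.exp(-regret)        # logit over negative regrets
    return expneg / expneg.sum()

# Three travel alternatives described by (time, cost); hypothetical numbers.
X = np.array([[30.0, 2.5], [20.0, 4.0], [45.0, 1.0]])
beta = np.array([-0.05, -0.3])      # dislike of time and cost
print(rrm_probabilities(X, beta))
```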

  5. Statistics of peak overpressure and shock steepness for linear and nonlinear N-wave propagation in a kinematic turbulence.

    Science.gov (United States)

    Yuldashev, Petr V; Ollivier, Sébastien; Karzova, Maria M; Khokhlova, Vera A; Blanc-Benon, Philippe

    2017-12-01

    Linear and nonlinear propagation of high amplitude acoustic pulses through a turbulent layer in air is investigated using a two-dimensional KZK-type (Khokhlov-Zabolotskaya-Kuznetsov) equation. Initial waves are symmetrical N-waves with shock fronts of finite width. A modified von Kármán spectrum model is used to generate random wind velocity fluctuations associated with the turbulence. Physical parameters in simulations correspond to previous laboratory scale experiments where N-waves with 1.4 cm wavelength propagated through a turbulence layer with the outer scale of about 16 cm. Mean value and standard deviation of peak overpressure and shock steepness, as well as cumulative probabilities to observe amplified peak overpressure and shock steepness, are analyzed. Nonlinear propagation effects are shown to enhance pressure level in random foci for moderate initial amplitudes of N-waves thus increasing the probability to observe highly peaked waveforms. Saturation of the pressure level is observed for stronger nonlinear effects. It is shown that in the linear propagation regime, the turbulence mainly leads to the smearing of shock fronts, thus decreasing the probability to observe high values of steepness, whereas nonlinear effects dramatically increase the probability to observe steep shocks.

  6. Modelling the Peak Elongation of Nylon6 and Fe Powder Based Composite Wire for FDM Feedstock Filament

    Science.gov (United States)

    Garg, Harish Kumar; Singh, Rupinder

    2017-10-01

    In the present work, to increase the application domain of the fused deposition modelling (FDM) process, a Nylon6-Fe powder based composite wire has been prepared as feedstock filament. Further, for smooth functioning of the feedstock filament without any change in the hardware and software of the commercial FDM setup, the mechanical properties of the newly prepared composite wire must be comparable to those of the existing material, i.e. ABS P-430. So, keeping this in consideration, an effort has been made to model the peak elongation of the in-house developed feedstock filament comprising Nylon6 and Fe powder (prepared on a single screw extrusion process) for a commercial FDM setup. The input parameters of the single screw extruder (namely: barrel temperature, temperature of the die, speed of the screw, speed of the winding machine) and a rheological property of the material (melt flow index) have been modelled with peak elongation as the output by using response surface methodology. For validation of the model, the peak elongation obtained from the model equation was compared with the results of actual experimentation, which showed a variation of only ±1%.
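
    For orientation, a response-surface fit of this kind is a second-order polynomial regression on the process factors; a generic sketch on synthetic data (factor ranges and coefficients invented) might look like:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
# Factors: barrel temp, die temp, screw speed, winder speed, melt flow index.
X = rng.uniform([180, 190, 20, 10, 8], [230, 240, 60, 40, 14], size=(40, 5))
y = (0.02 * X[:, 0] + 0.5 * X[:, 4] - 0.0004 * X[:, 0] ** 2
     + rng.normal(0, 0.1, 40))            # synthetic "peak elongation" response

# Second-order (quadratic plus interaction terms) response surface.
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)
print("R^2 on training data:", model.score(poly.transform(X), y).round(3))
```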

  7. Random matrix models for phase diagrams

    International Nuclear Information System (INIS)

    Vanderheyden, B; Jackson, A D

    2011-01-01

    We describe a random matrix approach that can provide generic and readily soluble mean-field descriptions of the phase diagram for a variety of systems ranging from quantum chromodynamics to high-T_c materials. Instead of working from specific models, phase diagrams are constructed by averaging over the ensemble of theories that possesses the relevant symmetries of the problem. Although approximate in nature, this approach has a number of advantages. First, it can be useful in distinguishing generic features from model-dependent details. Second, it can help in understanding the 'minimal' number of symmetry constraints required to reproduce specific phase structures. Third, the robustness of predictions can be checked with respect to variations in the detailed description of the interactions. Finally, near critical points, random matrix models bear strong similarities to Ginzburg-Landau theories with the advantage of additional constraints inherited from the symmetries of the underlying interaction. These constraints can be helpful in ruling out certain topologies in the phase diagram. In this Key Issues Review, we illustrate the basic structure of random matrix models, discuss their strengths and weaknesses, and consider the kinds of system to which they can be applied.

  8. Creating, generating and comparing random network models with NetworkRandomizer.

    Science.gov (United States)

    Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni

    2016-01-01

    Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app, for the Cytoscape platform, which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
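
    One standard randomization of an existing network, of the kind such validation workflows rely on, is degree-preserving edge shuffling; a minimal networkx sketch (not the NetworkRandomizer code itself):

```python
import networkx as nx

# A stand-in for a real network: the karate club graph.
g = nx.karate_club_graph()

# Degree-preserving randomization: repeated double-edge swaps keep every
# node's degree fixed while destroying higher-order structure.
null_model = g.copy()
nx.double_edge_swap(null_model, nswap=10 * g.number_of_edges(), max_tries=10**5)

# Compare a topological attribute against the randomized null model.
print("clustering, real  :", round(nx.average_clustering(g), 3))
print("clustering, random:", round(nx.average_clustering(null_model), 3))
```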

  9. Stochastic space interval as a link between quantum randomness and macroscopic randomness?

    Science.gov (United States)

    Haug, Espen Gaarder; Hoff, Harald

    2018-03-01

    For many stochastic phenomena, we observe statistical distributions that have fat tails and high peaks compared to the Gaussian distribution. In this paper, we will explain how observable statistical distributions in the macroscopic world could be related to the randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian, or very close to Gaussian, distributed subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three-and-a-half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat tails and high peaks relative to the Gaussian distribution in stocks and commodity prices and many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).

  10. Peaking for optimal performance: Research limitations and future directions.

    Science.gov (United States)

    Pyne, David B; Mujika, Iñigo; Reilly, Thomas

    2009-02-01

    A key element of the physical preparation of athletes is the taper period in the weeks immediately preceding competition. Existing research has defined the taper, identified various forms used in contemporary sport, and examined the prescription of training volume, load, intensity, duration, and type (progressive or step). Current limitations include: the lack of studies on team, combative, racquet, and precision (target) sports; the relatively small number of randomized controlled trials; the narrow focus on a single competition (single peak) compared with multiple peaking for weekly, multi-day or multiple events; and limited understanding of the physiological, neuromuscular, and biomechanical basis of the taper. Future research should address these limitations, together with the influence of prior training on optimal tapering strategies, and the interactions between the taper and long-haul travel, heat, and altitude. Practitioners seek information on how to prescribe tapers from season to season during an athlete's career, or a team's progression through a domestic league season, or multi-year Olympic or World Cup cycle. Practical guidelines for planning effective tapers for the Vancouver 2010 and London 2012 Olympics will evolve from both experimental investigations and modelling of successful tapers currently employed in a wide range of sports.

  11. A random spatial network model based on elementary postulates

    Science.gov (United States)

    Karlinger, Michael R.; Troutman, Brent M.

    1989-01-01

    A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
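
    Sampling spanning trees of a grid uniformly at random, the key postulate above, can be done exactly with Wilson's loop-erased random-walk algorithm; a compact sketch (grid size arbitrary):

```python
import random

def wilson_ust(nodes, neighbors, rng=random.Random(0)):
    """Uniform spanning tree via Wilson's algorithm (loop-erased random walks)."""
    nodes = list(nodes)
    in_tree = {nodes[0]}
    parent = {}
    for start in nodes[1:]:
        if start in in_tree:
            continue
        # Walk until hitting the tree, remembering the last exit from each
        # node; overwriting earlier exits implicitly erases loops.
        nxt = {}
        u = start
        while u not in in_tree:
            nxt[u] = rng.choice(neighbors[u])
            u = nxt[u]
        u = start
        while u not in in_tree:          # retrace the loop-erased path
            parent[u] = nxt[u]
            in_tree.add(u)
            u = nxt[u]
    return parent                        # edges (child -> parent) of the tree

# 4-connected grid, e.g. 20 x 20.
n = 20
nodes = [(i, j) for i in range(n) for j in range(n)]
neighbors = {(i, j): [(i + di, j + dj)
                      for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= i + di < n and 0 <= j + dj < n]
             for i, j in nodes}
tree = wilson_ust(nodes, neighbors)
print("tree edges:", len(tree), "(should be n*n - 1 =", n * n - 1, ")")
```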

  12. Error of the modelled peak flow of the hydraulically reconstructed 1907 flood of the Ebro River in Xerta (NE Iberian Peninsula)

    Science.gov (United States)

    Lluís Ruiz-Bellet, Josep; Castelltort, Xavier; Carles Balasch, J.; Tuset, Jordi

    2016-04-01

    The estimation of the uncertainty of the results of hydraulic modelling has been deeply analysed, but no clear methodological procedures as to its determination have been formulated when applied to historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to the total peak flow error. The uncertainty of the 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) in the town of Xerta (83,000 km²) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. Besides, in order to see to what degree the result depended on the chosen model, the HEC-RAS resulting peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation. The peak flow of the 1907 flood of the Ebro River in Xerta, reconstructed with HEC-RAS, was 11,500 m³·s⁻¹ and its total error was ±31%. The most influential input variable on the HEC-RAS peak flow result was water height; however, the one that contributed the most to peak flow error was Manning's n, because its uncertainty was far greater than water height's. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12,000 m³·s⁻¹ when calculated with the 2D model Iber and 11,500 m³·s⁻¹ when calculated with the Manning equation.
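
    The dominance of Manning's n in the error budget can be illustrated with a local sensitivity analysis of Manning's equation, Q = (1/n) A R^(2/3) S^(1/2); the channel geometry and uncertainty levels below are invented, not the Xerta reach:

```python
import numpy as np

def manning_q(n, area, radius, slope):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return area * radius ** (2.0 / 3.0) * np.sqrt(slope) / n

# Hypothetical channel at the flood mark (illustrative values only).
n, area, radius, slope = 0.035, 3500.0, 9.0, 0.0006
q0 = manning_q(n, area, radius, slope)

# Local sensitivity: perturb each input by its assumed relative uncertainty.
rel_unc = {"n": 0.25, "area": 0.05, "radius": 0.05, "slope": 0.10}
for name, u in rel_unc.items():
    kwargs = dict(n=n, area=area, radius=radius, slope=slope)
    kwargs[name] *= 1 + u
    dq = (manning_q(**kwargs) - q0) / q0
    print(f"{name:6s} +{u:.0%} input -> {dq:+.1%} change in peak flow")
print(f"baseline Q = {q0:,.0f} m^3/s")
```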

  13. Modelling the impact of retention-detention units on sewer surcharge and peak and annual runoff reduction.

    Science.gov (United States)

    Locatelli, Luca; Gabriel, Søren; Mark, Ole; Mikkelsen, Peter Steen; Arnbjerg-Nielsen, Karsten; Taylor, Heidi; Bockhorn, Britta; Larsen, Hauge; Kjølby, Morten Just; Blicher, Anne Steensen; Binning, Philip John

    2015-01-01

    Stormwater management using water sensitive urban design is expected to be part of future drainage systems. This paper aims to model the combination of local retention units, such as soakaways, with subsurface detention units. Soakaways are employed to reduce (by storage and infiltration) peak and volume stormwater runoff; however, large retention volumes are required for a significant peak reduction. Peak runoff can therefore be handled by combining detention units with soakaways. This paper models the impact of retrofitting retention-detention units for an existing urbanized catchment in Denmark. The impact of retrofitting a retention-detention unit of 3.3 m³/100 m² (volume/impervious area) was simulated for a small catchment in Copenhagen using MIKE URBAN. The retention-detention unit was shown to prevent flooding from the sewer for a 10-year rainfall event. Statistical analysis of continuous simulations covering 22 years showed that annual stormwater runoff was reduced by 68-87%, and that the retention volume was on average 53% full at the beginning of rain events. The effect of different retention-detention volume combinations was simulated, and results showed that allocating 20-40% of a soakaway volume to detention would significantly increase peak runoff reduction with a small reduction in the annual runoff.

  14. Simulating WTP Values from Random-Coefficient Models

    OpenAIRE

    Maurus Rischatsch

    2009-01-01

    Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computation power and advanced simulation techniques, random-coefficient models have gained an increasing importance in applied work as they allow for taste heterogeneity. This paper discusses the parametric derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...
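
    When the cost coefficient is itself random, the WTP ratio typically has no closed form and is simulated from draws; a schematic example with assumed coefficient distributions (a lognormal cost coefficient keeps its sign fixed):

```python
import numpy as np

rng = np.random.default_rng(3)
n_draws = 100_000

# Assumed estimated random-coefficient distributions (illustrative values):
beta_attr = rng.normal(0.8, 0.2, n_draws)            # taste for the attribute
beta_cost = -np.exp(rng.normal(-0.5, 0.4, n_draws))  # lognormal => strictly negative

# WTP draw = -beta_attr / beta_cost; summarise the simulated distribution.
wtp = -beta_attr / beta_cost
print("mean WTP    :", wtp.mean().round(2))
print("median WTP  :", np.median(wtp).round(2))
print("95% interval:", np.percentile(wtp, [2.5, 97.5]).round(2))
```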

  15. Random effects coefficient of determination for mixed and meta-analysis models.

    Science.gov (United States)

    Demidenko, Eugene; Sargent, James; Onega, Tracy

    2012-01-01

    The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine.

  16. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM

  17. SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation

    International Nuclear Information System (INIS)

    Yao, W; Farr, J

    2015-01-01

    Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, energy spectrum after energy absorption etc., which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse-square law, which can significantly reduce the computation burden and memory, (2) the non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.

  18. Simulation of a directed random-walk model: the effect of pseudo-random-number correlations

    OpenAIRE

    Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.

    1996-01-01

    We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.

  19. Regression models for predicting peak and continuous three-dimensional spinal loads during symmetric and asymmetric lifting tasks.

    Science.gov (United States)

    Fathallah, F A; Marras, W S; Parnianpour, M

    1999-09-01

    Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting were predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.

  20. A Note on the Correlated Random Coefficient Model

    DEFF Research Database (Denmark)

    Kolodziejczyk, Christophe

    In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient that is correlated with a binary variable. We provide set-identification of the parameters of interest of the model. We also show how to reduce the bias of the estimator...

  1. Interdependent demands, regulatory constraint, and peak-load pricing. [Assessment of Bailey's model]

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, D T; Macgregor-Reid, G J

    1977-06-01

    A model of a regulated firm which includes an analysis of peak-load pricing has been formulated by E. E. Bailey in which three alternative modes of regulation on a profit-maximizing firm are considered. The main conclusion reached is that under a regulation limiting the rate of return on capital investment, price reductions are received solely by peak-users and that when regulation limiting the profit per unit of output or the return on costs is imposed, there are price reductions for all users. Bailey has expressly assumed that the demands in different periods are interdependent but has somehow failed to derive the correct price and welfare implications of this empirically highly relevant assumption. Her conclusions would have been perfectly correct for marginal revenues but are quite incorrect for prices, even if her assumption that price exceeds marginal revenues in every period holds. This present paper derives fully and rigorously the implications of regulation for prices, outputs, capacity, and social welfare for a profit-maximizing firm with interdependent demands. In section II, Bailey's model is reproduced and the optimal conditions are given. In section III, it is demonstrated that under the conditions of interdependent demands assumed by Bailey herself, her often-quoted conclusion concerning the effects of the return-on-investment regulation on the off-peak price is invalid. In section IV, the effects of the return-on-investment regulation on the optimal prices, outputs, capacity, and social welfare both for the case in which the demands in different periods are substitutes and for the case in which they are complements are examined. In section V, the pricing and welfare implications of the return-on-investment regulation are compared with the two other modes of regulation considered by Bailey. Section VI is a summary of all sections. (MCW)

  2. The Impact of the Twin Peaks Model on the Insurance Industry

    Directory of Open Access Journals (Sweden)

    Daleen Millard

    2017-02-01

    Financial regulation in South Africa changes constantly. In the quest to find the ideal regulatory framework for optimal consumer protection, rules change all the time and international trends have an important influence on lawmakers nationally. The Financial Sector Regulation Bill, also known as the "Twin Peaks" Bill, is the latest invention from the table of the legislature, and some expect this Bill to have far-reaching consequences for the financial services industry. The question is, of course, whether the current dispensation will change so quickly and so dramatically that it will literally be the end of the world as we know it, or whether there will be a gradual shift in emphasis away from the so-called silo regulatory approach to an approach that distinguishes between prudential regulation on the one hand and market conduct regulation on the other. A further question is whether insurance as a financial service will change dramatically in the light of the expected Twin Peaks dispensation. The purpose of this paper is to discuss the implications of the FSR Bill for the insurance industry. Instead of analysing the Bill feature by feature, the method used in this enquiry is to identify trends and issues from 2014 and to discuss whether the Twin Peaks model, once implemented, can successfully eradicate similar problems in future. The impact of Twin Peaks will of course have to be tested, but at this point in time it may be very useful to take an educated guess by using recent cases as examples. Recent cases before the courts, the Enforcement Committee and the FAIS Ombud will be discussed not only as examples of the most prevalent issues of the past year or so, but also as examples of how consumer issues and systemic risks are currently being dealt with and how this may change with the implementation of the FSR Bill.

  3. The random field Blume-Capel model revisited

    Science.gov (United States)

    Santos, P. V.; da Costa, F. A.; de Araújo, J. M.

    2018-04-01

    We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field, as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H-T and D-T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.
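
    For reference, the clean (non-random-field) mean-field magnetization of the Blume-Capel model, with Hamiltonian H = -J Σ s_i s_j + D Σ s_i² - H Σ s_i and s ∈ {-1, 0, 1}, solves m = 2 e^{-βD} sinh(βh) / (1 + 2 e^{-βD} cosh(βh)) with mean field h = zJm + H; a minimal fixed-point solver (parameter values arbitrary):

```python
import numpy as np

def mf_magnetization(T, D, H=0.0, zJ=1.0, tol=1e-10):
    """Fixed-point iteration for the mean-field Blume-Capel magnetization.

    Single-site weights exp(beta*(h*s - D*s^2)) with s in {-1, 0, +1}
    and mean field h = zJ*m + H give
        m = 2 e^{-beta D} sinh(beta h) / (1 + 2 e^{-beta D} cosh(beta h)).
    """
    beta, m = 1.0 / T, 0.5              # start from a magnetised guess
    for _ in range(10_000):
        h = zJ * m + H
        m_new = 2 * np.exp(-beta * D) * np.sinh(beta * h) / (
            1 + 2 * np.exp(-beta * D) * np.cosh(beta * h))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Magnetization vs temperature for two crystal fields.
for D in (0.0, 0.4):
    ms = [round(mf_magnetization(T, D), 3) for T in (0.2, 0.4, 0.6, 0.8)]
    print(f"D={D}: m(T) =", ms)
```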

  4. Random regression models for detection of gene by environment interaction

    Directory of Open Access Journals (Sweden)

    Meuwissen Theo HE

    2007-02-01

    Two random regression models, where the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between the intercept and slope of the random regression, while the other model restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison was done with regard to the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL were assumed, either by assigning an effect independent of the environment or as a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between the intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.

  5. Peak Oil, Peak Coal and Climate Change

    Science.gov (United States)

    Murray, J. W.

    2009-05-01

    Research on future climate change is driven by the family of scenarios developed for the IPCC assessment reports. These scenarios create projections of future energy demand using different story lines consisting of government policies, population projections, and economic models. None of these scenarios consider resources to be limiting. In many of these scenarios oil production is still increasing to 2100. Resource limitation (in a geological sense) is a real possibility that needs more serious consideration. The concept of 'Peak Oil' has been discussed since M. King Hubbert proposed in 1956 that US oil production would peak in 1970. His prediction was accurate. This concept is about production rate not reserves. For many oil producing countries (and all OPEC countries) reserves are closely guarded state secrets and appear to be overstated. Claims that the reserves are 'proven' cannot be independently verified. Hubbert's Linearization Model can be used to predict when half the ultimate oil will be produced and what the ultimate total cumulative production (Qt) will be. US oil production can be used as an example. This conceptual model shows that 90% of the ultimate US oil production (Qt = 225 billion barrels) will have occurred by 2011. This approach can then be used to suggest that total global production will be about 2200 billion barrels and that the half way point will be reached by about 2010. This amount is about 5 to 7 times less than assumed by the IPCC scenarios. The decline of Non-OPEC oil production appears to have started in 2004. Of the OPEC countries, only Saudi Arabia may have spare capacity, but even that is uncertain, because of lack of data transparency. The concept of 'Peak Coal' is more controversial, but even the US National Academy Report in 2007 concluded only a small fraction of previously estimated reserves in the US are actually minable reserves and that US reserves should be reassessed using modern methods. British coal production can be
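
    Hubbert linearization exploits the logistic identity P/Q = a(1 - Q/Q_t): annual-over-cumulative production plotted against cumulative production is a straight line whose Q-intercept estimates ultimate recovery Q_t. A sketch on synthetic logistic production data:

```python
import numpy as np

# Synthetic logistic cumulative production with Qt = 225 (billion barrels).
qt, a, t0 = 225.0, 0.06, 1970.0
years = np.arange(1900, 2005)
Q = qt / (1 + np.exp(-a * (years - t0)))    # cumulative production
P = np.diff(Q)                              # annual production
Qm = Q[1:]                                  # cumulative, aligned with P

# Hubbert linearization: P/Q = a * (1 - Q/Qt) is linear in Q.
slope, intercept = np.polyfit(Qm, P / Qm, 1)
qt_est = -intercept / slope                 # x-intercept -> ultimate recovery
print("estimated Qt:", round(qt_est, 1), "(true 225)")
print("estimated peak year (half of Qt produced):",
      years[1:][np.argmin(abs(Qm - qt_est / 2))])
```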

  6. Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries

    NARCIS (Netherlands)

    Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.

    2012-01-01

    A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order to not damage the battery by over charging or over discharging or

  7. Emergent randomness in the Jaynes-Cummings model

    International Nuclear Information System (INIS)

    Garraway, B M; Stenholm, S

    2008-01-01

    We consider the well-known Jaynes-Cummings model and ask if it can display randomness. As a solvable Hamiltonian system, it does not display chaotic behaviour in the ordinary sense. Here, however, we look at the distribution of values taken up during the total time evolution. This evolution is determined by the eigenvalues distributed as the square roots of integers and leads to a seemingly erratic behaviour. That this may display a random Gaussian value distribution is suggested by an exactly provable result by Kac. In order to reach our conclusion we use the Kac model to develop tests for the emergence of a Gaussian. Even if the consequent double limits are difficult to evaluate numerically, we find definite indications that the Jaynes-Cummings case also produces a randomness in its value distributions. Numerical methods do not establish such a result beyond doubt, but our conclusions are definite enough to suggest strongly an unexpected randomness emerging in a dynamic time evolution
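
    The Kac-style test described above is easy to reproduce numerically: sample f(t) = sqrt(2/N) Σ_{n≤N} cos(√n t) at random times and check the Gaussian diagnostics of its value distribution (the normalization here is assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 2000                                  # number of sqrt(n) frequencies
t = rng.uniform(0, 1e6, size=100_000)     # "time average" via random sampling

# f(t) = sqrt(2/N) * sum_n cos(sqrt(n) * t); the square-root-of-integer
# frequencies mimic the Jaynes-Cummings spectrum.  Chunked to limit memory.
freqs = np.sqrt(np.arange(1, N + 1))
f = np.zeros_like(t)
for chunk in np.array_split(freqs, 40):
    f += np.cos(np.outer(t, chunk)).sum(axis=1)
f *= np.sqrt(2.0 / N)

# Gaussian diagnostics: variance ~ 1, excess kurtosis ~ 0.
print("mean           :", f.mean().round(3))
print("variance       :", f.var().round(3))
print("excess kurtosis:", (((f - f.mean()) ** 4).mean() / f.var() ** 2 - 3).round(3))
```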

  8. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    Science.gov (United States)

    Can, Van Hao

    2017-11-01

    In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs, including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graphs.

  9. The spatial resolution of epidemic peaks.

    Directory of Open Access Journals (Sweden)

    Harriet L Mills

    2014-04-01

    The emergence of novel respiratory pathogens can challenge the capacity of key health care resources, such as intensive care units, that are constrained to serve only specific geographical populations. An ability to predict the magnitude and timing of peak incidence at the scale of a single large population would help to accurately assess the value of interventions designed to reduce that peak. However, current disease-dynamic theory does not provide a clear understanding of the relationship between: epidemic trajectories at the scale of interest (e.g. city); population mobility; and higher-resolution spatial effects (e.g. transmission within small neighbourhoods). Here, we used a spatially explicit stochastic meta-population model of arbitrary spatial resolution to determine the effect of resolution on model-derived epidemic trajectories. We simulated an influenza-like pathogen spreading across theoretical and actual population densities and varied our assumptions about mobility using Latin-Hypercube sampling. Even though, by design, cumulative attack rates were the same for all resolutions and mobilities, peak incidences were different. Clear thresholds existed for all tested populations, such that models with resolutions lower than the threshold substantially overestimated population-wide peak incidence. The effect of resolution was most important in populations of lower density and lower mobility. With the expectation of accurate spatial incidence datasets in the near future, our objective was to provide a framework for how to use these data correctly in a spatial meta-population model. Our results suggest that there is a fundamental spatial resolution for any pathogen-population pair. If underlying interactions between pathogens and spatially heterogeneous populations are represented at this resolution or higher, accurate predictions of peak incidence for city-scale epidemics are feasible.
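
    The resolution effect survives even in a deterministic toy version: splitting one well-mixed SIR population into two weakly coupled patches lowers and delays the summed peak while leaving the cumulative attack rate essentially unchanged. All parameters below are invented:

```python
import numpy as np

def sir_peak(beta_matrix, seed_patch, n_patches, days=400, gamma=0.2):
    """Multi-patch SIR (Euler steps); returns peak daily incidence over patches."""
    N = 1.0 / n_patches                       # equal patch sizes, total pop = 1
    S = np.full(n_patches, N)
    I = np.zeros(n_patches)
    I[seed_patch] = 1e-5
    S[seed_patch] -= 1e-5
    peak = 0.0
    for _ in range(days * 10):                # dt = 0.1 day
        foi = beta_matrix @ (I / N)           # force of infection per patch
        new_inf = foi * S * 0.1
        S -= new_inf
        I += new_inf - gamma * I * 0.1
        peak = max(peak, new_inf.sum() / 0.1)  # total incidence rate
    return peak

beta = 0.5
# Low resolution: one well-mixed population.
peak_1 = sir_peak(np.array([[beta]]), 0, 1)
# Higher resolution: two patches, only 2% of contacts cross patches.
mix = np.array([[0.98, 0.02], [0.02, 0.98]]) * beta
peak_2 = sir_peak(mix, 0, 2)
print(f"peak incidence, well-mixed : {peak_1:.4f}")
print(f"peak incidence, two patches: {peak_2:.4f}  (lower, desynchronized)")
```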

  10. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
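
    As a rough frequentist stand-in for the proposed Bayesian model, one can Box-Cox-transform the estimates and pool them with DerSimonian-Laird on the transformed scale; reusing the within-study variances after transformation is a simplification the paper avoids:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(5)
k = 30
theta = rng.lognormal(mean=0.0, sigma=0.6, size=k)   # skewed true effects
v = np.full(k, 0.05)                                 # within-study variances
y = theta + rng.normal(0, np.sqrt(v))                # observed estimates

# Box-Cox requires positive values; shift if needed (a crutch of this sketch).
shift = max(0.0, 1e-6 - y.min())
z, lam = stats.boxcox(y + shift)

# DerSimonian-Laird random-effects pooling on the transformed scale.
w = 1.0 / v
q = np.sum(w * (z - np.sum(w * z) / w.sum()) ** 2)
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
w_star = 1.0 / (v + tau2)
mu_z = np.sum(w_star * z) / w_star.sum()

# Back-transforming the pooled centre gives a median-type overall summary.
print("lambda:", round(lam, 2))
print("back-transformed summary:", round(inv_boxcox(mu_z, lam) - shift, 3))
print("normal-model pooled mean:", round(np.sum(w * y) / w.sum(), 3))
```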

  11. A random effects meta-analysis model with Box-Cox transformation

    Directory of Open Access Journals (Sweden)

    Yusuke Yamaguchi

    2017-07-01

    Background: In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods: We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. Results: A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and

  12. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest have seen increased use for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
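
    The paper develops a formal approximation of prediction uncertainty; as a crude first-pass proxy (not the authors' method), the spread of per-tree predictions is often inspected:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.2, 500)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

x_new = np.array([[0.5, -1.0], [2.9, 2.9]])  # interior vs edge of the domain
per_tree = np.stack([t.predict(x_new) for t in rf.estimators_])
print("prediction:", per_tree.mean(axis=0).round(2))
print("tree spread (naive uncertainty proxy):", per_tree.std(axis=0).round(2))
```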

  13. Compensatory and non-compensatory multidimensional randomized item response models

    NARCIS (Netherlands)

    Fox, J.P.; Entink, R.K.; Avetisyan, M.

    2014-01-01

    Randomized response (RR) models are often used for analysing univariate randomized response data and measuring population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of the respondents. Recently, RR models have been

  14. Hydrological model calibration for derived flood frequency analysis using stochastic rainfall and probability distributions of peak flows

    Science.gov (United States)

    Haberlandt, U.; Radtke, I.

    2014-01-01

    Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event based and continuous, observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests to calibrate a hydrological model directly on probability distributions of observed peak flows using stochastic rainfall as input if its purpose is the

  15. Probabilistic model for fluences and peak fluxes of solar energetic particles

    International Nuclear Information System (INIS)

    Nymmik, R.A.

    1999-01-01

    The model is intended for calculating the probability for solar energetic particles (SEP), i.e., protons and Z=2-28 ions, to have an effect on hardware and on biological and other objects in space. The model describes the probability for the ≥10 MeV/nucleon SEP fluences and peak fluxes to occur in the near-Earth space beyond the Earth's magnetosphere under varying solar activity. The physical prerequisites of the model are as follows. The occurrence of SEP is a probabilistic process. The mean SEP occurrence frequency is a power-law function of solar activity (sunspot number). The SEP size (taken to be the ≥30 MeV proton fluence size) distribution is a power-law function within a 10⁵-10¹¹ proton/cm² range. The SEP event particle energy spectra are described by a common function whose parameters are distributed log-normally. The SEP mean composition is energy-dependent and suffers fluctuations described by log-normal functions in separate events

  16. A cluster expansion approach to exponential random graph models

    International Nuclear Information System (INIS)

    Yin, Mei

    2012-01-01

    The exponential family of random graphs are among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region

  17. Modeling of GE Appliances in GridLAB-D: Peak Demand Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, Jason C.; Vyakaranam, Bharat GNVSR; Prakash Kumar, Nirupama; Leistritz, Sean M.; Parker, Graham B.

    2012-04-29

    The widespread adoption of demand response enabled appliances and thermostats can result in significant reduction to peak electrical demand and provide potential grid stabilization benefits. GE has developed a line of appliances that will have the capability of offering several levels of demand reduction actions based on information from the utility grid, often in the form of price. However due to a number of factors, including the number of demand response enabled appliances available at any given time, the reduction of diversity factor due to the synchronizing control signal, and the percentage of consumers who may override the utility signal, it can be difficult to predict the aggregate response of a large number of residences. The effects of these behaviors can be modeled and simulated in open-source software, GridLAB-D, including evaluation of appliance controls, improvement to current algorithms, and development of aggregate control methodologies. This report is the first in a series of three reports describing the potential of GE's demand response enabled appliances to provide benefits to the utility grid. The first report will describe the modeling methodology used to represent the GE appliances in the GridLAB-D simulation environment and the estimated potential for peak demand reduction at various deployment levels. The second and third reports will explore the potential of aggregated group actions to positively impact grid stability, including frequency and voltage regulation and spinning reserves, and the impacts on distribution feeder voltage regulation, including mitigation of fluctuations caused by high penetration of photovoltaic distributed generation and the effects on volt-var control schemes.

  18. A theoretical model for predicting the Peak Cutting Force of conical picks

    Directory of Open Access Journals (Sweden)

    Gao Kuidong

    2014-01-01

    In order to predict the PCF (Peak Cutting Force) of a conical pick in the rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. A vertical fracture model of the rock cutting fragment is also established based on the maximum tensile criterion. The relation between the vertical fracture angle and the associated parameters (the cutting parameter and the ratio B of rock compressive strength to tensile strength) is obtained by numerical analysis and polynomial regression, and the correctness of the rock vertical fracture model is verified through experiments. The linear regression coefficient between the predicted and experimental PCF is 0.81, and a significance level of less than 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and the Evans model reveals that the result of this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of conical picks and designing the cutting mechanism.

  19. Computer simulations of the random barrier model

    DEFF Research Database (Denmark)

    Schrøder, Thomas; Dyre, Jeppe

    2002-01-01

    A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...

  20. Randomized Item Response Theory Models

    NARCIS (Netherlands)

    Fox, Gerardus J.A.

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by

  1. Characterizing the Peak in the Cosmic Microwave Background Angular Power Spectrum

    Science.gov (United States)

    Knox, Lloyd; Page, Lyman

    2000-08-01

    A peak has been unambiguously detected in the cosmic microwave background angular spectrum. Here we characterize its properties with fits to phenomenological models. We find that the TOCO and BOOM/NA data determine the peak location to be in the range 175-243 and 151-259, respectively (at 95% confidence) and determine the peak amplitude to be between ~70 and 90 μK. The peak shape is consistent with inflation-inspired flat, cold dark matter plus cosmological constant models of structure formation with adiabatic, nearly scale invariant initial conditions. It is inconsistent with open models and presents a great challenge to defect models.

  2. The critical role of the routing scheme in simulating peak river discharge in global hydrological models

    Science.gov (United States)

    Zhao, F.; Veldkamp, T.; Frieler, K.; Schewe, J.; Ostberg, S.; Willner, S. N.; Schauberger, B.; Gosling, S.; Mueller Schmied, H.; Portmann, F. T.; Leng, G.; Huang, M.; Liu, X.; Tang, Q.; Hanasaki, N.; Biemans, H.; Gerten, D.; Satoh, Y.; Pokhrel, Y. N.; Stacke, T.; Ciais, P.; Chang, J.; Ducharne, A.; Guimberteau, M.; Wada, Y.; Kim, H.; Yamazaki, D.

    2017-12-01

    Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.

  3. Multilevel random effect and marginal models

    African Journals Online (AJOL)

    Multilevel random effect and marginal models for longitudinal data ... and random effect models that take the correlation among measurements of the same subject ... comparing the level of redness, pain and irritability ... clinical trial evaluating the safety profile of a new .... likelihood-based methods to compare models and.

  4. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    Science.gov (United States)

    Ou, Linjun; Cao, Jian

    2014-09-01

    In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of signal curves and square wave curves to determine the optimal value of the normalized peak identification parameters, combined with the absolute peak retention times and peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width or peak shape changes. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
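
    The smoothing-plus-normalization pipeline can be sketched as follows; scipy's prominence-based peak finder stands in for the paper's square-wave comparison step, so treat this as an analogy rather than the published algorithm:

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(7)
t = np.linspace(0, 60, 6000)                       # retention time axis (min)
signal = (np.exp(-((t - 12) / 0.3) ** 2)           # two synthetic peaks + noise
          + 0.6 * np.exp(-((t - 31) / 0.5) ** 2)
          + rng.normal(0, 0.02, t.size))

# Iterated moving average as the smoothing step.
smooth = signal.copy()
for _ in range(3):
    smooth = np.convolve(smooth, np.ones(25) / 25, mode="same")

# Normalize, then detect peaks by prominence instead of a slope threshold.
norm = (smooth - smooth.min()) / (smooth.max() - smooth.min())
idx, _ = find_peaks(norm, prominence=0.1)
print("detected retention times (min):", t[idx].round(2))
```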

  5. Ising model of a randomly triangulated random surface as a definition of fermionic string theory

    International Nuclear Information System (INIS)

    Bershadsky, M.A.; Migdal, A.A.

    1986-01-01

    Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)

  6. MEASURING PRIMORDIAL NON-GAUSSIANITY THROUGH WEAK-LENSING PEAK COUNTS

    International Nuclear Information System (INIS)

    Marian, Laura; Hilbert, Stefan; Smith, Robert E.; Schneider, Peter; Desjacques, Vincent

    2011-01-01

    We explore the possibility of detecting primordial non-Gaussianity of the local type using weak-lensing peak counts. We measure the peak abundance in sets of simulated weak-lensing maps corresponding to three models f_NL = 0, -100, and 100. Using survey specifications similar to those of EUCLID and without assuming any knowledge of the lens and source redshifts, we find the peak functions of the non-Gaussian models with f_NL = ±100 to differ by up to 15% from the Gaussian peak function at the high-mass end. For the assumed survey parameters, the probability of fitting an f_NL = 0 peak function to the f_NL = ±100 peak functions is less than 0.1%. Assuming the other cosmological parameters are known, f_NL can be measured with an error Δf_NL ∼ 13. It is therefore possible that future weak-lensing surveys like EUCLID and LSST may detect primordial non-Gaussianity from the abundance of peak counts, and provide information complementary to that obtained from the cosmic microwave background.
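
    Peak counting in a convergence map amounts to locating local maxima above a signal-to-noise threshold; a toy sketch on smoothed white noise standing in for a simulated lensing map:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

rng = np.random.default_rng(8)
kappa = gaussian_filter(rng.normal(size=(1024, 1024)), sigma=4)  # toy "map"
snr = kappa / kappa.std()

# A pixel is a peak if it equals the local maximum of its 3x3 neighbourhood.
is_peak = (snr == maximum_filter(snr, size=3))
for nu in (2, 3, 4):
    print(f"peaks with S/N > {nu}:", int(np.sum(is_peak & (snr > nu))))
```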

  7. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood--bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated from the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models
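
    As a hedged illustration of the one-compartment idea above, the sketch below (with an arbitrary uniform retention factor chosen only so that its mean is 1 - k*dt) averages products of random factors over many realizations and compares the result with the exponential function.

        import numpy as np

        rng = np.random.default_rng(0)
        n_steps, n_paths, dt, k = 50, 50_000, 0.1, 1.0

        # Discrete-time one-compartment model with a random per-step retention factor
        # R_n, chosen here uniform with E[R_n] = 1 - k*dt (an illustrative assumption).
        R = rng.uniform(1 - 2 * k * dt, 1.0, size=(n_paths, n_steps))
        X = np.cumprod(R, axis=1)            # concentration paths: products of random factors

        t = dt * np.arange(1, n_steps + 1)
        mean_conc = X.mean(axis=0)
        print(np.abs(mean_conc - (1 - k * dt) ** (t / dt)).max())  # matches exact discrete mean
        print(np.abs(mean_conc - np.exp(-k * t)).max())           # approaches exp(-kt) as dt -> 0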

  8. Estimation of the peak factor based on watershed characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Gauthier, Jean; Nolin, Simon; Ruest, Benoit [BPR Inc., Quebec, (Canada)

    2010-07-01

    Hydraulic modeling and dam structure design require the river flood flow as a primary input. For a given flood event, the ratio of peak flow over mean daily flow defines the peak factor. The peak factor value depends on the watershed and on the location along the river. The main goal of this study was to find a relationship between watershed characteristics and this peak factor. Regression analyses were carried out on 53 natural watersheds located in the southern part of the province of Quebec using data from the Centre d'expertise hydrique du Quebec (CEHQ). The watershed characteristics included in the analyses were the watershed area, the maximum flow length, the mean slope, the lake proportion and the mean elevation. The results showed that watershed area and length are the major parameters influencing the peak factor. Nine natural watersheds were also used to test the use of a multivariable model to determine the peak factor for ungauged watersheds.

  9. Characterizing the Peak in the Cosmic Microwave Background Angular Power Spectrum

    International Nuclear Information System (INIS)

    Knox, Lloyd; Page, Lyman

    2000-01-01

    A peak has been unambiguously detected in the cosmic microwave background angular spectrum. Here we characterize its properties with fits to phenomenological models. We find that the TOCO and BOOM/NA data determine the peak location to be in the range 175-243 and 151-259, respectively (at 95% confidence) and determine the peak amplitude to be between ≅70 and 90 μK. The peak shape is consistent with inflation-inspired flat, cold dark matter plus cosmological constant models of structure formation with adiabatic, nearly scale invariant initial conditions. It is inconsistent with open models and presents a great challenge to defect models. (c) 2000 The American Physical Society

  10. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    Science.gov (United States)

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased estimates of covariate effects and random effects. Moreover, these biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.

  11. The hard-core model on random graphs revisited

    International Nuclear Information System (INIS)

    Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka

    2013-01-01

    We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results, and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.

  12. Application of Poisson random effect models for highway network screening.

    Science.gov (United States)

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. The Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Random matrix model for disordered conductors

    Indian Academy of Sciences (India)

    In the interpretation of transport properties of mesoscopic systems, the multichannel ... One defines the random matrix model with N eigenvalues ... With heuristic arguments, using the ideas pertaining to the Dyson Coulomb gas analogy, ...

  14. CONSTRAINING MODELS OF TWIN-PEAK QUASI-PERIODIC OSCILLATIONS WITH REALISTIC NEUTRON STAR EQUATIONS OF STATE

    Energy Technology Data Exchange (ETDEWEB)

    Török, Gabriel; Goluchová, Katerina; Urbanec, Martin, E-mail: gabriel.torok@gmail.com, E-mail: katka.g@seznam.cz, E-mail: martin.urbanec@physics.cz [Research Centre for Computational Physics and Data Processing, Institute of Physics, Faculty of Philosophy and Science, Silesian University in Opava, Bezručovo nám. 13, CZ-746, 01 Opava (Czech Republic); and others

    2016-12-20

    Twin-peak quasi-periodic oscillations (QPOs) are observed in the X-ray power-density spectra of several accreting low-mass neutron star (NS) binaries. In our previous work we have considered several QPO models. We have identified and explored mass–angular-momentum relations implied by individual QPO models for the atoll source 4U 1636-53. In this paper we extend our study and confront QPO models with various NS equations of state (EoS). We start with simplified calculations assuming Kerr background geometry and then present results of detailed calculations considering the influence of the NS quadrupole moment (related to rotationally induced NS oblateness) assuming Hartle–Thorne spacetimes. We show that the application of a concrete EoS together with a particular QPO model yields a specific mass–angular-momentum relation. However, we demonstrate that the degeneracy in mass and angular momentum can be removed when the NS spin frequency inferred from the X-ray burst observations is considered. We inspect a large set of EoS and discuss their compatibility with the considered QPO models. We conclude that when the NS spin frequency in 4U 1636-53 is close to 580 Hz, we can exclude 51 of the 90 considered combinations of EoS and QPO models. We also discuss additional restrictions that may exclude even more combinations. Namely, 13 EoS are compatible with the observed twin-peak QPOs and the relativistic precession model. However, when considering the low-frequency QPOs and Lense–Thirring precession, only 5 EoS are compatible with the model.

  15. Analog model for quantum gravity effects: phonons in random fluids.

    Science.gov (United States)

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  16. Modeling random combustion of lycopodium particles and gas

    Directory of Open Access Journals (Sweden)

    M Bidabadi

    2016-06-01

    The random combustion of lycopodium particles has been modelled by many authors. In this paper, we extend this model and develop a different method by analyzing the effect of randomly distributed sources of combustible mixture. The flame structure is assumed to consist of a preheat-vaporization zone, a reaction zone and, finally, a post-flame zone. We divide the preheat zone into different sections and assume that the particle distributions across these sections are random. Meanwhile, it is presumed that the fuel particles vaporize first to yield gaseous fuel; in other words, most of the fuel particles are vaporized by the end of the preheat zone. The Zel'dovich number is assumed to be large; therefore, the reaction term in the preheat zone is negligible. In this work, the effect of the random distribution of particles in the preheat zone on combustion characteristics, such as burning velocity and flame temperature, is obtained for different particle radii.

  17. METing SUSY on the Z peak

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, G.; Bernabeu, J.; Vives, O. [Universitat de Valencia, Departament de Fisica Teorica, Burjassot (Spain); Universitat de Valencia-CSIC, Parc Cientific U.V., IFIC, Paterna (Spain); Mitsou, V.A.; Romero, E. [Universitat de Valencia-CSIC, Parc Cientific U.V., IFIC, Paterna (Spain)

    2016-02-15

    Recently the ATLAS experiment announced a 3σ excess at the Z-peak consisting of 29 pairs of leptons together with two or more jets, E_T^miss > 225 GeV and H_T > 600 GeV, to be compared with 10.6 ± 3.2 expected lepton pairs in the Standard Model. No excess outside the Z-peak was observed. By trying to explain this signal with SUSY we find that only relatively light gluinos, m_g ≲ 400 GeV, decaying predominantly to a Z-boson plus a light gravitino, such that nearly every gluino produces at least one Z-boson in its decay chain, could reproduce the excess. We construct an explicit general gauge mediation model able to reproduce the observed signal overcoming all the experimental limits. Needless to say, more sophisticated models could also reproduce the signal; however, any model would have to exhibit the following features: light gluinos, or heavy particles with a strong production cross section, producing at least one Z-boson in its decay chain. The implications of our findings for Run II at the LHC with the scaling on the Z peak, as well as for the direct search for gluinos and other SUSY particles, are pointed out. (orig.)

  18. METing SUSY on the Z peak

    International Nuclear Information System (INIS)

    Barenboim, G.; Bernabeu, J.; Vives, O.; Mitsou, V.A.; Romero, E.

    2016-01-01

    Recently the ATLAS experiment announced a 3σ excess at the Z-peak consisting of 29 pairs of leptons together with two or more jets, E_T^miss > 225 GeV and H_T > 600 GeV, to be compared with 10.6 ± 3.2 expected lepton pairs in the Standard Model. No excess outside the Z-peak was observed. By trying to explain this signal with SUSY we find that only relatively light gluinos, m_g ≲ 400 GeV, decaying predominantly to a Z-boson plus a light gravitino, such that nearly every gluino produces at least one Z-boson in its decay chain, could reproduce the excess. We construct an explicit general gauge mediation model able to reproduce the observed signal overcoming all the experimental limits. Needless to say, more sophisticated models could also reproduce the signal; however, any model would have to exhibit the following features: light gluinos, or heavy particles with a strong production cross section, producing at least one Z-boson in its decay chain. The implications of our findings for Run II at the LHC with the scaling on the Z peak, as well as for the direct search for gluinos and other SUSY particles, are pointed out. (orig.)

  19. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce the collaborative filtering and interdisciplinary physics approaches, and even enormous expansions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision. - Highlights: • We propose a generalized recommendation model employing random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves recommendation precision.

  20. A generalized model via random walks for information filtering

    International Nuclear Information System (INIS)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-01-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce the collaborative filtering and interdisciplinary physics approaches, and even enormous expansions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising recommendation precision. - Highlights: • We propose a generalized recommendation model employing random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves recommendation precision.
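
    As a hedged sketch of the kind of degree-based random walk the generalized model subsumes, the snippet below implements a plain mass-diffusion (ProbS-style) two-step walk on a toy user-object bipartite network; the adjacency matrix and the absence of any hybrid degree weighting are illustrative simplifications, not the authors' model.

        import numpy as np

        # Toy user-object adjacency matrix (rows: users, columns: objects).
        A = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 1, 1],
                      [0, 0, 0, 1]], dtype=float)

        k_user = A.sum(axis=1)          # user degrees
        k_obj = A.sum(axis=0)           # object degrees

        # Two-step random walk object -> user -> object (mass diffusion):
        # W[a, b] = sum_u A[u, a] * A[u, b] / (k_user[u] * k_obj[b])
        W = (A / k_user[:, None]).T @ (A / k_obj[None, :])

        scores = A @ W.T                # propagate each user's collected objects through W
        scores[A > 0] = -np.inf         # do not re-recommend objects a user already has
        print(np.argmax(scores, axis=1))  # top-ranked new object per user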

  1. Forecasting Ebola with a regression transmission model

    OpenAIRE

    Asher, Jason

    2017-01-01

    We describe a relatively simple stochastic model of Ebola transmission that was used to produce forecasts with the lowest mean absolute error among Ebola Forecasting Challenge participants. The model enabled prediction of peak incidence, the timing of this peak, and final size of the outbreak. The underlying discrete-time compartmental model used a time-varying reproductive rate modeled as a multiplicative random walk driven by the number of infectious individuals. This structure generalizes ...
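
    A minimal sketch of that structure follows; all rates, the population size and the one-generation infectious period are illustrative assumptions, not the fitted forecasting model.

        import numpy as np

        rng = np.random.default_rng(1)
        T, pop = 60, 1_000_000                  # generations simulated, population size
        S, I = pop - 10, 10                     # susceptible and infectious counts
        R_t, sigma = 1.8, 0.1                   # reproductive rate and walk volatility
        incidence = []

        for _ in range(T):
            R_t *= np.exp(sigma * rng.normal())               # multiplicative random walk
            new_inf = min(S, rng.poisson(R_t * I * S / pop))  # discrete-time transmission
            S -= new_inf
            I = new_inf                         # one-generation infectious period
            incidence.append(new_inf)

        peak = int(np.argmax(incidence))
        print(peak, incidence[peak], sum(incidence))  # peak timing, peak incidence, final size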

  2. A random energy model for size dependence : recurrence vs. transience

    NARCIS (Netherlands)

    Külske, Christof

    1998-01-01

    We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as

  3. Hydrological model calibration for flood prediction in current and future climates using probability distributions of observed peak flows and model based rainfall

    Science.gov (United States)

    Haberlandt, Uwe; Wallner, Markus; Radtke, Imke

    2013-04-01

    Derived flood frequency analysis based on continuous hydrological modelling is very demanding regarding the required length and temporal resolution of precipitation input data. Often such flood predictions are obtained using long precipitation time series from stochastic approaches or from regional climate models as input. However, the calibration of the hydrological model is usually done using short time series of observed data. This inconsistent employment of different data types for calibration and application of a hydrological model increases its uncertainty. Here, it is proposed to calibrate a hydrological model directly on probability distributions of observed peak flows using model based rainfall in line with its later application. Two examples are given to illustrate the idea. The first one deals with classical derived flood frequency analysis using input data from an hourly stochastic rainfall model. The second one concerns a climate impact analysis using hourly precipitation from a regional climate model. The results show that: (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated on extreme conditions works quite well for average conditions but not vice versa, (III) the calibration of the hydrological model using regional climate model data works as an implicit bias correction method and (IV) the best performance for flood estimation is usually obtained when model based precipitation and observed probability distribution of peak flows are used for model calibration.

  4. Empirical model for the electron density peak height disturbance in response to solar wind conditions

    Science.gov (United States)

    Blanch, E.; Altadill, D.

    2009-04-01

    Geomagnetic storms disturb the quiet behaviour of the ionosphere, its electron density and the electron density peak height, hmF2. Much work has been done to predict the variations of the electron density, but few efforts have been dedicated to predicting the variations of hmF2 under disturbed helio-geomagnetic conditions. We present the results of analyses of the F2-layer peak height disturbances that occurred during intense geomagnetic storms over one solar cycle. The results systematically show a significant peak height increase about 2 hours after the beginning of the main phase of the geomagnetic storm, independently of both the local time position of the station at the onset of the storm and the intensity of the storm. An additional uplift is observed in the post-sunset sector. The duration of the uplift and the height increase depend on the intensity of the geomagnetic storm, the season and the local time position of the station at the onset of the storm. An empirical model has been developed to predict the electron density peak height disturbances in response to solar wind conditions and local time, which can be used for nowcasting and forecasting hmF2 disturbances in the middle-latitude ionosphere. This is an important output for EURIPOS project operational purposes.

  5. Scaling of peak flows with constant flow velocity in random self-similar networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2011-01-01

    A methodology is presented to understand the role of the statistical self-similar topology of real river networks on scaling, or power law, in peak flows for rainfall-runoff events. We created Monte Carlo generated sets of ensembles of 1000 random self-similar networks (RSNs) with geometrically distributed interior and exterior generators having parameters p_i and p_e, respectively. The parameter values were chosen to replicate the observed topology of real river networks. We calculated flow hydrographs in each of these networks by numerically solving the link-based mass and momentum conservation equation under the assumption of constant flow velocity. From these simulated RSNs and hydrographs, the scaling exponents β and φ characterizing power laws with respect to drainage area, and corresponding to the width functions and flow hydrographs respectively, were estimated. We found that, in general, φ > β, which supports a similar finding first reported for simulations in the river network of the Walnut Gulch basin, Arizona. Theoretical estimation of β and φ in RSNs is a complex open problem. Therefore, using results for a simpler problem associated with the expected width function and expected hydrograph for an ensemble of RSNs, we give heuristic arguments for theoretical derivations of the scaling exponents β(E) and φ(E) that depend on the Horton ratios for stream lengths and areas. These ratios in turn have a known dependence on the parameters of the geometric distributions of RSN generators. Good agreement was found between the analytically conjectured values of β(E) and φ(E) and the values estimated by the simulated ensembles of RSNs and hydrographs. The independence of the scaling exponents φ(E) and φ with respect to the value of flow velocity and runoff intensity implies an interesting connection between unit hydrograph theory and flow dynamics. Our results provide a reference framework to study scaling exponents under more complex scenarios.
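
    Scaling exponents in such power laws are typically estimated by log-log regression; the sketch below recovers an assumed exponent from synthetic (area, peak-flow) pairs and stands in for, rather than reproduces, the RSN ensemble calculation.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic peak flows obeying Q = c * A**phi with multiplicative noise.
        phi_true, c = 0.55, 2.0
        area = 10 ** rng.uniform(0, 4, 200)           # drainage areas spanning 4 decades
        q_peak = c * area**phi_true * np.exp(rng.normal(0, 0.2, area.size))

        # The scaling exponent is the slope of the log-log regression line.
        phi_hat, _ = np.polyfit(np.log(area), np.log(q_peak), 1)
        print(f"estimated phi = {phi_hat:.3f} (true value {phi_true})")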

  6. Bayesian Peak Picking for NMR Spectra

    KAUST Repository

    Cheng, Yichen

    2014-02-01

    Protein structure determination is a very important topic in structural genomics, which helps people to understand a variety of biological functions such as protein-protein interactions, protein-DNA interactions and so on. Nowadays, nuclear magnetic resonance (NMR) is often used to determine the three-dimensional structures of proteins in vivo. This study aims to automate the peak picking step, the most important and tricky step in NMR structure determination. We propose to model the NMR spectrum by a mixture of bivariate Gaussian densities and use the stochastic approximation Monte Carlo algorithm as the computational tool to solve the problem. Under the Bayesian framework, the peak picking problem is cast as a variable selection problem. The proposed method can automatically distinguish true peaks from false ones without preprocessing the data. To the best of our knowledge, this is the first effort in the literature that tackles the peak picking problem for NMR spectrum data using a Bayesian method.
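
    The paper fits the mixture by stochastic approximation Monte Carlo under a Bayesian variable-selection prior; as a hedged stand-in, the sketch below fits the same kind of bivariate Gaussian mixture by plain EM on synthetic data, so it illustrates only the mixture-of-peaks idea, not the authors' algorithm.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)

        # Synthetic 2D "spectrum": points scattered around three true peak centers.
        centers = np.array([[1.0, 2.0], [3.5, 1.0], [2.0, 4.0]])
        pts = np.vstack([c + 0.05 * rng.normal(size=(300, 2)) for c in centers])

        # EM fit; each fitted component mean plays the role of a picked peak.
        gmm = GaussianMixture(n_components=3, covariance_type='full',
                              random_state=0).fit(pts)
        for mean, weight in zip(gmm.means_, gmm.weights_):
            print(f"peak at ({mean[0]:.2f}, {mean[1]:.2f}), weight {weight:.2f}")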

  7. Effects of random noise in a dynamical model of love

    Energy Technology Data Exchange (ETDEWEB)

    Xu Yong, E-mail: hsux3@nwpu.edu.cn [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China); Gu Rencai; Zhang Huiqing [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)

    2011-07-15

    Highlights: > We model the complexity and unpredictability of psychology as Gaussian white noise. > The stochastic system of love is considered, including bifurcation and chaos. > We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper investigates a stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties, such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation, owing to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes of the top Lyapunov exponent and the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are carried out to verify the proposed results, and good agreement is found. The dual roles of random noise, namely suppressing and inducing chaos, are thus revealed.

  8. Effects of random noise in a dynamical model of love

    International Nuclear Information System (INIS)

    Xu Yong; Gu Rencai; Zhang Huiqing

    2011-01-01

    Highlights: → We model the complexity and unpredictability of psychology as Gaussian white noise. → The stochastic system of love is considered, including bifurcation and chaos. → We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper investigates a stochastic model of love and the effects of random noise. We first revisit the deterministic model of love and present some basic properties, such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation, owing to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes of the top Lyapunov exponent and the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are carried out to verify the proposed results, and good agreement is found. The dual roles of random noise, namely suppressing and inducing chaos, are thus revealed.
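
    A hedged Euler-Maruyama sketch of such a love model with parametric Gaussian noise follows; the linear drift coefficients and the noise intensity are illustrative choices, not the paper's calibrated values.

        import numpy as np

        rng = np.random.default_rng(4)
        dt, n_steps = 0.01, 20_000
        a, b, c, d = -0.1, 2.0, -1.0, -0.1     # illustrative linear "romantic style" drifts
        sigma = 0.3                             # intensity of the parametric noise

        x, y = 1.0, 0.0                         # Romeo's and Juliet's feelings
        traj = np.empty((n_steps, 2))
        for i in range(n_steps):
            dW = np.sqrt(dt) * rng.normal()     # Gaussian white-noise increment
            x, y = (x + (a * x + b * y) * dt + sigma * y * dW,  # noise enters the coupling
                    y + (c * x + d * y) * dt)
            traj[i] = x, y

        print(traj[-3:])  # late-time state; varying sigma probes noise-induced transitions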

  9. The Research of Indoor Positioning Based on Double-peak Gaussian Model

    Directory of Open Access Journals (Sweden)

    Lina Chen

    2014-04-01

    Location fingerprinting using Wi-Fi signals has been very popular and is a well-accepted indoor positioning method. The key issue of the fingerprinting approach is generating the fingerprint radio map. Limited by the practical workload, only a few samples of the received signal strength are collected at each reference point. Unfortunately, few samples cannot accurately represent the actual distribution of the signal strength from each access point. This study finds that most Wi-Fi signals have two peaks. Based on this finding, a double-peak Gaussian algorithm is proposed to generate the fingerprint radio map. This approach requires little time to receive Wi-Fi signals, and it is easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian-function and histogram methods of generating a fingerprint radio map, this method better approximates the observed signal distribution. This paper also compares the positioning accuracy using K-Nearest Neighbour theory for the three radio maps; the test results show that the positioning distance error when utilizing the double-peak Gaussian function is smaller than with the other two methods.
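
    A hedged sketch of the double-peak idea: fit a sum of two Gaussians to the histogram of RSS samples at one reference point by least squares. The sample sizes, dBm levels and initial guesses below are assumptions for illustration, not the paper's data.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(5)

        # Synthetic RSS samples (dBm) at one reference point with two preferred levels.
        rss = np.concatenate([rng.normal(-62, 1.5, 400), rng.normal(-55, 2.0, 200)])

        def double_gauss(x, a1, mu1, s1, a2, mu2, s2):
            # Double-peak model: the sum of two Gaussian components.
            return (a1 * np.exp(-(x - mu1)**2 / (2 * s1**2))
                    + a2 * np.exp(-(x - mu2)**2 / (2 * s2**2)))

        density, edges = np.histogram(rss, bins=40, density=True)
        mids = 0.5 * (edges[:-1] + edges[1:])
        params, _ = curve_fit(double_gauss, mids, density,
                              p0=[0.1, -62, 2.0, 0.05, -55, 2.0])
        print("fitted peak means (dBm):", round(params[1], 1), round(params[4], 1))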

  10. Using Random Forest Models to Predict Organizational Violence

    Science.gov (United States)

    Levine, Burton; Bobashev, Georgly

    2012-01-01

    We present a methodology to assess the proclivity of an organization to commit violence against non-government personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data is longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally a summary of the forest in a "meta-tree."

  11. Statistical mechanical lattice model of the dual-peak electrocaloric effect in ferroelectric relaxors and the role of pressure

    International Nuclear Information System (INIS)

    Dunne, Lawrence J; Axelsson, Anna-Karin; Alford, Neil McN; Valant, Matjaz; Manos, George

    2011-01-01

    Despite considerable effort, the microscopic origin of the electrocaloric (EC) effect in ferroelectric relaxors is still intensely discussed. Ferroelectric relaxors typically display a dual-peak EC effect, whose origin is uncertain. Here we present an exact statistical mechanical matrix treatment of a lattice model of polar nanoregions forming in a neutral background and use this approach to study the characteristics of the EC effect in ferroelectric relaxors under varying electric field and pressure. The dual peaks seen in the EC properties of ferroelectric relaxors are due to the formation and ordering of polar nanoregions. The model predicts significant enhancement of the EC temperature rise with pressure which may have some contribution to the giant EC effect.

  12. Alternative model of random surfaces

    International Nuclear Information System (INIS)

    Ambartzumian, R.V.; Sukiasian, G.S.; Savvidy, G.K.; Savvidy, K.G.

    1992-01-01

    We analyse models of triangulated random surfaces and demand that geometrically nearby configurations of these surfaces must have close actions. The inclusion of this principle leads us to suggest a new action, which is a modified Steiner functional. General arguments, based on the Minkowski inequality, show that the maximal contribution to the partition function comes from surfaces close to the sphere. (orig.)

  13. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    International Nuclear Information System (INIS)

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-01-01

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST- and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by the limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure between the data and simulated volumes, due to cosmic variance.

  14. Enhanced Isotopic Ratio Outlier Analysis (IROA) Peak Detection and Identification with Ultra-High Resolution GC-Orbitrap/MS: Potential Application for Investigation of Model Organism Metabolomes

    Directory of Open Access Journals (Sweden)

    Yunping Qiu

    2018-01-01

    Identifying non-annotated peaks may have a significant impact on the understanding of biological systems. In silico methodologies have focused on ESI LC/MS/MS for identifying non-annotated MS peaks. In this study, we employed in silico methodology to develop an Isotopic Ratio Outlier Analysis (IROA) workflow using enhanced mass spectrometric data acquired with the ultra-high resolution GC-Orbitrap/MS to determine the identity of non-annotated metabolites. The higher resolution of the GC-Orbitrap/MS, together with its wide dynamic range, resulted in more IROA peak pairs detected and increased reliability of chemical formulae generation (CFG). IROA uses two different 13C-enriched carbon sources (randomized 95% 12C and 95% 13C) to produce mirror-image isotopologue pairs, whose mass difference reveals the carbon chain length (n), which aids in the identification of endogenous metabolites. Accurate m/z, n, and derivatization information are obtained from our GC/MS workflow for unknown metabolite identification, and aid in silico methodologies for identifying isomeric and non-annotated metabolites. We were able to mine more mass spectral information using the same Saccharomyces cerevisiae growth protocol (Qiu et al., Anal. Chem., 2016) with the ultra-high resolution GC-Orbitrap/MS, using 10% ammonia in methane as the CI reagent gas. We identified 244 IROA peak pairs, which significantly increased the IROA detection capability compared with our previous report (126 IROA peak pairs using a GC-TOF/MS machine). For 55 selected metabolites identified from matched IROA CI and EI spectra, using the GC-Orbitrap/MS vs. GC-TOF/MS, the average mass deviation for the GC-Orbitrap/MS was 1.48 ppm, whereas the average mass deviation was 32.2 ppm for the GC-TOF/MS machine. In summary, the higher resolution and wider dynamic range of the GC-Orbitrap/MS enabled more accurate CFG, and the coupling of accurate-mass GC/MS IROA methodology with in silico fragmentation has great ...

  15. Enhanced Isotopic Ratio Outlier Analysis (IROA) Peak Detection and Identification with Ultra-High Resolution GC-Orbitrap/MS: Potential Application for Investigation of Model Organism Metabolomes.

    Science.gov (United States)

    Qiu, Yunping; Moir, Robyn D; Willis, Ian M; Seethapathy, Suresh; Biniakewitz, Robert C; Kurland, Irwin J

    2018-01-18

    Identifying non-annotated peaks may have a significant impact on the understanding of biological systems. In silico methodologies have focused on ESI LC/MS/MS for identifying non-annotated MS peaks. In this study, we employed in silico methodology to develop an Isotopic Ratio Outlier Analysis (IROA) workflow using enhanced mass spectrometric data acquired with the ultra-high resolution GC-Orbitrap/MS to determine the identity of non-annotated metabolites. The higher resolution of the GC-Orbitrap/MS, together with its wide dynamic range, resulted in more IROA peak pairs detected and increased reliability of chemical formulae generation (CFG). IROA uses two different 13C-enriched carbon sources (randomized 95% 12C and 95% 13C) to produce mirror-image isotopologue pairs, whose mass difference reveals the carbon chain length (n), which aids in the identification of endogenous metabolites. Accurate m/z, n, and derivatization information are obtained from our GC/MS workflow for unknown metabolite identification, and aid in silico methodologies for identifying isomeric and non-annotated metabolites. We were able to mine more mass spectral information using the same Saccharomyces cerevisiae growth protocol (Qiu et al., Anal. Chem., 2016) with the ultra-high resolution GC-Orbitrap/MS, using 10% ammonia in methane as the CI reagent gas. We identified 244 IROA peak pairs, which significantly increased the IROA detection capability compared with our previous report (126 IROA peak pairs using a GC-TOF/MS machine). For 55 selected metabolites identified from matched IROA CI and EI spectra, using the GC-Orbitrap/MS vs. GC-TOF/MS, the average mass deviation for the GC-Orbitrap/MS was 1.48 ppm, whereas the average mass deviation was 32.2 ppm for the GC-TOF/MS machine. In summary, the higher resolution and wider dynamic range of the GC-Orbitrap/MS enabled more accurate CFG, and the coupling of accurate-mass GC/MS IROA methodology with in silico fragmentation has great potential in ...

  16. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
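
    A hedged sketch of a binomial-model score (without ProVerB's intensity weighting or its exact scoring function, which are not reproduced here): count the theoretical fragment peaks matched within a tolerance and report the binomial tail probability of doing at least that well by chance. The tolerance and random-match probability are illustrative assumptions.

        import numpy as np
        from scipy.stats import binom

        def binomial_peak_score(theoretical_mz, observed_mz, tol=0.5, p_random=0.05):
            # k of n theoretical peaks matched within tol; p_random is the assumed
            # probability that a single theoretical peak is matched by chance.
            k = sum(np.any(np.abs(observed_mz - mz) <= tol) for mz in theoretical_mz)
            n = len(theoretical_mz)
            p_value = binom.sf(k - 1, n, p_random)   # P(X >= k) under random matching
            return k, -np.log10(p_value)

        theo = np.array([175.1, 304.2, 401.3, 530.3, 659.4])
        obs = np.array([175.2, 304.1, 530.5, 612.0, 700.9])
        print(binomial_peak_score(theo, obs))        # 3 matches -> score around 2.9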

  17. Mask effects on cosmological studies with weak-lensing peak statistics

    International Nuclear Information System (INIS)

    Liu, Xiangkun; Pan, Chuzhong; Fan, Zuhui; Wang, Qiao

    2014-01-01

    With numerical simulations, we analyze in detail how the bad data removal, i.e., the mask effect, can influence the peak statistics of the weak-lensing convergence field reconstructed from the shear measurement of background galaxies. It is found that high peak fractions are systematically enhanced because of the presence of masks; the larger the masked area is, the higher the enhancement is. In the case where the total masked area is about 13% of the survey area, the fraction of peaks with signal-to-noise ratio ν ≥ 3 is ∼11% of the total number of peaks, compared with ∼7% in the mask-free case in our considered cosmological model. This can have significant effects on cosmological studies with weak-lensing convergence peak statistics, inducing a large bias in the parameter constraints if the effects are not taken into account properly. Even for a survey area of 9 deg^2, the bias in (Ω_m, σ_8) is already intolerably large and close to 3σ. It is noted that most of the affected peaks are close to the masked regions. Therefore, excluding peaks in those regions in the peak statistics can reduce the bias effect, but at the expense of losing usable survey area. Further investigations find that the enhancement of the number of high peaks around the masked regions can be largely attributed to the smaller number of galaxies usable in the weak-lensing convergence reconstruction, leading to higher noise than in the areas away from the masks. We thus develop a model in which we exclude only those very large masks with radius larger than 3', but keep all the other masked regions in the peak counting statistics. For the remaining part, we treat the areas close to and away from the masked regions separately, with different noise levels. It is shown that this two-noise-level model can account for the mask effect on peak statistics very well, and the bias in cosmological parameters is significantly reduced if this model is applied in the parameter fitting.

  18. Premium Pricing of Liability Insurance Using Random Sum Model

    Directory of Open Access Journals (Sweden)

    Mujiati Dwi Kartikasari

    2017-03-01

    Premium pricing is one of the important activities in insurance. The non-life insurance premium is calculated from the expected value of historical claims data. The historical claims are accumulated so that they form a sum of a random number of independent terms, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined; the combination of these distributions is called a compound distribution. Using liability claim insurance data, we analyze premium pricing with the random sum model based on the compound distribution.
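
    A hedged Monte Carlo sketch of random-sum premium pricing under the expected-value principle, with an assumed Poisson claim frequency, lognormal claim severity and loading factor (all illustrative, not the paper's data):

        import numpy as np

        rng = np.random.default_rng(6)
        n_years, lam = 20_000, 2.5            # simulated policy years, Poisson frequency
        mu, sigma, loading = 8.0, 1.0, 0.2    # lognormal severity parameters, loading

        counts = rng.poisson(lam, n_years)    # N: random number of claims per year
        totals = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])  # S = sum X_j

        # Expected-value principle: premium = (1 + loading) * E[S], and for a
        # compound (random sum) distribution E[S] = E[N] * E[X].
        analytic = (1 + loading) * lam * np.exp(mu + sigma**2 / 2)
        print(f"simulated premium {(1 + loading) * totals.mean():.0f} vs analytic {analytic:.0f}")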

  19. Random matrix approach to plasmon resonances in the random impedance network model of disordered nanocomposites

    Science.gov (United States)

    Olekhno, N. A.; Beltukov, Y. M.

    2018-05-01

    Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show a good agreement with the results of numerical simulations in a wide range of metal filling fractions.

  20. Peak-interviewet

    DEFF Research Database (Denmark)

    Raalskov, Jesper; Warming-Rasmussen, Bent

    The peak interview is a particularly effective method for making unconscious human resources conscious. The focus person (the interviewee) is interviewed about a self-chosen, personal success experience. The therapist/coach (the interviewer) asks about the process that led to this success. This uncovers ... which the focus person wishes to take up (new goals or new processes). This working paper describes what is meant by a peak interview, the theoretical foundation of the peak interview, and the methodology for conducting a trustful and effective peak interview.

  1. Modelling of electric characteristics of 150-watt peak solar panel using Boltzmann sigmoid function under various temperature and irradiance

    Science.gov (United States)

    Sapteka, A. A. N. G.; Narottama, A. A. N. M.; Winarta, A.; Amerta Yasa, K.; Priambodo, P. S.; Putra, N.

    2018-01-01

    Solar energy utilized with solar panels is a renewable energy that needs to be studied further. Unsurprisingly, the sites nearest the equator receive the most solar energy. In this paper, a modelling of the electrical characteristics of 150-Watt-peak solar panels using the Boltzmann sigmoid function under various temperatures and irradiances is reported. Current, voltage, temperature and irradiance data were collected in Denpasar, a city located just south of the equator. A solar power meter was used to measure the irradiance level, while a digital thermometer was used to measure the temperatures of the front and back panels. Short-circuit current and open-circuit voltage data were also collected at different temperature and irradiance levels. Statistically, the electrical characteristics of a 150-Watt-peak solar panel can be modelled using the Boltzmann sigmoid function with a good fit. Therefore, it can be concluded that the Boltzmann sigmoid function may be used to determine the current and voltage characteristics of 150-Watt-peak solar panels under various temperatures and irradiances.
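
    A hedged sketch of the fitting step: the standard Boltzmann sigmoid I(V) = I2 + (I1 - I2) / (1 + exp((V - V0)/dV)) is fitted to synthetic I-V points by least squares. The panel values below are rough stand-ins, not the paper's Denpasar measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def boltzmann(v, i1, i2, v0, dv):
            # Boltzmann sigmoid: plateau at i1, dropping towards i2 around v0.
            return i2 + (i1 - i2) / (1 + np.exp((v - v0) / dv))

        rng = np.random.default_rng(7)
        v = np.linspace(0, 22, 50)                              # voltage sweep (V)
        i_meas = boltzmann(v, 8.6, 0.0, 19.5, 1.2) + rng.normal(0, 0.05, v.size)

        params, _ = curve_fit(boltzmann, v, i_meas, p0=[8.0, 0.0, 18.0, 1.0])
        power = v * boltzmann(v, *params)                       # P = V * I along the curve
        print(f"fitted V0 = {params[2]:.2f} V, estimated peak power = {power.max():.1f} W")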

  2. Automated Peak Picking and Peak Integration in Macromolecular NMR Spectra Using AUTOPSY

    Science.gov (United States)

    Koradi, Reto; Billeter, Martin; Engeli, Max; Güntert, Peter; Wüthrich, Kurt

    1998-12-01

    A new approach for automated peak picking of multidimensional protein NMR spectra with strong overlap is introduced, which makes use of the program AUTOPSY (automated peak picking for NMR spectroscopy). The main elements of this program are a novel function for local noise level calculation, the use of symmetry considerations, and the use of lineshapes extracted from well-separated peaks for resolving groups of strongly overlapping peaks. The algorithm generates peak lists with precise chemical shifts and integral intensities, and a reliability measure for the recognition of each peak. The results of automated peak picking of NOESY spectra with AUTOPSY were tested in combination with the combined automated NOESY cross-peak assignment and structure calculation routine NOAH implemented in the program DYANA. The quality of the resulting structures was found to be comparable with that of corresponding data obtained with manual peak picking.

  3. A New-Trend Model-Based to Solve the Peak Power Problems in OFDM Systems

    Directory of Open Access Journals (Sweden)

    Ashraf A. Eltholth

    2008-01-01

    The high peak-to-average power ratio (PAR) levels of orthogonal frequency division multiplexing (OFDM) signals attracted the attention of many researchers during the past decade. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date; they sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this paper, we propose a new trend in mitigating the peak power problem in OFDM systems based on modeling the effects of clipping and amplifier nonlinearities. We show that the distortion due to these effects is highly related to the dynamic range itself, rather than to the clipping level or the saturation level of the nonlinear amplifier, and we thus propose two criteria to reduce the dynamic range of the OFDM signal, namely the use of MSK modulation and the use of the Hadamard transform. Computer simulations of the OFDM system using Matlab match the deduced model in terms of OFDM signal quality metrics such as BER, ACPR and EVM. Simulation results also show that even though the PAR reduction achieved by the two proposed criteria is not significant, the reduction in the amount of distortion due to the HPA is substantial.
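
    As a hedged illustration of the quantity under discussion, the sketch below estimates the PAR distribution of baseband OFDM symbols built from QPSK subcarriers; the subcarrier count and oversampling factor are arbitrary choices, and no MSK or Hadamard processing is applied.

        import numpy as np

        rng = np.random.default_rng(8)
        n_sub, n_sym, oversample = 64, 10_000, 4

        papr_db = np.empty(n_sym)
        for s in range(n_sym):
            bits = rng.integers(0, 4, n_sub)
            qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * bits))   # unit-power QPSK symbols
            spec = np.zeros(n_sub * oversample, dtype=complex)
            spec[:n_sub] = qpsk                                  # zero-pad to oversample
            x = np.fft.ifft(spec)                                # time-domain OFDM symbol
            power = np.abs(x) ** 2
            papr_db[s] = 10 * np.log10(power.max() / power.mean())

        for thr in (6, 8, 10):                                   # complementary CDF of PAR
            print(f"P(PAR > {thr} dB) = {(papr_db > thr).mean():.4f}")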

  4. Households' hourly electricity consumption and peak demand in Denmark

    DEFF Research Database (Denmark)

    Møller Andersen, Frits; Baldini, Mattia; Hansen, Lars Gårn

    2017-01-01

    ... consumption, we analyse the contribution of appliances and new services, such as individual heat pumps and electric vehicles, to peak consumption and the need for demand response incentives to reduce the peak. Initially, the paper presents a new model that represents the hourly electricity consumption profile of households in Denmark. The model considers hourly consumption profiles for different household appliances and their contribution to annual household electricity consumption. When applying the model to an official scenario for annual electricity consumption, assuming non-flexible consumption, household consumption is expected to increase considerably due to a considerable introduction of electric vehicles and individual heat pumps; peak-hour consumption especially is expected to increase. Next, the paper presents results from a new experiment where household customers are given economic and/or environmental ...

  5. Electric peak power forecasting by year 2025

    International Nuclear Information System (INIS)

    Alsayegh, O.A.; Al-Matar, O.A.; Fairouz, F.A.; Al-Mulla Ali, A.

    2005-01-01

    Peak power demand in Kuwait up to the year 2025 was predicted using an artificial neural network (ANN) model. The aim of the study was to investigate the effect of air conditioning (A/C) units on long-term power demand. Five socio-economic factors were selected as inputs for the simulation: (1) gross national product, (2) population, (3) number of buildings, (4) imports of A/C units, and (5) index of industrial production. The study used socio-economic data from 1978 to 2000. Historical data of the first 10 years of the studied time period were used to train the ANN. The electrical network was then simulated to forecast peak power for the following 11 years. The calculated error was then used for years in which power consumption data were not available. The study demonstrated that average peak power rates increased by 4100 MW every 5 years. Various scenarios related to changes in population, the number of buildings, and the quantity of A/C units were then modelled to estimate long-term peak power demand. Results of the study demonstrated that population had the strongest impact on future power demand, while the number of buildings had the smallest impact. It was concluded that peak power growth can be controlled through the use of different immigration policies, increased A/C efficiency, and the use of vertical housing. 7 refs., 2 tabs., 6 figs

  6. Electric peak power forecasting by year 2025

    Energy Technology Data Exchange (ETDEWEB)

    Alsayegh, O.A.; Al-Matar, O.A.; Fairouz, F.A.; Al-Mulla Ali, A. [Kuwait Inst. for Scientific Research, Kuwait City (Kuwait). Div. of Environment and Urban Development

    2005-07-01

    Peak power demand in Kuwait up to the year 2025 was predicted using an artificial neural network (ANN) model. The aim of the study was to investigate the effect of air conditioning (A/C) units on long-term power demand. Five socio-economic factors were selected as inputs for the simulation: (1) gross national product, (2) population, (3) number of buildings, (4) imports of A/C units, and (5) index of industrial production. The study used socio-economic data from 1978 to 2000. Historical data of the first 10 years of the studied time period were used to train the ANN. The electrical network was then simulated to forecast peak power for the following 11 years. The calculated error was then used for years in which power consumption data were not available. The study demonstrated that average peak power rates increased by 4100 MW every 5 years. Various scenarios related to changes in population, the number of buildings, and the quantity of A/C units were then modelled to estimate long-term peak power demand. Results of the study demonstrated that population had the strongest impact on future power demand, while the number of buildings had the smallest impact. It was concluded that peak power growth can be controlled through the use of different immigration policies, increased A/C efficiency, and the use of vertical housing. 7 refs., 2 tabs., 6 figs.
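
    A hedged sketch of the described setup, with synthetic stand-ins for the five socio-economic inputs and the peak-demand series (the real Kuwaiti data are not reproduced): train a small neural network on the first ten years and forecast the rest.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(9)
        years = np.arange(1978, 2001)

        # Five synthetic inputs standing in for GNP, population, buildings,
        # A/C imports and the index of industrial production.
        X = np.column_stack([np.linspace(1.0, 2.0, years.size)
                             + 0.05 * rng.normal(size=years.size) for _ in range(5)])
        peak_mw = 2000 + 3000 * X.mean(axis=1) + 100 * rng.normal(size=years.size)

        scaler = StandardScaler().fit(X[:10])
        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        ann.fit(scaler.transform(X[:10]), peak_mw[:10])     # train on the first 10 years

        forecast = ann.predict(scaler.transform(X[10:]))    # simulate the remaining years
        print(np.c_[years[10:], forecast.round()][:5])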

  7. Probabilistic peak detection for first-order chromatographic data.

    Science.gov (United States)

    Lopatka, M; Vivó-Truyols, G; Sjerps, M J

    2014-03-19

    We present a novel algorithm for probabilistic peak detection in first-order chromatographic data. Unlike conventional methods that deliver a binary answer pertaining to the expected presence or absence of a chromatographic peak, our method calculates the probability of a point being affected by such a peak. The algorithm makes use of chromatographic information (i.e. the expected width of a single peak and the standard deviation of baseline noise). As prior information of the existence of a peak in a chromatographic run, we make use of the statistical overlap theory. We formulate an exhaustive set of mutually exclusive hypotheses concerning presence or absence of different peak configurations. These models are evaluated by fitting a segment of chromatographic data by least-squares. The evaluation of these competing hypotheses can be performed as a Bayesian inferential task. We outline the potential advantages of adopting this approach for peak detection and provide several examples of both improved performance and increased flexibility afforded by our approach. Copyright © 2014 Elsevier B.V. All rights reserved.
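
    A hedged sketch of the model-comparison idea, with a flat prior standing in for the statistical-overlap-theory prior and a BIC-style penalty approximating the marginal likelihoods: fit baseline-only and baseline-plus-peak models by least squares and turn the residuals into a posterior peak probability.

        import numpy as np
        from scipy.optimize import curve_fit

        def peak_model(t, h, mu, w, b):
            return b + h * np.exp(-(t - mu)**2 / (2 * w**2))   # one Gaussian peak on a baseline

        def prob_peak(t, y, sigma, prior_peak=0.5, w_expected=1.0):
            # Posterior probability that the segment contains one peak rather than none.
            rss0 = np.sum((y - y.mean())**2)                   # H0: constant baseline (1 parameter)
            p0 = [y.max() - y.mean(), t[np.argmax(y)], w_expected, y.mean()]
            try:
                popt, _ = curve_fit(peak_model, t, y, p0=p0, maxfev=20_000)
                rss1 = np.sum((y - peak_model(t, *popt))**2)   # H1: baseline + peak (4 parameters)
            except RuntimeError:                               # fit failed: fall back to the prior
                return prior_peak
            # Gaussian-noise log Bayes factor with a BIC-style penalty for 3 extra parameters.
            log_bf = (rss0 - rss1) / (2 * sigma**2) - 1.5 * np.log(t.size)
            log_odds = np.log(prior_peak / (1 - prior_peak)) + log_bf
            return 1.0 / (1.0 + np.exp(-np.clip(log_odds, -700, 700)))

        t = np.linspace(0, 10, 200)
        rng = np.random.default_rng(10)
        noise = rng.normal(0, 0.05, t.size)
        print(prob_peak(t, 0.4 * np.exp(-(t - 5)**2 / 2) + noise, sigma=0.05))  # close to 1
        print(prob_peak(t, noise, sigma=0.05))                                  # close to 0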

  8. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring low computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.

  9. Recent developments in exponential random graph (p*) models for social networks

    NARCIS (Netherlands)

    Robins, Garry; Snijders, Tom; Wang, Peng; Handcock, Mark; Pattison, Philippa

    This article reviews new specifications for exponential random graph models proposed by Snijders et al. [Snijders, T.A.B., Pattison, P., Robins, G.L., Handcock, M., 2006. New specifications for exponential random graph models. Sociological Methodology] and demonstrates their improvement over

  10. The sharp peak-flat trough pattern and critical speculation

    OpenAIRE

    Roehner, B. M.; Sornette, D.

    1998-01-01

    We find empirically a characteristic sharp peak-flat trough pattern in a large set of commodity prices. We argue that the sharp peak structure reflects an endogenous inter-market organization, and that peaks may be seen as local ``singularities'' resulting from imitation and herding. These findings impose a novel stringent constraint on the construction of models. Intermittent amplification is not sufficient and nonlinear effects seem necessary to account for the observations.

  11. Evolution of the concentration PDF in random environments modeled by global random walk

    Science.gov (United States)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and

  12. A Framework for Understanding and Generating Integrated Solutions for Residential Peak Energy Demand

    Science.gov (United States)

    Buys, Laurie; Vine, Desley; Ledwich, Gerard; Bell, John; Mengersen, Kerrie; Morris, Peter; Lewis, Jim

    2015-01-01

    Supplying peak energy demand in a cost effective, reliable manner is a critical focus for utilities internationally. Successfully addressing peak energy concerns requires understanding of all the factors that affect electricity demand, especially at peak times. This paper builds on past attempts to propose models designed to aid our understanding of the influences on residential peak energy demand in a systematic and comprehensive way. Our model has been developed through a group model building process as a systems framework of the problem situation, to model the complexity within and between systems and indicate how changes in one element might flow on to others. It comprises themes (social, technical and change management options) networked together in a way that captures their influence and association with each other and also their influence, association and impact on appliance usage and residential peak energy demand. The real value of the model is in creating awareness, understanding and insight into the complexity of residential peak energy demand and in working with this complexity to identify and integrate the social, technical and change management option themes and their impact on appliance usage and residential energy demand at peak times. PMID:25807384

  13. Peak quantification in surface-enhanced laser desorption/ionization by using mixture models

    NARCIS (Netherlands)

    Dijkstra, Martijn; Roelofsen, Han; Vonk, Roel J.; Jansen, Ritsert C.

    2006-01-01

    Surface-enhanced laser desorption/ionization (SELDI) time of flight (TOF) is a mass spectrometry technology for measuring the composition of a sampled protein mixture. A mass spectrum contains peaks corresponding to proteins in the sample. The peak areas are proportional to the measured

  14. Disorder Identification in Hysteresis Data: Recognition Analysis of the Random-Bond-Random-Field Ising Model

    International Nuclear Information System (INIS)

    Ovchinnikov, O. S.; Jesse, S.; Kalinin, S. V.; Bintacchit, P.; Trolier-McKinstry, S.

    2009-01-01

    An approach for the direct identification of disorder type and strength in physical systems based on recognition analysis of hysteresis loop shape is developed. A large number of theoretical examples uniformly distributed in the parameter space of the system is generated and is decorrelated using principal component analysis (PCA). The PCA components are used to train a feed-forward neural network using the model parameters as targets. The trained network is used to analyze hysteresis loops for the investigated system. The approach is demonstrated using a 2D random-bond-random-field Ising model, and polarization switching in polycrystalline ferroelectric capacitors.

  15. A random walk model for evaluating clinical trials involving serial observations.

    Science.gov (United States)

    Hopper, J L; Young, G P

    1988-05-01

    For clinical trials where the variable of interest is ordered and categorical (for example, disease severity, symptom scale), and where measurements are taken at intervals, it might be possible to achieve a greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine with inference based on standard likelihood theory. In general the model can allow for randomly censored data, incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and applications to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.
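
    As a toy illustration of fitting such a walk by maximum likelihood, the sketch below estimates the probabilities that a patient's ordered score moves up, moves down, or stays put between visits; censoring, boundary categories and the prognostic covariates discussed in the paper are all omitted, and the data are invented:

        import numpy as np
        from scipy.optimize import minimize

        def neg_log_lik(theta, steps):
            w = np.exp(np.append(theta, 0.0))      # softmax over (up, down, stay)
            p = w / w.sum()
            counts = np.array([(steps == 1).sum(), (steps == -1).sum(),
                               (steps == 0).sum()])
            return -np.sum(counts * np.log(p))

        scores = np.array([3, 3, 2, 3, 2, 2, 1])   # one patient's severity ratings
        steps = np.diff(scores)
        fit = minimize(neg_log_lik, x0=np.zeros(2), args=(steps,))
        w = np.exp(np.append(fit.x, 0.0))
        p_up, p_down, p_stay = w / w.sum()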

  16. Probabilistic model for untargeted peak detection in LC-MS using Bayesian statistics

    NARCIS (Netherlands)

    Woldegebriel, M.; Vivó-Truyols, G.

    2015-01-01

    We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography mass spectrometry (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are only affected by noise.

  17. Probabilistic Model for Untargeted Peak Detection in LC-MS Using Bayesian Statistics.

    Science.gov (United States)

    Woldegebriel, Michael; Vivó-Truyols, Gabriel

    2015-07-21

    We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography-mass spectrometry (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are only affected by noise. The use of probabilities contrasts with the traditional method in which a binary answer is given, relying on a threshold. By contrast, with the Bayesian peak detection presented here, the values of probability can be further propagated into other preprocessing steps, which will increase (or decrease) the importance of chromatographic regions into the final results. The present work is based on the use of the statistical overlap theory of component overlap from Davis and Giddings (Davis, J. M.; Giddings, J. Anal. Chem. 1983, 55, 418-424) as prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and was able to successfully distinguish chemical noise from actual peaks without any data preprocessing.

  18. Random-growth urban model with geographical fitness

    Science.gov (United States)

    Kii, Masanobu; Akimoto, Keigo; Doi, Kenji

    2012-12-01

    This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
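
    A minimal simulation in the spirit of this model; it covers only the positive-fitness regime and uses illustrative rates, so it reproduces neither the continuum analysis nor the negative-fitness extension:

        import numpy as np

        rng = np.random.default_rng(0)

        def grow_cities(n_steps=100_000, new_city_rate=0.05, fitness_sd=1.0):
            """Each step adds one migrant; occasionally a new city is founded,
            otherwise an existing city grows with probability proportional
            to population times geographical fitness."""
            pops = [1.0]
            fit = [max(rng.normal(1.0, fitness_sd), 0.01)]
            for _ in range(n_steps):
                if rng.random() < new_city_rate:
                    pops.append(1.0)
                    fit.append(max(rng.normal(1.0, fitness_sd), 0.01))
                else:
                    w = np.array(pops) * np.array(fit)
                    i = rng.choice(len(pops), p=w / w.sum())
                    pops[i] += 1.0
            return np.sort(pops)[::-1]             # rank-size list, Pareto-like tail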

  19. Smooth random change point models.

    Science.gov (United States)

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
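
    For intuition, a fixed-effects sketch of one common smooth change point formulation, a softplus-smoothed broken stick; the paper's models additionally include per-subject random effects, its smooth transition need not match this one, and all numbers below are invented:

        import numpy as np
        from scipy.optimize import curve_fit

        def smooth_broken_stick(t, b0, b1, b2, tau, gamma):
            """Two linear phases joined smoothly at tau; as gamma -> 0
            this approaches the sharp broken-stick model."""
            return b0 + b1 * t + b2 * gamma * np.log1p(np.exp((t - tau) / gamma))

        rng = np.random.default_rng(1)
        t = np.linspace(-10, 0, 40)                # years before death
        y = (25 + 0.1 * t - 1.5 * np.clip(t + 4, 0, None)
             + rng.normal(0, 0.5, t.size))         # simulated cognitive scores
        popt, _ = curve_fit(smooth_broken_stick, t, y, p0=[25, 0.1, -1.5, -4, 0.5])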

  20. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  1. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    Science.gov (United States)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  2. Force Limited Random Vibration Test of TESS Camera Mass Model

    Science.gov (United States)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

    2015-01-01

    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), European Space Research and Technology Center (ESTEC), and Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process on how the force limit method was developed and applied on the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.

  3. Simulating intrafraction prostate motion with a random walk model.

    Science.gov (United States)

    Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O

    2017-01-01

    Prostate motion during radiation therapy (i.e., intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used for input in a random walk prostate motion model. The recorded motion was categorized on the basis of whether any transient excursions (i.e., rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace, and transient motion was separately modeled as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion using either motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm^2/min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during
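
    A sketch of the excursion-free part of such a simulation; the axis-specific diffusion coefficients and the time step are illustrative (chosen to be of the same order as the agreement figures quoted above), not the paper's estimates:

        import numpy as np

        rng = np.random.default_rng(2)
        D = {"LR": 0.05, "SI": 0.3, "AP": 0.4}     # mm^2/min, illustrative values

        def simulate_trace(minutes=10.0, dt_s=1.0):
            """Gaussian random walk per axis: step variance 2*D*dt."""
            n = int(minutes * 60 / dt_s)
            dt_min = dt_s / 60.0
            return {ax: np.cumsum(rng.normal(0.0, np.sqrt(2 * d * dt_min), n))
                    for ax, d in D.items()}        # positions in mm over time

        trace = simulate_trace()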

  4. Scaling of peak flows with constant flow velocity in random self-similar networks

    Directory of Open Access Journals (Sweden)

    R. Mantilla

    2011-07-01

    A methodology is presented to understand the role of the statistical self-similar topology of real river networks on scaling, or power law, in peak flows for rainfall-runoff events. We created Monte Carlo generated sets of ensembles of 1000 random self-similar networks (RSNs) with geometrically distributed interior and exterior generators having parameters p_i and p_e, respectively. The parameter values were chosen to replicate the observed topology of real river networks. We calculated flow hydrographs in each of these networks by numerically solving the link-based mass and momentum conservation equation under the assumption of constant flow velocity. From these simulated RSNs and hydrographs, the scaling exponents β and φ characterizing power laws with respect to drainage area, and corresponding to the width functions and flow hydrographs respectively, were estimated. We found that, in general, φ > β, which supports a similar finding first reported for simulations in the river network of the Walnut Gulch basin, Arizona. Theoretical estimation of β and φ in RSNs is a complex open problem. Therefore, using results for a simpler problem associated with the expected width function and expected hydrograph for an ensemble of RSNs, we give heuristic arguments for theoretical derivations of the scaling exponents β(E) and φ(E) that depend on the Horton ratios for stream lengths and areas. These ratios in turn have a known dependence on the parameters of the geometric distributions of RSN generators. Good agreement was found between the analytically conjectured values of β(E) and φ(E) and the values estimated by the simulated ensembles of RSNs and hydrographs. The independence of the scaling exponents φ(E) and φ with respect to the value of flow velocity and runoff intensity implies an interesting connection between unit

  5. Degree of conversion of resin-based materials cured with dual-peak or single-peak LED light-curing units.

    Science.gov (United States)

    Lucey, Siobhan M; Santini, Ario; Roebuck, Elizabeth M

    2015-03-01

    There is a lack of data on polymerization of resin-based materials (RBMs) used in paediatric dentistry, using dual-peak light-emitting diode (LED) light-curing units (LCUs). To evaluate the degree of conversion (DC) of RBMs cured with dual-peak or single-peak LED LCUs. Samples of Vit-l-escence (Ultradent) and Herculite XRV Ultra (Kerr) and fissure sealants Delton Clear and Delton Opaque (Dentsply) were prepared (n = 3 per group) and cured with either one of two dual-peak LCUs (bluephase® G2; Ivoclar Vivadent or Valo; Ultradent) or a single-peak LCU (bluephase®; Ivoclar Vivadent). High-performance liquid chromatography and nuclear magnetic resonance spectroscopy were used to confirm the presence or absence of initiators other than camphorquinone. The DC was determined using micro-Raman spectroscopy. Data were analysed using general linear model ANOVA; α = 0.05. With Herculite XRV Ultra, the single-peak LCU gave higher DC values than either of the two dual-peak LCUs (P < 0.05). Both fissure sealants showed higher DC compared with the two RBMs (P < 0.05); the DC at the bottom of the clear sealant was greater than the opaque sealant (P < 0.05). 2,4,6-trimethylbenzoyldiphenylphosphine oxide (Lucirin® TPO) was found only in Vit-l-escence. Dual-peak LED LCUs may not be best suited for curing non-Lucirin® TPO-containing materials. A clear sealant showed a better cure throughout the material and may be more appropriate than opaque versions in deep fissures. © 2014 BSPD, IAPD and John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. Automatic fitting of Gaussian peaks using abductive machine learning

    Science.gov (United States)

    Abdel-Aal, R. E.

    1998-02-01

    Analytical techniques have been used for many years for fitting Gaussian peaks in nuclear spectroscopy. However, the complexity of the approach warrants looking for machine-learning alternatives where intensive computations are required only once (during training), while actual analysis on individual spectra is greatly simplified and quickened. This should allow the use of simple portable systems for fast and automated analysis of large numbers of spectra, particularly in situations where accuracy may be traded for speed and simplicity. This paper proposes the use of abductive networks machine learning for this purpose. The Abductory Induction Mechanism (AIM) tool was used to build models for analyzing both single and double Gaussian peaks in the presence of noise depicting statistical uncertainties in collected spectra. AIM networks were synthesized by training on 1000 representative simulated spectra and evaluated on 500 new spectra. A classifier network determines the multiplicity of single/double peaks with an accuracy of 98%. With statistical uncertainties corresponding to a peak count of 100, average percentage absolute errors for the height, position, and width of single peaks are 4.9, 2.9, and 4.2%, respectively. For double peaks, these average errors are within 7.0, 3.1, and 5.9%, respectively. Models have been developed which account for the effect of a linear background on a single peak. Performance is compared with a neural network application and with an analytical curve-fitting routine, and the new technique is applied to actual data of an alpha spectrum.

  7. Automatic fitting of Gaussian peaks using abductive machine learning

    International Nuclear Information System (INIS)

    Abdel-Aal, R.E.

    1998-01-01

    Analytical techniques have been used for many years for fitting Gaussian peaks in nuclear spectroscopy. However, the complexity of the approach warrants looking for machine-learning alternatives where intensive computations are required only once (during training), while actual analysis on individual spectra is greatly simplified and quickened. This should allow the use of simple portable systems for fast and automated analysis of large numbers of spectra, particularly in situations where accuracy may be traded for speed and simplicity. This paper proposes the use of abductive networks machine learning for this purpose. The Abductory Induction Mechanism (AIM) tool was used to build models for analyzing both single and double Gaussian peaks in the presence of noise depicting statistical uncertainties in collected spectra. AIM networks were synthesized by training on 1,000 representative simulated spectra and evaluated on 500 new spectra. A classifier network determines the multiplicity of single/double peaks with an accuracy of 98%. With statistical uncertainties corresponding to a peak count of 100, average percentage absolute errors for the height, position, and width of single peaks are 4.9, 2.9, and 4.2%, respectively. For double peaks, these average errors are within 7.0, 3.1, and 5.9%, respectively. Models have been developed which account for the effect of a linear background on a single peak. Performance is compared with a neural network application and with an analytical curve-fitting routine, and the new technique is applied to actual data of an alpha spectrum
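
    For reference, the analytical curve-fitting baseline that both records compare against can be sketched in a few lines: a nonlinear least-squares fit of a Gaussian plus linear background to counts with Poisson noise. All parameter values are illustrative:

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss_lin_bg(x, h, mu, w, a, b):
            return h * np.exp(-0.5 * ((x - mu) / w) ** 2) + a * x + b

        rng = np.random.default_rng(3)
        x = np.arange(100.0)
        y = rng.poisson(gauss_lin_bg(x, 12, 48, 3, -0.01, 2))  # counting noise
        popt, pcov = curve_fit(gauss_lin_bg, x, y, p0=[10, 50, 4, 0, 1])
        h_est, mu_est, w_est = popt[:3]            # height, position, width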

  8. Money creation process in a random redistribution model

    Science.gov (United States)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we reach the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
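
    The mechanism can be sketched directly: unit transfers between randomly chosen agents, with a hard debt limit (all parameters illustrative). Total positive money always equals total debt, since both are created in pairs:

        import numpy as np

        rng = np.random.default_rng(4)

        def random_exchange_with_debt(n_agents=1000, steps=100_000, debt_limit=5):
            money = np.zeros(n_agents)             # all agents start at zero
            for _ in range(steps):
                i, j = rng.integers(n_agents, size=2)
                if i != j and money[i] > -debt_limit:
                    money[i] -= 1.0                # payer may go into debt
                    money[j] += 1.0
            return money, money[money > 0].sum()   # state and money created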

  9. Peak capacity analysis of coal power in China based on full-life cycle cost model optimization

    Science.gov (United States)

    Yan, Xiaoqing; Zhang, Jinfang; Huang, Xinting

    2018-02-01

    The 13th Five-Year Plan period and the one following it are critical for the reform of China's energy and power sector. To ease the oversupply of power, the National Energy Board has introduced policies aimed especially at controlling coal power capacity. The rational construction scale and development timing for coal power are therefore of great importance and are receiving increasing attention. In this study, the comprehensive influence of coal power reduction policies is analyzed from diverse points of view. A full-life-cycle cost model of coal power is established to fully reflect both external and internal costs, and this model is then introduced into an improved power planning optimization framework. Power planning and production simulation under diverse scenarios show that, in order to meet the capacity, energy and peak balance of the power system, China's coal power capacity will peak at 1.15-1.2 billion kilowatts around 2025. The results are expected to be helpful to the power industry in the 14th and 15th Five-Year Plan periods, promoting the efficiency and safety of the power system.

  10. Extreme daily increases in peak electricity demand: Tail-quantile estimation

    International Nuclear Information System (INIS)

    Sigauke, Caston; Verster, Andréhette; Chikobvu, Delson

    2013-01-01

    A Generalized Pareto Distribution (GPD) is used to model extreme daily increases in peak electricity demand. The model is fitted to South African data recorded over the years 2000-2011 to make a comparative analysis with the Generalized Pareto-type (GP-type) distribution. Peak electricity demand is influenced by the tails of probability distributions as well as by means or averages. At times there is a need to depart from average-based thinking and exploit the information provided by the extremes (tails). Empirical results show that both the GP-type distribution and the GPD are a good fit to the data. One of the main advantages of the GP-type distribution is that only one parameter has to be estimated. Modelling extreme daily increases in peak electricity demand helps in quantifying the amount of electricity which can be shifted from the grid to off-peak periods. One policy implication derived from this study is the need for a time-of-day electricity billing system, similar to that used in cellular telephone and fixed-line billing; this would shift electricity demand on the grid to off-peak time slots as users try to avoid high peak-hour charges. - Highlights: ► Policy makers should design demand response strategies to save electricity. ► Peak electricity demand is influenced by tails of probability distributions. ► Both the GP-type distribution and the GPD are a good fit to the data. ► Accurate assessment of the level and frequency of extreme load forecasts is important.
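
    A peaks-over-threshold sketch of the kind of GPD analysis described, on synthetic stand-in data; the threshold choice and the return level computed are illustrative:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        daily_peak = 30_000 + rng.gamma(2.0, 400.0, 4018)  # stand-in demand (MW)
        increases = np.diff(daily_peak)                    # daily increases
        u = np.quantile(increases, 0.95)                   # high threshold
        exceed = increases[increases > u] - u
        shape, _, scale = stats.genpareto.fit(exceed, floc=0.0)
        p_u = np.mean(increases > u)                       # exceedance rate
        # increase expected to be exceeded once in 1000 days
        q_1000 = u + stats.genpareto.ppf(1 - 0.001 / p_u, shape, loc=0.0, scale=scale)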

  11. Government procurement of peak capacity in the New Zealand electricity market

    International Nuclear Information System (INIS)

    Poletti, Steve

    2009-01-01

    This paper analyzes the impact of government procurement of reserve electricity generation capacity on the long-run equilibrium in the electricity market. The approach here is to model the electricity market in a context where the supply companies have market power. The model is then used to analyze the impact of direct government supply of peak capacity on the market. We find that the firms build less peak-generation capacity when the government procures peak generating capacity. The long-run equilibrium with N firms and government capacity K_G results in an increase of total peak generation capacity of K_G/(N+1) compared to the long-run equilibrium with no government capacity. Supply disruptions of baseline capacity during the peak time period are also considered. It is found that peak prices do not go up any further with (anticipated) supply disruptions. Instead the entire cost of the extra peakers is borne by customers on traditional meters and off-peak customers who face real-time pricing. (author)

  12. Ultrasonic Transducer Peak-to-Peak Optical Measurement

    Directory of Open Access Journals (Sweden)

    Pavel Skarvada

    2012-01-01

    Possible optical setups for measurement of the peak-to-peak value of an ultrasonic transducer are described in this work. A Michelson interferometer with a calibrated nanopositioner in the reference path and a laser Doppler vibrometer were used for the basic measurement of vibration displacement. A Langevin-type ultrasonic transducer is used for the purposes of Electro-Ultrasonic Nonlinear Spectroscopy (EUNS). The parameters of the produced mechanical vibration have to be well known for EUNS. Moreover, monitoring the shift of the mechanical vibration frequency with mass loading and sample-transducer coupling is important for EUNS measurement.

  13. Enhanced vegetation growth peak and its key mechanisms

    Science.gov (United States)

    Huang, K.; Xia, J.; Wang, Y.; Ahlström, A.; Schwalm, C.; Huntzinger, D. N.; Chen, J.; Cook, R. B.; Fang, Y.; Fisher, J. B.; Jacobson, A. R.; Michalak, A.; Schaefer, K. M.; Wei, Y.; Yan, L.; Luo, Y.

    2017-12-01

    It remains unclear whether and how the peak of vegetation growth has shifted globally during the past three decades. Here we used two global datasets of gross primary productivity (GPP) and a satellite-derived Normalized Difference Vegetation Index (NDVI) to characterize recent changes in seasonal peak vegetation growth. The attribution of changes in peak growth to their driving factors was examined with several datasets. We demonstrated that the growth peak of global vegetation has been increasing linearly during the past three decades. About 65% of this trend is evenly explained by expanding croplands (21%), rising atmospheric [CO2] (22%), and intensifying nitrogen deposition (22%). The contribution of expanding croplands to the peak growth trend was substantiated by measurements from eddy-flux towers, sun-induced chlorophyll fluorescence and a global database of plant traits, all of which demonstrate that croplands have a higher photosynthetic capacity than other vegetation types. The contributions of rising atmospheric [CO2] and nitrogen deposition are consistent with the positive responses of leaf growth to elevated [CO2] (25%) and nitrogen addition (8%) in 346 manipulated experiments. The positive effect of rising atmospheric [CO2] was also well captured by 15 terrestrial biosphere models. However, most models underestimated the contributions of land-cover change and nitrogen deposition, but overestimated the positive effect of climate change.

  14. Stochastic equilibria of an asset pricing model with heterogeneous beliefs and random dividends

    NARCIS (Netherlands)

    Zhu, M.; Wang, D.; Guo, M.

    2011-01-01

    We investigate dynamical properties of a heterogeneous agent model with random dividends and further study the relationship between dynamical properties of the random model and those of the corresponding deterministic skeleton, which is obtained by setting the random dividends as their constant mean

  15. Neopuff T-piece resuscitator mask ventilation: Does mask leak vary with different peak inspiratory pressures in a manikin model?

    Science.gov (United States)

    Maheshwari, Rajesh; Tracy, Mark; Hinder, Murray; Wright, Audrey

    2017-08-01

    The aim of this study was to compare mask leak with three different peak inspiratory pressure (PIP) settings during T-piece resuscitator (TPR; Neopuff) mask ventilation on a neonatal manikin model. Participants were neonatal unit staff members. They were instructed to provide mask ventilation with a TPR with three PIP settings (20, 30 and 40 cm H2O) chosen in a random order. Each episode was for 2 min with a 2-min rest period. Flow rate and positive end-expiratory pressure (PEEP) were kept constant. Airway pressure, inspiratory and expiratory tidal volumes, mask leak, respiratory rate and inspiratory time were recorded. Repeated measures analysis of variance was used for statistical analysis. A total of 12 749 inflations delivered by 40 participants were analysed. There were no statistically significant differences (P > 0.05) in the mask leak with the three PIP settings. No statistically significant differences were seen in respiratory rate and inspiratory time with the three PIP settings. There was a significant rise in PEEP as the PIP increased. Failure to achieve the desired PIP was observed especially at the higher settings. In a neonatal manikin model, the mask leak does not vary as a function of the PIP when the flow rate is constant. With a fixed rate and inspiratory time, there seems to be a rise in PEEP with increasing PIP. © 2017 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  16. Predictors of the peak width for networks with exponential links

    Science.gov (United States)

    Troutman, B.M.; Karlinger, M.R.

    1989-01-01

    We investigate optimal predictors of the peak (S) and distance to peak (T) of the width function of drainage networks under the assumption that the networks are topologically random with independent and exponentially distributed link lengths. Analytical results are derived using the fact that, under these assumptions, the width function is a homogeneous Markov birth-death process. In particular, exact expressions are derived for the asymptotic conditional expectations of S and T given network magnitude N and given mainstream length H. In addition, a simulation study is performed to examine various predictors of S and T, including N, H, and basin morphometric properties; non-asymptotic conditional expectations and variances are estimated. The best single predictor of S is N, of T is H, and of the scaled peak (S divided by the area under the width function) is H. Finally, expressions tested on a set of drainage basins from the state of Wyoming perform reasonably well in predicting S and T despite probable violations of the original assumptions. © 1989 Springer-Verlag.

  17. A method for estimating peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area

    Science.gov (United States)

    Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.

    2011-01-01

    Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational
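
    The gamma unit hydrograph step can be sketched as follows: a gamma-shaped unit hydrograph that peaks at rate qp when t = tp, convolved with an excess-rainfall hyetograph. The report's exact parameterization and its regression-estimated values are not reproduced; all numbers are illustrative:

        import numpy as np

        def gamma_uh(t, qp, tp, k):
            """Gamma unit hydrograph: equals qp at t = tp; k sets the spread."""
            return qp * np.exp(k) * (t / tp) ** k * np.exp(-k * t / tp)

        t = np.arange(0.0, 10.0, 1 / 60)           # hours, one-minute steps
        dt = t[1] - t[0]
        uh = gamma_uh(t, qp=120.0, tp=1.5, k=3.0)
        rain = np.zeros_like(t)
        rain[: int(0.5 / dt)] = 1.0                # 30 minutes of unit excess rain
        q = np.convolve(rain, uh)[: t.size] * dt   # simulated streamflow
        q_peak, t_peak = q.max(), t[q.argmax()]    # peak and time of peak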

  18. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    Science.gov (United States)

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  19. Quantum random oracle model for quantum digital signature

    Science.gov (United States)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  20. Random-Effects Models for Meta-Analytic Structural Equation Modeling: Review, Issues, and Illustrations

    Science.gov (United States)

    Cheung, Mike W.-L.; Cheung, Shu Fai

    2016-01-01

    Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…

  1. Random isotropic one-dimensional XY-model

    Science.gov (United States)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = 1/2 XY model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).

  2. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
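
    A compact sketch of the fit-AR-then-transform step: Yule-Walker estimation of the AR coefficients, followed by the impulse response that yields the equivalent MA (pulse-shape) coefficients. This is a generic stand-in, not the FORTRAN algorithm described:

        import numpy as np

        def yule_walker(x, p):
            """AR(p) coefficients from the sample autocovariance."""
            x = x - x.mean()
            r = np.array([x[: len(x) - k] @ x[k:] for k in range(p + 1)]) / len(x)
            R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
            return np.linalg.solve(R, r[1:])

        def ar_to_ma(phi, n_terms=50):
            """Impulse response of the AR recursion = MA coefficients."""
            psi = np.zeros(n_terms)
            psi[0] = 1.0
            for k in range(1, n_terms):
                psi[k] = sum(phi[j] * psi[k - 1 - j]
                             for j in range(min(len(phi), k)))
            return psi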

  3. Non-Gaussian bias: insights from discrete density peaks

    CERN Document Server

    Desjacques, Vincent; Riotto, Antonio

    2013-01-01

    Corrections induced by primordial non-Gaussianity to the linear halo bias can be computed from a peak-background split or the widespread local bias model. However, numerical simulations clearly support the prediction of the former, in which the non-Gaussian amplitude is proportional to the linear halo bias. To understand better the reasons behind the failure of standard Lagrangian local bias, in which the halo overdensity is a function of the local mass overdensity only, we explore the effect of a primordial bispectrum on the 2-point correlation of discrete density peaks. We show that the effective local bias expansion to peak clustering vastly simplifies the calculation. We generalize this approach to excursion set peaks and demonstrate that the resulting non-Gaussian amplitude, which is a weighted sum of quadratic bias factors, precisely agrees with the peak-background split expectation, which is a logarithmic derivative of the halo mass function with respect to the normalisation amplitude. We point out tha...

  4. Lotka-Volterra system in a random environment

    Science.gov (United States)

    Dimentberg, Mikhail F.

    2002-03-01

    The classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered, with the equation for the preys' reproduction containing two additional terms: a quadratic "damping" term that accounts for interspecies competition, and a term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for the stationary probability densities (PDFs) of the population sizes. It shows that both population sizes are independent γ-distributed stationary random processes. An increasing level of the environmental variations does not lead to extinction of the populations. However it may lead to an intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining on a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDFs as well as by applying the theory of excursions of random functions and by predicting the PDF of peaks in the predators' population size.
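
    A minimal Euler-Maruyama sketch of the generalized system described, with quadratic prey damping and multiplicative white noise on the prey reproduction rate; the functional form and every parameter value are illustrative:

        import numpy as np

        rng = np.random.default_rng(6)

        def stochastic_lv(u0=1.0, v0=0.5, a=1.0, b=1.0, c=1.0, d=1.0,
                          damping=0.1, noise=0.3, dt=1e-3, steps=200_000):
            """u = prey, v = predator population size."""
            u = np.empty(steps); v = np.empty(steps)
            u[0], v[0] = u0, v0
            for k in range(steps - 1):
                dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment
                u[k + 1] = u[k] + (a - b * v[k] - damping * u[k]) * u[k] * dt \
                           + noise * u[k] * dW
                v[k + 1] = v[k] + (c * u[k] - d) * v[k] * dt
            return u, v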

  5. BENEFITS OF WILDERNESS EXPANSION WITH EXCESS DEMAND FOR INDIAN PEAKS

    OpenAIRE

    Walsh, Richard G.; Gilliam, Lynde O.

    1982-01-01

    The contingent valuation approach was applied to the problem of estimating the recreation benefits from alleviating congestion at the Indian Peaks wilderness area, Colorado. A random sample of 126 individuals was interviewed while hiking and backpacking at the study site in 1979. The results provide an empirical test and confirmation of the Cesario and Freeman proposals that under conditions of excess recreational demand for existing sites, enhanced opportunities to substitute newly designated s...

  6. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. By taking degree information into account, the proposed generalized model can recover collaborative filtering, the interdisciplinary physics approaches, and even broad extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses hybrid degree information for objects of different popularity to achieve promising precision of the recommendation.
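
    As a concrete special case, the mass-diffusion ("ProbS") two-step random walk that such a generalized model contains can be written as below; a hybrid-degree variant would replace the 1/k normalizations with tunable powers of the degrees. Matrix orientation and names are assumptions:

        import numpy as np

        def mass_diffusion_scores(A, user):
            """Two-step random walk on a binary users x items matrix A:
            resource flows item -> user -> item, normalized by degree."""
            ku = np.maximum(A.sum(axis=1), 1)              # user degrees
            ki = np.maximum(A.sum(axis=0), 1)              # item degrees
            W = (A.T @ (A / ku[:, None])) / ki[None, :]    # item-to-item transfer
            scores = W @ A[user]
            scores[A[user] > 0] = -np.inf                  # hide collected items
            return scores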

  7. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

    DEFF Research Database (Denmark)

    Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

    Data on maximum milking speed were obtained from dairy cows stationed at the ETH Zurich research farm. The main aims of this paper are (a) to evaluate whether the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions than the random walk model; however, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications the random walk model could give online predictions of breeding values. Hence genetic evaluation could be made as soon as daily or monthly data become available, without waiting for whole lactation records.

  8. Lamplighter model of a random copolymer adsorption on a line

    Directory of Open Access Journals (Sweden)

    L.I. Nazarov

    2014-09-01

    We present a model of an AB-diblock random copolymer sequential self-packaging with local quenched interactions on a one-dimensional infinite sticky substrate. It is assumed that the A-A and B-B contacts are favorable, while A-B contacts are not. The position of a newly added monomer is selected so as to minimize the local contact energy. The model demonstrates self-organization behavior, with a nontrivial dependence of the total energy E (the number of unfavorable contacts) on the number of chain monomers N: E ~ N^(3/4) for a quenched random equally probable distribution of A- and B-monomers along the chain. The model is treated by mapping it onto the "lamplighter" random walk and the diffusion-controlled chemical reaction of X+X → 0 type with subdiffusive motion of the reagents.

  9. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest

  10. Annealed central limit theorems for the ising model on random graphs

    NARCIS (Netherlands)

    Giardinà, C.; Giberti, C.; van der Hofstad, R.W.; Prioriello, M.L.

    2016-01-01

    The aim of this paper is to prove central limit theorems with respect to the annealed measure for the magnetization rescaled by √N of Ising models on random graphs. More precisely, we consider the general rank-1 inhomogeneous random graph (or generalized random graph), the 2-regular configuration

  11. Single-cluster dynamics for the random-cluster model

    NARCIS (Netherlands)

    Deng, Y.; Qian, X.; Blöte, H.W.J.

    2009-01-01

    We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those

  12. Hourly peak concentration measuring the PM2.5-mortality association: Results from six cities in the Pearl River Delta study

    Science.gov (United States)

    Lin, Hualiang; Ratnapradipa, Kendra; Wang, Xiaojie; Zhang, Yonghui; Xu, Yanjun; Yao, Zhenjiang; Dong, Guanghui; Liu, Tao; Clark, Jessica; Dick, Rebecca; Xiao, Jianpeng; Zeng, Weilin; Li, Xing; Qian, Zhengmin (Min); Ma, Wenjun

    2017-07-01

    Compared with the daily mean concentration of air pollution, the hourly peak concentration may be more directly relevant to acute health effects because of its high concentration levels; however, few studies have analyzed the acute mortality effects of hourly peak levels of air pollution. We examined the associations of the hourly peak concentration of fine particulate matter air pollution (PM2.5) with mortality in six cities in the Pearl River Delta, China. We used generalized additive Poisson models to examine the associations with adjustment for potential confounders in each city. We further applied random-effects meta-analyses to estimate the overall regional effects, and estimated the mortality burden attributable to hourly peak and daily mean PM2.5. We observed significant associations between hourly peak PM2.5 and mortality. Each 10 μg/m3 increase in 4-day averaged (lag03) hourly peak PM2.5 corresponded to a 0.9% [95% confidence interval (CI): 0.7%, 1.1%] increase in total mortality, 1.2% (95% CI: 1.0%, 1.5%) in cardiovascular mortality, and 0.7% (95% CI: 0.2%, 1.1%) in respiratory mortality. We observed a greater mortality burden using hourly peak PM2.5 than daily mean PM2.5, with an estimated 12915 (95% CI: 9922, 15949) premature deaths attributable to hourly peak PM2.5, and 7951 (95% CI: 5067, 10890) to daily mean PM2.5 in the Pearl River Delta (PRD) region during the study period. This study suggests that hourly peak PM2.5 might be an important risk factor for mortality in the PRD region of China; this finding provides important information for future air pollution management and epidemiological studies.

  13. A dynamic random effects multinomial logit model of household car ownership

    DEFF Research Database (Denmark)

    Bue Bjørner, Thomas; Leth-Petersen, Søren

    2007-01-01

    Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed both to true state dependence and to unobserved heterogeneity (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated.

  14. A single-level random-effects cross-lagged panel model for longitudinal mediation analysis.

    Science.gov (United States)

    Wu, Wei; Carroll, Ian A; Chen, Po-Yi

    2017-12-06

    Cross-lagged panel models (CLPMs) are widely used to test mediation with longitudinal panel data. One major limitation of the CLPMs is that the model effects are assumed to be fixed across individuals. This assumption is likely to be violated (i.e., the model effects are random across individuals) in practice. When this happens, the CLPMs can potentially yield biased parameter estimates and misleading statistical inferences. This article proposes a model named a random-effects cross-lagged panel model (RE-CLPM) to account for random effects in CLPMs. Simulation studies show that the RE-CLPM outperforms the CLPM in recovering the mean indirect and direct effects in a longitudinal mediation analysis when random effects exist in the population. The performance of the RE-CLPM is robust to a certain degree, even when the random effects are not normally distributed. In addition, the RE-CLPM does not produce harmful results when the model effects are in fact fixed in the population. Implications of the simulation studies and potential directions for future research are discussed.

  15. The reverse effects of random perturbation on discrete systems for single and multiple population models

    International Nuclear Information System (INIS)

    Kang, Li; Tang, Sanyi

    2016-01-01

    Highlights: • Discrete single-species and multiple-species models with random perturbation are proposed. • The complex dynamics and interesting bifurcation behavior have been investigated. • The reverse effects of random perturbation on discrete systems are discussed and revealed. • The main results can be applied to pest control and resource management. - Abstract: Natural species are likely to present several interesting and complex phenomena under random perturbations, which have been confirmed by simple mathematical models. The important questions are: how do random perturbations influence the dynamics of discrete population models with multiple steady states or multiple species interactions, and do random perturbations affect single-species and multiple-species models differently? To address these questions, we propose a discrete single-species model with two stable equilibria and a host-parasitoid model with Holling-type functional response functions. The main results indicate that random perturbation does not change the number of blurred orbits of the single-species model with two stable steady states compared with results for the classical Ricker model under the same random perturbation, but it can strengthen their stability. However, extensive numerical investigations show that random perturbation does not influence the complexity of the host-parasitoid models compared with the unperturbed models, while it does double the period of periodic orbits. All of this confirms that random perturbation has a reverse effect on the dynamics of discrete single- and multiple-population models, which could be applied in practice, including pest control and resource management.

  16. Electricity Portfolio Management: Optimal Peak / Off-Peak Allocations

    OpenAIRE

    Huisman, Ronald; Mahieu, Ronald; Schlichter, Felix

    2007-01-01

    Electricity purchasers manage a portfolio of contracts in order to purchase the expected future electricity consumption profile of a company or a pool of clients. This paper proposes a mean-variance framework to address the concept of structuring the portfolio and focuses on how to allocate optimal positions in peak and off-peak forward contracts. It is shown that the optimal allocations are based on the difference in risk premiums per unit of day-ahead risk as a measure of relative…
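
    The paper's closed-form allocation is not reproduced in the record, but a generic mean-variance allocation between peak and off-peak forwards can be sketched as follows (the premium and covariance numbers are invented for illustration):

        import numpy as np

        # Invented inputs: expected premiums of peak/off-peak forwards over the
        # day-ahead market [EUR/MWh], and the covariance of day-ahead price risk.
        premium = np.array([4.0, 1.5])
        cov = np.array([[9.0, 2.0],
                        [2.0, 4.0]])
        risk_aversion = 0.5

        # Standard mean-variance solution of max_w w.premium - (a/2) w.cov.w
        weights = np.linalg.solve(risk_aversion * cov, premium)
        print("forward positions [peak, off-peak]:", weights)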

  17. Modeling of chromosome intermingling by partially overlapping uniform random polygons.

    Science.gov (United States)

    Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J

    2011-03-01

    During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. To partially quantify this concept, we investigate here the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n that partially overlaps a fixed polygon forms an unsplittable link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.

  18. Some Limits Using Random Slope Models to Measure Academic Growth

    Directory of Open Access Journals (Sweden)

    Daniel B. Wright

    2017-11-01

    Academic growth is often estimated using a random slope multilevel model with several years of data. However, if there are few time points, the estimates can be unreliable. While random slope multilevel models can lower the variance of the estimates, these procedures can also produce more severely erroneous estimates (zero or negative correlations with the true underlying growth) than ordinary least squares estimates calculated for each student or school individually. An example is provided where schools with increasing graduation rates are estimated to have negative growth, and vice versa. The estimation is worse when the underlying data are skewed. It is recommended to have at least six time points for estimating growth with a random slope model. A combination of methods can be used to avoid some of the aberrant results if six or more time points are not available.
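
    As an illustration of the comparison discussed above, the following sketch fits a random-slope multilevel model with statsmodels and compares per-school OLS slopes against the true growth rates on simulated data with few time points (all names and numbers are illustrative):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n_schools, n_years = 50, 4            # few time points, as cautioned above
        true_slopes = rng.normal(2.0, 1.0, n_schools)
        rows = [(s, t, 60 + true_slopes[s] * t + rng.normal(0, 8))
                for s in range(n_schools) for t in range(n_years)]
        df = pd.DataFrame(rows, columns=["school", "year", "score"])

        # Random-intercept, random-slope multilevel model.
        mlm = sm.MixedLM.from_formula("score ~ year", data=df, groups="school",
                                      re_formula="~year").fit()

        # Per-school OLS slopes for comparison with the true growth rates.
        ols = df.groupby("school").apply(
            lambda g: np.polyfit(g["year"], g["score"], 1)[0])
        print(mlm.params["year"], ols.corr(pd.Series(true_slopes)))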

  19. Utility based maintenance analysis using a Random Sign censoring model

    International Nuclear Information System (INIS)

    Andres Christen, J.; Ruggeri, Fabrizio; Villa, Enrique

    2011-01-01

    Industrial systems subject to failures are usually inspected when there are evident signs of an imminent failure. Maintenance is therefore performed at a random time, somehow dependent on the failure mechanism. A competing risk model, namely a Random Sign model, is considered to relate failure and maintenance times. We propose a novel Bayesian analysis of the model and apply it to actual data from a water pump in an oil refinery. The design of an optimal maintenance policy is then discussed under a formal decision theoretic approach, analyzing the goodness of the current maintenance policy and making decisions about the optimal maintenance time.

  20. Constraining the shape of the CMB: A peak-by-peak analysis

    International Nuclear Information System (INIS)

    Oedman, Carolina J.; Hobson, Michael P.; Lasenby, Anthony N.; Melchiorri, Alessandro

    2003-01-01

    The recent measurements of the power spectrum of cosmic microwave background anisotropies are consistent with the simplest inflationary scenario and big bang nucleosynthesis constraints. However, these results rely on the assumption of a class of models based on primordial adiabatic perturbations, cold dark matter and a cosmological constant. In this paper we investigate the need for deviations from the Λ-CDM scenario by first characterizing the spectrum using a phenomenological function in a 15 dimensional parameter space. Using a Monte Carlo Markov chain approach to Bayesian inference and a low curvature model template we then check for the presence of new physics and/or systematics in the CMB data. We find an almost perfect consistency between the phenomenological fits and the standard Λ-CDM models. The curvature of the secondary peaks is weakly constrained by the present data, but they are well located. The improved spectral resolution expected from future satellite experiments is warranted for a definitive test of the scenario

  1. Application of random regression models to the genetic evaluation ...

    African Journals Online (AJOL)

    The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...

  2. Method of model reduction and multifidelity models for solute transport in random layered porous media

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    2017-09-01

    This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent the (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with the correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement can be obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow.
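
    In standard notation (symbols assumed here, not taken from the paper), the upscaled equation for the cross-sectionally averaged concentration has the classical advection-dispersion form

        \frac{\partial \bar{c}}{\partial t} + U_{\mathrm{eff}} \, \frac{\partial \bar{c}}{\partial x} = D_{\mathrm{eff}} \, \frac{\partial^{2} \bar{c}}{\partial x^{2}},

    where U_eff and D_eff are the effective velocity and dispersion coefficient whose expressions, in the hierarchical model, involve the velocity covariance function and hence the correlation length λ discussed above.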

  3. New constraints on modelling the random magnetic field of the MW

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Marcus C.; Nielaba, Peter [Department of Physics, University of Konstanz, Universitätsstr. 10, D-78457 Konstanz (Germany); Beck, Alexander M.; Dolag, Klaus [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany); Beck, Rainer [Max Planck Institute for Radioastronomy, Auf dem Hügel 69, D-53121 Bonn (Germany); Strong, Andrew W., E-mail: marcus.beck@uni-konstanz.de, E-mail: abeck@usm.uni-muenchen.de, E-mail: rbeck@mpifr-bonn.mpg.de, E-mail: dolag@usm.uni-muenchen.de, E-mail: aws@mpe.mpg.de, E-mail: peter.nielaba@uni-konstanz.de [Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, D-85748 Garching (Germany)

    2016-05-01

    We extend the description of the isotropic and anisotropic random components of the small-scale magnetic field within the existing magnetic field model of the Milky Way from Jansson and Farrar, by including random realizations of the small-scale component. Using a magnetic-field power spectrum with Gaussian random fields, the NE2001 model for the thermal electrons and the Galactic cosmic-ray electron distribution from the current GALPROP model, we derive full-sky maps for the total and polarized synchrotron intensity as well as the Faraday rotation-measure distribution. While previous work assumed that small-scale fluctuations average out along the line of sight or computed only ensemble averages of random fields, we show that these fluctuations need to be carefully taken into account. Comparing with observational data we obtain not only good agreement with 408 MHz total and WMAP7 22 GHz polarized intensity emission maps, but also an improved agreement with Galactic foreground rotation-measure maps and power spectra, whose amplitude and shape strongly depend on the parameters of the random field. We demonstrate that a correlation length of ≈22 pc (5 pc being a 5σ lower limit) is needed to match the slope of the observed power spectrum of Galactic foreground rotation-measure maps. Using multiple realizations also allows us to infer errors on individual observables. We find that previously used amplitudes for the random and anisotropic random magnetic field components need to be rescaled by factors of ≈0.3 and 0.6 to account for the new small-scale contributions. Our model predicts rotation measures of −2.8±7.1 rad/m² and 4.4±11 rad/m² for the north and south Galactic poles, respectively, in good agreement with observations. Applying our model to deflections of ultra-high-energy cosmic rays we infer a mean deflection of ≈3.5±1.1 degrees for 60 EeV protons arriving from CenA.

  4. Infant BMI peak, breastfeeding, and body composition at age 3 y

    DEFF Research Database (Denmark)

    Jensen, Signe Marie; Ritz, Christian; Ejlerskov, Katrine Tschentscher

    2015-01-01

    BACKGROUND: With the increasing focus on obesity, growth patterns in infancy and early childhood have gained much attention. Although the adiposity rebound has been in focus because of a shown association with adult obesity, not much has been published about the infant peak in body mass index (BMI) … cohort were used to estimate BMI growth curves for the age span from 14 d to 19 mo by using a nonlinear mixed-effects model. BMI growth velocity before peak and age and BMI at peak were derived from the subject-specific models. Information about pregnancy and breastfeeding was assessed from background…

  5. A random walk model to evaluate autism

    Science.gov (United States)

    Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

    2018-02-01

    A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we study here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and the restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 − f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
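
    The record does not spell out the two-walker update rule, but the standard single-walker ERW it builds on is easy to simulate; this sketch reproduces the known transition to superdiffusion for memory parameter p > 3/4 (sample sizes are arbitrary):

        import numpy as np

        def elephant_walk(p, steps, rng):
            """Standard ERW: at each time a past step is recalled uniformly at
            random and repeated with probability p (reversed otherwise)."""
            sigma = np.empty(steps, dtype=int)
            sigma[0] = 1 if rng.random() < 0.5 else -1
            for t in range(1, steps):
                recalled = sigma[rng.integers(0, t)]
                sigma[t] = recalled if rng.random() < p else -recalled
            return np.cumsum(sigma)

        rng = np.random.default_rng(42)
        # Diffusive (H = 1/2) for p < 3/4, superdiffusive (H > 1/2) above.
        for p in (0.5, 0.6, 0.9):
            finals = np.array([elephant_walk(p, 4000, rng)[-1] for _ in range(200)])
            print(p, finals.var())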

  6. Connectivity ranking of heterogeneous random conductivity models

    Science.gov (United States)

    Rizzo, C. B.; de Barros, F.

    2017-12-01

    To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state-of-the-art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields or non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strongly correlated with the early arrival time of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
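
    A minimal sketch of the graph-based idea, assuming cell resistance 1/K and a Dijkstra search from the inflow to the outflow edge of a single lognormal realization (wrapping this in a loop over realizations gives the Monte-Carlo treatment mentioned above):

        import heapq
        import numpy as np

        def min_hydraulic_resistance(K):
            """Dijkstra across a 2D conductivity field; cell resistance is
            taken as 1/K (an illustrative definition of the measure)."""
            res = 1.0 / K
            ny, nx = K.shape
            dist = np.full((ny, nx), np.inf)
            heap = []
            for i in range(ny):                      # source: the inflow edge (column 0)
                dist[i, 0] = res[i, 0]
                heapq.heappush(heap, (dist[i, 0], i, 0))
            while heap:
                d, i, j = heapq.heappop(heap)
                if d > dist[i, j]:
                    continue
                if j == nx - 1:                      # reached the outflow edge
                    return d
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < ny and 0 <= nj < nx and d + res[ni, nj] < dist[ni, nj]:
                        dist[ni, nj] = d + res[ni, nj]
                        heapq.heappush(heap, (dist[ni, nj], ni, nj))

        rng = np.random.default_rng(3)
        K = np.exp(rng.normal(0.0, 1.0, size=(50, 100)))  # one lognormal realization
        print(min_hydraulic_resistance(K))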

  7. The origin of a peak in the reststrahlen region of SiC

    International Nuclear Information System (INIS)

    Engelbrecht, J.A.A.; Rooyen, I.J. van; Henry, A.; Janzén, E.; Olivier, E.J.

    2012-01-01

    A peak in the reststrahlen region of SiC is analyzed in order to establish the origin of this peak. The peak can be associated with a thin damaged layer on the SiC wafers, and a relation is found between surface roughness and the height of this peak, by modeling the damaged layer as an additional layer when simulating the reflectivity from the wafers.

  8. The origin of a peak in the reststrahlen region of SiC

    Energy Technology Data Exchange (ETDEWEB)

    Engelbrecht, J.A.A., E-mail: Japie.Engelbrecht@nmmu.ac.za [Physics Department, Nelson Mandela Metropolitan University, PO Box 77000, Port Elizabeth 6031 (South Africa); Rooyen, I.J. van [Physics Department, Nelson Mandela Metropolitan University, PO Box 77000, Port Elizabeth 6031 (South Africa); Henry, A.; Janzen, E. [Department of Physics, Chemistry and Biology, Linkoeping University, SE-581 83 Linkoeping (Sweden); Olivier, E.J. [Physics Department, Nelson Mandela Metropolitan University, PO Box 77000, Port Elizabeth 6031 (South Africa)

    2012-05-15

    A peak in the reststrahlen region of SiC is analyzed in order to establish the origin of this peak. The peak can be associated with a thin damaged layer on the SiC wafers, and a relation is found between surface roughness and the height of this peak, by modeling the damaged layer as an additional layer when simulating the reflectivity from the wafers.

  9. Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.

    Science.gov (United States)

    Vickers, D; Smith, P

    1985-01-01

    In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.

  10. A cellular automata model of traffic flow with variable probability of randomization

    International Nuclear Information System (INIS)

    Zheng Wei-Fan; Zhang Ji-Ye

    2015-01-01

    Research on the stochastic behavior of traffic flow is important for understanding the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with a variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of the vehicles ahead, and his decision-making process is related to this potential. Compared with the traditional cellular automata model, this is a more suitable description of the driver's random decision-making process, which in actual traffic is based on the vehicles and traffic situation in front of him. From the improved model, the fundamental diagram (flow-density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. (paper)
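
    A sketch of the idea on top of the classic Nagel-Schreckenberg ring, with a headway-dependent randomization probability standing in for the interactional potential (the exponential form and all parameter values are assumptions, not the authors' model):

        import numpy as np

        def nasch_variable_p(length=200, n_cars=60, vmax=5, p0=0.3, beta=0.5,
                             steps=500, rng=None):
            """Nagel-Schreckenberg ring with a headway-dependent randomization
            probability standing in for the interactional potential."""
            if rng is None:
                rng = np.random.default_rng(0)
            pos = np.sort(rng.choice(length, n_cars, replace=False))
            vel = np.zeros(n_cars, dtype=int)
            flow = 0
            for _ in range(steps):
                gap = (np.roll(pos, -1) - pos - 1) % length
                vel = np.minimum(vel + 1, vmax)           # acceleration
                vel = np.minimum(vel, gap)                # braking to avoid collision
                p = p0 * np.exp(-beta * gap)              # smaller gap -> more random braking
                vel = np.where((rng.random(n_cars) < p) & (vel > 0), vel - 1, vel)
                pos = (pos + vel) % length
                flow += vel.sum()
            return flow / (steps * length)                # mean flow per site and step

        print(nasch_variable_p())

    Repeating this over a range of densities n_cars/length traces out the fundamental diagram mentioned above.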

  11. MODELING URBAN DYNAMICS USING RANDOM FOREST: IMPLEMENTING ROC AND TOC FOR MODEL EVALUATION

    Directory of Open Access Journals (Sweden)

    M. Ahmadlou

    2016-06-01

    The importance of the spatial accuracy of land use/cover change maps necessitates the use of high-performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This originates from the strength of these techniques, which powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate the spatial patterns of urban change. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the ROC curve of 82.48%, which indicates a very good performance and highlights its appropriateness for simulating urban growth.
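
    A hedged sketch of this workflow with scikit-learn, using synthetic drivers in place of the Landsat-derived variables (the TOC evaluation is omitted; only the ROC area is computed):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(7)
        # Synthetic drivers standing in for the Landsat-derived variables
        # (e.g. distances to roads and existing urban cells, slope, density).
        X = rng.random((5000, 4))
        logit = 4 * X[:, 0] + 3 * X[:, 1] - 2 * X[:, 2] - 2.5
        changed = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)

        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X[:4000], changed[:4000])              # calibrate on one part
        suitability = rf.predict(X[4000:])            # suitability map for the rest
        print("variable importance:", rf.feature_importances_)
        print("area under ROC curve:", roc_auc_score(changed[4000:], suitability))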

  12. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    Science.gov (United States)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
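
    A minimal version of the third generator, synthesizing an approximately Gaussian white sequence from a flat amplitude spectrum with uniformly random phases (the specific amplitude/phase prescriptions of the paper are not reproduced here):

        import numpy as np

        def fourier_white_noise(n, rng):
            """Approximately Gaussian white noise from a flat amplitude
            spectrum with uniformly random phases (real signal via irfft)."""
            phases = rng.uniform(0.0, 2.0 * np.pi, n // 2 + 1)
            spectrum = np.exp(1j * phases)
            spectrum[0] = 0.0                 # enforce zero mean
            x = np.fft.irfft(spectrum, n=n)
            return x / x.std()                # scale to unit variance

        rng = np.random.default_rng(2017)
        x = fourier_white_noise(601, rng)     # 601 samples, as in the study
        print(x.mean(), x.var())

    Each sample is a sum of many random-phase sinusoids, so the sequence is close to Gaussian by the central limit theorem while its flat spectrum is exact by construction.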

  13. Limitation of peak fitting and peak shape methods for determination of activation energy of thermoluminescence glow peaks

    CERN Document Server

    Sunta, C M; Piters, T M; Watanabe, S

    1999-01-01

    This paper shows the limitation of general-order peak fitting and peak shape methods for determining the activation energy of thermoluminescence glow peaks in cases in which the retrapping probability is much higher than the recombination probability and the traps are filled to near the saturation level. Correct values can be obtained when the trap occupancy is reduced by using small doses or by post-irradiation partial bleaching. This limitation in the application of these methods has not been indicated earlier. In view of the unknown nature of the kinetics in experimental samples, it is recommended that these methods of activation energy determination be applied only at doses well below the saturation dose.

  14. Forecasting Ebola with a regression transmission model

    Directory of Open Access Journals (Sweden)

    Jason Asher

    2018-03-01

    We describe a relatively simple stochastic model of Ebola transmission that was used to produce forecasts with the lowest mean absolute error among Ebola Forecasting Challenge participants. The model enabled prediction of peak incidence, the timing of this peak, and the final size of the outbreak. The underlying discrete-time compartmental model used a time-varying reproductive rate modeled as a multiplicative random walk driven by the number of infectious individuals. This structure generalizes traditional Susceptible-Infected-Recovered (SIR) disease modeling approaches and allows for the flexible consideration of outbreaks with complex trajectories of disease dynamics. Keywords: Ebola, Forecasting, Mathematical modeling, Bayesian inference
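
    A simplified version of this structure can be sketched as a discrete-time SIR model whose reproductive rate follows a plain multiplicative random walk (the paper's walk is additionally driven by the number of infectious individuals, which is omitted here; all parameters are illustrative):

        import numpy as np

        def simulate_outbreak(n=1e6, i0=10.0, gamma=1/7, r0=1.8, sigma=0.05,
                              steps=300, rng=None):
            """Discrete-time SIR with a reproductive rate following a
            multiplicative random walk."""
            if rng is None:
                rng = np.random.default_rng(0)
            S, I, R = n - i0, i0, 0.0
            r_t, incidence = r0, []
            for _ in range(steps):
                r_t *= np.exp(sigma * rng.normal())         # multiplicative random walk
                new_inf = min(S, r_t * gamma * I * S / n)   # beta = r_t * gamma
                new_rec = gamma * I
                S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
                incidence.append(new_inf)
            return np.array(incidence)

        inc = simulate_outbreak(rng=np.random.default_rng(5))
        print("peak:", inc.max(), "at day", inc.argmax(), "final size:", inc.sum())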

  15. Using computational modeling to compare X-ray tube Practical Peak Voltage for Dental Radiology

    International Nuclear Information System (INIS)

    Holanda Cassiano, Deisemar; Arruda Correa, Samanda Cristine; Monteiro de Souza, Edmilson; Silva, Ademir Xaxier da; Pereira Peixoto, José Guilherme; Tadeu Lopes, Ricardo

    2014-01-01

    The Practical Peak Voltage (PPV) has been adopted to measure the voltage applied to an X-ray tube. The PPV was recommended by the IEC document and accepted and published in the TRS no. 457 code of practice. The PPV is defined for and applies to all waveforms and is related to the spectral distribution of X-rays and to the properties of the image. The calibration of the X-ray tubes was performed using the MCNPX Monte Carlo code. An X-ray tube for Dental Radiology (operated from a single-phase power supply) and an X-ray tube used as a reference (supplied from a constant potential power supply) were used in simulations across the energy range of interest of 40 kV to 100 kV. The results indicated a linear relationship between the tubes involved. - Highlights: • A computational model was developed for the Practical Peak Voltage of a Dental Radiology X-ray tube. • The calibration of X-ray tubes was performed using the MCNPX Monte Carlo code. • The energy range was 40–100 kV. • Results indicated a linear relationship between the Dental Radiology and reference X-ray tubes

  16. From Modeling of Plasticity in Single-Crystal Superalloys to High-Resolution X-rays Three-Crystal Diffractometer Peaks Simulation

    Science.gov (United States)

    Jacques, Alain

    2016-12-01

    The dislocation-based modeling of the high-temperature creep of two-phased single-crystal superalloys requires input data beyond strain vs time curves. This may be obtained by use of in situ experiments combining high-temperature creep tests with high-resolution synchrotron three-crystal diffractometry. Such tests give access to changes in phase volume fractions and to the average components of the stress tensor in each phase as well as the plastic strain of each phase. Further progress may be obtained by a new method making intensive use of the Fast Fourier Transform, and first modeling the behavior of a representative volume of material (stress fields, plastic strain, dislocation densities…), then simulating directly the corresponding diffraction peaks, taking into account the displacement field within the material, chemical variations, and beam coherence. Initial tests indicate that the simulated peak shapes are close to the experimental ones and are quite sensitive to the details of the microstructure and to dislocation densities at interfaces and within the soft γ phase.

  17. Make peak flow a habit

    Science.gov (United States)

    Also indexed as: Asthma - make peak flow a habit; Reactive airway disease - peak flow; Bronchial asthma - peak flow. References (truncated): … 2014: chap 55; National Asthma Education and Prevention Program website, How to use a peak flow meter. …

  18. Peak Dose Assessment for Proposed DOE-PPPO Authorized Limits

    International Nuclear Information System (INIS)

    Maldonado, Delis

    2012-01-01

    The Oak Ridge Institute for Science and Education (ORISE), a U.S. Department of Energy (DOE) prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct a peak dose assessment in support of the Authorized Limits Request for Solid Waste Disposal at Landfill C-746-U at the Paducah Gaseous Diffusion Plant (DOE-PPPO 2011a). The peak doses were calculated based on the DOE-PPPO Proposed Single Radionuclides Soil Guidelines and the DOE-PPPO Proposed Authorized Limits (AL) Volumetric Concentrations available in DOE-PPPO 2011a. This work is provided as an appendix to the Dose Modeling Evaluations and Technical Support Document for the Authorized Limits Request for the C-746-U Landfill at the Paducah Gaseous Diffusion Plant, Paducah, Kentucky (ORISE 2012). The receptors evaluated in ORISE 2012 were selected by the DOE-PPPO for the additional peak dose evaluations. These receptors included a Landfill Worker, Trespasser, Resident Farmer (onsite), Resident Gardener, Recreational User, Outdoor Worker and an Offsite Resident Farmer. The RESRAD (Version 6.5) and RESRAD-OFFSITE (Version 2.5) computer codes were used for the peak dose assessments. Deterministic peak dose assessments were performed for all the receptors and a probabilistic dose assessment was performed only for the Offsite Resident Farmer at the request of the DOE-PPPO. In a deterministic analysis, a single input value results in a single output value. In other words, a deterministic analysis uses single parameter values for every variable in the code. By contrast, a probabilistic approach assigns parameter ranges to certain variables, and the code randomly selects the values for each variable from the parameter range each time it calculates the dose (NRC 2006). The receptor scenarios, computer codes and parameter input files were previously used in ORISE 2012. A few modifications were made to the parameter input files as appropriate for this effort. Some of these changes…

  19. Dynamics of supercooled liquids: excess wings, β peaks, and rotation-translation coupling

    International Nuclear Information System (INIS)

    Cummins, H Z

    2005-01-01

    Dielectric susceptibility spectra of liquids cooled towards the liquid-glass transition often exhibit secondary structure in the frequency region between the α peak and the susceptibility minimum, in the form of either an 'excess wing' or a secondary peak, the Johari-Goldstein β peak. Recently, Goetze and Sperl (2004 Phys. Rev. Lett. 92 105701) showed that a simple schematic mode coupling theory model, which incorporates rotation-translation (RT) coupling, successfully describes the nearly logarithmic decay observed in optical Kerr effect data. This model also exhibits both excess-wing and β-peak features, qualitatively resembling experimental dielectric data. It also predicts that the excess wing slope decreases with decreasing temperature and gradually evolves into a β peak with increasing RT coupling. We therefore suggest that these features and their observed evolution with temperature may be consequences of RT coupling

  20. Joint modeling of ChIP-seq data via a Markov random field model

    NARCIS (Netherlands)

    Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C

    Chromatin ImmunoPrecipitation-sequencing (ChIP-seq) experiments have now become routine in biology for the detection of protein-binding sites. In this paper, we present a Markov random field model for the joint analysis of multiple ChIP-seq experiments. The proposed model naturally accounts for…

  1. Peak Communication Experiences: Concept, Structure, and Sex Differences.

    Science.gov (United States)

    Gordon, Ron; Dulaney, Earl

    A study was conducted to test a "peak communication experience" (PCE) scale developed from Abraham Maslow's theory of PCE's, a model of one's highest interpersonal communication moments in terms of perceived mutual understanding, happiness, and personal fulfillment. Nineteen items, extrapolated from Maslow's model but rendered more…

  2. Generalized Whittle-Matern random field as a model of correlated fluctuations

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    This paper considers a generalization of the Gaussian random field with covariance function of the Whittle-Matern family. Such a random field can be obtained as the solution to the fractional stochastic differential equation with two fractional orders. Asymptotic properties of the covariance functions belonging to this generalized Whittle-Matern family are studied, which are used to deduce the sample path properties of the random field. The Whittle-Matern field has been widely used in modeling geostatistical data such as sea beam data, wind speed, field temperature and soil data. In this paper we show that the generalized Whittle-Matern field provides a more flexible model for wind speed data
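
    For reference, the covariance function of the standard Whittle-Matern family that this record generalizes can be written (with assumed symbols: variance σ², smoothness ν, length scale ℓ, and K_ν the modified Bessel function of the second kind) as

        C(r) = \sigma^{2} \, \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{r}{\ell} \right)^{\nu} K_{\nu}\!\left( \frac{r}{\ell} \right),

    which reduces to the exponential covariance for ν = 1/2 and approaches the Gaussian covariance as ν → ∞.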

  3. Peak tree: a new tool for multiscale hierarchical representation and peak detection of mass spectrometry data.

    Science.gov (United States)

    Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo

    2011-01-01

    Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce the peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment on a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived from, and iteratively refined by, the density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiments show that our peak detection method can better resist spectrum variations and provide higher sensitivity and lower false detection rates than conventional methods. The benefits of our peak-tree-based system for MS disease analysis are also demonstrated on real SELDI data.

  4. Model simulations of the chemical and aerosol microphysical evolution of the Sarychev Peak 2009 eruption cloud compared to in situ and satellite observations

    Science.gov (United States)

    Lurton, Thibaut; Jégou, Fabrice; Berthet, Gwenaël; Renard, Jean-Baptiste; Clarisse, Lieven; Schmidt, Anja; Brogniez, Colette; Roberts, Tjarda J.

    2018-03-01

    Volcanic eruptions impact climate through the injection of sulfur dioxide (SO2), which is oxidized to form sulfuric acid aerosol particles that can enhance the stratospheric aerosol optical depth (SAOD). Besides large-magnitude eruptions, moderate-magnitude eruptions such as Kasatochi in 2008 and Sarychev Peak in 2009 can have a significant impact on stratospheric aerosol and hence climate. However, uncertainties remain in quantifying the atmospheric and climatic impacts of the 2009 Sarychev Peak eruption due to limitations in previous model representations of volcanic aerosol microphysics and particle size, whilst biases have been identified in satellite estimates of post-eruption SAOD. In addition, the 2009 Sarychev Peak eruption co-injected hydrogen chloride (HCl) alongside SO2, whose potential stratospheric chemistry impacts have not been investigated to date. We present a study of the stratospheric SO2-particle-HCl processing and impacts following the Sarychev Peak eruption, using the Community Earth System Model version 1.0 (CESM1) Whole Atmosphere Community Climate Model (WACCM) - Community Aerosol and Radiation Model for Atmospheres (CARMA) sectional aerosol microphysics model (with no a priori assumption on particle size). The Sarychev Peak 2009 eruption injected 0.9 Tg of SO2 into the upper troposphere and lower stratosphere (UTLS), enhancing the aerosol load in the Northern Hemisphere. The post-eruption evolution of the volcanic SO2 in space and time is well reproduced by the model when compared to Infrared Atmospheric Sounding Interferometer (IASI) satellite data. Co-injection of 27 Gg HCl causes a lengthening of the SO2 lifetime and a slight delay in the formation of aerosols, and acts to enhance the destruction of stratospheric ozone and mono-nitrogen oxides (NOx) compared to the simulation with volcanic SO2 only. We therefore highlight the need to account for volcanic halogen chemistry when simulating the impact of eruptions such as Sarychev on…

  5. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution…

  6. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function…

  7. Pervasive randomness in physics: an introduction to its modelling and spectral characterisation

    Science.gov (United States)

    Howard, Roy

    2017-10-01

    An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
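
    As one concrete example from the list above, the sketch below simulates a random telegraph signal and checks its power spectral density against the Lorentzian form implied by its exponential autocorrelation (sampling rate, duration and switch rate are arbitrary choices):

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(9)
        fs, T, lam = 1000.0, 200.0, 5.0        # sample rate [Hz], duration [s], switch rate [1/s]
        n = int(fs * T)
        # Random telegraph signal: +/-1, switching as a Poisson process of rate lam
        # (approximated by per-sample Bernoulli trials).
        switches = rng.random(n) < lam / fs
        x = np.where(np.cumsum(switches) % 2 == 0, 1.0, -1.0)

        f, Pxx = signal.welch(x, fs=fs, nperseg=4096)
        # Exponential autocorrelation exp(-2*lam*|tau|) implies the two-sided
        # Lorentzian PSD 4*lam / (4*lam**2 + (2*pi*f)**2); welch returns a
        # one-sided estimate, so the ratio below should sit near 2.
        w = 2 * np.pi * f[1:6]
        print(Pxx[1:6] / (4 * lam / (4 * lam**2 + w**2)))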

  8. Urban Summertime Ozone of China: Peak Ozone Hour and Nighttime Mixing

    Science.gov (United States)

    Qu, H.; Wang, Y.; Zhang, R.

    2017-12-01

    We investigate the observed diurnal cycle of summertime ozone in the cities of China using a regional chemical transport model. The simulated daytime ozone is in general agreement with the observations. Model simulations suggest that the ozone peak time and peak concentration are a function of NOx (NO + NO2) and volatile organic compound (VOC) emissions. The differences between simulated and observed ozone peak time and peak concentration in some regions can be applied to understand biases in the emission inventories. For example, the VOCs emissions are underestimated over the Pearl River Delta (PRD) region, and either NOx emissions are underestimated or VOC emissions are overestimated over the Yangtze River Delta (YRD) regions. In contrast to the general good daytime ozone simulations, the simulated nighttime ozone has a large low bias of up to 40 ppbv. Nighttime ozone in urban areas is sensitive to the nocturnal boundary-layer mixing, and enhanced nighttime mixing (from the surface to 200-500 m) is necessary for the model to reproduce the observed level of ozone.

  9. Numerical Simulation of Entropy Growth for a Nonlinear Evolutionary Model of Random Markets

    Directory of Open Access Journals (Sweden)

    Mahdi Keshtkar

    2016-01-01

    In this communication, the generalized continuous economic model for random markets is revisited. In this model of random markets, agents trade in pairs and exchange their money in a random and conservative way. They display the exponential wealth distribution as asymptotic equilibrium, independently of the effectiveness of the transactions and of the limitation of the total wealth. In the current work, the entropy of the mentioned model is defined and some theorems on the entropy growth of this evolutionary problem are given. Furthermore, the entropy increase is verified by simulation on some numerical examples.
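
    A minimal simulation of such a random conservative market, tracking the entropy of the binned wealth distribution as it relaxes toward the exponential equilibrium (agent count, bin count and run length are arbitrary):

        import numpy as np

        rng = np.random.default_rng(11)
        n_agents, steps = 10_000, 200_000
        money = np.ones(n_agents)                 # everyone starts with one unit

        for _ in range(steps):
            i, j = rng.integers(0, n_agents, 2)   # agents trade by pairs
            if i == j:
                continue                          # skip self-trades to conserve money
            total = money[i] + money[j]
            share = rng.random()                  # random, conservative split
            money[i], money[j] = share * total, (1 - share) * total

        # Entropy of the binned wealth distribution grows as the system relaxes
        # toward the exponential (Boltzmann-Gibbs-like) equilibrium.
        hist, _ = np.histogram(money, bins=50)
        p = hist[hist > 0] / n_agents
        print("entropy:", -(p * np.log(p)).sum())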

  10. Scaling of coercivity in a 3d random anisotropy model

    Energy Technology Data Exchange (ETDEWEB)

    Proctor, T.C., E-mail: proctortc@gmail.com; Chudnovsky, E.M., E-mail: EUGENE.CHUDNOVSKY@lehman.cuny.edu; Garanin, D.A.

    2015-06-15

    The random-anisotropy Heisenberg model is numerically studied on lattices containing over ten million spins. The study is focused on hysteresis and metastability due to topological defects, and is relevant to the magnetic properties of amorphous and sintered magnets. We are interested in the limit when ferromagnetic correlations extend beyond the size of the grain inside which the magnetic anisotropy axes are correlated. In that limit the coercive field computed numerically roughly scales as the fourth power of the random anisotropy strength and as the sixth power of the grain size. Theoretical arguments are presented that provide an explanation of the numerical results. Our findings should be helpful for designing amorphous and nanosintered materials with desired magnetic properties. - Highlights: • We study the random-anisotropy model on lattices containing up to ten million spins. • Irreversible behavior due to topological defects (hedgehogs) is elucidated. • The hysteresis loop area scales as the fourth power of the random anisotropy strength. • In nanosintered magnets the coercivity scales as the sixth power of the grain size.

  11. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

    “Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, have in the last decade become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book…

  12. Robust Peak Recognition in Intracranial Pressure Signals

    Directory of Open Access Journals (Sweden)

    Bergsneider Marvin

    2010-10-01

    Background: The waveform morphology of intracranial pressure (ICP) pulses is an essential indicator for monitoring and forecasting critical intracranial and cerebrovascular pathophysiological variations. While current ICP pulse analysis frameworks offer satisfying results on most pulses, we observed that the performance of several of them deteriorates significantly on abnormal, or simply more challenging, pulses. Methods: This paper provides two contributions to this problem. First, it introduces MOCAIP++, a generic ICP pulse processing framework that generalizes MOCAIP (Morphological Clustering and Analysis of ICP Pulse). Its strength is to integrate several peak recognition methods to describe ICP morphology, and to exploit different ICP features to improve peak recognition. Second, it investigates the effect of incorporating automatically identified challenging pulses into the training set of peak recognition models. Results: Experiments on a large dataset of ICP signals, as well as on a representative collection of sampled challenging ICP pulses, demonstrate that both contributions are complementary and significantly improve peak recognition performance in clinical conditions. Conclusion: The proposed framework allows more reliable statistics about the ICP waveform morphology to be extracted from challenging pulses, in order to investigate the predictive power of these pulses on the condition of the patient.

  13. Asthma Self-Management Model: Randomized Controlled Trial

    Science.gov (United States)

    Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo

    2016-01-01

    Information for patients provided by the pharmacist is reflected in adherence to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…

  14. Do Parents Know Best? Examining the Relationship Between Parenting Profiles, Prevention Efforts, and Peak Drinking in College Students.

    Science.gov (United States)

    Mallett, Kimberly A; Turrisi, Rob; Ray, Anne E; Stapleton, Jerod; Abar, Caitlin; Mastroleo, Nadine R; Tollison, Sean; Grossbard, Joel; Larimer, Mary E

    2011-12-01

    The study examined parenting profiles among high school athletes transitioning to college and their association with high-risk drinking in a multi-site, randomized trial. Students (n = 587) were randomized to a control or a combined parent-based and brief motivational intervention condition and completed measures at baseline and at 5- and 10-month follow-ups. Four parenting profiles (authoritative, authoritarian, permissive, indifferent) were observed among participants. Findings indicated that control participants with authoritarian parenting were at the greatest risk for heavy drinking. Conversely, students exposed to permissive or authoritarian parenting reported lower peak drinking when administered the combined intervention, compared to controls. Findings suggest the combined intervention was efficacious in reducing peak alcohol consumption among high-risk students based on athlete status and parenting profiles.

  15. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these six models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using the GB and GK kernels (four model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe including the random intercepts of the lines with the GK method) had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023

  16. Weak Lensing Peaks in Simulated Light-Cones: Investigating the Coupling between Dark Matter and Dark Energy

    Science.gov (United States)

    Giocoli, Carlo; Moscardini, Lauro; Baldi, Marco; Meneghetti, Massimo; Metcalf, Robert B.

    2018-05-01

    In this paper, we study the statistical properties of weak lensing peaks in light-cones generated from cosmological simulations. In order to assess the prospects of such an observable as a cosmological probe, we consider simulations that include interacting Dark Energy (hereafter DE) models with a coupling term between DE and Dark Matter. Cosmological models that produce a larger population of massive clusters have more numerous high signal-to-noise peaks; among models with comparable numbers of clusters, those with more concentrated haloes produce more peaks. The most extreme model under investigation shows a difference in peak counts of about 20% with respect to the reference ΛCDM model. We find that peak statistics can be used to distinguish a coupled DE model from a reference one with the same power spectrum normalisation. The differences in the expansion history and the growth rate of structure formation are reflected in their halo counts, non-linear scale features and, through them, in the properties of the lensing peaks. For a source redshift distribution consistent with the expectations of future space-based wide-field surveys, we find that typically seventy percent of the cluster population contributes to weak-lensing peaks with signal-to-noise ratios larger than two, and that the fraction of clusters in peaks approaches one hundred percent for haloes with redshift z ≤ 0.5. Our analysis demonstrates that peak statistics are an important tool for disentangling DE models by accurately tracing the structure formation processes as a function of cosmic time.

  17. Peak Experience Project

    Science.gov (United States)

    Scott, Daniel G.; Evans, Jessica

    2010-01-01

    This paper emerges from the continued analysis of data collected in a series of international studies concerning Childhood Peak Experiences (CPEs) based on developments in understanding peak experiences in Maslow's hierarchy of needs initiated by Dr Edward Hoffman. Bridging from the series of studies, Canadian researchers explore collected…

  18. Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry.

    Science.gov (United States)

    Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen

    2010-11-01

    In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with a resolution of 10 000-15 000 full width at half maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with experts' visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological conditions.

  19. Peak Pc Prediction in Conjunction Analysis: Conjunction Assessment Risk Analysis. Pc Behavior Prediction Models

    Science.gov (United States)

    Vallejo, J.J.; Hejduk, M.D.; Stamey, J. D.

    2015-01-01

    Satellite conjunction risk is typically evaluated through the probability of collision (Pc), which considers both the conjunction geometry and the uncertainties in both state estimates. Conjunction events are initially discovered through Joint Space Operations Center (JSpOC) screenings, usually seven days before the Time of Closest Approach (TCA); however, JSpOC continues to track the objects and issue conjunction updates. Changes in the state estimates and the reduced propagation time cause the Pc to change as the event develops. These changes are a combination of potentially predictable development and unpredictable changes in the state estimate covariance. An operationally useful datum is the peak Pc: if it can reasonably be inferred that the peak Pc value has passed, then risk assessment can be conducted against this peak value, and if this value is below the remediation level, the event intensity can be relaxed. Can the peak Pc location be reasonably predicted?

  20. Restoration of dimensional reduction in the random-field Ising model at five dimensions

    Science.gov (United States)

    Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D − 2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 with the corresponding ones of the pure Ising model at D − 2 dimensions, verifying their equality at all studied dimensions.

  1. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA.

    Science.gov (United States)

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-06-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distribution to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models.
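
    The NEB/mixture machinery itself is not reproduced here, but the overall pipeline (baseline correction, then peak picking on a one-dimensional slice) can be illustrated with standard SciPy tools on synthetic co-eluting peaks (all signal parameters invented):

        import numpy as np
        from scipy.ndimage import median_filter
        from scipy.signal import find_peaks

        rng = np.random.default_rng(4)
        t = np.arange(2000.0)
        # Synthetic 1D slice: drifting baseline + two co-eluting Gaussian peaks
        # (centres 600 and 630) + one isolated peak (1400) + noise.
        baseline = 50 + 0.01 * t
        peaks_true = (900 * np.exp(-0.5 * ((t - 600) / 12) ** 2)
                      + 700 * np.exp(-0.5 * ((t - 630) / 10) ** 2)
                      + 500 * np.exp(-0.5 * ((t - 1400) / 15) ** 2))
        x = baseline + peaks_true + rng.normal(0, 20, t.size)

        # Crude baseline estimate via a wide moving median, then peak picking.
        corrected = x - median_filter(x, size=301)
        idx, props = find_peaks(corrected, height=100, prominence=80)
        print("detected peak locations:", idx)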

  2. A NEW METHOD OF PEAK DETECTION FOR ANALYSIS OF COMPREHENSIVE TWO-DIMENSIONAL GAS CHROMATOGRAPHY MASS SPECTROMETRY DATA*

    Science.gov (United States)

    Kim, Seongho; Ouyang, Ming; Jeong, Jaesik; Shen, Changyu; Zhang, Xiang

    2014-01-01

    We develop a novel peak detection algorithm for the analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF MS) data using normal-exponential-Bernoulli (NEB) and mixture probability models. The algorithm first performs baseline correction and denoising simultaneously using the NEB model, which also defines peak regions. Peaks are then picked using a mixture of probability distribution to deal with the co-eluting peaks. Peak merging is further carried out based on the mass spectral similarities among the peaks within the same peak group. The algorithm is evaluated using experimental data to study the effect of different cut-offs of the conditional Bayes factors and the effect of different mixture models including Poisson, truncated Gaussian, Gaussian, Gamma, and exponentially modified Gaussian (EMG) distributions, and the optimal version is introduced using a trial-and-error approach. We then compare the new algorithm with two existing algorithms in terms of compound identification. Data analysis shows that the developed algorithm can detect the peaks with lower false discovery rates than the existing algorithms, and a less complicated peak picking model is a promising alternative to the more complicated and widely used EMG mixture models. PMID:25264474

  3. Simulating intrafraction prostate motion with a random walk model

    Directory of Open Access Journals (Sweden)

    Tobias Pommer, PhD

    2017-07-01

    Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.

  4. Random matrices and the six-vertex model

    CERN Document Server

    Bleher, Pavel

    2013-01-01

    This book provides a detailed description of the Riemann-Hilbert approach (RH approach) to the asymptotic analysis of both continuous and discrete orthogonal polynomials, and applications to random matrix models as well as to the six-vertex model. The RH approach was an important ingredient in the proofs of universality in unitary matrix models. This book gives an introduction to the unitary matrix models and discusses bulk and edge universality. The six-vertex model is an exactly solvable two-dimensional model in statistical physics, and thanks to the Izergin-Korepin formula for the model with domain wall boundary conditions, its partition function matches that of a unitary matrix model with nonpolynomial interaction. The authors introduce in this book the six-vertex model and include a proof of the Izergin-Korepin formula. Using the RH approach, they explicitly calculate the leading and subleading terms in the thermodynamic asymptotic behavior of the partition function of the six-vertex model with domain wall boundary conditions.

  5. Modelling and computing the peaks of carbon emission with balanced growth

    International Nuclear Information System (INIS)

    Chang, Shuhua; Wang, Xinyu; Wang, Zheng

    2016-01-01

    Highlights: • We use a more practical utility function to quantify society's welfare. • A so-called discontinuous Galerkin method is proposed to solve the ordinary differential equation satisfied by the consumption. • The theoretical results of the discontinuous Galerkin method are obtained. • We establish a Markov model to forecast the energy mix and the industrial structure. - Abstract: In this paper, we assume that under the balanced and optimal economic growth path the economic growth rate is equal to the consumption growth rate, from which we can obtain the ordinary differential equation governing the consumption level by solving an optimal control problem. Then, a novel numerical method, namely a so-called discontinuous Galerkin method, is applied to solve the ordinary differential equation. The error estimation and the superconvergence estimation of this method are also performed. The mechanism that makes our assumption coherent is that once the energy intensity is given, economic growth is determined, followed by GDP, energy demand and emissions. By applying this model to China, we conclude that under the balanced and optimal economic growth path CO_2 emissions will reach their peak in 2030 in China, which is consistent with the U.S.-China Joint Announcement on Climate Change and with other previous scientific results.

  6. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

    Energy Technology Data Exchange (ETDEWEB)

    Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)

    2014-11-15

    In order to improve the atomic spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weakly stable characteristics of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of random ASG drift, taken as the study object, is acquired by analyzing and preprocessing the measured data of the ASG. The linear section model is established based on the AR technique. After that, the nonlinear section model is built based on the GP technique, and GA is used to optimize the coefficients of the mathematical expression produced by GP in order to obtain a more accurate model. The simulation result indicates that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique alone and the GP + GA technique alone, the random drift is reduced by a further 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.
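
    As a rough illustration of the linear (AR) section of such a hybrid model, the sketch below fits AR coefficients to a drift series by ordinary least squares; the synthetic series, the order p = 2 and the noise levels are assumptions, and the GP + GA nonlinear stage is not reproduced.

      import numpy as np

      def fit_ar(x, p):
          """Least-squares fit of an AR(p) model to a drift series x.
          Returns coefficients a[0..p-1] with x[t] ~ sum_k a[k] * x[t-1-k],
          plus the residuals that a nonlinear (e.g. GP) stage would model."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
          y = x[p:]
          a, *_ = np.linalg.lstsq(X, y, rcond=None)
          return a, y - X @ a

      # Synthetic drift: AR(2) plus a weak nonlinearity and measurement noise.
      rng = np.random.default_rng(0)
      x = np.zeros(500)
      for t in range(2, 500):
          x[t] = (0.6 * x[t - 1] + 0.3 * x[t - 2]
                  + 0.01 * np.sin(x[t - 1]) + 0.05 * rng.standard_normal())
      a, resid = fit_ar(x, 2)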

  7. On the relation between the mean and variance of delay in dynamic queues with random capacity and demand

    DEFF Research Database (Denmark)

    Fosgerau, Mogens

    2010-01-01

    This paper investigates the distribution of delays during a repeatedly occurring demand peak in a congested facility with random capacity and demand, such as an airport or an urban road. Congestion is described in the form of a dynamic queue using the Vickrey bottleneck model and assuming Nash...

  8. Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.

    Science.gov (United States)

    Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay

    2017-11-01

    Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has effective range d=b^{2}/N=α^{2}/N for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.

  9. Big data prediction of durations for online collective actions based on peak's timing

    Science.gov (United States)

    Nie, Shizhao; Wang, Zheng; Pujia, Wangmo; Nie, Yuan; Lu, Peng

    2018-02-01

    The Peak Model states that each collective action has a life cycle containing four periods: "prepare", "outbreak", "peak", and "vanish"; the peak determines the maximum energy and the whole process. Re-simulation of the peak model indicates that there seems to be a stable ratio between the peak's timing (TP) and the total span (T), or duration, of collective actions, which needs further validation through empirical data on collective actions. Therefore, daily big data on online collective actions is applied to validate the model, the key being to check the ratio between the peak's timing and the total span. The big data is obtained from online recording and mining of websites. The empirical big data verifies that there is a stable ratio between TP and T; furthermore, the ratio appears to be normally distributed. This rule holds both for the general cases and for sub-types of collective actions. Given the distribution of the ratio, an estimated probability density function can be obtained, and therefore the span can be predicted from the peak's timing. Under the big data scenario, the instant span (how long the collective action lasts, or when it ends) can be monitored and predicted in real time. With denser data, the estimation of the ratio's distribution becomes more robust, and the prediction of collective actions' spans or durations becomes more accurate.
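
    A minimal sketch of the prediction idea, assuming illustrative TP/T values and the roughly normal ratio distribution suggested above; the numbers are invented for demonstration.

      import numpy as np
      from scipy import stats

      # Historical events: peak timing TP and total span T (e.g. in days).
      TP = np.array([3, 5, 2, 7, 4, 6, 3, 5], dtype=float)
      T = np.array([12, 19, 8, 26, 15, 24, 11, 20], dtype=float)

      ratio = TP / T                         # hypothesised to be stable
      mu, sigma = ratio.mean(), ratio.std(ddof=1)
      print(stats.shapiro(ratio))            # crude check of normality

      # Point prediction for an ongoing event whose peak occurred on day 4:
      tp_new = 4.0
      print("predicted total span:", tp_new / mu)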

  10. Model simulations of the chemical and aerosol microphysical evolution of the Sarychev Peak 2009 eruption cloud compared to in situ and satellite observations

    Directory of Open Access Journals (Sweden)

    T. Lurton

    2018-03-01

    Volcanic eruptions impact climate through the injection of sulfur dioxide (SO2), which is oxidized to form sulfuric acid aerosol particles that can enhance the stratospheric aerosol optical depth (SAOD). Besides large-magnitude eruptions, moderate-magnitude eruptions such as Kasatochi in 2008 and Sarychev Peak in 2009 can have a significant impact on stratospheric aerosol and hence climate. However, uncertainties remain in quantifying the atmospheric and climatic impacts of the 2009 Sarychev Peak eruption due to limitations in previous model representations of volcanic aerosol microphysics and particle size, whilst biases have been identified in satellite estimates of post-eruption SAOD. In addition, the 2009 Sarychev Peak eruption co-injected hydrogen chloride (HCl) alongside SO2, whose potential stratospheric chemistry impacts have not been investigated to date. We present a study of the stratospheric SO2–particle–HCl processing and impacts following the Sarychev Peak eruption, using the Community Earth System Model version 1.0 (CESM1) Whole Atmosphere Community Climate Model (WACCM) – Community Aerosol and Radiation Model for Atmospheres (CARMA) sectional aerosol microphysics model (with no a priori assumption on particle size). The Sarychev Peak 2009 eruption injected 0.9 Tg of SO2 into the upper troposphere and lower stratosphere (UTLS), enhancing the aerosol load in the Northern Hemisphere. The post-eruption evolution of the volcanic SO2 in space and time is well reproduced by the model when compared to Infrared Atmospheric Sounding Interferometer (IASI) satellite data. Co-injection of 27 Gg HCl causes a lengthening of the SO2 lifetime and a slight delay in the formation of aerosols, and acts to enhance the destruction of stratospheric ozone and mono-nitrogen oxides (NOx) compared to the simulation with volcanic SO2 only. We therefore highlight the need to account for volcanic halogen chemistry when simulating the impact of eruptions.

  11. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution.

    Science.gov (United States)

    Harrison, Xavier A

    2015-01-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed

  12. An analytical reconstruction model of the spread-out Bragg peak using laser-accelerated proton beams.

    Science.gov (United States)

    Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing

    2017-07-07

    With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
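
    The modified weighting formula itself is not reproduced in this record; the sketch below shows the generic idea of composing an SOBP as a weighted sum of mono-energetic depth-dose curves, with a toy Bragg-curve shape and non-negative least squares standing in for the actual weighting scheme.

      import numpy as np
      from scipy.optimize import nnls

      depth = np.linspace(0, 15, 300)                        # depth in cm

      def bragg(depth, r):
          """Toy mono-energetic depth-dose curve with range r (not a physics model)."""
          d = np.where(depth < r, 1.0 / np.sqrt(np.maximum(r - depth, 1e-3)), 0.0)
          return d / d.max()

      ranges = np.linspace(8, 12, 15)                        # one curve per energy layer
      A = np.column_stack([bragg(depth, r) for r in ranges])
      target = ((depth >= 8) & (depth <= 12)).astype(float)  # flat dose over the SOBP
      w, _ = nnls(A, target)                                 # non-negative layer weights
      sobp = A @ w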

  13. Peak Oil and other threatening peaks-Chimeras without substance

    International Nuclear Information System (INIS)

    Radetzki, Marian

    2010-01-01

    The Peak Oil movement has widely spread its message about an impending peak in global oil production, caused by an inadequate resource base. On closer scrutiny, the underlying analysis is inconsistent, void of a theoretical foundation and without support in empirical observations. Global oil resources are huge and expanding, and pose no threat to continuing output growth within an extended time horizon. In contrast, temporary or prolonged supply crunches are indeed plausible, even likely, on account of growing resource nationalism denying access to efficient exploitation of the existing resource wealth.

  14. Universality for 1d Random Band Matrices: Sigma-Model Approximation

    Science.gov (United States)

    Shcherbina, Mariya; Shcherbina, Tatyana

    2018-02-01

    The paper continues the development of the rigorous supersymmetric transfer matrix approach to the random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W× W random Gaussian blocks (parametrized by j,k \\in Λ =[1,n]^d\\cap Z^d ) with a fixed entry's variance J_{jk}=δ _{j,k}W^{-1}+β Δ _{j,k}W^{-2} , β >0 in each block. Taking the limit W→ ∞ with fixed n and β , we derive the sigma-model approximation of the second correlation function similar to Efetov's one. Then, considering the limit β , n→ ∞, we prove that in the dimension d=1 the behaviour of the sigma-model approximation in the bulk of the spectrum, as β ≫ n , is determined by the classical Wigner-Dyson statistics.

  15. Peak Velocity as an Alternative Method for Training Prescription in Mice.

    Science.gov (United States)

    Picoli, Caroline de Carvalho; Romero, Paulo Vitor da Silva; Gilio, Gustavo R; Guariglia, Débora A; Tófolo, Laize P; de Moraes, Solange M F; Machado, Fabiana A; Peres, Sidney B

    2018-01-01

    Purpose: To compare the efficiency of an aerobic physical training program prescribed according to either the velocity associated with maximum oxygen uptake (vVO2max) or the peak running speed obtained during an incremental treadmill test (Vpeak_K) in mice. Methods: Twenty male Swiss mice, 60 days old, were randomly divided into two groups of 10 animals each: (1) a group trained by vVO2max (GVO2), and (2) a group trained by Vpeak_K (GVP). After the adaptation training period, an incremental test was performed at the beginning of each week to adjust the training load and to determine the VO2 and VCO2 fluxes consumed, energy expenditure (EE) and run distance during the incremental test. Mice were submitted to 4 weeks of aerobic exercise training of moderate intensity (the velocity corresponding to 70% of vVO2max or Vpeak_K) on a programmable treadmill. The sessions lasted from 30 to 40 min in the first week, reaching 60 min in the fourth week, in order to provide the mice with moderate intensity exercise, totaling 20 training sessions. Results: Mice demonstrated increases in VO2max (ml·kg^-1·min^-1) (GVO2 = 49.1% and GVP = 56.2%), Vpeak_K (cm·s^-1) (GVO2 = 50.9% and GVP = 22.3%), EE (ml·kg^-0.75·min^-1) (GVO2 = 39.9% and GVP = 51.5%), and run distance (cm) (GVO2 = 43.5% and GVP = 33.4%) after 4 weeks of aerobic training (time effect, P < 0.05); there were no differences between the groups. Conclusions: Vpeak_K, as well as vVO2max, can be adopted as an alternative test to determine performance and the correct prescription of a systemized aerobic training protocol for mice.

  16. PEAK SHAVING CONSIDERING STREAMFLOW UNCERTAINTIES

    African Journals Online (AJOL)

    user

    The random nature of the system load is re-organized by using a Markov load model. The results include a ... has received a considerable attention among optimisation problems. ... the dynamic programming theory used in this work is given ...

  17. Effects of peatland drainage management on peak flows

    Directory of Open Access Journals (Sweden)

    C. E. Ballard

    2012-07-01

    Open ditch drainage has historically been a common land management practice in upland blanket peats, particularly in the UK. However, peatland drainage is now generally considered to have adverse effects on the upland environment, including increased peak flows. As a result, drain blocking has become a common management strategy in the UK over recent years, although there is only anecdotal evidence to suggest that it might decrease peak flows. The change in the hydrological regime associated with the drainage of blanket peat and the subsequent blocking of drains is poorly understood; therefore, a new physics-based model has been developed that allows the exploration of the associated hydrological processes. A series of simulations is used to explore the response of intact, drained and blocked-drain sites at field scales. While drainage is generally found to increase peak flows, the effect of drain blocking appears to be dependent on local conditions, sometimes decreasing and sometimes increasing peak flows. Based on insights from these simulations, we identify steep smooth drains as those that would experience the greatest reduction in field-scale peak flows if blocked, and recommend that future targeted field studies focus on examining surface runoff characteristics.

  18. Exploring the Influence of Neighborhood Characteristics on Burglary Risks: A Bayesian Random Effects Modeling Approach

    Directory of Open Access Journals (Sweden)

    Hongqiang Liu

    2016-06-01

    A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify the model that best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above-average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that warrant more attention and should be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.

  19. Particle in cell simulation of peaking switch for breakdown evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Umbarkar, Sachin B.; Bindu, S.; Mangalvedekar, H.A.; Saxena, A.; Singh, N.M., E-mail: sachin.b.umbarkar@gmail.com [Department of Electric Engineering, Veermata Jijabai Technological Institute, Mumbai (India); Sharma, Archana; Saroj, P.C.; Mittal, K.C. [Accelerator Pulse Power Division, Bhabha Atomic Research Centre, Mumbai (India)

    2014-07-01

    A Marx generator connected to a peaking capacitor and peaking switch can generate ultra-wideband (UWB) radiation. A new peaking switch was designed for converting the existing nanosecond Marx generator to a UWB source. The paper explains the particle-in-cell (PIC) simulation for this peaking switch, using the MAGIC 3D software. The peaking switch electrodes are made of copper-tungsten and are fixed inside a hermetically sealed Delrin housing. The switch can withstand a gas pressure of up to 13.5 kg/cm². The lower electrode of the switch is connected to the last stage of the Marx generator. Initially, the Marx generator (without the peaking stage) in air gives an output pulse with a peak amplitude of 113.75 kV and a rise time of 25 ns. Thus, we designed a new peaking switch to improve the rise time of the output pulse and to pressurize the peaking switch separately (i.e. the Marx generator and the peaking switch are at different pressures). The PIC simulation gives the particle charge density, current density, electric field contour plot, emitted electron current, and particle energy along the axis of the gap between the electrodes. The charge injection and the dependence of the electric field on the ionic dissociation phenomenon are briefly analyzed using this simulation. The model is simulated with different gases (N2, H2, and air) under different pressures (2 kg/cm², 5 kg/cm², 10 kg/cm²). (author)

  20. Impact of Smart Grid Technologies on Peak Load to 2050

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    The IEA's Smart Grids Technology Roadmap identified five global trends that could be effectively addressed by deploying smart grids. These are: increasing peak load (the maximum power that the grid delivers during peak hours), rising electricity consumption, electrification of transport, deployment of variable generation technologies (e.g. wind and solar PV) and ageing infrastructure. Along with this roadmap, a new working paper -- Impact of Smart Grid Technologies on Peak Load to 2050 -- develops a methodology to estimate the evolution of peak load until 2050. It also analyses the impact of smart grid technologies in reducing peak load for four key regions: OECD North America, OECD Europe, OECD Pacific and China. This working paper is a first IEA effort in an evolving modelling process for smart grids that considers demand response in the residential and commercial sectors as well as the integration of electric vehicles.

  1. On the Magnitude and Orientation of Stress during Shock Metamorphism: Understanding Peak Ring Formation by Combining Observations and Models.

    Science.gov (United States)

    Rae, A.; Poelchau, M.; Collins, G. S.; Timms, N.; Cavosie, A. J.; Lofi, J.; Salge, T.; Riller, U. P.; Ferrière, L.; Grieve, R. A. F.; Osinski, G.; Morgan, J. V.; Expedition 364 Science Party, I. I.

    2017-12-01

    Our results quantitatively describe the deviatoric stress conditions of rocks during shock, which are consistent with observations of shock deformation. Our integrated analysis provides further support for the dynamic collapse model of peak-ring formation, and places dynamic constraints on the conditions of peak-ring formation.

  2. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2015-07-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data.

  3. A lattice-model representation of continuous-time random walks

    International Nuclear Information System (INIS)

    Campos, Daniel; Mendez, Vicenc

    2008-01-01

    We report some ideas for constructing lattice models (LMs) as a discrete approach to reaction-dispersal (RD) or reaction-random walk (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) turns out to be different from that given by classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. Hence, the convenience of a more general framework, such as that given by continuous-time random walks (CTRW), is claimed. Here we use LMs as a numerical approach in order to support that idea, while in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.
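
    For readers who want to experiment, a minimal continuous-time random walk on a 1-D lattice can be simulated in a few lines; the unit jumps and Pareto waiting times below are illustrative choices, not the specific processes analyzed by the authors.

      import numpy as np

      rng = np.random.default_rng(1)

      def ctrw(n_steps, alpha=1.5):
          """1-D CTRW: unit jumps left/right at random, with heavy-tailed
          (Pareto, tail exponent alpha) waiting times between jumps."""
          jumps = rng.choice([-1, 1], size=n_steps)
          waits = rng.pareto(alpha, size=n_steps) + 1.0
          return np.cumsum(waits), np.cumsum(jumps)   # (event times, positions)

      t, x = ctrw(10_000)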

  4. A lattice-model representation of continuous-time random walks

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Daniel [School of Mathematics, Department of Applied Mathematics, University of Manchester, Manchester M60 1QD (United Kingdom); Mendez, Vicenc [Grup de Fisica Estadistica, Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra (Barcelona) (Spain)], E-mail: daniel.campos@uab.es, E-mail: vicenc.mendez@uab.es

    2008-02-29

    We report some ideas for constructing lattice models (LMs) as a discrete approach to reaction-dispersal (RD) or reaction-random walk (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) turns out to be different from that given by classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. Hence, the convenience of a more general framework, such as that given by continuous-time random walks (CTRW), is claimed. Here we use LMs as a numerical approach in order to support that idea, while in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.

  5. Self-dual random-plaquette gauge model and the quantum toric code

    Science.gov (United States)

    Takeda, Koujin; Nishimori, Hidetoshi

    2004-05-01

    We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and the phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as that of the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, p_c = 0.889972…, and leads to several nontrivial results including bounds on the accuracy threshold in three dimensions.

  6. Self-dual random-plaquette gauge model and the quantum toric code

    International Nuclear Information System (INIS)

    Takeda, Koujin; Nishimori, Hidetoshi

    2004-01-01

    We study the four-dimensional Z_2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and the phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as that of the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, p_c = 0.889972..., and leads to several nontrivial results including bounds on the accuracy threshold in three dimensions.

  7. Model Related Estimates of time dependent quantiles of peak flows - case study for selected catchments in Poland

    Science.gov (United States)

    Strupczewski, Witold G.; Bogdanowich, Ewa; Debele, Sisay

    2016-04-01

    Under Polish climate conditions the series of Annual Maxima (AM) flows are usually a mixture of peak flows of thaw- and rainfall-originated floods. The northern, lowland regions are dominated by snowmelt floods, whilst in mountainous regions rainfall floods predominate. In many stations the majority of AM can be of snowmelt origin, but the greatest peak flows come from rainfall floods, or vice versa. In a warming climate, precipitation is less likely to occur as snowfall. A shift from a snow- towards a rain-dominated regime results in a decreasing trend in the mean and standard deviation of winter peak flows, whilst rainfall floods do not exhibit any trace of non-stationarity. That is why a simple form of trend (i.e. a linear trend) is more difficult to identify in AM time series than in Seasonal Maxima (SM), usually winter-season time series. Hence it is recommended to analyse trends in SM, where a trend in the standard deviation strongly influences the time-dependent upper quantiles. The uncertainty associated with the extrapolation of the trend makes it necessary to apply a relationship for the trend whose time derivative tends to zero; e.g. we can assume that a new climate equilibrium epoch is approaching, or that the time horizon is limited by the validity of the trend model. For both winter and summer SM time series, at least three distribution functions with trend models in the location, scale and shape parameters are estimated by means of the GAMLSS package using ML techniques. The resulting trend estimates in the mean and standard deviation are mutually compared to the observed trends. Then, using AIC measures as weights, a multi-model distribution is constructed for each of the two seasons separately. Further, assuming mutual independence of the seasonal maxima, an AM model with time-dependent parameters can be obtained. The use of a multi-model approach can alleviate the effects of different and often contradictory trends obtained by using and identifying
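
    A minimal sketch of fitting one such time-dependent distribution by maximum likelihood: a Gumbel model with linear trends in location and log-scale. The distribution choice and the synthetic data are assumptions; the GAMLSS machinery itself is not reproduced.

      import numpy as np
      from scipy import stats, optimize

      def fit_gumbel_trend(years, maxima):
          """ML fit of mu(t) = m0 + m1*t and log sigma(t) = s0 + s1*t
          for seasonal maxima, with t the standardised year."""
          t = (years - years.mean()) / years.std()

          def nll(p):
              m0, m1, s0, s1 = p
              mu = m0 + m1 * t
              sigma = np.exp(s0 + s1 * t)
              return -np.sum(stats.gumbel_r.logpdf(maxima, loc=mu, scale=sigma))

          x0 = np.array([maxima.mean(), 0.0, np.log(maxima.std()), 0.0])
          return optimize.minimize(nll, x0, method="Nelder-Mead")

      years = np.arange(1966, 2016).astype(float)
      maxima = stats.gumbel_r.rvs(loc=100 + 0.5 * (years - 1990), scale=20,
                                  random_state=0)
      print(fit_gumbel_trend(years, maxima).x)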

  8. Reduction of peak energy demand based on smart appliances energy consumption adjustment

    Science.gov (United States)

    Powroźnik, P.; Szulim, R.

    2017-08-01

    In this paper, the concept of an elastic model of energy management for smart grids and micro smart grids is presented. For the proposed model, a method for reducing peak demand in a micro smart grid has been defined. The idea of peak demand reduction in the elastic model of energy management is to introduce a balance between the demand and supply of current power for the given micro smart grid at the given moment. Simulation studies were carried out on real household data available from the UCI Machine Learning Repository. The results may have practical application in smart grid networks, where there is a need for smart appliance energy consumption adjustment. The article presents a proposal to implement the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction might have application particularly in a large smart grid.
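
    As a toy illustration of the demand-supply balancing idea (not the authors' elastic model), the sketch below caps the hourly load by shifting deferrable appliance energy into off-peak hours; the load profile, deferrable shares and cap are invented.

      import numpy as np

      def shave_peak(base_load, deferrable, cap):
          """Greedy peak shaving: move deferrable energy (one value per hour)
          out of hours exceeding `cap` into the lowest-load hours."""
          load = base_load.astype(float).copy()
          for h in np.argsort(load)[::-1]:          # highest-load hours first
              excess = min(max(load[h] - cap, 0.0), deferrable[h])
              if excess > 0:
                  load[h] -= excess
                  load[np.argmin(load)] += excess   # re-schedule into the valley
          return load

      base = np.array([2, 2, 3, 5, 9, 10, 8, 4], dtype=float)   # kW, illustrative
      flex = np.array([0, 0, 1, 2, 4, 4, 3, 1], dtype=float)    # deferrable share
      print(shave_peak(base, flex, cap=7.0))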

  9. KiDS-450: cosmological constraints from weak lensing peak statistics - I. Inference from analytical prediction of high signal-to-noise ratio convergence peaks

    Science.gov (United States)

    Shan, HuanYuan; Liu, Xiangkun; Hildebrandt, Hendrik; Pan, Chuzhong; Martinet, Nicolas; Fan, Zuhui; Schneider, Peter; Asgari, Marika; Harnois-Déraps, Joachim; Hoekstra, Henk; Wright, Angus; Dietrich, Jörg P.; Erben, Thomas; Getman, Fedor; Grado, Aniello; Heymans, Catherine; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Puddu, Emanuella; Radovich, Mario; Wang, Qiao

    2018-02-01

    This paper is the first of a series of papers constraining cosmological parameters with weak lensing peak statistics using ~450 deg² of imaging data from the Kilo Degree Survey (KiDS-450). We measure high signal-to-noise ratio (SNR: ν) weak lensing convergence peaks in the range 3 < ν < 5, and employ theoretical models to derive expected values. These models are validated using a suite of simulations. We take into account two major systematic effects: the boost factor and the effect of baryons on the mass-concentration relation of dark matter haloes. In addition, we investigate the impacts of other potential astrophysical systematics, including the projection effects of large-scale structures, intrinsic galaxy alignments, as well as residual measurement uncertainties in the shear and redshift calibration. Assuming a flat Λ cold dark matter model, we find constraints of S_8 = σ_8(Ω_m/0.3)^0.5 = 0.746^{+0.046}_{-0.107} according to the degeneracy direction of the cosmic shear analysis, and Σ_8 = σ_8(Ω_m/0.3)^0.38 = 0.696^{+0.048}_{-0.050} based on the derived degeneracy direction of our high-SNR peak statistics. The difference between the power indices of S_8 and Σ_8 indicates that combining cosmic shear with peak statistics has the potential to break the degeneracy between σ_8 and Ω_m. Our results are consistent with the cosmic shear tomographic correlation analysis of the same data set and are ~2σ lower than the Planck 2016 results.

  10. Peak power ratio generator

    Science.gov (United States)

    Moyer, R.D.

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.

  11. The transverse spin-1 Ising model with random interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bouziane, Touria [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco)], E-mail: touria582004@yahoo.fr; Saber, Mohammed [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco); Dpto. Fisica Aplicada I, EUPDS (EUPDS), Plaza Europa, 1, San Sebastian 20018 (Spain)

    2009-01-15

    The phase diagrams of the transverse spin-1 Ising model with random interactions are investigated using a new technique in the effective field theory that employs a probability distribution within the framework of the single-site cluster theory based on the use of exact Ising spin identities. A model is adopted in which the nearest-neighbor exchange couplings are independent random variables distributed according to the law P(J_ij) = pδ(J_ij - J) + (1 - p)δ(J_ij - αJ). General formulae, applicable to lattices with coordination number N, are given. Numerical results are presented for a simple cubic lattice. The possible reentrant phenomenon displayed by the system due to the competitive effects between exchange interactions occurs for the appropriate range of the parameter α.

  12. Local lattice relaxations in random metallic alloys: Effective tetrahedron model and supercell approach

    DEFF Research Database (Denmark)

    Ruban, Andrei; Simak, S.I.; Shallcross, S.

    2003-01-01

    We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys as well as the dilute limit of Au-ri......-rich CuAu alloys shows that the model yields a quantitatively accurate description of the relaxation energies in these systems. Finally, we discuss the bond length distribution in random alloys.

  13. Role of Statistical Random-Effects Linear Models in Personalized Medicine.

    Science.gov (United States)

    Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose

    2012-03-01

    Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random-effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; in the computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; in measuring the clinical importance of clinical, demographic, environmental or genetic covariates; in the study of drug-drug interactions in clinical settings; in the implementation of computational tools for web-site-based evidence farming; in the design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
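
    In the simplest random-intercept case, the empirical Bayes dosage idea reduces to a shrinkage estimate of a patient's mean response; the sketch below is illustrative only, with invented variance components rather than values estimated from any trial.

      import numpy as np

      def empirical_bayes_mean(y_i, mu, tau2, sigma2):
          """Posterior (BLUP) estimate of one patient's mean response in a
          random-intercept model: the patient mean is shrunk toward the
          population mean mu. tau2 = between-patient variance,
          sigma2 = residual variance."""
          n = len(y_i)
          w = tau2 / (tau2 + sigma2 / n)            # shrinkage weight
          return w * np.mean(y_i) + (1 - w) * mu

      # Two observed drug levels for a patient, population mean 10.0:
      print(empirical_bayes_mean(np.array([8.2, 7.6]), mu=10.0, tau2=4.0, sigma2=1.0))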

  14. Distribution network planning method considering distributed generation for peak cutting

    International Nuclear Information System (INIS)

    Ouyang Wu; Cheng Haozhong; Zhang Xiubin; Yao Liangzhong

    2010-01-01

    Conventional distribution planning methods based on peak load bring about large investment, high risk and low utilization efficiency. A distribution network planning method considering distributed generation (DG) for peak cutting is proposed in this paper. The new integrated distribution network planning method with DG implementation aims to minimize the sum of feeder investments, DG investments, energy loss cost and the additional cost of DG for peak cutting. Using solution techniques that combine a genetic algorithm (GA) with a heuristic approach, the proposed model determines the optimal planning scheme, including the feeder network and the siting and sizing of DG. The strategy for the siting and sizing of DG, which is based on the radial structure characteristics of the distribution network, reduces the complexity of solving the optimization model and eases the computational burden substantially. Furthermore, the operation schedule of DG at different load levels is also provided.

  15. Discrete random walk models for space-time fractional diffusion

    International Nuclear Information System (INIS)

    Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo

    2002-01-01

    A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space or/and time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0, 2] and skewness θ (|θ| ≤ min{α, 2 - α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0, 1]. Such an evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density, evolving in time, of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation.

  16. Understand the impacts of wetland restoration on peak flow and baseflow by coupling hydrologic and hydrodynamic models

    Science.gov (United States)

    Gao, H.; Sabo, J. L.

    2016-12-01

    Wetlands, as the earth's kidneys, provide various ecosystem services, such as absorbing pollutants, purifying freshwater, providing habitats for diverse ecosystems, and sustaining species richness and biodiversity. From a hydrologic perspective, wetlands can store storm-flood water in flooding seasons and release it afterwards, which reduces flood peaks and reshapes the hydrograph. Therefore, as green infrastructure and natural capital, wetlands provide a competent alternative for managing water resources in a green way, with the potential to replace the widely criticized traditional gray infrastructure (i.e. dams and dikes) in certain cases. However, there are few systematic scientific tools to support decision-making on site selection and to allow us to quantitatively investigate the impacts of restored wetlands on hydrological processes, not only at the local scale but also from the view of the entire catchment. In this study, we employed a topographic index, HAND (the Height Above the Nearest Drainage), to support our decisions on potential site selection. Subsequently, a hydrological model (VIC, Variable Infiltration Capacity) was coupled with a macro-scale hydrodynamic model (CaMa-Flood, Catchment-Based Macro-scale Floodplain) to simulate the impact of wetland restoration on flood peaks and baseflow. The results demonstrated that topographic information is an essential factor in selecting wetland restoration locations. Different reaches, wetland areas and changes in the roughness coefficient should be taken into account when evaluating the impacts of wetland restoration. The simulated results also clearly illustrated that wetland restoration increases local storage and decreases downstream peak flow, which is beneficial for flood prevention. However, its impact on baseflow is ambiguous. Theoretically, restored wetlands will increase baseflow due to the slower release of stored flood water, but the increase of wetlands area may also increase the actual evaporation

  17. Noise distribution of a peak track and hold circuit

    International Nuclear Information System (INIS)

    Seller, Paul; Hardie, Alec L.; Morrissey, Quentin

    2012-01-01

    Noise in linear electronic circuits is well characterised in terms of power spectral density in the frequency domain and the Normal probability density function in the time domain. For instance, a charge preamplifier followed by a simple time-independent pulse shaping circuit produces an output with a predictable, easily calculated Normal density function. By the Ergodic Principle this is true if the signal is sampled randomly in time or if the experiment is run many times and measured at a fixed time after the circuit is released from reset. Apart from well-defined cases, the time of the sample after release of reset does not affect the density function. If this signal is then passed through a peak track-and-hold circuit, the situation is very different. The probability density function of the sampled signal is no longer Normal, and the function changes with the time of the sample after release of reset. This density function can be described by the Gumbel probability density function, which characterises the extreme value distribution of a defined number of Normally distributed values. The number of peaks in the signal is an important factor in the analysis. This issue is analysed theoretically and compared with a time-domain noise simulation programme. The analysis is then related to a real electronic circuit used for low-noise X-ray measurements, showing how the low-energy resolution of this system is significantly degraded when using a peak track-and-hold.
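
    The Gumbel behaviour of an ideal peak hold is easy to reproduce numerically: sample the maximum of n Normal noise values many times and fit a Gumbel density. The values of n and the trial count below are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n_peaks, trials = 50, 100_000

      # Output of an ideal peak hold: the max of n_peaks Normal noise samples.
      held = rng.standard_normal((trials, n_peaks)).max(axis=1)

      # The empirical density is skewed and Gumbel-like, not Normal.
      loc, scale = stats.gumbel_r.fit(held)
      print(loc, scale, held.mean(), held.std())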

  18. Investigating the impact of land cover change on peak river flow in UK upland peat catchments, based on modelled scenarios

    Science.gov (United States)

    Gao, Jihui; Holden, Joseph; Kirkby, Mike

    2014-05-01

    Changes to land cover can influence the velocity of overland flow. In headwater peatlands, saturation means that overland flow is a dominant source of runoff, particularly during heavy rainfall events. Human modifications in headwater peatlands may include removal of vegetation (e.g. by erosion processes, fire, pollution or overgrazing) or pro-active revegetation of peat with sedges such as Eriophorum or mosses such as Sphagnum. How these modifications affect river flow, and in particular the flood peak, in headwater peatlands is a key problem for land management. In particular, the impact of the spatial distribution of land cover change (e.g. different locations and sizes of the land cover change area) on river flow is not clear. In this presentation a new fully distributed version of TOPMODEL, which represents the effects of distributed land cover change on river discharge, was employed to investigate land cover change impacts in three UK upland peat catchments (Trout Beck in the North Pennines, the Wye in mid-Wales and the East Dart in southwest England). Land cover scenarios with three typical land covers (i.e. Eriophorum, Sphagnum and bare peat) having different surface roughness in upland peatlands were designed for these catchments to investigate land cover impacts on river flow through simulation runs of the distributed model. As a result of hypothesis testing, three land cover principles emerged from the work, as follows. Principle (1): Well-vegetated buffer strips are important for reducing flow peaks. A wider bare peat strip nearer to the river channel gives a higher flow peak and reduces the delay to peak; conversely, a wider buffer strip with higher-density vegetation (e.g. Sphagnum) leads to a lower peak and delays the peak. In both cases, a narrower buffer strip surrounding both upstream and downstream channels has a greater effect than a thicker buffer strip based only around the downstream river network. Principle (2): When the area of change is equal

  19. [Research on K-means clustering segmentation method for MRI brain image based on selecting multi-peaks in gray histogram].

    Science.gov (United States)

    Chen, Zhaoxue; Yu, Haizhong; Chen, Hao

    2013-12-01

    To solve the problem of traditional K-means clustering, in which initial clustering centers are selected randomly, we proposed a new K-means segmentation algorithm based on robustly selecting 'peaks' standing for White Matter, Gray Matter and Cerebrospinal Fluid in the multi-peak gray histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means clustering centers and can segment the MRI brain image into the three tissue classes more effectively, accurately, steadily and successfully. Extensive experiments have proved that the proposed algorithm overcomes many shortcomings of the traditional K-means clustering method, such as low efficiency, poor accuracy and robustness, and long computation time. The histogram 'peak' selection idea of the proposed segmentation method is more universally applicable.
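
    A minimal sketch of the initialization idea using standard routines (scipy's peak finder and scikit-learn's K-means); the paper's robust peak-selection rule is simplified here to "the three tallest histogram peaks".

      import numpy as np
      from scipy.signal import find_peaks
      from sklearn.cluster import KMeans

      def kmeans_from_histogram_peaks(image, n_tissues=3):
          """Initialise K-means with the n_tissues tallest gray-histogram peaks
          (intended to correspond to WM, GM and CSF in a brain MRI slice)."""
          counts, edges = np.histogram(image.ravel(), bins=256)
          centers = 0.5 * (edges[:-1] + edges[1:])
          idx, props = find_peaks(counts, height=0)
          top = idx[np.argsort(props["peak_heights"])[-n_tissues:]]
          init = centers[np.sort(top)].reshape(-1, 1)
          km = KMeans(n_clusters=n_tissues, init=init, n_init=1)
          labels = km.fit_predict(image.reshape(-1, 1).astype(float))
          return labels.reshape(image.shape)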

  20. Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Leppaenen, J.

    2007-01-01

    High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such a geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium- and plutonium-based fuels. (authors)
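
    The random dispersion itself can be mimicked with simple rejection sampling (random sequential addition). This is not the PSG routine, only an illustration of generating a non-overlapping particle configuration at a low packing fraction.

      import numpy as np

      rng = np.random.default_rng(3)

      def disperse_particles(n, radius, box=1.0, max_tries=100_000):
          """Place n non-overlapping spheres of equal radius uniformly at
          random in a cubic box by rejection sampling."""
          centers = []
          for _ in range(max_tries):
              c = rng.uniform(radius, box - radius, size=3)
              if all(np.linalg.norm(c - p) >= 2 * radius for p in centers):
                  centers.append(c)
                  if len(centers) == n:
                      return np.array(centers)
          raise RuntimeError("packing fraction too high for rejection sampling")

      print(disperse_particles(100, 0.03).shape)   # (100, 3)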

  1. Peak and Tail Scaling of Breakthrough Curves in Hydrologic Tracer Tests

    Science.gov (United States)

    Aquino, T.; Aubeneau, A. F.; Bolster, D.

    2014-12-01

    Power law tails, a marked signature of anomalous transport, have been observed in solute breakthrough curves time and time again in a variety of hydrologic settings, including in streams. However, due to the low concentrations at which they occur they are notoriously difficult to measure with confidence. This leads us to ask if there are other associated signatures of anomalous transport that can be sought. We develop a general stochastic transport framework and derive an asymptotic relation between the tail scaling of a breakthrough curve for a conservative tracer at a fixed downstream position and the scaling of the peak concentration of breakthrough curves as a function of downstream position, demonstrating that they provide equivalent information. We then quantify the relevant spatiotemporal scales for the emergence of this asymptotic regime, where the relationship holds, in the context of a very simple model that represents transport in an idealized river. We validate our results using random walk simulations. The potential experimental benefits and limitations of these findings are discussed.

  2. Modeling the Quality of Videos Displayed With Local Dimming Backlight at Different Peak White and Ambient Light Levels

    DEFF Research Database (Denmark)

    Mantel, Claire; Søgaard, Jacob; Bech, Søren

    2016-01-01

    This paper investigates the impact of ambient light and peak white (maximum brightness of a display) on the perceived quality of videos displayed using local backlight dimming. Two subjective tests providing quality evaluations are presented and analyzed; the analyses of variance show significant ... The rendering of the videos is computed using a model of the display. Widely used objective quality metrics are applied based on the rendering models of the videos to predict the subjective evaluations. As these predictions are not satisfying, three machine learning methods are applied: partial least square regression, elastic net...

  3. The Mechanics of Peak-Ring Impact Crater Formation from the IODP-ICDP Expedition 364

    Science.gov (United States)

    Melosh, H.; Collins, G. S.; Morgan, J. V.; Gulick, S. P. S.

    2017-12-01

    The Chicxulub impact crater is one of very few peak-ring impact craters on Earth. While small (less than 3 km on Earth) impact craters are typically bowl-shaped, larger craters exhibit central peaks, which in still larger (more than about 100 km on Earth) craters expand into mountainous rings with diameters close to half that of the crater rim. The origin of these peak rings has been contentious: Such craters are far too large to create in laboratory experiments and remote sensing of extraterrestrial examples has not clarified the mechanics of their formation. Two principal models of peak ring formation are currently in vogue, the "nested crater" model, in which the peak ring originates at shallow depths in the target, and the "dynamic collapse" model in which the peak ring is uplifted at the base of a collapsing, over-steepened central peak and its rocks originate at mid-crustal depths. IODP-ICDP Expedition 364 sought to elucidate, among other important goals, the mechanics of peak ring formation in the young (66 Myr), fresh, but completely buried Chicxulub impact crater. The cores from this borehole now show unambiguously that the rocks in the Chicxulub peak ring originated at mid-crustal depths, apparently ruling out the nested crater model. These rocks were shocked to pressures on the order of 10-35 GPa and were so shattered that their densities and seismic velocities now resemble those of sedimentary rocks. The morphology of the final crater, its structure as revealed in previous seismic imaging, and the results from the cores are completely consistent with modern numerical models of impact crater excavation and collapse that incorporate a model for post-impact weakening. Subsequent to the opening of a ca. 100 km diameter and 30 km deep transient crater, this enormous hole in the crust collapsed over a period of about 10 minutes. Collapse was enabled by movement of the underlying rocks, which briefly behaved in the manner of a high-viscosity fluid, a brittle

  4. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

    2003-01-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes ...

  5. Chinese emissions peak: Not when, but how

    International Nuclear Information System (INIS)

    Spencer, Thomas; Colombier, Michel; Wang, Xin; Sartor, Oliver; Waisman, Henri

    2016-07-01

    It seems highly likely that China will overachieve its 2020 and 2030 targets, and peak its emissions before 2030, possibly at a lower level than often assumed. This paper argues that the debate on the timing of the peak is misplaced: what matters is not when, but why. For the peak to be seen as a harbinger of deep transformation, it needs to be based on significant macro-economic reform and restructuring, with an attendant improvement in energy intensity. The Chinese economic model has been extraordinarily investment- and resource-intensive, and has driven the growth in GHG emissions. That model is no longer economically or environmentally sustainable. Therefore, Chinese policy-makers are faced with a trade-off between slower short-term growth with economic reform, versus supporting short-term growth but slowing economic reform. The outcome will be crucial for the transition to a low-carbon economy. Overall, the 13th FYP (2016-2020) gives the impression of a cautious reflection of the new normal paradigm on the economic front, and a somewhat conservative translation of this shift into the energy and climate targets. Nonetheless, the 13th FYP targets set China well on the way to overachieving its 2020 pledge undertaken at COP15 in Copenhagen, and to potentially overachieving its INDC. It thus seems likely that China will achieve its emissions peak before 2030. However, the crucial question is not when China peaks, but whether the underlying transformation of the Chinese economy and energy system lays the basis for deep decarbonization thereafter. Thorough assessments of the implications of the 'new normal' for Chinese emissions and energy system trajectories, taking into account the link with the Chinese macro-economy, are needed. Scenarios provide a useful framework and should focus on a number of short-term uncertainties. Most energy system and emissions scenarios published today assume a continuity of trends between 2010-2015 and 2015-2020, which is at odds with clear

  6. A spatial error model with continuous random effects and an application to growth convergence

    Science.gov (United States)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model to the analysis of income convergence processes (β-convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
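
    As a concrete illustration of the covariance family involved, the sketch below evaluates a Matérn covariance at arbitrary inter-point distances in continuous space; the smoothness nu and range ell are illustrative choices, not values estimated in the record above.

```python
# Minimal Matérn covariance sketch for continuous-space random effects.
import numpy as np

def matern_cov(h, sigma2=1.0, ell=1.0, nu=1.5):
    """Matérn covariance at distance h, for nu = 0.5 or 1.5 (closed forms)."""
    h = np.asarray(h, dtype=float)
    if nu == 0.5:                       # exponential covariance
        return sigma2 * np.exp(-h / ell)
    if nu == 1.5:
        s = np.sqrt(3.0) * h / ell
        return sigma2 * (1.0 + s) * np.exp(-s)
    raise ValueError("only nu in {0.5, 1.5} implemented in this sketch")

# Covariance between (hypothetical) municipality centroids at any coordinates:
pts = np.array([[0.0, 0.0], [0.5, 0.2], [2.0, 1.0]])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(matern_cov(dist))
```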

  7. Allometric modelling of peak oxygen uptake in male soccer players of 8-18 years of age

    NARCIS (Netherlands)

    Valente-dos-Santos, Joao; Coelho-e-Silva, Manuel J.; Tavares, Oscar M.; Brito, Joao; Seabra, Andre; Rebelo, Antonio; Sherar, Lauren B.; Elferink-Gemser, Marije T.; Malina, Robert M.

    Background: Peak oxygen uptake (VO2peak) is routinely scaled as mL O2 per kilogram body mass despite theoretical and statistical limitations of using ratios. Aim: To examine the contribution of maturity status and body size descriptors to age-associated inter-individual variability in VO2peak and to

  8. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
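
    The marginalization step described above can be sketched numerically. The toy code below computes the population-averaged mean of a left-censored-at-zero Tobit outcome with a normal random intercept by Gauss-Hermite quadrature; all parameter values are invented for illustration, and the paper's actual model is richer.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

def tobit_conditional_mean(mu, sigma):
    """E[Y | b] for Y = max(0, mu + e), e ~ N(0, sigma^2)."""
    z = mu / sigma
    return norm.cdf(z) * mu + sigma * norm.pdf(z)

def tobit_marginal_mean(x_beta, sigma, tau, n_nodes=30):
    """Average the conditional mean over b ~ N(0, tau^2) by Gauss-Hermite."""
    t, w = hermgauss(n_nodes)              # nodes/weights for weight exp(-t^2)
    b = np.sqrt(2.0) * tau * t             # change of variables
    return np.sum(w * tobit_conditional_mean(x_beta + b, sigma)) / np.sqrt(np.pi)

# Overall exposure effect: difference of marginal means, exposed vs unexposed.
beta0, beta1, sigma, tau = 0.5, 1.2, 1.0, 0.8    # invented parameters
effect = (tobit_marginal_mean(beta0 + beta1, sigma, tau)
          - tobit_marginal_mean(beta0, sigma, tau))
print(effect)
```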

  9. Improved simulation of peak flows under climate change: post-processing or composite objective calibration?

    NARCIS (Netherlands)

    Zhang, Xujie; Booij, Martijn J.; Xu, YuePing

    2015-01-01

    Climate change is expected to have large impacts on peak flows. However, there may be bias in the simulation of peak flows by hydrological models. This study aims to improve the simulation of peak flows under climate change in Lanjiang catchment, east China, by comparing two approaches:

  10. Investigating Facebook Groups through a Random Graph Model

    OpenAIRE

    Dinithi Pallegedara; Lei Pan

    2014-01-01

    Facebook disseminates messages for billions of users every day. Though there are log files stored on central servers, law enforcement agencies outside of the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups by using a random graph model. Our aim is to help detectives quickly estimate the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...

  11. Ultraviolet radiation-damage absorption peak in solid deuterium-tritium. Revision 1

    International Nuclear Information System (INIS)

    Fearon, E.M.; Tsugawa, R.T.; Souers, P.C.; Poll, J.D.; Hunt, J.L.

    1985-01-01

    An ultraviolet absorption peak has been seen in solid deuterium-tritium and hydrogen-tritium at a sensor temperature of 5 K. The peak occurs at 3.6 eV and is about 1.5 eV wide. It bleaches out when the temperature is raised to about 10 K but reappears upon cooling and is, therefore, radiation induced. At 5 K, the peak forms on a time scale of minutes and appears to represent part-per-million levels of electron-mass defects. The suggested model is that of a trapped electron, where the peak is the ground-state-to-conduction-band transition. A marked isotope effect is seen between D-T and H-T.

  12. Least squares estimation in a simple random coefficient autoregressive model

    DEFF Research Database (Denmark)

    Johansen, S; Lange, T

    2013-01-01

    The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by y_t = s_t·ρ·y_{t-1} + ε_t, t = 1, …, n, where s_t is an i.i.d. binary variable with p… we prove the curious result that [formula]. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of [formula] and [formula] and hence the limit of [formula]...
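
    The model in this record is simple enough to simulate directly. The sketch below uses Bernoulli s_t and heavy-tailed t(2) innovations, an assumed infinite-variance choice, to show how long stretches of s_t = 1 produce the "long swings" the abstract refers to.

```python
# Toy simulation of y_t = s_t * rho * y_{t-1} + eps_t.
import numpy as np

rng = np.random.default_rng(1)
n, rho, p = 5000, 0.99, 0.95
s = rng.binomial(1, p, size=n)          # i.i.d. binary coefficient
eps = rng.standard_t(df=2, size=n)      # infinite-variance innovations (assumed)
y = np.zeros(n)
for t in range(1, n):
    y[t] = s[t] * rho * y[t - 1] + eps[t]

# Long runs of s_t = 1 let the near-unit-root dynamics build up swings;
# each s_t = 0 resets the level.
longest_run = max(map(len, "".join(map(str, s)).split("0")))
print("sample std:", y.std(), "longest run of s=1:", longest_run)
```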

  13. Observation, modeling, and temperature dependence of doubly peaked electric fields in irradiated silicon pixel sensors

    CERN Document Server

    Swartz, M.; Allkofer, Y.; Bortoletto, D.; Cremaldi, L.; Cucciarelli, S.; Dorokhov, A.; Hoermann, C.; Kim, D.; Konecki, M.; Kotlinski, D.; Prokofiev, Kirill; Regenfus, Christian; Rohe, T.; Sanders, D.A.; Son, S.; Speer, T.

    2006-01-01

    We show that doubly peaked electric fields are necessary to describe grazing-angle charge collection measurements of irradiated silicon pixel sensors. A model of irradiated silicon based upon two defect levels with opposite charge states and the trapping of charge carriers can be tuned to produce a good description of the measured charge collection profiles in the fluence range from 0.5x10^{14} Neq/cm^2 to 5.9x10^{14} Neq/cm^2. The model correctly predicts the variation in the profiles as the temperature is changed from -10 °C to -25 °C. The measured charge collection profiles are inconsistent with the linearly-varying electric fields predicted by the usual description based upon a uniform effective doping density. This observation calls into question the practice of using effective doping densities to characterize irradiated silicon.

  14. Bridging Weighted Rules and Graph Random Walks for Statistical Relational Models

    Directory of Open Access Journals (Sweden)

    Seyed Mehran Kazemi

    2018-02-01

    The aim of statistical relational learning is to learn statistical models from relational or graph-structured data. Three main statistical relational learning paradigms include weighted rule learning, random walks on graphs, and tensor factorization. These paradigms have been mostly developed and studied in isolation for many years, with few works attempting to understand the relationship among them or to combine them. In this article, we study the relationship between the path ranking algorithm (PRA), one of the most well-known relational learning methods in the graph random walk paradigm, and relational logistic regression (RLR), one of the recent developments in weighted rule learning. We provide a simple way to normalize relations and prove that relational logistic regression using normalized relations generalizes the path ranking algorithm. This result provides a better understanding of relational learning, especially for the weighted rule learning and graph random walk paradigms. It opens up the possibility of using the more flexible RLR rules within PRA models and even generalizing both by including normalized and unnormalized relations in the same model.

  15. The Ising model on the dynamically triangulated random surface

    International Nuclear Information System (INIS)

    Aleinov, I.D.; Migelal, A.A.; Zmushkow, U.V.

    1990-01-01

    The critical properties of the Ising model on a dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamic limit is performed by means of continued fractions

  16. Smart households: Dispatch strategies and economic analysis of distributed energy storage for residential peak shaving

    International Nuclear Information System (INIS)

    Zheng, Menglian; Meinrenken, Christoph J.; Lackner, Klaus S.

    2015-01-01

    Highlights: • Cost-effectiveness of building-based storage for peak shaving has hitherto not been well understood. • Several existing storage technologies are shown to provide cost-effective peak shaving. • Setting grid demand targets rather than hard demand limits improves economics. • Accounting for seasonal demand variations in storage dispatch strategy improves economics further. • Total-energy-throughput approach is used to determine storage lifetimes. - Abstract: Meeting time-varying peak demand poses a key challenge to the U.S. electricity system. Building-based electricity storage – to enable demand response (DR) without curtailing actual appliance usage – offers potential benefits of lower electricity production cost, higher grid reliability, and more flexibility to integrate renewables. DR tariffs are currently available in the U.S. but building-based storage is still underutilized due to insufficiently understood cost-effectiveness and dispatch strategies. Whether DR schemes can yield a profit for building operators (i.e., reduction in electricity bill that exceeds levelized storage cost) and which particular storage technology yields the highest profit is yet to be answered. This study aims to evaluate the economics of providing peak shaving DR under a realistic tariff (Con Edison, New York), using a range of storage technologies (conventional and advanced batteries, flywheel, magnetic storage, pumped hydro, compressed air, and capacitors). An agent-based stochastic model is used to randomly generate appliance-level demand profiles for an average U.S. household. We first introduce a levelized storage cost model which is based on a total-energy-throughput lifetime. We then develop a storage dispatch strategy which optimizes the storage capacity and the demand limit on the grid. We find that (i) several storage technologies provide profitable DR; (ii) annual profit from such DR can range from 1% to 39% of the household’s non-DR electricity
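
    The total-energy-throughput idea lends itself to a one-function sketch: the levelized storage cost is the capital cost spread over all energy the device can cycle through before wearing out. The specifications below are made up for illustration and are not the paper's inputs.

```python
# Levelized cost of storage under a total-energy-throughput lifetime.
def levelized_storage_cost(capital_usd_per_kwh, cycle_life, dod, efficiency):
    """USD per kWh delivered, assuming lifetime = cycle_life full cycles."""
    throughput_kwh_per_kwh = cycle_life * dod * efficiency
    return capital_usd_per_kwh / throughput_kwh_per_kwh

# E.g. a lead-acid-like battery vs. a flywheel-like device (invented specs):
print(levelized_storage_cost(200.0, 1500, 0.7, 0.85))     # ~0.22 USD/kWh
print(levelized_storage_cost(3000.0, 150000, 0.9, 0.90))  # ~0.025 USD/kWh
```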

  17. Social aggregation in pea aphids: experiment and random walk modeling.

    Directory of Open Access Journals (Sweden)

    Christa Nilsen

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
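
    A toy version of the individual-level model is easy to write down. The sketch below simulates a correlated random walk with stochastic stop/go switching; the transition probabilities and turning-angle concentration are fixed constants here, whereas in the study they depend on nearest-neighbor distance.

```python
# Correlated random walk with a two-state (moving/stationary) process.
import numpy as np

rng = np.random.default_rng(0)
n_steps, speed = 2000, 1.0
p_stop, p_go, kappa = 0.05, 0.10, 4.0   # illustrative constants

pos = np.zeros((n_steps, 2))
heading, moving = 0.0, True
for t in range(1, n_steps):
    if moving:
        heading += rng.vonmises(0.0, kappa)   # small correlated turns
        pos[t] = pos[t - 1] + speed * np.array([np.cos(heading), np.sin(heading)])
        moving = rng.random() > p_stop        # chance of stopping
    else:
        pos[t] = pos[t - 1]
        moving = rng.random() < p_go          # chance of resuming motion

print("net displacement:", np.linalg.norm(pos[-1] - pos[0]))
```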

  18. Investigation of magnetic fluids exhibiting field-induced increasing loss peaks

    International Nuclear Information System (INIS)

    Fannin, P.C.; Marin, C.N.; Couper, C.

    2010-01-01

    A theoretical analysis explaining an increase of the Brownian loss peak with increasing polarizing field, H, in a magnetic fluid is presented. The model is based on the competition between the Brownian and Néel relaxation processes. It is demonstrated that in magnetic fluids with particles having a small anisotropy constant, a small average magnetic diameter, and a narrow particle size distribution, an increase of the Brownian loss peak with the polarizing field can be observed. The theoretical results are compared with the experimental results for an Isopar M-based magnetic fluid with magnetite particles stabilized with oleic acid, and the model explains qualitatively the main characteristics of the experimental results.

  19. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D²) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D² is the percentage that the between-trial variability constitutes of the sum of the between… and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D² ≥ I², for all meta-analyses. CONCLUSION: We conclude that D² seems a better alternative than I² to consider model variation in any random…
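
    A minimal numeric reading of D² is sketched below: it is the relative reduction in the variance of the pooled estimate when moving from a random-effects to a fixed-effect model, and 1/(1 - D²) then inflates the required information size. The trial variances and the between-trial variance τ² are illustrative values, not data from the paper.

```python
import numpy as np

def diversity_D2(se2, tau2):
    """se2: within-trial variances; tau2: between-trial variance estimate."""
    vF = 1.0 / np.sum(1.0 / se2)               # fixed-effect pooled variance
    vR = 1.0 / np.sum(1.0 / (se2 + tau2))      # random-effects pooled variance
    return (vR - vF) / vR

se2 = np.array([0.04, 0.09, 0.06, 0.12])       # illustrative trial variances
D2 = diversity_D2(se2, tau2=0.05)
print(D2, "information-size adjusting factor:", 1.0 / (1.0 - D2))
```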

  20. Spatial peak-load pricing

    International Nuclear Information System (INIS)

    Arellano, M. Soledad; Serra, Pablo

    2007-01-01

    This article extends the traditional electricity peak-load pricing model to include transmission costs. In the context of a two-node, two-technology electric power system, where suppliers face inelastic demand, we show that when the marginal plant is located at the energy-importing center, generators located away from that center should pay the marginal capacity transmission cost; otherwise, consumers should bear this cost through capacity payments. Since electric power transmission is a natural monopoly, marginal-cost pricing does not fully cover costs. We propose distributing the revenue deficit among users in proportion to the surplus they derive from the service priced at marginal cost. (Author)

  1. Determination of the diagnostic x-ray tube practical peak voltage (PPV) from average or average peak voltage measurements

    Energy Technology Data Exchange (ETDEWEB)

    Hourdakis, C J, E-mail: khour@gaec.gr [Ionizing Radiation Calibration Laboratory-Greek Atomic Energy Commission, PO Box 60092, 15310 Agia Paraskevi, Athens, Attiki (Greece)

    2011-04-07

    The practical peak voltage (PPV) has been adopted as the reference measuring quantity for the x-ray tube voltage. However, the majority of commercial kV-meter models measure the average peak, U-bar_P, the average, U-bar, the effective, U_eff, or the maximum peak, U_P, tube voltage. This work proposed a method for determination of the PPV from measurements with a kV-meter that measures the average, U-bar, or the average peak, U-bar_P, voltage. The kV-meter reading can be converted to the PPV by applying appropriate calibration coefficients and conversion factors. The average peak, k_PPV,kVp, and the average, k_PPV,Uav, conversion factors were calculated from virtual voltage waveforms for conventional diagnostic radiology (50-150 kV) and mammography (22-35 kV) tube voltages and for voltage ripples from 0% to 100%. Regression equations and coefficients provide the appropriate conversion factors at any given tube voltage and ripple. The influence of voltage waveform irregularities, like 'spikes' and pulse amplitude variations, on the conversion factors was investigated and discussed. The proposed method and the conversion factors were tested using six commercial kV-meters at several x-ray units. The deviations between the reference and the calculated (according to the proposed method) PPV values were less than 2%. Practical aspects of the voltage ripple measurement were addressed and discussed. The proposed method provides a rigorous base to determine the PPV with kV-meters from U-bar_P and U-bar measurements. Users can benefit, since all kV-meters, irrespective of their measuring quantity, can be used to determine the PPV, complying with the IEC standard requirements.
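
    The conversion itself is a one-line computation once the calibration coefficient and the ripple-dependent conversion factor are known. The numbers in the sketch below are hypothetical; the record above tabulates the real regression coefficients.

```python
# PPV from a kV-meter reading of the average-peak voltage.
def ppv_from_average_peak(reading_kv, n_cal, k_ppv_up):
    """PPV = calibration coefficient * conversion factor * meter reading."""
    return n_cal * k_ppv_up * reading_kv

# Hypothetical numbers: 81.0 kV average-peak reading on a unit with ~5%
# ripple, calibration coefficient 1.002, conversion factor 0.985.
print(ppv_from_average_peak(81.0, 1.002, 0.985))
```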

  2. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  3. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Operational and structural measures to reduce hydro-peaking impact on fish larvae

    International Nuclear Information System (INIS)

    Kopecki, Ianina; Schneider, Matthias

    2016-01-01

    Eco-hydraulic investigations studying the effects of hydro-peaking on river biota are gaining in importance. Negative effects of rapid flow fluctuations due to hydro power production are well documented by many studies, with larvae and juvenile fish identified among the most affected life stages. Therefore, the elaboration of efficient hydro-peaking mitigation strategies is an important issue for energy companies as well as for the water body administrations responsible for the fulfilment of WFD requirements. The present case study strives for practical solutions that minimize or compensate the negative effects of hydro-peaking on the fish fauna of a 7 km long reach of the river Lech (southern Germany). Model-based investigations allow assessment of the impact of the currently authorized discharge regime, suggest operational and structural measures within the reach for reducing the risk of stranding for fish larvae, and select the measures easiest to implement and with the largest ecological benefit. The paper describes the approach for assessing the effects of hydro-peaking based on 2D hydrodynamic modelling, fuzzy-logic-based habitat modelling and information on cutting-edge biological investigations on fish larvae from the Lunz experimental facility (Austria). (authors)

  5. A Fay-Herriot Model with Different Random Effect Variances

    Czech Academy of Sciences Publication Activity Database

    Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.

    2011-01-01

    Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf

  6. Low frequency noise peak near magnon emission energy in magnetic tunnel junctions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Liang; Xiang, Li; Guo, Huiqiang; Wei, Jian, E-mail: weijian6791@pku.edu.cn [International Center for Quantum Materials, School of Physics, Peking University, Beijing 100871, China and Collaborative Innovation Center of Quantum Matter, Beijing (China); Li, D. L.; Yuan, Z. H.; Feng, J. F., E-mail: jiafengfeng@iphy.ac.cn; Han, X. F. [Beijing National Laboratory of Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Coey, J. M. D. [CRANN and School of Physics, Trinity College, Dublin 2 (Ireland)

    2014-12-15

    We report on the low frequency (LF) noise measurements in magnetic tunnel junctions (MTJs) below 4 K and at low bias, where the transport is strongly affected by scattering with magnons emitted by hot tunnelling electrons, as thermal activation of magnons from the environment is suppressed. For both CoFeB/MgO/CoFeB and CoFeB/AlOx/CoFeB MTJs, enhanced LF noise is observed at bias voltage around the magnon emission energy, forming a peak in the bias dependence of the noise power spectral density, independent of magnetic configurations. The noise peak is much higher and broader for the unannealed AlOx-based MTJ, and besides Lorentzian-shape noise spectra in the frequency domain, random telegraph noise (RTN) is visible in the time traces. During repeated measurements the noise peak reduces and the RTN becomes difficult to resolve, suggesting defects being annealed. The Lorentzian-shape noise spectra can be fitted with bias-dependent activation of RTN, with the attempt frequency in the MHz range, consistent with magnon dynamics. These findings suggest magnon-assisted activation of defects as the origin of the enhanced LF noise.

  8. On a Stochastic Failure Model under Random Shocks

    Science.gov (United States)

    Cha, Ji Hwan

    2013-02-01

    In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but after a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function, where applicable.
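
    The delayed-failure mechanism can be illustrated with a short Monte Carlo sketch: shocks arrive from a nonhomogeneous Poisson process (sampled by thinning) and each shock triggers failure only after an independent random delay. The rate function and delay law below are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_failure_time(t_max=20.0, lam=lambda t: 0.2 + 0.05 * t, lam_max=1.2):
    """Failure time = earliest (shock arrival + its random delay)."""
    t, failure = 0.0, np.inf
    while t < t_max:
        t += rng.exponential(1.0 / lam_max)                 # candidate arrival
        if t < t_max and rng.random() < lam(t) / lam_max:   # thinning step
            delay = rng.exponential(2.0)                    # random activation delay
            failure = min(failure, t + delay)
    return failure

times = np.array([simulate_failure_time() for _ in range(20000)])
print("P(system survives past t=10):", np.mean(times > 10.0))
```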

  9. Random Walk Model for Cell-To-Cell Misalignments in Accelerator Structures

    International Nuclear Information System (INIS)

    Stupakov, Gennady

    2000-01-01

    Due to manufacturing and construction errors, cells in accelerator structures can be misaligned relative to each other. As a consequence, the beam generates a transverse wakefield even when it passes through the structure on axis. The most important effect is the long-range transverse wakefield that deflects the bunches and causes growth of the bunch train projected emittance. In this paper, the effect of the cell-to-cell misalignments is evaluated using a random walk model that assumes that each cell is shifted by a random step relative to the previous one. The model is compared with measurements of a few accelerator structures
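
    The model reduces to a cumulative sum of independent steps, so the RMS cell offset grows like the square root of the cell index. A minimal sketch, with an arbitrary illustrative step size:

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells, step_rms_um = 200, 5.0
steps = rng.normal(0.0, step_rms_um, size=(10000, n_cells))  # cell-to-cell shifts
offsets = np.cumsum(steps, axis=1)       # absolute cell offsets from the axis

rms = offsets.std(axis=0)
print(rms[0], rms[-1], step_rms_um * np.sqrt(n_cells))  # ~5, ~70.7, 70.7
```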

  10. Precise determination of the Bragg peak position of proton beams in liquid water

    International Nuclear Information System (INIS)

    Marouane, Abdelhak; Ouaskit, Said; Inchaouh, Jamal

    2011-01-01

    The influence of water molecules on the surrounding biological molecules during irradiation with protons is currently a major subject in radiation science. Proton collisions with water molecules are estimated around the Bragg peak region, taking into account ionization, excitation, charge-changing processes, and energetic secondary electron behavior. The Bragg peak profile and position were determined by adopting a new approach involving discretization, incrementation, and dividing the target into layers, the thickness of each layer being selected randomly from a distribution weighted by the values of the total interaction cross section, from excitation up to ionization of the target and charge exchange of the incident projectile. The calculation was carried out by a Monte-Carlo simulation in the energy range 20 eV ≤ E ≤ 10^8 eV, including relativistic corrections.
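
    The sampling scheme described above can be skeletonized as follows: the distance to the next interaction is exponential with mean free path 1/(n·σ_total), and the process is chosen in proportion to its partial cross section. The cross-section values below are placeholders, not physics data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_water = 3.34e22                  # water molecules per cm^3 (approximate)

def step(energy_ev):
    # Toy partial cross sections in cm^2; a real code would look these up
    # as functions of energy_ev rather than using fixed placeholders.
    sigma = {"ionization": 1.0e-16, "excitation": 4.0e-17, "charge_ex": 1.0e-17}
    sigma_tot = sum(sigma.values())
    distance = rng.exponential(1.0 / (n_water * sigma_tot))   # free path
    probs = np.array(list(sigma.values())) / sigma_tot
    process = rng.choice(list(sigma), p=probs)                # pick interaction
    return distance, process

print(step(1.0e6))   # (path length in cm, sampled interaction type)
```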

  11. Nonlinear and diffraction effects in propagation of N-waves in randomly inhomogeneous moving media.

    Science.gov (United States)

    Averiyanov, Mikhail; Blanc-Benon, Philippe; Cleveland, Robin O; Khokhlova, Vera

    2011-04-01

    Finite amplitude acoustic wave propagation through atmospheric turbulence is modeled using a Khokhlov-Zabolotskaya-Kuznetsov (KZK)-type equation. The equation accounts for the combined effects of nonlinearity, diffraction, absorption, and vectorial inhomogeneities of the medium. A numerical algorithm is developed which uses a shock-capturing scheme to reduce the number of temporal grid points. The inhomogeneous medium is modeled using the random Fourier modes technique. Propagation of N-waves through the medium produces regions of focusing and defocusing that are consistent with geometrical ray theory. However, differences up to ten wavelengths are observed in the locations of first foci. Nonlinear effects are shown to enhance local focusing, increase the maximum peak pressure (up to 60%), and decrease the shock rise time (about 30 times). Although the peak pressure increases and the rise time decreases in focal regions, statistical analysis across the entire wavefront at a distance 120 wavelengths from the source indicates that turbulence: decreases the mean time-of-flight by 15% of a pulse duration, decreases the mean peak pressure by 6%, and increases the mean rise time by almost 100%. The peak pressure and the arrival time are primarily governed by large-scale inhomogeneities, while the rise time is also sensitive to small scales.

  12. IDENTIFICATION OF PEAK HOURS AT A SET OF INTERSECTIONS IN BIELSKO-BIAŁA CITY

    Directory of Open Access Journals (Sweden)

    Marcin KŁOS

    2016-03-01

    Traffic flow in cities is usually examined locally. This method is not effective for through traffic analysis. The paper discusses the problem of determining peak traffic hours taking into account vehicle distributions. Peak hours represent time periods of traffic flow which demand special treatment by traffic control systems. This is particularly important in the case of ITS. High values of traffic flow require relieving actions not only at the junctions but preferably along the transit routes. The north-south transit route in Bielsko-Biała was chosen for analysis. Instead of the usual two distinct peaks, the traffic flow is found to be characterised by five peaks. This pattern is the result of the specific location of the route, which links residential areas, industrial zones and shopping centres besides carrying through traffic. This multi-peak graph more accurately models the traffic flow.

  13. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  14. Peak capacity and peak capacity per unit time in capillary and microchip zone electrophoresis.

    Science.gov (United States)

    Foley, Joe P; Blackney, Donna M; Ennis, Erin J

    2017-11-10

    The origins of the peak capacity concept are described and the important contributions to the development of that concept in chromatography and electrophoresis are reviewed. Whereas numerous quantitative expressions have been reported for one- and two-dimensional separations, most are focused on chromatographic separations and few, if any, quantitative unbiased expressions have been developed for capillary or microchip zone electrophoresis. Making the common assumption that longitudinal diffusion is the predominant source of zone broadening in capillary electrophoresis, analytical expressions for the peak capacity are derived, first in terms of migration time, diffusion coefficient, migration distance, and desired resolution, and then in terms of the remaining underlying fundamental parameters (electric field, electroosmotic and electrophoretic mobilities) that determine the migration time. The latter expressions clearly illustrate the direct square root dependence of peak capacity on electric field and migration distance and the inverse square root dependence on solute diffusion coefficient. Conditions that result in a high peak capacity will result in a low peak capacity per unit time and vice-versa. For a given symmetrical range of relative electrophoretic mobilities for co- and counter-electroosmotic species (cations and anions), the peak capacity increases with the square root of the electric field even as the temporal window narrows considerably, resulting in a significant reduction in analysis time. Over a broad relative electrophoretic mobility interval [-0.9, 0.9], an approximately two-fold greater amount of peak capacity can be generated for counter-electroosmotic species although it takes about five-fold longer to do so, consistent with the well-known bias in migration time and resolving power for co- and counter-electroosmotic species. The optimum lower bound of the relative electrophoretic mobility interval [μ_r,Z, μ_r,A] that provides the maximum
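
    The scaling claims in this abstract follow from diffusion-limited zone broadening. Assuming the temporal peak SD for a solute migrating a distance L in time t is σ_t = √(2Dt)·t/L, integrating 1/(4·Rs·σ_t) across the migration-time window gives the closed form sketched below; the derivation is a plausible reconstruction consistent with the stated square-root dependences, and the numbers are illustrative, not the paper's.

```python
import numpy as np

def cze_peak_capacity(L, D, t_min, t_max, Rs=1.0):
    """n_c = 1 + L/(2*Rs*sqrt(2*D)) * (t_min**-0.5 - t_max**-0.5)."""
    return 1.0 + L / (2.0 * Rs * np.sqrt(2.0 * D)) * (t_min**-0.5 - t_max**-0.5)

L = 0.5                       # migration distance, m
D = 1.0e-9                    # solute diffusion coefficient, m^2/s
t_min, t_max = 120.0, 600.0   # migration-time window, s
print(cze_peak_capacity(L, D, t_min, t_max))
# Doubling the field roughly halves all migration times and raises n_c by ~sqrt(2).
```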

  15. EMPIRICALLY DERIVED INTEGRATED STELLAR YIELDS OF Fe-PEAK ELEMENTS

    International Nuclear Information System (INIS)

    Henry, R. B. C.; Cowan, John J.; Sobeck, Jennifer

    2010-01-01

    We present here the initial results of a new study of massive star yields of Fe-peak elements. We have compiled from the literature a database of carefully determined solar neighborhood stellar abundances of seven iron-peak elements, Ti, V, Cr, Mn, Fe, Co, and Ni, and then plotted [X/Fe] versus [Fe/H] to study the trends as functions of metallicity. Chemical evolution models were then employed to force a fit to the observed trends by adjusting the input massive star metallicity-sensitive yields of Kobayashi et al. Our results suggest that yields of Ti, V, and Co are generally larger as well as anticorrelated with metallicity, in contrast to the Kobayashi et al. predictions. We also find the yields of Cr and Mn to be generally smaller and directly correlated with metallicity compared to the theoretical results. Our results for Ni are consistent with theory, although our model suggests that all Ni yields should be scaled up slightly. The outcome of this exercise is the computation of a set of integrated yields, i.e., stellar yields weighted by a slightly flattened time-independent Salpeter initial mass function and integrated over stellar mass, for each of the above elements at several metallicity points spanned by the broad range of observations. These results are designed to be used as empirical constraints on future iron-peak yield predictions by stellar evolution modelers. Special attention is paid to the interesting behavior of [Cr/Co] with metallicity (these two elements have opposite slopes), as well as the indirect correlation of [Ti/Fe] with [Fe/H]. These particular trends, as well as those exhibited by the inferred integrated yields of all iron-peak elements with metallicity, are discussed in terms of both supernova nucleosynthesis and atomic physics.

  16. Random cyclic constitutive models of 0Cr18Ni10Ti pipe steel

    International Nuclear Information System (INIS)

    Zhao Yongxiang; Yang Bing

    2004-01-01

    An experimental study is performed on the random cyclic constitutive relations of a new pipe stainless steel, 0Cr18Ni10Ti, by an incremental strain-controlled fatigue test. In the test, it is verified that the random cyclic constitutive relations, like the widely recognized random cyclic strain-life relations, are an intrinsic fatigue phenomenon of engineering materials. Extending the previous work by Zhao et al., probability-based constitutive models are constructed, respectively, on the bases of the Ramberg-Osgood equation and its modified form. The scattering regularity and amount of the test data are taken into account. The models consist of the survival probability-strain-life curves, the confidence strain-life curves, and the survival probability-confidence-strain-life curves. The availability and feasibility of the models have been indicated by analysis of the present test data

  17. Peak energy consumption and CO2 emissions in China

    International Nuclear Information System (INIS)

    Yuan, Jiahai; Xu, Yan; Hu, Zheng; Zhao, Changhong; Xiong, Minpeng; Guo, Jingsheng

    2014-01-01

    China is in the processes of rapid industrialization and urbanization. Based on the Kaya identity, this paper proposes an analytical framework for various energy scenarios that explicitly simulates China's economic development, with a prospective consideration of the impacts of urbanization and income distribution. With the framework, China's 2050 energy consumption and associated CO2 reduction scenarios are constructed. Main findings are: (1) energy consumption will peak at 5200–5400 million tons coal equivalent (Mtce) in 2035–2040; (2) CO2 emissions will peak at 9200–9400 million tons (Mt) in 2030–2035, whilst they can potentially be reduced by 200–300 Mt; (3) China's per capita energy consumption and per capita CO2 emissions are projected to peak at 4 tce and 6.8 t respectively in 2020–2030, soon after China steps into the high income group. - Highlights: • A framework for modeling China's energy and CO2 emissions is proposed. • Scenarios are constructed based on various assumptions on the driving forces. • Energy consumption will peak in 2035–2040 at 5200–5400 Mtce. • CO2 emissions will peak in 2030–2035 at about 9300 Mt and be cut by 300 Mt in a cleaner energy path. • Energy consumption and CO2 emissions per capita will peak soon after China steps into the high income group

  18. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    DEFF Research Database (Denmark)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna

    2013-01-01

    machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods...

  19. A random matrix model of relaxation

    International Nuclear Information System (INIS)

    Lebowitz, J L; Pastur, L

    2004-01-01

    We consider a two-level system, S_2, coupled to a general n-level system, S_n, via a random matrix. We derive an integral representation for the mean reduced density matrix ρ(t) of S_2 in the limit n → ∞, and we identify a model of S_n which possesses some of the properties expected for macroscopic thermal reservoirs. In particular, it yields the Gibbs form for ρ(∞). We also consider an analog of the van Hove limit and obtain a master equation (Markov dynamics) for the evolution of ρ(t) on an appropriate time scale

  20. Factorisations for partition functions of random Hermitian matrix models

    International Nuclear Information System (INIS)

    Jackson, D.M.; Visentin, T.I.

    1996-01-01

    The partition function Z_N for Hermitian-complex matrix models can be expressed as an explicit integral over R^N, where N is a positive integer. Such an integral also occurs in connection with random surfaces and models of two-dimensional quantum gravity. We show that Z_N can be expressed as the product of two partition functions, evaluated at translated arguments, for another model, giving an explicit connection between the two models. We also give an alternative computation of the partition function for the φ^4-model. The approach is an algebraic one and holds for the functions regarded as formal power series in the appropriate ring. (orig.)

  1. The fascicular anatomy and peak force capabilities of the sternocleidomastoid muscle.

    Science.gov (United States)

    Kennedy, Ewan; Albert, Michael; Nicholson, Helen

    2017-06-01

    The fascicular morphology of the sternocleidomastoid (SCM) is not well described in modern anatomical texts, and the biomechanical forces it exerts on individual cervical motion segments are not known. The purpose of this study is to investigate the fascicular anatomy and peak force capabilities of the SCM, combining traditional dissection and modern imaging. This study comprises three parts: dissection, magnetic resonance imaging (MRI) and biomechanical modelling. Dissection was performed on six embalmed cadavers: three males of age 73-74 years and three females of age 63-93 years. The fascicular arrangement and morphologic data were recorded. MRIs were performed on six young, healthy volunteers: three males of age 24-37 and three females of age 26-28. In vivo volumes of the SCM were calculated using the Cavalieri method. Modelling of the SCM was performed on five sets of computed tomography (CT) scans. This mapped the fascicular arrangement of the SCM with relation to the cervical motion segments, and used volume data from the MRIs to calculate realistic peak force capabilities. Dissection showed the SCM has four parts: sterno-mastoid, sterno-occipital, cleido-mastoid and cleido-occipital portions. Force modelling shows that the peak torque capacity of the SCM is higher at lower cervical levels, and minimal at higher levels. Peak shear forces are higher in the lower cervical spine, while compression is consistent throughout. The four-part SCM is capable of producing forces that vary across the cervical motion segments. The implications of these findings are discussed with reference to models of neck muscle function and dysfunction.

  2. Randomizing growing networks with a time-respecting null model

    Science.gov (United States)

    Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš

    2018-05-01

    Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
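
    One simplified reading of such a null model is sketched below: edges are grouped by creation time and, within each time slice, the pairing between sources and targets is reshuffled. This keeps every node's per-slice in- and out-degree (and hence its degree evolution) intact while randomizing who links to whom; it is a plausible sketch of the idea, not necessarily the authors' exact algorithm.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(5)

def time_respecting_shuffle(edges):
    """edges: list of (time, source, target); returns a randomized copy."""
    by_time = defaultdict(list)
    for t, s, d in edges:
        by_time[t].append((s, d))
    out = []
    for t, pairs in sorted(by_time.items()):
        sources = [s for s, _ in pairs]
        targets = [d for _, d in pairs]
        rng.shuffle(targets)                 # permute pairing within the slice
        out.extend((t, s, d) for s, d in zip(sources, targets))
    return out

edges = [(1, "a", "b"), (1, "c", "b"), (2, "d", "a"), (2, "e", "c")]
print(time_respecting_shuffle(edges))
```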

  3. Mechanisms of TL for production of the 230 °C peak in natural sodalite

    Energy Technology Data Exchange (ETDEWEB)

    Cano, Nilo F., E-mail: nilocano@dfn.if.usp.b [Institute of Physics, University of Sao Paulo, Rua do Matao, Travessa R, 187, CEP 05508-090, Sao Paulo (Brazil); Professional School of Physics, University of San Agustin of Arequipa, Av. Independencia S/N, Arequipa (Peru); Blak, Ana R. [Institute of Physics, University of Sao Paulo, Rua do Matao, Travessa R, 187, CEP 05508-090, Sao Paulo (Brazil); Ayala-Arenas, Jorge S. [Professional School of Physics, University of San Agustin of Arequipa, Av. Independencia S/N, Arequipa (Peru); Watanabe, Shigueo [Institute of Physics, University of Sao Paulo, Rua do Matao, Travessa R, 187, CEP 05508-090, Sao Paulo (Brazil)

    2011-02-15

    The thermoluminescence (TL) peak in natural sodalite near 230 °C, which appears only after the material is submitted to thermal treatment and gamma irradiation, has been studied in parallel with the electron paramagnetic resonance (EPR) spectrum appearing under the same procedure. This study revealed a full correlation between the 230 °C TL peak and the eleven hyperfine lines of the EPR spectrum. In both cases, the centers disappear at the same temperature and are restored after gamma irradiation. A complete model for the 230 °C TL peak is presented and discussed. In addition to the correlation and TL model, specific characteristics of the TL peaks are described.

  4. Central peaking of magnetized gas discharges

    International Nuclear Information System (INIS)

    Chen, Francis F.; Curreli, Davide

    2013-01-01

    Partially ionized gas discharges used in industry are often driven by radiofrequency (rf) power applied at the periphery of a cylinder. It is found that the plasma density n is usually flat or peaked on axis even if the skin depth of the rf field is thin compared with the chamber radius a. Previous attempts at explaining this did not account for the finite length of the discharge and the boundary conditions at the endplates. A simple 1D model is used to focus on the basic mechanism: the short-circuit effect. It is found that a strong electric field (E-field), scaled to the electron temperature T_e, drives the ions inward. The resulting density profile is peaked on axis and has a shape independent of pressure or discharge radius. This "universal" profile is not affected by a dc magnetic field (B-field) as long as the ion Larmor radius is larger than a.

  5. Peak-by-peak correction of Ge(Li) gamma-ray spectra for photopeaks from background

    Energy Technology Data Exchange (ETDEWEB)

    Cutshall, N H; Larsen, I L [Oak Ridge National Lab., TN (USA)

    1980-12-01

    Background photopeaks can interfere with accurate measurement of low levels of radionuclides by gamma-ray spectrometry. A flowchart for peak-by-peak correction of sample spectra to produce accurate results is presented.

  6. Peaking-factor of PWR

    International Nuclear Information System (INIS)

    Morioka, Noboru; Kato, Yasuji; Yokoi, M.

    1975-01-01

    The output peaking factor often plays an important role in the safety and operation of nuclear reactors. For PWRs the peaking factor has two distinct meanings: the peaking factor realized in the core (FQ-core) and the peaking factor limit derived from accident analysis (FQ-limit). FQ-core is the actual peaking factor realized in the core during normal operation, while FQ-limit is evaluated from loss-of-coolant accidents and other abnormal conditions. If FQ-core is lower than FQ-limit, the reactor may be operated at full load; if FQ-core is larger than FQ-limit, reactor output should be held below FQ-limit. FQ-core takes two kinds of values: one obtained from the nuclear design and one actually measured during reactor operation, referred to here as FQ-core-design and FQ-core-measured, respectively. FQ-core-design is evaluated as follows: the three-dimensional value is synthesized from the horizontal (X-Y) component, calculated with the ASSY-CORE code, and the vertical component, calculated with a one-dimensional diffusion code. FQ-core-measured is evaluated from on-site data obtained from the reactor instrumentation or from off-site data. (Iwase, T.)
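
    The synthesis and the operating test reduce to a comparison of two numbers, as in this toy sketch (all values invented):

```python
def fq_core(fxy, fz):
    """Synthesize the 3-D peaking factor from horizontal (X-Y) and axial parts."""
    return fxy * fz

fq = fq_core(fxy=1.55, fz=1.48)   # FQ-core-design ~ 2.29 (invented numbers)
fq_limit = 2.32                   # hypothetical accident-analysis limit
print("full-load operation allowed:", fq <= fq_limit)
```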

  7. Random regression models for daily feed intake in Danish Duroc pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Mark, Thomas; Jensen, Just

    The objective of this study was to develop random regression models and estimate covariance functions for daily feed intake (DFI) in Danish Duroc pigs. A total of 476201 DFI records were available on 6542 Duroc boars between 70 and 160 days of age. The data originated from the National test station… -year-season, permanent, and animal genetic effects. The functional form was based on Legendre polynomials. A total of 64 models for random regressions were initially ranked by BIC to identify the approximate order for the Legendre polynomials using AI-REML. The parsimonious model included Legendre polynomials of 2nd order for genetic and permanent environmental curves and a heterogeneous residual variance, allowing the daily residual variance to change along the age trajectory due to scale effects. The parameters of the model were estimated in a Bayesian framework, using the RJMC module of the DMU package, where…

  9. Effect of Caloric Restriction or Aerobic Exercise Training on Peak Oxygen Consumption and Quality of Life in Obese Older Patients with Heart Failure and Preserved Ejection Fraction: A Randomized, Controlled Trial

    Science.gov (United States)

    Kitzman, Dalane W.; Brubaker, Peter; Morgan, Timothy; Haykowsky, Mark; Hundley, Gregory; Kraus, William E.; Eggebeen, Joel; Nicklas, Barbara J.

    2016-01-01

    Importance More than 80% of patients with heart failure with preserved ejection fraction (HFPEF), the most common form of HF among older persons, are overweight/obese. Exercise intolerance is the primary symptom of chronic HFPEF and a major determinant of reduced quality-of-life (QOL). Objective To determine whether caloric restriction (Diet) or aerobic exercise training (Exercise) improves exercise capacity and QOL in obese older HFPEF patients. Design Randomized, attention-controlled, 2x2 factorial trial conducted from February 2009 to November 2014. Setting Urban academic medical center. Participants 100 older (67±5 years) obese (BMI=39.3±5.6 kg/m²) women (n=81) and men (n=19) with chronic, stable HFPEF enrolled from 577 patients initially screened (366 excluded by inclusion/exclusion criteria, 31 for other reasons, 80 declined participation). Twenty-six participants were randomized to Exercise alone, 24 to Diet alone, 25 to Diet+Exercise, and 25 to Control; 92 completed the trial. Interventions 20 weeks of Diet and/or Exercise; Attention Control consisted of telephone calls every 2 weeks. Main Outcomes and Measures Exercise capacity measured as peak oxygen consumption (VO2, ml/kg/min; primary outcome) and QOL measured by the Minnesota Living with HF Questionnaire (MLHF) total score (co-primary outcome; score range: 0–105, higher scores indicate worse HF-related QOL). Results By main effects analysis, peak VO2 was increased significantly by both interventions: Exercise main effect 1.2 ml/kg/min (95%CI: 0.7,1.7); Diet main effect 1.3 ml/kg/min (95%CI: 0.8,1.8); the effect of Exercise+Diet was additive (complementary) for peak VO2 (joint effect 2.5 ml/kg/min). The change in MLHF total score was non-significant with Exercise (main effect −1 unit; 95%CI: −8,5; p=0.70) and with Diet (main effect −6 units; 95%CI: −12,1; p=0.078). The change in peak VO2 was positively correlated with the change in percent lean body mass (r=0.32; p=0.003) and the change in thigh muscle

  10. Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising

    Directory of Open Access Journals (Sweden)

    Chien-ming Chou

    2014-01-01

    Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.
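
    The smoothing step can be sketched with PyWavelets: denoise the daily series to get the smooth seasonal mean, then express each day as a percentage perturbation around it. The wavelet choice and threshold rule below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db8", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

rng = np.random.default_rng(4)
days = np.arange(365)
rain = 5 + 4 * np.sin(2 * np.pi * days / 365) + rng.gamma(1.0, 2.0, 365)

smooth = wavelet_denoise(rain)            # smooth seasonal mean
perturb = (rain - smooth) / smooth        # percentage coefficients
# Random regeneration: resample `perturb` and multiply back onto `smooth`.
```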

  11. On the portents of peak oil (and other indicators of resource scarcity)

    International Nuclear Information System (INIS)

    Smith, James L.

    2012-01-01

    Economists have studied various indicators of resource scarcity but largely ignored the phenomenon of “peaking” due to its connection to non-economic (physical) theories of resource exhaustion. I consider peaking from the economic point of view, where economic forces determine the shape of the equilibrium extraction path. Within that framework, I ask whether the timing of peak production reveals anything useful about scarcity. I find peaking to be an ambiguous indicator. If someone announced the peak would arrive earlier than expected, and you believed them, you would not know whether the news was good or bad. However, I also show that the traditional economic indicators of resource scarcity (price, cost, and rent) fare no better, and argue that previous studies have misconstrued the connection between changes in underlying scarcity and movements in these traditional indicators. - Highlights: ► We ask whether “peak oil” provides a useful economic indicator of scarcity. ► Timing of the peak follows Hotelling's model of inter-temporal equilibrium. ► The peak provides an ambiguous signal. ► Unexpectedly early peaking could be good news or bad. ► The traditional indicators (cost, price, and rent) do not fare much better.

  12. When did HIV incidence peak in Harare, Zimbabwe? Back-calculation from mortality statistics.

    Directory of Open Access Journals (Sweden)

    Ben Lopman

    2008-03-01

    HIV prevalence has recently begun to decline in Zimbabwe, a result of both high levels of AIDS mortality and a reduction in incident infections. An important component in understanding the dynamics in HIV prevalence is knowledge of past trends in incidence, such as when incidence peaked and at what level. However, empirical measurements of incidence over an extended time period are not available from Zimbabwe or elsewhere in sub-Saharan Africa. Using mortality data, we use a back-calculation technique to reconstruct historic trends in incidence. From AIDS mortality data, extracted from death registration in Harare, together with an estimate of survival post-infection, HIV incidence trends were reconstructed that would give rise to the observed patterns of AIDS mortality. Models were fitted assuming three parametric forms of the incidence curve and under nine different assumptions regarding combinations of trends in non-AIDS mortality and patterns of survival post-infection with HIV. HIV prevalence was forward-projected from the fitted incidence and mortality curves. Models that constrained the incidence pattern to a cubic spline function were flexible and produced well-fitting, realistic patterns of incidence. In models assuming constant levels of non-AIDS mortality, annual incidence peaked between 4 and 5% between 1988 and 1990. Under other assumptions the peak level ranged from 3 to 8% per annum. However, scenarios assuming increasing levels of non-AIDS mortality resulted in implausibly low estimates of peak prevalence (<11%), whereas models with decreasing underlying crude mortality could be consistent with the prevalence and mortality data. HIV incidence is most likely to have peaked in Harare between 1988 and 1990, which may have preceded the peak elsewhere in Zimbabwe. This finding, considered alongside the timing and location of HIV prevention activities, will give insight into the decline of HIV prevalence in Zimbabwe.
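
    The back-calculation idea is a deconvolution: observed AIDS deaths are the convolution of past incidence with the infection-to-death time distribution. The fully synthetic sketch below recovers a toy incidence curve by nonnegative least squares; it stands in for the paper's parametric and spline-based fits.

```python
import numpy as np
from scipy.optimize import nnls

years = np.arange(1980, 2006)
T = len(years)

# Toy infection-to-death distribution over elapsed years (Weibull-like weights).
lag = np.arange(T)
f = (lag / 10.0) ** 2 * np.exp(-((lag / 10.0) ** 2))
f /= f.sum()

# deaths[t] = sum_s incidence[s] * f[t - s]  (discrete convolution matrix)
A = np.array([[f[t - s] if t >= s else 0.0 for s in range(T)] for t in range(T)])

true_inc = 4000 * np.exp(-0.5 * ((years - 1989) / 4.0) ** 2)   # peak in 1989
deaths = A @ true_inc + np.random.default_rng(2).normal(0, 30, T)

inc_hat, _ = nnls(A, deaths)      # nonnegative back-calculated incidence
print(years[np.argmax(inc_hat)])  # recovered peak year, near the true 1989
```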

  13. Flood frequency analysis for nonstationary annual peak records in an urban drainage basin

    Science.gov (United States)

    Villarini, Gabriele; Smith, James A.; Serinaldi, Francesco; Bales, Jerad; Bates, Paul D.; Krajewski, Witold F.

    2009-08-01

    Flood frequency analysis in urban watersheds is complicated by nonstationarities of annual peak records associated with land use change and evolving urban stormwater infrastructure. In this study, a framework for flood frequency analysis is developed based on the Generalized Additive Models for Location, Scale and Shape parameters (GAMLSS), a tool for modeling time series under nonstationary conditions. GAMLSS is applied to annual maximum peak discharge records for Little Sugar Creek, a highly urbanized watershed which drains the urban core of Charlotte, North Carolina. It is shown that GAMLSS is able to describe the variability in the mean and variance of the annual maximum peak discharge by modeling the parameters of the selected parametric distribution as a smooth function of time via cubic splines. Flood frequency analyses for Little Sugar Creek (at a drainage area of 110 km²) show that the maximum flow with a 0.01 annual probability (corresponding to the 100-year flood peak under stationary conditions) over the 83-year record has ranged from a minimum unit discharge of 2.1 m³ s⁻¹ km⁻² to a maximum of 5.1 m³ s⁻¹ km⁻². An alternative characterization can be made by examining the estimated return interval of the peak discharge that would have an annual exceedance probability of 0.01 under the assumption of stationarity (3.2 m³ s⁻¹ km⁻²). Under nonstationary conditions, alternative definitions of return period should be adopted. Under the GAMLSS model, the return interval of an annual peak discharge of 3.2 m³ s⁻¹ km⁻² ranges from a maximum value of more than 5000 years in 1957 to a minimum value of almost 8 years for the present time (2007). The GAMLSS framework is also used to examine the links between population trends and flood frequency, as well as trends in annual maximum rainfall. These analyses are used to examine evolving flood frequency over future decades.
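
    The core of such an analysis is letting distribution parameters vary smoothly with time and then reading off, for each year, the exceedance probability of a fixed design discharge. A compact sketch under simplifying assumptions (a Gumbel distribution with a linear trend in the location parameter, fitted by maximum likelihood, rather than the full GAMLSS machinery with cubic splines; all values synthetic):

    ```python
    import numpy as np
    from scipy import optimize

    rng = np.random.default_rng(0)
    years = np.arange(83)
    # Synthetic annual-maximum unit discharges with an upward trend.
    peaks = rng.gumbel(loc=2.0 + 0.01 * years, scale=0.6)

    def negloglik(theta):
        b0, b1, log_scale = theta
        mu = b0 + b1 * years          # time-varying location parameter
        beta = np.exp(log_scale)      # scale held constant for simplicity
        z = (peaks - mu) / beta
        return np.sum(np.log(beta) + z + np.exp(-z))

    theta = optimize.minimize(negloglik, x0=(2.0, 0.0, 0.0),
                              method="Nelder-Mead").x
    b0, b1, beta = theta[0], theta[1], np.exp(theta[2])

    # Effective return period, each year, of the discharge that a
    # stationary analysis would call the "100-year" event.
    q100_stationary = np.quantile(peaks, 0.99)
    p_exceed = 1.0 - np.exp(-np.exp(-(q100_stationary - (b0 + b1 * years)) / beta))
    print("return period in year 0: %.0f, in year 82: %.0f"
          % (1 / p_exceed[0], 1 / p_exceed[82]))
    ```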

  14. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...... basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
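
    The moving-average (process-convolution) construction is simple to state: smooth a field of independent noise with a kernel, and the kernel's shape determines the covariance and the roughness of the sample paths. A minimal sketch on a 2D grid, with a hypothetical power kernel k(h) = (1 + ||h||²)^(-p) standing in for the paper's one-parameter family:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(1)
    n = 256
    white = rng.standard_normal((n, n))      # discretized Gaussian basis

    # Power kernel; the exponent p controls smoothness of the field.
    x = np.arange(-32, 33)
    X, Y = np.meshgrid(x, x)
    p = 1.5                                  # illustrative value
    kernel = (1.0 + X**2 + Y**2) ** (-p)
    kernel /= np.sqrt((kernel**2).sum())     # unit-variance normalization

    field = fftconvolve(white, kernel, mode="same")  # moving-average field
    print(field.std())                               # ~1 by construction
    ```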

  15. A novel peak detection approach with chemical noise removal using short-time FFT for prOTOF MS data.

    Science.gov (United States)

    Zhang, Shuqin; Wang, Honghui; Zhou, Xiaobo; Hoehn, Gerard T; DeGraba, Thomas J; Gonzales, Denise A; Suffredini, Anthony F; Ching, Wai-Ki; Ng, Michael K; Wong, Stephen T C

    2009-08-01

    Peak detection is a pivotal first step in biomarker discovery from MS data and can significantly influence the results of downstream data analysis steps. We developed a novel automatic peak detection method for prOTOF MS data, which does not require a priori knowledge of protein masses. Random noise is removed by an undecimated wavelet transform and chemical noise is attenuated by an adaptive short-time discrete Fourier transform. Isotopic peaks corresponding to a single protein are combined by extracting an envelope over them. Depending on the S/N, the desired peaks in each individual spectrum are detected and those with the highest intensity among their peak clusters are recorded. The common peaks among all the spectra are identified by choosing an appropriate cut-off threshold in complete linkage hierarchical clustering. To remove 1 Da shifts, the peak corresponding to a given protein is taken to be the detected peak occurring most frequently within its neighborhood. We validated this method using a data set of serial peptide and protein calibration standards. Compared with the MoverZ program, our new method detects more peaks and significantly enhances the S/N of the peaks after chemical noise removal. We then successfully applied this method to a data set of prOTOF MS spectra of albumin and albumin-bound proteins from serum samples of 59 patients with carotid artery disease compared to vascular disease-free patients to detect peaks with S/N ≥ 2. Our method is easily implemented and is highly effective in defining peaks that will be used for disease classification or to highlight potential biomarkers.
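
    As a rough illustration of the pipeline's front end, denoising followed by S/N-thresholded peak picking, the sketch below uses a stationary (undecimated) wavelet transform from PyWavelets for the random-noise step; the envelope extraction and cluster-based alignment of the full method are not reproduced, and the wavelet, level and thresholds are illustrative choices:

    ```python
    import numpy as np
    import pywt                      # PyWavelets
    from scipy.signal import find_peaks

    rng = np.random.default_rng(2)
    mz = np.linspace(1000, 5000, 1024)       # length divisible by 2**level
    signal = sum(a * np.exp(-0.5 * ((mz - c) / 3.0) ** 2)
                 for a, c in [(50, 1500), (30, 2200), (80, 3100)])
    spectrum = signal + rng.normal(0, 2.0, mz.size)

    # Undecimated (stationary) wavelet denoising of random noise.
    level = 3
    coeffs = pywt.swt(spectrum, "db4", level=level)
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745    # finest-detail noise scale
    thr = sigma * np.sqrt(2.0 * np.log(spectrum.size))   # universal threshold
    coeffs = [(ca, pywt.threshold(cd, thr, mode="soft")) for ca, cd in coeffs]
    denoised = pywt.iswt(coeffs, "db4")

    # Keep peaks with S/N >= 2 relative to the estimated noise level.
    peaks, _ = find_peaks(denoised, height=2.0 * sigma)
    print(mz[peaks])
    ```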

  16. Bayesian hierarchical model for variations in earthquake peak ground acceleration within small-aperture arrays

    KAUST Repository

    Rahpeyma, Sahar

    2018-04-17

    Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km²) of accelerographs in Southwest Iceland during the magnitude-6.3 Ölfus earthquake of May 29, 2008, and its aftershock sequence. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The posterior density of the station-effect variance is concentrated away from zero, indicating that individual station effects differ from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.
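
    Setting the Bayesian machinery aside, the variance decomposition at the heart of such a model can be illustrated with a two-way random-effects toy example; a full analysis would use MCMC as in the paper, and all values below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n_events, n_stations = 40, 12
    # Simulated log-PGA: event effect + station effect + residual noise.
    tau_e, tau_s, sigma = 0.30, 0.15, 0.20   # true SDs (illustrative)
    event = rng.normal(0, tau_e, n_events)
    station = rng.normal(0, tau_s, n_stations)
    y = (event[:, None] + station[None, :]
         + rng.normal(0, sigma, (n_events, n_stations)))

    # Moment (ANOVA-type) estimates of the variance components.
    event_means = y.mean(axis=1)
    station_means = y.mean(axis=0)
    resid = y - event_means[:, None] - station_means[None, :] + y.mean()
    sigma2_hat = (resid.var(ddof=0) * n_events * n_stations
                  / ((n_events - 1) * (n_stations - 1)))
    tau_s2_hat = station_means.var(ddof=1) - sigma2_hat / n_events
    print("station-effect SD estimate:", np.sqrt(max(tau_s2_hat, 0.0)))
    ```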

  18. Random unitary evolution model of quantum Darwinism with pure decoherence

    Science.gov (United States)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow information about an open system S of interest to be stored with maximal efficiency.

  19. Ultra-photo-stable coherent random laser based on liquid waveguide gain channels doped with boehmite nanosheets

    Science.gov (United States)

    Zhang, Hua; Zhang, Hong; Yang, Chao; Dai, Jiangyun; Yin, Jiajia; Xue, Hongyan; Feng, Guoying; Zhou, Shouhuan

    2018-02-01

    The construction of an ultra-photo-stable coherent random laser based on liquid waveguide gain channels doped with boehmite nanosheets is demonstrated. An Al plate uniformly coated with boehmite nanosheets was prepared by an alkali-treatment method and used as the scattering surface for the coherent random laser. Microcavities may form between the boehmite nanosheets owing to the strong optical feedback induced by multiple light scattering. Many sharp peaks are observed in the emission spectra, and their laser thresholds differ, which confirms that the feedback mechanism is coherent. The linewidth of the main peak at 571.74 nm is 0.28 nm, and the threshold of the main peak is about 4.96 mJ/cm². Owing to the fluidity of the liquid waveguide gain medium, the photostability of this coherent random laser is better than that of conventional solid-state dye random lasers. The emission direction is well constrained by the waveguide effect within a certain angular range (±30°). This kind of coherent random laser can be applied in optofluidic lasers and photonic devices.

  20. Automated asteroseismic peak detections

    DEFF Research Database (Denmark)

    de Montellano, Andres Garcia Saravia Ortiz; Hekker, S.; Themessl, N.

    2018-01-01

    Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However......, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible...... of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler....

  1. Price, environment and security: Exploring multi-modal motivation in voluntary residential peak demand response

    International Nuclear Information System (INIS)

    Gyamfi, Samuel; Krumdieck, Susan

    2011-01-01

    Peak demand on electricity grids is a growing problem that increases costs and risks to supply security. Residential sector loads often contribute significantly to seasonal and daily peak demand. Demand response projects aim to manage peak demand by applying price signals and automated load shedding technologies. This research investigates voluntary load shedding in response to information about the security of supply, the emission profile and the cost of meeting critical peak demand in the customers' network. Customer willingness to change behaviour in response to this information was explored through a mail-back survey. The diversified demand modelling method was used along with energy audit data to estimate the potential peak load reduction resulting from the voluntary demand response. A case study was conducted in a suburb of Christchurch, New Zealand, where electricity is the main source for water and space heating. On this network, all water heating cylinders have ripple-control technology and about 50% of the households subscribe to a differential day/night pricing plan. The survey results show that the sensitivity to supply security is on par with price, with the emission sensitivity being slightly weaker. The modelling results show a potential 10% reduction in critical peak load for aggregate voluntary demand response. - Highlights: → A multiple-factor behaviour intervention is necessary for effective residential demand response. → Security signals can achieve results comparable to price. → The modelling results show a potential 10% reduction in critical peak load for aggregate voluntary demand response. → New Zealand's energy policy should include innovation and development of VDR programmes and technologies.

  2. Environmental impacts of public transport. Why peak-period travellers cause a greater environmental burden than off-peak travellers

    International Nuclear Information System (INIS)

    Rietveld, P.

    2002-01-01

    Given the difference between peak and off-peak occupancy rates in public transport, emissions per traveller kilometre are lower in the peak than in the off-peak period, whereas the opposite pattern is observed for cars. It is argued that it is much more fruitful to analyse environmental effects in marginal terms. This calls for a careful analysis of capacity management policies of public transport suppliers that are facing increased demand during both peak and off-peak periods. A detailed analysis of capacity management by the Netherlands Railways (NS) revealed that off-peak capacity supply is mainly dictated by the demand levels during the peak period. The analysis included the effects of increased frequency and increased vehicle size on environmental impacts, while environmental economies of vehicle size were also taken into account. The main conclusion is that the marginal environmental burden during the peak hours is much higher than is usually thought, whereas it is almost zero during the off-peak period. This implies a pattern that is the precise opposite of the average environmental burden. Thus, an analysis of environmental effects of public transport based on average performance would yield misleading conclusions

  3. Adjunctive triamcinolone acetonide for Ahmed glaucoma valve implantation: a randomized clinical trial.

    Science.gov (United States)

    Yazdani, Shahin; Doozandeh, Azadeh; Pakravan, Mohammad; Ownagh, Vahid; Yaseri, Mehdi

    2017-06-26

    To evaluate the effect of intraoperative sub-Tenon injection of triamcinolone acetonide (TA) as an adjunct to Ahmed glaucoma valve (AGV) implantation. In this triple-blind randomized clinical trial, 104 eyes with refractory glaucoma were randomly assigned to conventional AGV (non-TA group) or AGV with adjunctive triamcinolone (TA group). In the TA group, 10 mg TA was injected into the sub-Tenon space around the AGV plate intraoperatively. Patients were followed for 1 year. The main outcome measure was intraocular pressure (IOP). Other outcome measures included best-corrected visual acuity (BCVA), occurrence of hypertensive phase (HP), peak IOP, number of antiglaucoma medications, and complications. A total of 90 patients were included in the final analysis. Mean IOP was lower in the TA group at most follow-up visits; however, the difference was statistically significant only at the first month (p = 0.004). A linear mixed model showed that mean IOP was 1.5 mm Hg lower in the TA group throughout the study period (p = 0.006). Peak postoperative IOP was significantly lower in the TA group (19.3 ± 4.8 mm Hg versus 29 ± 9.2 mm Hg, p = 0.032). Rates of success (defined as IOP between 6 and 21 mm Hg) were comparable between the groups, whereas loss of BCVA of more than 2 lines was more common in the non-TA group (p = 0.032). Adjunctive intraoperative TA injection during AGV implantation can blunt peak IOP levels and reduce mean IOP up to 1 year. Visual outcomes also seem to be superior to those of standard surgery.

  4. PEAK SHIFTS PRODUCED BY CORRELATED RESPONSE TO SELECTION.

    Science.gov (United States)

    Price, Trevor; Turelli, Michael; Slatkin, Montgomery

    1993-02-01

    Traits may evolve both as a consequence of direct selection and also as a correlated response to selection on other traits. While correlated response may be important for both the production of evolutionary novelty and in the build-up of complex characters, its potential role in peak shifts has been neglected empirically and theoretically. We use a quantitative genetic model to investigate the conditions under which a character, Y, which has two alternative optima, can be dragged from one optimum to the other as a correlated response to selection on a second character, X. High genetic correlations between the two characters make the transition, or peak shift, easier, as does weak selection tending to restore Y to the optimum from which it is being dragged. When selection on Y is very weak, the conditions for a peak shift depend only on the location of the new optimum for X and are independent of the strength of selection moving it there. Thus, if the "adaptive valley" for Y is very shallow, little reduction in mean fitness is needed to produce a shift. If the selection acts strongly to keep Y at its current optimum, very intense directional selection on X, associated with a dramatic drop in mean fitness, is required for a peak shift. When strong selection is required, the conditions for peak shifts driven by correlated response might occur rarely, but still with sufficient frequency on a geological timescale to be evolutionarily important. © 1993 The Society for the Study of Evolution.
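
    A toy version of these dynamics can be written down directly with the multivariate breeder's equation, Δz̄ = Gβ, where β is the gradient of log mean fitness: directional selection on X, a two-peaked fitness function on Y, and a genetic correlation that drags Y across the valley. The landscape and G-matrix values below are illustrative, not those of the paper's analysis:

    ```python
    import numpy as np

    def beta(z):
        """Selection gradient (weak-selection approx.: grad log W at the mean)."""
        x, y = z
        bx = (6.0 - x) / 2.0                   # directional selection on X
        # Two-peaked landscape on Y with optima near -1.5 and +1.5.
        wy = np.exp(-(y + 1.5) ** 2) + 0.8 * np.exp(-(y - 1.5) ** 2)
        dwy = (-2 * (y + 1.5) * np.exp(-(y + 1.5) ** 2)
               - 2 * (y - 1.5) * 0.8 * np.exp(-(y - 1.5) ** 2))
        by = 0.3 * dwy / wy                    # 0.3 = weak selection on Y
        return np.array([bx, by])

    G = np.array([[0.5, 0.4],    # additive genetic (co)variance matrix;
                  [0.4, 0.5]])   # genetic correlation 0.8 between X and Y

    z = np.array([0.0, -1.5])    # Y starts at the lower adaptive peak
    for gen in range(300):
        z = z + G @ beta(z)      # multivariate breeder's equation
    print("final trait means:", z)   # Y is dragged across the valley to ~ +1.5
    ```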

  5. Leveraging probabilistic peak detection to estimate baseline drift in complex chromatographic samples.

    Science.gov (United States)

    Lopatka, Martin; Barcaru, Andrei; Sjerps, Marjan J; Vivó-Truyols, Gabriel

    2016-01-29

    Accurate analysis of chromatographic data often requires the removal of baseline drift. A frequently employed strategy strives to determine asymmetric weights in order to fit a baseline model by regression. Unfortunately, chromatograms characterized by a very high peak saturation pose a significant challenge to such algorithms. In addition, a low signal-to-noise ratio (s/n) degrades the reliability of the weighting itself. The approach examined here instead leverages a probabilistic peak detection algorithm. A posterior probability of being affected by a peak is computed for each point in the chromatogram, leading to a set of weights that allow non-iterative calculation of a baseline estimate. For extremely saturated chromatograms, the peak weighted (PW) method demonstrates notable improvement compared to the other methods examined. However, in chromatograms characterized by low noise and well-resolved peaks, the asymmetric least squares (ALS) and the more sophisticated Mixture Model (MM) approaches achieve superior results in significantly less time. We evaluate the performance of these three baseline correction methods over a range of chromatographic conditions to demonstrate the cases in which each method is most appropriate. Copyright © 2016 Elsevier B.V. All rights reserved.
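
    For reference, the ALS baseline mentioned above can be written in a few lines: iteratively reweighted penalized regression where points above the current baseline (likely peaks) get a small weight p and points below get weight 1 - p. A standard Eilers-style sketch, with the smoothness λ and asymmetry p as tuning assumptions:

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def als_baseline(y, lam=1e6, p=0.01, n_iter=10):
        """Asymmetric least squares baseline (Eilers & Boelens style)."""
        n = y.size
        # Second-difference penalty matrix.
        D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))
        penalty = lam * (D @ D.T)
        w = np.ones(n)
        for _ in range(n_iter):
            W = sparse.diags(w)
            z = spsolve((W + penalty).tocsc(), w * y)
            w = np.where(y > z, p, 1.0 - p)   # points above baseline ~ peaks
        return z

    # Example: drifting baseline plus one chromatographic peak.
    x = np.linspace(0, 100, 2000)
    chrom = 20 + 5 * np.sin(x / 30) + 40 * np.exp(-0.5 * ((x - 50) / 1.0) ** 2)
    corrected = chrom - als_baseline(chrom)
    ```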

  6. Can double-peaked lines indicate merging effects in AGNs?

    Directory of Open Access Journals (Sweden)

    Popović L.Č.

    2000-01-01

    Full Text Available The influence of merging effects in the central part of an Active Galactic Nucleus (AGN) on the emission spectral line shapes is discussed. We present a model of a close binary Broad Line Region. The numerical experiments show that merging effects can explain double-peaked lines. The merging effects may also be present in the centers of AGNs that emit slightly asymmetric as well as symmetric and relatively stable (in profile shape) spectral lines. Depending on the black hole masses and their orbital elements, such a model may explain some of the line profile shapes observed in AGNs. This work shows that if one is looking for merging effects in the central region as well as in the wide-field structure of AGNs, one should first pay attention to objects which have double-peaked lines.

  7. Detecting Mountain Peaks and Delineating Their Shapes Using Digital Elevation Models, Remote Sensing and Geographic Information Systems Using Autometric Methodological Procedures

    Directory of Open Access Journals (Sweden)

    Tomaž Podobnikar

    2012-03-01

    Full Text Available The detection of peaks (summits) as the upper parts of mountains and the delineation of their shapes is commonly confirmed by inspections carried out by mountaineers. In this study the complex task of peak detection and shape delineation is solved by autometric methodological procedures, more precisely, by developing relatively simple but innovative image-processing and spatial-analysis techniques (e.g., developing inventive variables using an annular moving window) in the remote sensing and GIS domains. The techniques have been integrated into automated morphometric methodological procedures. The concepts of peaks and their shapes (sharp, blunt, oblong, circular and conical) were parameterized based on topographic and morphologic criteria. A geomorphologically high-quality DEM was used as the fundamental dataset. The results, detected peaks with delineated shapes, have been integratively enriched with numerous independent datasets (e.g., with triangulated spot heights) and information (e.g., etymological information), and mountaineering criteria have been implemented to improve the judgments. This holistic approach has proved the applicability of both highly standardized and universal parameters for the geomorphologically diverse Kamnik Alps case study area. Possible applications of this research are numerous, e.g., comprehensive quality control of DEMs or significantly improved models for spatial planning purposes.
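
    The annular-moving-window idea can be sketched compactly: a cell is a candidate peak if it stands higher, by some relative-height threshold, than every cell in a surrounding ring, which separates genuinely protruding summits from points on ridges. The window radii and threshold below are illustrative, not the paper's calibrated parameters:

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def annulus_footprint(r_in, r_out):
        """Boolean ring mask: cells with r_in < distance <= r_out of center."""
        yy, xx = np.mgrid[-r_out:r_out + 1, -r_out:r_out + 1]
        d = np.hypot(yy, xx)
        return (d > r_in) & (d <= r_out)

    def detect_peaks(dem, r_in=5, r_out=15, min_drop=50.0):
        """Cells higher than everything in the annulus by min_drop metres."""
        ring_max = maximum_filter(dem, footprint=annulus_footprint(r_in, r_out))
        return np.argwhere(dem - ring_max >= min_drop)

    # Toy DEM: noisy plain with one Gaussian-shaped summit.
    rng = np.random.default_rng(3)
    dem = rng.normal(1000, 5, (200, 200))
    yy, xx = np.mgrid[0:20, 0:20]
    dem[60:80, 60:80] += 300.0 * np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 40.0)
    print(detect_peaks(dem))     # -> the summit cell near (70, 70)
    ```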

  8. (Non-) Gibbsianness and Phase Transitions in Random Lattice Spin Models

    NARCIS (Netherlands)

    Külske, C.

    1999-01-01

    We consider disordered lattice spin models with finite-volume Gibbs measures µΛ[η](dσ). Here σ denotes a lattice spin variable and η a lattice random variable with product distribution P describing the quenched disorder of the model. We ask: when will the joint measures limΛ↑Zd P(dη)µΛ[η](dσ) be

  9. Projection Effects of Large-scale Structures on Weak-lensing Peak Abundances

    Science.gov (United States)

    Yuan, Shuo; Liu, Xiangkun; Pan, Chuzhong; Wang, Qiao; Fan, Zuhui

    2018-04-01

    High peaks in weak lensing (WL) maps originate predominantly from the lensing effects of single massive halos. Their abundance is therefore closely related to the halo mass function and is thus a powerful cosmological probe. However, besides individual massive halos, large-scale structures (LSS) along lines of sight also contribute to the peak signals. In this paper, with ray-tracing simulations, we investigate the LSS projection effects. We show that for current surveys with a large shape noise, the stochastic LSS effects are subdominant. For future WL surveys with source galaxies having a median redshift z_med ∼ 1 or higher, however, they are significant. For the cosmological constraints derived from observed WL high-peak counts, severe biases can occur if the LSS effects are not taken into account properly. We extend the model of Fan et al. by incorporating the LSS projection effects into the theoretical considerations. By comparing with simulation results, we demonstrate the good performance of the improved model and its applicability in cosmological studies.

  10. Random regret-based discrete-choice modelling: an application to healthcare.

    Science.gov (United States)

    de Bekker-Grob, Esther W; Chorus, Caspar G

    2013-07-01

    A new modelling approach for analysing data from discrete-choice experiments (DCEs) has recently been developed in transport economics based on the notion of regret minimization-driven choice behaviour. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects, while being as parsimonious and formally tractable as the RUM approach. Our objectives were to introduce the RRM modelling approach to healthcare-related decisions, and to investigate its usefulness in this domain. Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model outperformed the RUM model (p < 0.05), and the choice probabilities implied by the two models can vary substantially. Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential of offering new and policy-relevant insights for health researchers and policy makers.
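
    The behavioural difference between the two paradigms is easiest to see in the choice-probability formulas: RUM scores each alternative by its own attributes, while RRM penalizes an alternative for every attribute on which some competitor beats it, via the regret term ln(1 + exp(β(x_competitor - x_own))). A small numeric sketch with invented attributes and coefficients:

    ```python
    import numpy as np

    # Three hypothetical alternatives, two attributes (values invented).
    X = np.array([[0.9, 0.1],
                  [0.5, 0.5],     # "compromise" alternative
                  [0.1, 0.9]])
    beta = np.array([1.5, 1.0])   # taste coefficients (assumed)

    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()

    # RUM: utility is a linear index of an alternative's own attributes.
    p_rum = softmax(X @ beta)

    # RRM: regret of i sums ln(1 + exp(beta_m * (x_jm - x_im))) over rivals j.
    n = X.shape[0]
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                regret[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    p_rrm = softmax(-regret)

    print("RUM:", p_rum.round(3), "RRM:", p_rrm.round(3))
    # RRM gives the compromise alternative a larger share than RUM does.
    ```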

  11. Forecasting monthly peak demand of electricity in India—A critique

    International Nuclear Information System (INIS)

    Rallapalli, Srinivasa Rao; Ghosh, Sajal

    2012-01-01

    The nature of electricity differs from that of other commodities since electricity is a non-storable good and there are significant seasonal and diurnal variations of demand. Under such conditions, precise forecasting of demand for electricity should be an integral part of the planning process, as this enables policy makers to provide directions on cost-effective investment and on scheduling the operation of existing and new power plants so that the supply of electricity is adequate to meet future demand and its variations. Official load forecasting in India, carried out by the Central Electricity Authority (CEA), is often criticized for being overestimated due to the inferior techniques used for forecasting. This paper evaluates the monthly peak demand forecasting performance of the CEA, which uses the trend method, and compares it with forecasts from a Multiplicative Seasonal Autoregressive Integrated Moving Average (MSARIMA) model. It has been found that the MSARIMA model outperforms CEA forecasts over both in-sample static and out-of-sample dynamic forecast horizons in all five regional grids in India. For better load management and grid discipline, this study suggests employing sophisticated techniques like MSARIMA for peak load forecasting in India. - Highlights: ► This paper evaluates the monthly peak demand forecasting performance of the CEA. ► It compares CEA forecasts with those predicted by an MSARIMA model. ► The MSARIMA model outperforms CEA forecasts in all five regional grids in India. ► Opportunity exists to improve the performance of CEA forecasts.
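
    A multiplicative seasonal ARIMA of the kind used in this comparison can be fitted in a few lines with standard tooling; the (1,1,1)×(1,1,1)₁₂ order below is a placeholder, not the specification selected in the paper, and the data are synthetic:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(4)
    # Synthetic monthly peak demand (MW) with trend and annual seasonality.
    idx = pd.date_range("2000-01", periods=144, freq="MS")
    t = np.arange(144)
    y = pd.Series(3000 + 8 * t + 300 * np.sin(2 * np.pi * t / 12)
                  + rng.normal(0, 60, 144), index=idx)

    model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
    fit = model.fit(disp=False)
    forecast = fit.forecast(steps=12)   # out-of-sample dynamic forecast, 1 year
    print(forecast.round(0))
    ```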

  12. Drivers of peak sales for pharmaceutical brands

    NARCIS (Netherlands)

    Fischer, Marc; Leeflang, Peter S. H.; Verhoef, Peter C.

    2010-01-01

    Peak sales are an important metric in the pharmaceutical industry. Specifically, managers are focused on the height of peak sales and the time required to achieve peak sales. We analyze how order of entry and quality affect the level of peak sales and the time-to-peak-sales of pharmaceutical brands.

  13. Peak effect and vortex dynamics in superconducting MgB2 single crystals

    International Nuclear Information System (INIS)

    Lee, Hyun-Sook; Jang, Dong-Jin; Kim, Heon-Jung; Kang, Byeongwon; Lee, Sung-Ik

    2007-01-01

    The dynamic nature of the vortex state of MgB2 single crystals near the peak effect (PE) region, which differs both from that of conventional low-temperature superconductors and from that of high-temperature cuprate superconductors, is introduced in this article. Relaxation from a disordered, metastable field-cooled (FC) state to an ordered, stable zero-field-cooled (ZFC) state of the MgB2 single crystals under an applied magnetic field and current is investigated. From an analysis of the noise properties in the ZFC state, a dynamic vortex phase diagram of MgB2 is obtained near the PE region. Between the onset and the peak region in the critical current vs. magnetic field diagram, crossovers from a high-noise state to a noise-free state are observed with increasing current. Above the peak, however, the opposite phenomenon, a crossover from a noise-free to a high-noise state, is observed, which has not been seen in any other superconductor. The hysteresis in the I-V curves and the two-level random telegraph noise in the time evolution of the voltage response under a constant applied current in the ZFC state are also studied in detail

  14. Model of Random Polygon Particles for Concrete and Mesh Automatic Subdivision

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In order to study the constitutive behavior of concrete at the mesoscopic level, a new method is proposed in this paper. This method uses random polygon particles to simulate the full-grading broken aggregates of concrete. Based on computational geometry, we carry out the automatic generation of a triangular finite element mesh for the random polygon particle model of concrete. The finite element mesh generated in this paper is also applicable to many other numerical methods.
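
    A minimal sketch of the two ingredients: generating a random "aggregate" polygon (random radii at sorted random angles) and subdividing it into triangles by fanning from the centroid. This is a toy stand-in for the paper's full-grading generation and automatic meshing procedure:

    ```python
    import numpy as np

    def random_polygon(n_vertices=8, r_min=0.5, r_max=1.0, rng=None):
        """Star-shaped random polygon: random radii at sorted random angles."""
        rng = rng or np.random.default_rng()
        angles = np.sort(rng.uniform(0, 2 * np.pi, n_vertices))
        radii = rng.uniform(r_min, r_max, n_vertices)
        return np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

    def fan_triangulate(poly):
        """Subdivide a star-shaped polygon into triangles around its centroid."""
        c = poly.mean(axis=0)
        n = len(poly)
        return [np.array([c, poly[i], poly[(i + 1) % n]]) for i in range(n)]

    poly = random_polygon(rng=np.random.default_rng(5))
    mesh = fan_triangulate(poly)
    print(len(mesh), "triangles")
    ```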

  15. Covariance of random stock prices in the Stochastic Dividend Discount Model

    OpenAIRE

    Agosto, Arianna; Mainini, Alessandra; Moretto, Enrico

    2016-01-01

    Dividend discount models have been developed in a deterministic setting. Some authors (Hurley and Johnson, 1994 and 1998; Yao, 1997) have introduced randomness in terms of stochastic growth rates, delivering closed-form expressions for the expected value of stock prices. This paper extends such previous results by determining a formula for the covariance between random stock prices when the dividends' rates of growth are correlated. The formula is eventually applied to real market data.
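
    The flavour of the result can be reproduced by Monte Carlo: simulate correlated dividend growth paths for two stocks, compute each stochastic present value, and estimate the covariance of the resulting prices. The discount rate, growth moments and correlation below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    r, T, n_sim = 0.08, 100, 20_000      # discount rate, horizon, replications
    mean_g, sd_g, rho = 0.03, 0.02, 0.6  # growth moments and correlation

    cov_g = sd_g**2 * np.array([[1.0, rho], [rho, 1.0]])
    g = rng.multivariate_normal([mean_g, mean_g], cov_g, size=(n_sim, T))

    # Stochastic present value: P = sum_t D0 * prod_{s<=t}(1+g_s) / (1+r)^t
    growth = np.cumprod(1.0 + g, axis=1)                 # shape (n_sim, T, 2)
    discount = (1.0 + r) ** -np.arange(1, T + 1)
    prices = (growth * discount[None, :, None]).sum(axis=1)   # D0 = 1

    print("cov:", np.cov(prices.T)[0, 1], "corr:", np.corrcoef(prices.T)[0, 1])
    ```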

  16. Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.

    Science.gov (United States)

    Duffull, Stephen B; Hooker, Andrew C

    2017-12-01

    Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.

  18. Discovery and characterization of the first low-peaked and intermediate-peaked BL Lacertae objects in the very high energy γ-ray regime

    International Nuclear Information System (INIS)

    Berger, Karsten

    2009-01-01

    Twenty years after the discovery of the Crab Nebula as a source of very high energy γ-rays, the number of sources newly discovered above 100 GeV using ground-based Cherenkov telescopes has grown considerably, at the time of writing of this thesis to a total of 81. The sources are of different types, including galactic sources such as supernova remnants, pulsars, binary systems, or so-far unidentified accelerators, and extragalactic sources such as blazars and radio galaxies. The goal of this thesis work was to search for γ-ray emission from a particular type of blazar previously undetected at very high γ-ray energies, using the MAGIC telescope. Those blazars previously detected were all of the same type, the so-called high-peaked BL Lacertae objects. These sources emit purely non-thermal emission and exhibit a peak in their radio-to-X-ray spectral energy distribution at X-ray energies. The entire blazar population extends from these rare, low-luminosity BL Lacertae objects with peaks at X-ray energies to the much more numerous, high-luminosity infrared-peaked radio quasars. Indeed, the low-peaked sources dominate the source counts obtained from space-borne observations at γ-ray energies up to 10 GeV. Their spectra observed at lower γ-ray energies show power-law extensions to higher energies, although theoretical models suggest them to turn over at energies below 100 GeV. This opened the quest for MAGIC as the Cherenkov telescope with the currently lowest energy threshold. In the framework of this thesis, the search was focused on the prominent sources BL Lac, W Comae and S5 0716+714. Two of the sources were unambiguously discovered at very high energy γ-rays with the MAGIC telescope, based on the analysis of a total of about 150 hours' worth of data collected between 2005 and 2008. The analysis of this very large data set required novel techniques for treating the effects of twilight conditions on the data quality. This was successfully achieved

  19. Analysis of Peak-to-Peak Current Ripple Amplitude in Seven-Phase PWM Voltage Source Inverters

    Directory of Open Access Journals (Sweden)

    Gabriele Grandi

    2013-08-01

    Full Text Available Multiphase systems are nowadays considered for various industrial applications. Numerous pulse width modulation (PWM) schemes for multiphase voltage source inverters with sinusoidal outputs have been developed, but no detailed analysis of the impact of these modulation schemes on the output peak-to-peak current ripple amplitude has been reported. Determination of current ripple in multiphase PWM voltage source inverters is important for both design and control purposes. This paper gives a complete analysis of the peak-to-peak current ripple distribution over a fundamental period for multiphase inverters, with particular reference to seven-phase VSIs. In particular, the peak-to-peak current ripple amplitude is analytically determined as a function of the modulation index, and a simplified expression for its maximum value is derived. Although reference is made to centered symmetrical PWM, the simplest and most effective solution for maximizing DC bus utilization and a nearly optimal modulation for minimizing the RMS of the current ripple, the analysis can be readily extended to discontinuous or asymmetrical modulations, both carrier-based and space vector PWM. A similar approach can usefully be applied to any phase number. The analytical developments for all the different sub-cases are verified by numerical simulations.

  20. Applicability of special quasi-random structure models in thermodynamic calculations using semi-empirical Debye–Grüneisen theory

    International Nuclear Information System (INIS)

    Kim, Jiwoong

    2015-01-01

    In theoretical calculations, expressing the random distribution of atoms in a certain crystal structure is still challenging. The special quasi-random structure (SQS) model is effective for depicting such random distributions. The SQS model has not previously been applied to semi-empirical thermodynamic calculations; here, Debye–Grüneisen theory (DGT), a semi-empirical method, is used for that purpose. Model reliability was assessed by comparing supercell models of various sizes. The results for chemical bonds, pair correlation, and elastic properties demonstrated the reliability of the SQS models. Thermodynamic calculations using density functional perturbation theory (DFPT) and DGT assessed the applicability of the SQS models. DGT and DFPT led to similar variations of the mixing and formation energies. This study provides guidelines for theoretical assessments to obtain reliable SQS models and to calculate the thermodynamic properties of numerous materials with a random atomic distribution. - Highlights: • Various material properties are used to examine the reliability of special quasi-random structures. • SQS models are applied to thermodynamic calculations by semi-empirical methods. • Basic calculation guidelines for materials with random atomic distribution are given.

  1. Two-dimensional hydrodynamic modeling to quantify effects of peak-flow management on channel morphology and salmon-spawning habitat in the Cedar River, Washington

    Science.gov (United States)

    Czuba, Christiana; Czuba, Jonathan A.; Gendaszek, Andrew S.; Magirl, Christopher S.

    2010-01-01

    The Cedar River in Washington State originates on the western slope of the Cascade Range and provides the City of Seattle with most of its drinking water, while also supporting a productive salmon habitat. Water-resource managers require detailed information on how best to manage high-flow releases from Chester Morse Lake, a large reservoir on the Cedar River, during periods of heavy precipitation to minimize flooding, while mitigating negative effects on fish populations. Instream flow-management practices include provisions for adaptive management to promote and maintain healthy aquatic habitat in the river system. The current study is designed to understand the linkages between peak flow characteristics, geomorphic processes, riverine habitat, and biological responses. Specifically, two-dimensional hydrodynamic modeling is used to simulate and quantify the effects of the peak-flow magnitude, duration, and frequency on the channel morphology and salmon-spawning habitat. Two study reaches, representative of the typical geomorphic and ecologic characteristics of the Cedar River, were selected for the modeling. Detailed bathymetric data, collected with a real-time kinematic global positioning system and an acoustic Doppler current profiler, were combined with a LiDAR-derived digital elevation model in the overbank area to develop a computational mesh. The model is used to simulate water velocity, benthic shear stress, flood inundation, and morphologic changes in the gravel-bedded river under the current and alternative flood-release strategies. Simulations of morphologic change and salmon-redd scour by floods of differing magnitude and duration enable water-resource managers to incorporate model simulation results into adaptive management of peak flows in the Cedar River. PDF version of a presentation on hydrodynamic modelling in the Cedar River in Washington state. Presented at the American Geophysical Union Fall Meeting 2010.

  2. Medium Term Economic Effects of Peak Oil Today

    OpenAIRE

    Dr. Ulrike Lehr; Dr. Christian Lutz; Kirsten Wiebe

    2011-01-01

    The paper at hand presents results of a model-based scenario analysis of the economic implications in the next decade of an oil peak today and significantly decreasing oil production in the coming years. To that end, the extraction paths of oil and other fossil fuels given in LBST (2010) are implemented in the global macroeconomic model GINFORS. Additionally, the scenarios incorporate different technological potentials for energy efficiency and renewable energy, which cannot be forecast using eco...

  3. P2: A random effects model with covariates for directed graphs

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.

    A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.

  4. A binomial random sum of present value models in investment analysis

    OpenAIRE

    Βουδούρη, Αγγελική; Ντζιαχρήστος, Ευάγγελος

    1997-01-01

    Stochastic present value models have been widely adopted in financial theory and practice and play a very important role in capital budgeting and profit planning. The purpose of this paper is to introduce a binomial random sum of stochastic present value models and offer an application in investment analysis.

  5. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon; Liang, Faming; Yuan, Ying

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the existence of intractable normalizing constants. In this paper, we
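
    The exchange-algorithm idea, of which the adaptive exchange sampler is a refinement, cancels the intractable normalizing constant by drawing an auxiliary network at the proposed parameter value. A bare-bones sketch for a tiny ERGM with edge and triangle statistics, using a short Gibbs run as an approximate auxiliary draw; this is the plain exchange sampler under simplifying assumptions, not the adaptive variant of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 10                         # tiny network so the inner sampler is cheap

    def stats_vec(A):
        """Sufficient statistics: number of edges and of triangles."""
        return np.array([A.sum() / 2, np.trace(A @ A @ A) / 6])

    def gibbs_network(theta, A0, sweeps=20):
        """Approximate ERGM draw at theta via single-edge Gibbs updates."""
        A = A0.copy()
        for _ in range(sweeps):
            for i in range(n):
                for j in range(i + 1, n):
                    cur = theta @ stats_vec(A)
                    A[i, j] = A[j, i] = 1 - A[i, j]      # toggle edge (i, j)
                    alt = theta @ stats_vec(A)
                    if rng.random() >= 1.0 / (1.0 + np.exp(cur - alt)):
                        A[i, j] = A[j, i] = 1 - A[i, j]  # reject: toggle back
        return A

    # "Observed" network simulated at known parameters.
    theta_true = np.array([-1.0, 0.2])
    A_obs = gibbs_network(theta_true, np.zeros((n, n), dtype=int), sweeps=100)
    s_obs = stats_vec(A_obs)

    theta = np.zeros(2)
    for _ in range(200):                                 # exchange sampler
        prop = theta + rng.normal(0.0, 0.1, 2)           # symmetric random walk
        s_aux = stats_vec(gibbs_network(prop, A_obs))    # auxiliary draw at prop
        if np.log(rng.random()) < (prop - theta) @ (s_obs - s_aux):
            theta = prop             # normalizing constants cancel exactly
    print("final parameter state:", theta)
    ```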

  6. How to use your peak flow meter

    Science.gov (United States)

    Alternate names: Peak flow meter - how to use; Asthma - peak flow meter; Reactive airway disease - peak flow meter; Bronchial asthma - peak flow meter. References: … 2014: chap 55; National Asthma Education and Prevention Program website, How to use a peak flow meter.

  7. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    Science.gov (United States)

    Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.

    2018-01-01

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development

  11. Forecasting Strategies for Predicting Peak Electric Load Days

    Science.gov (United States)

    Saxena, Harshit

    Academic institutions spend thousands of dollars every month on their electric power consumption. Some of these institutions follow a demand-charge pricing structure; here the amount a customer pays to the utility is decided based on the total energy consumed during the month, with an additional charge based on the highest average power load required by the customer over a moving window of time as decided by the utility. Therefore, it is crucial for these institutions to minimize the periods in which a high electric load is demanded over a short duration of time. In order to reduce the peak loads and have more uniform energy consumption, it is imperative to predict when these peaks occur, so that appropriate mitigation strategies can be developed. The research work presented in this thesis was conducted for Rochester Institute of Technology (RIT), where the demand charges are decided based on a 15-minute sliding window spanned over the entire month. This case study makes use of different statistical and machine learning algorithms to develop a forecasting strategy for predicting the peak electric load days of the month. The proposed strategy was tested over a whole year, from May 2015 to April 2016, during which a total of 57 peak days were observed. The model predicted a total of 74 peak days during this period, of which 40 were true positives, recovering 40 of the 57 observed peak days (about 70 percent). The results obtained with the proposed forecasting strategy are promising and demonstrate an annual savings potential of about $80,000 for a single submeter of RIT.
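
    The demand-charge quantity itself is easy to compute from interval meter data: the maximum 15-minute rolling average of load over the month. A toy computation with pandas, on synthetic one-minute readings (all names and values invented):

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(8)
    idx = pd.date_range("2016-04-01", "2016-04-30 23:59", freq="min")
    t = np.arange(len(idx))
    load = pd.Series(900 + 200 * np.sin(2 * np.pi * (t % 1440) / 1440)  # daily cycle
                     + rng.normal(0, 40, len(idx)), index=idx)

    # Demand charge basis: highest 15-minute moving-average load in the month.
    rolling_15min = load.rolling("15min").mean()
    print(f"billing peak: {rolling_15min.max():.0f} kW at {rolling_15min.idxmax()}")

    # Days ranked by their peak 15-minute demand: the "peak days" to predict.
    daily_peaks = rolling_15min.resample("D").max().sort_values(ascending=False)
    print(daily_peaks.head())
    ```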

  12. Normal-Gamma-Bernoulli Peak Detection for Analysis of Comprehensive Two-Dimensional Gas Chromatography Mass Spectrometry Data.

    Science.gov (United States)

    Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang

    2017-01-01

    Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much greater separation power for the analysis of complex samples and is thus increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide application of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized using a gamma distribution, and a new peak detection algorithm based on the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches are introduced: the fast Fourier transform (FFT) and the first-order and second-order delta methods (D1 and D2). Applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs best in terms of both computational expense and peak detection performance.
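
Because the normal-gamma convolution has no closed form, its density must be built numerically. A sketch of the idea using FFT-based convolution of the discretized densities (the paper's exact formulation is not reproduced; all parameters below are illustrative):

```python
# Sketch: the marginal density of (normal noise + gamma-shaped peak) obtained
# by FFT-based numerical convolution; parameters are illustrative only.
import numpy as np
from scipy import stats
from scipy.signal import fftconvolve

dx = 0.01
x = np.arange(-10, 50, dx)
noise = stats.norm.pdf(x, loc=0, scale=1.0)      # N(0, 1) electronic noise
signal = stats.gamma.pdf(x, a=3.0, scale=2.0)    # gamma-shaped peak response

# density of the sum N + G: discrete convolution of the two densities
ngb = fftconvolve(noise, signal) * dx
x_sum = 2 * x[0] + np.arange(ngb.size) * dx      # support of the convolution
print("area under the convolved density:", np.trapz(ngb, dx=dx))  # ~1.0
```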

  13. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    International Nuclear Information System (INIS)

    Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z

    2015-01-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
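
As a heavily simplified one-dimensional illustration of the abstract's central point (not the authors' model): draw per-patient random-error SDs from an inverse gamma instead of assuming one common SD, and compare the resulting population margin with a constant-SD recipe. All distribution parameters below are invented.

```python
# A heavily simplified 1D Monte Carlo sketch (not the authors' model):
# per-patient random-error SD drawn from an inverse gamma, margin chosen so
# ~90% of simulated patients keep 95% of their fractions inside it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 1000
# hypothetical inverse-gamma hyperparameters (mean SD = 6/(4-1) = 2 mm)
sigma = stats.invgamma.rvs(a=4.0, scale=6.0, size=n_patients, random_state=rng)

# margin containing 95% of a given patient's daily displacements (|x| <= m)
per_patient_margin = stats.norm.ppf(0.975) * sigma

margin_ig = np.percentile(per_patient_margin, 90)      # cover 90% of patients
margin_const = stats.norm.ppf(0.975) * sigma.mean()    # constant-sigma recipe
print(f"IG-based margin {margin_ig:.1f} mm vs constant {margin_const:.1f} mm")
```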

  14. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  15. Universal parametric correlations of conductance peaks in quantum dots

    International Nuclear Information System (INIS)

    Alhassid, Y.; Attias, H.

    1996-01-01

    We compute the parametric correlation function of the conductance peaks in chaotic and weakly disordered quantum dots in the Coulomb blockade regime and demonstrate its universality upon an appropriate scaling of the parameter. For a symmetric dot we show that this correlation function is affected by breaking time-reversal symmetry but is independent of the details of the channels in the external leads. We derive a new scaling which depends on the eigenfunctions alone and can be extracted directly from the conductance peak heights. Our results are in excellent agreement with model simulations of a disordered quantum dot. copyright 1996 The American Physical Society

  16. Application of RBF neural network improved by peak density function in intelligent color matching of wood dyeing

    International Nuclear Information System (INIS)

    Guan, Xuemei; Zhu, Yuren; Song, Wenlong

    2016-01-01

    According to the characteristics of wood dyeing, we propose a predictive model of pigment formulas for wood dyeing based on a Radial Basis Function (RBF) neural network. In practical application, however, the number of neurons in the hidden layer of an RBF neural network is difficult to determine. In general, it must be tuned by repeated trials based on experience and prior knowledge, without a strict, theoretically grounded design procedure, and there is no guarantee that the RBF neural network will converge. This paper proposes a peak density function to determine the number of neurons in the hidden layer. In contrast to existing approaches, the centers and widths of the radial basis functions are initialized by extracting features of the samples, eliminating the uncertainty caused by random initialization of the training parameters and of the topology of the RBF neural network. The average relative error of the original RBF neural network is 1.55% after 158 epochs, whereas the average relative error of the RBF neural network improved by the peak density function is only 0.62% after 50 epochs. The convergence rate and approximation precision of the RBF neural network are therefore improved significantly.
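
The paper's peak density function is not reproduced here; as a stand-in for the same idea, the sketch below picks hidden-layer centers where training samples are locally dense and then solves the output weights by least squares (all sizes, radii and data are synthetic):

```python
# Sketch: density-guided center selection as a stand-in for the paper's peak
# density function, followed by a linear solve for the RBF output weights.
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((158, 3))                      # e.g. 158 dye-formula samples
y = X @ np.array([0.5, -1.2, 2.0]) + 0.1 * rng.standard_normal(158)

# local density = number of neighbours within a cutoff radius
d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
density = (d < 0.25).sum(axis=1)
centers = X[np.argsort(density)[-10:]]        # 10 densest samples as centers

width = np.median(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
             / (2 * width ** 2))              # Gaussian RBF design matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```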

  17. High-temperature series expansions for random Potts models

    Directory of Open Access Journals (Sweden)

    M.Hellmund

    2005-01-01

    We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2) and 4-state Potts models in three dimensions, up to order 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.

  18. Comparison of the Predictive Performance and Interpretability of Random Forest and Linear Models on Benchmark Data Sets.

    Science.gov (United States)

    Marchese Robinson, Richard L; Palczewska, Anna; Palczewski, Jan; Kidley, Nathan

    2017-08-28

    The ability to interpret the predictions made by quantitative structure-activity relationships (QSARs) offers a number of advantages. While QSARs built using nonlinear modeling approaches, such as the popular Random Forest algorithm, might sometimes be more predictive than those built using linear modeling approaches, their predictions have been perceived as difficult to interpret. However, a growing number of approaches have been proposed for interpreting nonlinear QSAR models in general and Random Forest in particular. In the current work, we compare the performance of Random Forest to those of two widely used linear modeling approaches: linear Support Vector Machines (SVMs) (or Support Vector Regression (SVR)) and partial least-squares (PLS). We compare their performance in terms of their predictivity as well as the chemical interpretability of the predictions using novel scoring schemes for assessing heat map images of substructural contributions. We critically assess different approaches for interpreting Random Forest models as well as for obtaining predictions from the forest. We assess the models on a large number of widely employed public-domain benchmark data sets corresponding to regression and binary classification problems of relevance to hit identification and toxicology. We conclude that Random Forest typically yields comparable or possibly better predictive performance than the linear modeling approaches and that its predictions may also be interpreted in a chemically and biologically meaningful way. In contrast to earlier work looking at interpretation of nonlinear QSAR models, we directly compare two methodologically distinct approaches for interpreting Random Forest models. The approaches for interpreting Random Forest assessed in our article were implemented using open-source programs that we have made available to the community. These programs are the rfFC package ( https://r-forge.r-project.org/R/?group_id=1725 ) for the R statistical
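
A minimal sketch of this kind of head-to-head comparison with scikit-learn, on synthetic data rather than the paper's benchmark sets, and without the rfFC-based interpretation step:

```python
# Sketch: cross-validated comparison of Random Forest vs. linear SVR vs. PLS
# on a synthetic regression problem (illustrative, not the paper's data).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import LinearSVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=50, noise=10.0, random_state=0)
models = {
    "RF":  RandomForestRegressor(n_estimators=200, random_state=0),
    "SVR": LinearSVR(max_iter=10000, random_state=0),
    "PLS": PLSRegression(n_components=5),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {r2.mean():.2f}")
```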

  19. Possibility/Necessity-Based Probabilistic Expectation Models for Linear Programming Problems with Discrete Fuzzy Random Variables

    Directory of Open Access Journals (Sweden)

    Hideki Katagiri

    2017-10-01

    This paper considers linear programming problems (LPPs) in which the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision-making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.

  20. Flood frequency analysis for nonstationary annual peak records in an urban drainage basin

    Science.gov (United States)

    Villarini, G.; Smith, J.A.; Serinaldi, F.; Bales, J.; Bates, P.D.; Krajewski, W.F.

    2009-01-01

    Flood frequency analysis in urban watersheds is complicated by nonstationarities of annual peak records associated with land use change and evolving urban stormwater infrastructure. In this study, a framework for flood frequency analysis is developed based on Generalized Additive Models for Location, Scale and Shape (GAMLSS), a tool for modeling time series under nonstationary conditions. GAMLSS is applied to annual maximum peak discharge records for Little Sugar Creek, a highly urbanized watershed which drains the urban core of Charlotte, North Carolina. It is shown that GAMLSS is able to describe the variability in the mean and variance of the annual maximum peak discharge by modeling the parameters of the selected parametric distribution as a smooth function of time via cubic splines. Flood frequency analyses for Little Sugar Creek (at a drainage area of 110 km^2) show that the maximum flow with a 0.01 annual exceedance probability (corresponding to the 100-year flood peak under stationary conditions) over the 83-year record has ranged from a minimum unit discharge of 2.1 m^3 s^-1 km^-2 to a maximum of 5.1 m^3 s^-1 km^-2. An alternative characterization can be made by examining the estimated return interval of the peak discharge that would have an annual exceedance probability of 0.01 under the assumption of stationarity (3.2 m^3 s^-1 km^-2). Under nonstationary conditions, alternative definitions of return period should be adopted. Under the GAMLSS model, the return interval of an annual peak discharge of 3.2 m^3 s^-1 km^-2 ranges from a maximum value of more than 5000 years in 1957 to a minimum value of almost 8 years for the present time (2007). The GAMLSS framework is also used to examine the links between population trends and flood frequency, as well as trends in annual maximum rainfall. These analyses are used to examine evolving flood frequency over future decades. © 2009 Elsevier Ltd.
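
A rough stand-in for the GAMLSS idea (not the authors' code): fit annual maxima with a Gumbel distribution whose location parameter drifts in time, by maximum likelihood; the paper's cubic-spline terms would replace the linear trend. Data and parameters below are synthetic.

```python
# Simplified nonstationary annual-maximum fit: Gumbel with a linear time
# trend in the location parameter (a stand-in for GAMLSS spline terms).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

rng = np.random.default_rng(3)
years = np.arange(1925, 2008)
t = (years - years.mean()) / years.std()
peaks = gumbel_r.rvs(loc=2.5 + 0.5 * t, scale=0.6, random_state=rng)

def nll(p):
    a, b, logs = p
    return -gumbel_r.logpdf(peaks, loc=a + b * t, scale=np.exp(logs)).sum()

fit = minimize(nll, x0=[peaks.mean(), 0.0, 0.0], method="Nelder-Mead")
a, b, logs = fit.x
# the 0.01-exceedance ("100-year") quantile now varies year by year
q100 = gumbel_r.ppf(0.99, loc=a + b * t, scale=np.exp(logs))
print("100-yr flood estimate, first vs last year:", q100[0], q100[-1])
```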

  1. Set potential regulation reveals additional oxidation peaks of Geobacter sulfurreducens anodic biofilms

    KAUST Repository

    Zhu, Xiuping

    2012-08-01

    Higher current densities produced in microbial fuel cells and other bioelectrochemical systems are associated with the presence of various Geobacter species. A number of electron transfer components are involved in extracellular electron transfer by the model exoelectrogen, Geobacter sulfurreducens. It has previously been shown that 5 main oxidation peaks can be identified in cyclic voltammetry scans. It is shown here that 7 separate oxidation peaks emerged over relatively long periods of time when a larger range of set potentials was used to acclimate electroactive biofilms. The potentials of oxidation peaks obtained with G. sulfurreducens biofilms acclimated at 0.60 V (vs. Ag/AgCl) were different from those that developed at -0.46 V, and both of their peaks were different from those obtained for biofilms incubated at -0.30 V, 0 V, and 0.30 V. These results expand the known range of potentials for which G. sulfurreducens produces identifiable oxidation peaks that could be important for extracellular electron transfer. © 2012 Elsevier B.V.

  2. Automated asteroseismic peak detections

    Science.gov (United States)

    García Saravia Ortiz de Montellano, Andrés; Hekker, S.; Themeßl, N.

    2018-05-01

    Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible in a power density spectrum. Identification of oscillation modes is usually done by visual inspection that is time-consuming and has a degree of subjectivity. Here, we present a peak-detection algorithm especially suited for the detection of solar-like oscillations. It reliably characterizes the solar-like oscillations in a power density spectrum and estimates their parameters without human intervention. Furthermore, we provide a metric to characterize the false positive and false negative rates to provide further information about the reliability of a detected oscillation mode or the significance of a lack of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler.
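
As a toy illustration of automated peak detection in a power density spectrum (scipy's generic peak finder rather than the authors' asteroseismology-specific algorithm and reliability metric; all mode frequencies below are made up):

```python
# Toy peak detection in a synthetic power density spectrum: Lorentzian mode
# profiles on a chi^2-like noise floor, found with scipy.signal.find_peaks.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(4)
freq = np.linspace(0, 300, 3000)              # microhertz, illustrative
modes = [100, 120, 140]                       # assumed mode frequencies
psd = sum(1.0 / (1 + ((freq - f) / 0.5) ** 2) for f in modes)  # Lorentzians
psd += rng.exponential(0.1, freq.size)        # noise floor

peaks, props = find_peaks(psd, height=0.5, distance=50)
print("detected mode frequencies:", freq[peaks])
```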

  3. Acute Whole-Body Vibration does not Facilitate Peak Torque and Stretch Reflex in Healthy Adults

    Directory of Open Access Journals (Sweden)

    Ella W. Yeung

    2013-03-01

    The acute effect of whole-body vibration (WBV) training may enhance muscular performance via neural potentiation of the stretch reflex. The purpose of this study was to investigate whether acute WBV exposure affects the stretch-induced knee jerk reflex [onset latency and electromechanical delay (EMD)] and isokinetic knee extensor peak torque performance. Twenty-two subjects were randomly assigned to the intervention or control group. The intervention group received WBV in a semi-squat position at 30° knee flexion with an amplitude of 0.69 mm, a frequency of 45 Hz, and a peak acceleration of 27.6 m/s^2 for 3 minutes. The control group held the same semi-squat position statically without exposure to WBV. Two-way mixed repeated-measures analysis of variance revealed no significant group differences in reflex latency of the rectus femoris (RF) and vastus lateralis (VL) (p = 0.934 and 0.935, respectively), EMD of the RF and VL (p = 0.474 and 0.551, respectively), or peak torque production (p = 0.483) measured before and after the WBV. The results of this study indicate that a single session of WBV exposure has no potentiation effect on the stretch-induced reflex and peak torque performance in healthy young adults.

  4. Reactive Power Pricing Model Considering the Randomness of Wind Power Output

    Science.gov (United States)

    Dai, Zhong; Wu, Zhou

    2018-01-01

    With the increase of wind power capacity integrated into the grid, the influence of the randomness of wind power output on the reactive power distribution of the grid is gradually highlighted. Meanwhile, power market reform puts forward higher requirements for the reasonable pricing of reactive power service. On this basis, the article combines an optimal power flow model that accounts for wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of present cost allocation methods and of marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realizes the optimal power flow distribution of reactive power with minimal integrated cost under wind power integration, while guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and stochastic simulation of wind power outputs, the article compares the results of the model pricing with marginal cost pricing, demonstrating that the model is accurate and effective.

  5. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances of quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted with Legendre polynomial functions (orders ranging from 2 to 6) to determine which model best describes the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variance and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve, from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered in breeding evaluation processes using random regression models.
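
For concreteness, the Legendre basis used in such random regression models can be built as below (basis construction only; estimating the (co)variance components themselves requires a mixed-model solver, which is not sketched here):

```python
# Sketch: Legendre polynomial design matrix over the measurement ages used
# in random regression models (ages standardized to [-1, 1]).
import numpy as np
from numpy.polynomial import legendre

ages = np.array([1, 7, 14, 21, 28, 35, 42], dtype=float)
a = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1  # map to [-1, 1]

order = 6
# column j holds the Legendre polynomial of degree j evaluated at each age
Phi = np.stack([legendre.legval(a, [0] * j + [1]) for j in range(order)],
               axis=1)
print(Phi.shape)   # (7 ages, 6 basis functions)
```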

  6. Brain potentials index executive functions during random number generation.

    Science.gov (United States)

    Joppich, Gregor; Däuper, Jan; Dengler, Reinhard; Johannes, Sönke; Rodriguez-Fornells, Antoni; Münte, Thomas F

    2004-06-01

    The generation of random sequences is considered to tax different executive functions. To explore the involvement of these functions further, brain potentials were recorded in 16 healthy young adults while either engaging in random number generation (RNG) by pressing the number keys on a computer keyboard in a random sequence or in ordered number generation (ONG) necessitating key presses in the canonical order. Key presses were paced by an external auditory stimulus to yield either fast (1 press/800 ms) or slow (1 press/1300 ms) sequences in separate runs. Attentional demands of random and ordered tasks were assessed by the introduction of a secondary task (key-press to a target tone). The P3 amplitude to the target tone of this secondary task was reduced during RNG, reflecting the greater consumption of attentional resources during RNG. Moreover, RNG led to a left frontal negativity peaking 140 ms after the onset of the pacing stimulus, whenever the subjects produced a true random response. This negativity could be attributed to the left dorsolateral prefrontal cortex and was absent when numbers were repeated. This negativity was interpreted as an index for the inhibition of habitual responses. Finally, in response locked ERPs a negative component was apparent peaking about 50 ms after the key-press that was more prominent during RNG. Source localization suggested a medial frontal source. This effect was tentatively interpreted as a reflection of the greater monitoring demands during random sequence generation.

  7. Modeling and understanding of effects of randomness in arrays of resonant meta-atoms

    DEFF Research Database (Denmark)

    Tretyakov, Sergei A.; Albooyeh, Mohammad; Alitalo, Pekka

    2013-01-01

    In this review presentation we will discuss approaches to modeling and understanding electromagnetic properties of 2D and 3D lattices of small resonant particles (meta-atoms) in transition from regular (periodic) to random (amorphous) states. Nanostructured metasurfaces (2D) and metamaterials (3D) are arrangements of optically small but resonant particles (meta-atoms). We will present our results on analytical modeling of metasurfaces with periodic and random arrangements of electrically and magnetically resonant meta-atoms with identical or random sizes, both for normal and oblique-angle excitations. We show how the electromagnetic response of metasurfaces is related to the statistical parameters of the structure. Furthermore, we will discuss the phenomenon of anti-resonance in extracted effective parameters of metamaterials and clarify its relation to the periodicity (or amorphous nature...

  8. Analytical connection between thresholds and immunization strategies of SIS model in random networks

    Science.gov (United States)

    Zhou, Ming-Yang; Xiong, Wen-Man; Liao, Hao; Wang, Tong; Wei, Zong-Wen; Fu, Zhong-Qian

    2018-05-01

    Devising effective strategies for hindering the propagation of viruses and protecting the population against epidemics is critical for public security and health. Despite a number of studies based on the susceptible-infected-susceptible (SIS) model devoted to this topic, we still lack a general framework to compare different immunization strategies in completely random networks. Here, we address this problem by suggesting a novel method based on heterogeneous mean-field theory for the SIS model. Our method builds the relationship between the thresholds and different immunization strategies in completely random networks. Besides, we provide an analytical argument that the targeted large-degree strategy achieves the best performance in random networks with arbitrary degree distribution. Moreover, the experimental results demonstrate the effectiveness of the proposed method in both artificial and real-world networks.
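
In heterogeneous mean-field theory the SIS epidemic threshold is λ_c = ⟨k⟩/⟨k²⟩, so the effect of an immunization strategy can be illustrated directly on a degree sequence. The sketch below compares targeted and random 5% immunization in a deliberately crude way (the edges lost to removed neighbours are ignored; degrees are synthetic):

```python
# Heterogeneous mean-field SIS threshold lambda_c = <k>/<k^2> on a heavy-
# tailed degree sequence; crude comparison of immunization strategies
# (edge loss to removed neighbours is ignored in this sketch).
import numpy as np

rng = np.random.default_rng(5)
k = np.round(rng.pareto(2.5, 100_000) + 3).astype(int)   # synthetic degrees

def threshold(deg):
    return deg.mean() / (deg ** 2).mean()

print("no immunization:   ", threshold(k))

# targeted: immunize (remove) the 5% largest-degree nodes
print("targeted (top 5%): ", threshold(k[k < np.quantile(k, 0.95)]))

# random: remove a uniform 5% of nodes
keep = rng.random(k.size) > 0.05
print("random (5%):       ", threshold(k[keep]))
```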

  9. Random detailed model for probabilistic neutronic calculation in pebble bed Very High Temperature Reactors

    International Nuclear Information System (INIS)

    Perez Curbelo, J.; Rosales, J.; Garcia, L.; Garcia, C.; Brayner, C.

    2013-01-01

    The pebble bed nuclear reactor is one of the main candidates for the next generation of nuclear power plants. In pebble bed type HTRs, the fuel is contained within graphite pebbles in the form of TRISO particles, which form a randomly packed bed inside a graphite-walled cylindrical cavity. Pebble bed reactors (PBRs) offer the opportunity to meet sustainability requirements, such as nuclear safety, economic competitiveness, proliferation resistance and a minimal production of radioactive waste. In order to simulate PBRs correctly, the double heterogeneity of the system must be considered: randomly located pebbles in the core and randomly located TRISO particles inside the fuel pebbles. These features are often neglected due to the difficulty of modeling them with the MCNP code, the main reason being the limited number of cells and surfaces that can be defined. In this study, a computational tool was developed that produces a new geometrical model of fuel pebbles for neutronic calculations with the MCNPX code. The heterogeneity of the system is considered, including the randomly located TRISO particles inside the pebble. Four proposed fuel pebble models were compared with regard to their effective multiplication factor and energy liberation profiles: Homogeneous Pebble, Five Zone Homogeneous Pebble, Detailed Geometry, and Randomly Detailed Geometry. (Author)

  10. The dilute random field Ising model by finite cluster approximation

    International Nuclear Information System (INIS)

    Benyoussef, A.; Saber, M.

    1987-09-01

    Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs

  11. Simultaneous collection method of on-peak window image and off-peak window image in Tl-201 imaging

    International Nuclear Information System (INIS)

    Murakami, Tomonori; Noguchi, Yasushi; Kojima, Akihiro; Takagi, Akihiro; Matsumoto, Masanori

    2007-01-01

    Tl-201 imaging detects the photopeak (71 keV, in the on-peak window) of the characteristic X-rays of Hg-201 formed from Tl-201 decay. The peak is derived from 4 X-ray lines of different energy and emission intensity and does not follow a Gaussian distribution. In the present study, the authors devised the method in the title to attain more effective single imaging, examined its accuracy and reliability with phantoms, and applied it clinically to Tl-201 scintigraphy in a patient. The authors applied the triple energy window method for data acquisition: energy windows were set on the Hg-201 X-ray photopeak as the lower (3%, L), main (72 keV, M) and upper (14%, U) windows, using a gamma camera with a 2-gated detector (Toshiba E.CAM/ICON). The L, M and U images obtained simultaneously were then combined into images corresponding to on-peak (L+M, mock on-peak) and off-peak (M+U) window settings for evaluation. A line-source phantom (a swab containing Tl-201) and a multi-defect phantom (an acrylic plate containing Tl-201 solution) were imaged in water. A female patient with thyroid cancer underwent preoperative scintigraphy under the defined conditions. The mock on-peak and off-peak images were found to be equivalent to the true (ordinary, clinical) on-peak and off-peak ones, and the present method is considered usable for evaluating the usefulness of off-peak window data. (R.T.)

  12. Full-field peak pressure prediction of shock waves from underwater explosion of cylindrical charges

    NARCIS (Netherlands)

    Liu, Lei; Guo, Rui; Gao, Ke; Zeng, Ming Chao

    2017-01-01

    Cylindrical charge is a main form in most application of explosives. By employing numerical calculation and an indirect mapping method, the relation between peak pressures from underwater explosion of cylindrical and spherical charges is investigated, and further a model to predict full-field peak

  13. Peak Wind Tool for General Forecasting

    Science.gov (United States)

    Barrett, Joe H., III

    2010-01-01

    again by six years, from October 1996 to April 2002, by interpolating 1000-ft sounding data to 100-ft increments. The Phase II developmental data set included observations for the cool season months of October 1996 to February 2007. The AMU calculated 68 candidate predictors from the XMR soundings, including 19 stability parameters, 48 wind speed parameters and one wind shear parameter. Each day in the data set was stratified by synoptic weather pattern, low-level wind direction, precipitation and Richardson number, for a total of 60 stratification methods. Linear regression equations, using the 68 predictors and 60 stratification methods, were created for the tool's three forecast parameters: the highest peak wind speed of the day (PWSD), the 5-minute average speed at the same time (AWSD), and the timing of the PWSD. For PWSD and AWSD, 30 Phase II methods were selected for evaluation in the verification data set. For timing of the PWSD, 12 Phase I methods were selected for evaluation. The verification data set contained observations for the cool season months of March 2007 to April 2009. The data set was used to compare the Phase I and II forecast methods to climatology, model forecast winds and wind advisories issued by the 45 WS. The model forecast winds were derived from the 0000 and 1200 UTC runs of the 12-km North American Mesoscale (MesoNAM) model. The forecast methods that performed best in the verification data set were selected for the Phase II version of the tool. For PWSD and AWSD, linear regression equations based on MesoNAM forecasts performed significantly better than the Phase I and II methods. For timing of the PWSD, none of the methods performed significantly better than climatology. The AMU then developed the Microsoft Excel and MIDDS GUIs. The GUIs display the forecasts for PWSD, AWSD and the probability that the PWSD will meet or exceed 25 kt, 35 kt and 50 kt. Since none of the prediction methods for timing of the PWSD performed significantly better

  14. Application of the load flow and random flow models for the analysis of power transmission networks

    International Nuclear Information System (INIS)

    Zio, Enrico; Piccinelli, Roberta; Delfanti, Maurizio; Olivieri, Valeria; Pozzi, Mauro

    2012-01-01

    In this paper, the classical load flow (LF) model and the random flow (RF) model are considered for analyzing the performance of power transmission networks. The analysis concerns both the system performance and the importance of the different system elements; the latter is computed by power flow and random walk betweenness centrality measures. A network system from the literature is analyzed, representing a simple electrical power transmission network. The results obtained highlight the differences between the LF “global approach” to flow dispatch and the RF local approach of randomized node-to-node load transfer. Furthermore, the LF model is computationally less demanding than the RF model, but problems of convergence may arise in the LF calculation.

  15. Topics in random walks in random environment

    International Nuclear Information System (INIS)

    Sznitman, A.-S.

    2004-01-01

    Over the last twenty-five years random motions in random media have been intensively investigated and some new general methods and paradigms have by now emerged. Random walks in random environment constitute one of the canonical models of the field. However in dimension bigger than one they are still poorly understood and many of the basic issues remain to this day unresolved. The present series of lectures attempt to give an account of the progresses which have been made over the last few years, especially in the study of multi-dimensional random walks in random environment with ballistic behavior. (author)

  16. Does quasi-long-range order in the two-dimensional XY model really survive weak random phase fluctuations?

    International Nuclear Information System (INIS)

    Mudry, Christopher; Wen Xiaogang

    1999-01-01

    Effective theories for random critical points are usually non-unitary, and thus may contain relevant operators with negative scaling dimensions. To study the consequences of the existence of negative-dimensional operators, we consider the random-bond XY model. It has been argued that the XY model on a square lattice, when weakly perturbed by random phases, has a quasi-long-range ordered phase (the random spin wave phase) at sufficiently low temperatures. We show that infinitely many relevant perturbations to the proposed critical action for the random spin wave phase were omitted in all previous treatments. The physical origin of these perturbations is intimately related to the existence of broadly distributed correlation functions. We find that those relevant perturbations do enter the Renormalization Group equations, and affect critical behavior. This raises the possibility that the random XY model has no quasi-long-range ordered phase and no Kosterlitz-Thouless (KT) phase transition

  17. Inference of a random potential from random walk realizations: Formalism and application to the one-dimensional Sinai model with a drift

    International Nuclear Information System (INIS)

    Cocco, S; Monasson, R

    2009-01-01

    We consider the Sinai model, in which a random walker moves in a random quenched potential V, and ask the following questions: 1. how can the quenched potential V be inferred from observations of one or more realizations of the random motion? 2. how many observations (walks) are required to make a reliable inference, that is, to be able to distinguish between two similar but distinct potentials, V1 and V2? We show how question 1 can be easily solved within the Bayesian framework. In addition, we show that the answer to question 2 is, in general, intimately connected to the calculation of the survival probability of a fictitious walker in a potential W defined from V1 and V2, with partial absorption at sites where V1 and V2 do not coincide. For the one-dimensional Sinai model, this survival probability can be calculated analytically, in excellent agreement with numerical simulations.

  18. Criticality of the random-site Ising model: Metropolis, Swendsen-Wang and Wolff Monte Carlo algorithms

    Directory of Open Access Journals (Sweden)

    D.Ivaneyko

    2005-01-01

    We apply numerical simulations to study the criticality of the 3D Ising model with random site quenched dilution. Emphasis is given to issues not discussed in detail before. In particular, we attempt a comparison of different Monte Carlo techniques, discussing regions of their applicability and advantages/disadvantages depending on the aim of a particular simulation set. Moreover, besides evaluating the critical indices we estimate the universal ratio Γ+/Γ− for the magnetic susceptibility critical amplitudes. Our estimate Γ+/Γ− = 1.67 ± 0.15 is in good agreement with the recent MC analysis of the random-bond Ising model, giving further support that both random-site and random-bond dilutions lead to the same universality class.
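
A minimal Metropolis sweep for the site-diluted 3D Ising model, one of the algorithm families compared above (illustration only; cluster updates and the amplitude-ratio analysis are beyond this sketch, and all parameters are arbitrary):

```python
# Minimal Metropolis update for the site-diluted 3D Ising model: quenched
# occupancy mask, single-spin flips with the standard acceptance rule.
import numpy as np

rng = np.random.default_rng(6)
L, p, T = 16, 0.8, 3.5             # lattice size, site occupancy, temperature
occ = rng.random((L, L, L)) < p    # quenched dilution: True = site present
spin = np.where(occ, rng.choice([-1, 1], (L, L, L)), 0)

def sweep(spin, occ, T):
    for _ in range(spin.size):
        i, j, k = rng.integers(0, L, 3)
        if not occ[i, j, k]:
            continue
        nb = (spin[(i+1) % L, j, k] + spin[(i-1) % L, j, k]
              + spin[i, (j+1) % L, k] + spin[i, (j-1) % L, k]
              + spin[i, j, (k+1) % L] + spin[i, j, (k-1) % L])
        dE = 2 * spin[i, j, k] * nb            # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spin[i, j, k] *= -1

for _ in range(100):
    sweep(spin, occ, T)
print("magnetization per occupied site:", spin.sum() / occ.sum())
```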

  19. An Empirical Study on Raman Peak Fitting and Its Application to Raman Quantitative Research.

    Science.gov (United States)

    Yuan, Xueyin; Mayanovic, Robert A

    2017-10-01

    Fitting experimentally measured Raman bands with theoretical model profiles is the basic operation for the numerical determination of Raman peak parameters. In order to investigate the effects of peak modeling with various algorithms on peak fitting results, representative Raman bands of mineral crystals, glass and fluids, as well as the emission lines from a fluorescent lamp, some measured under ambient conditions and others under elevated pressure and temperature, were fitted using Gaussian, Lorentzian, Gaussian-Lorentzian, Voigtian, Pearson type IV, and beta profiles. The fitting results for the Raman bands investigated in this study show that the fitted peak position, intensity, area and full width at half-maximum (FWHM) values can vary significantly depending upon which peak profile function is used, and the most appropriate fitting profile should be selected according to the nature of the Raman bands. Specifically, the symmetric Raman bands of mineral crystals and non-aqueous fluids are best fit using Gaussian-Lorentzian or Voigtian profiles, whereas asymmetric Raman bands are best fit using Pearson type IV profiles. The asymmetric O-H stretching vibrations of H2O and the Raman bands of soda-lime glass are best fit using several Gaussian profiles, whereas the emission lines from a fluorescent light are best fit using beta profiles. Multiple peaks that are not clearly separated can be fit simultaneously, provided the residuals in the fitting of one peak do not affect the fitting of the remaining peaks to a significant degree. Once the resolution of the Raman spectrometer has been properly accounted for, our findings show that the precision in peak position and intensity can be improved significantly by fitting the measured Raman peaks with appropriate profiles. Nevertheless, significant errors in peak position and intensity were still observed in the results from fitting of weak and wide Raman
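
A sketch of this kind of profile comparison on a synthetic band, fitting a Gaussian and a pseudo-Voigt (a common Gaussian-Lorentzian mix) with scipy; the band parameters below are invented:

```python
# Sketch: fit one synthetic Raman band with two competing profile families
# and compare the fitted peak positions.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, x0, w):
    return A * np.exp(-4 * np.log(2) * (x - x0) ** 2 / w ** 2)   # w = FWHM

def pseudo_voigt(x, A, x0, w, eta):
    lor = A / (1 + 4 * (x - x0) ** 2 / w ** 2)
    return eta * lor + (1 - eta) * gaussian(x, A, x0, w)

x = np.linspace(440, 480, 400)                 # Raman shift, cm^-1
rng = np.random.default_rng(7)
y = pseudo_voigt(x, 100, 464, 6, 0.4) + rng.normal(0, 1.5, x.size)

pg, _ = curve_fit(gaussian, x, y, p0=[90, 463, 5])
pv, _ = curve_fit(pseudo_voigt, x, y, p0=[90, 463, 5, 0.5])
print("Gaussian position:", pg[1], " pseudo-Voigt position:", pv[1])
```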

  20. Demand Side Management: An approach to peak load smoothing

    Science.gov (United States)

    Gupta, Prachi

    A preliminary national-level analysis was conducted to determine whether Demand Side Management (DSM) programs introduced by electric utilities since 1992 have made any progress towards their stated goal of reducing peak load demand. Estimates implied that DSM has a very small effect on peak load reduction and there is substantial regional and end-user variability. A limited scholarly literature on DSM also provides evidence in support of a positive effect of demand response programs. Yet, none of these studies examine the question of how DSM affects peak load at the micro-level by influencing end-users' response to prices. After nearly three decades of experience with DSM, controversy remains over how effective these programs have been. This dissertation considers regional analyses that explore both demand-side solutions and supply-side interventions. On the demand side, models are estimated to provide in-depth evidence of end-user consumption patterns for each North American Electric Reliability Corporation (NERC) region, helping to identify sectors in regions that have made a substantial contribution to peak load reduction. The empirical evidence supports the initial hypothesis that there is substantial regional and end-user variability of reductions in peak demand. These results are quite robust in rapidly-urbanizing regions, where air conditioning and lighting load is substantially higher, and regions where the summer peak is more pronounced than the winter peak. It is also evident from the regional experiences that active government involvement, as shaped by state regulations in the last few years, has been successful in promoting DSM programs, and perhaps for the same reason we witness an uptick in peak load reductions in the years 2008 and 2009. On the supply side, we estimate the effectiveness of DSM programs by analyzing the growth of capacity margin with the introduction of DSM programs. The results indicate that DSM has been successful in offsetting the

  1. Experimental discrimination of ion stopping models near the Bragg peak in highly ionized matter

    Science.gov (United States)

    Cayzac, W.; Frank, A.; Ortner, A.; Bagnoud, V.; Basko, M. M.; Bedacht, S.; Bläser, C.; Blažević, A.; Busold, S.; Deppert, O.; Ding, J.; Ehret, M.; Fiala, P.; Frydrych, S.; Gericke, D. O.; Hallo, L.; Helfrich, J.; Jahn, D.; Kjartansson, E.; Knetsch, A.; Kraus, D.; Malka, G.; Neumann, N. W.; Pépitone, K.; Pepler, D.; Sander, S.; Schaumann, G.; Schlegel, T.; Schroeter, N.; Schumacher, D.; Seibert, M.; Tauschwitz, An.; Vorberger, J.; Wagner, F.; Weih, S.; Zobus, Y.; Roth, M.

    2017-01-01

    The energy deposition of ions in dense plasmas is a key process in inertial confinement fusion that determines the α-particle heating expected to trigger a burn wave in the hydrogen pellet and result in high thermonuclear gain. However, measurements of ion stopping in plasmas are scarce and mostly restricted to high ion velocities, where theory agrees with the data. Here, we report experimental data at low projectile velocities near the Bragg peak, where the stopping force reaches its maximum. This parameter range features the largest theoretical uncertainties, and conclusive data have been missing until now. The precision of our measurements, combined with a reliable knowledge of the plasma parameters, allows us to disprove several standard models for the stopping power at beam velocities typically encountered in inertial fusion. On the other hand, our data support theories that include a detailed treatment of strong ion-electron collisions. PMID:28569766

  2. Peak Vertical Ground Reaction Force during Two-Leg Landing: A Systematic Review and Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Wenxin Niu

    2014-01-01

    Objectives. (1) To systematically review peak vertical ground reaction force (PvGRF) during two-leg drop landing from specific drop heights (DH), (2) to construct a mathematical model describing correlations between PvGRF and DH, and (3) to analyze the effects of several factors on the pooled PvGRF regardless of DH. Methods. A computerized bibliographical search was conducted to extract PvGRF data on a single foot when participants landed with both feet from various DHs. An innovative mathematical model was constructed to analyze the effects of gender, landing type, shoes, ankle stabilizers, surface stiffness and sampling frequency on PvGRF based on the pooled data. Results. Pooled PvGRF and DH data from 26 articles showed that a square-root function fits their relationship well. An experimental validation was also performed on the regression equation for the medium frequency. The PvGRF was not significantly affected by surface stiffness, but was significantly higher in men than women, in platform than suspended landing, in the barefoot than shod condition, and with ankle stabilizers than the control condition, and higher at higher than lower sampling frequencies. Conclusions. The PvGRF and the square root of DH showed a linear relationship. The mathematical modeling method combined with systematic review is helpful for analyzing the influencing factors of the landing movement without considering DH.
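
The reported square-root law is easy to reproduce in form; the sketch below fits PvGRF = a + b·sqrt(DH) to invented data points (the paper's pooled values are not reproduced here):

```python
# Sketch: square-root regression of peak vertical ground reaction force on
# drop height; the data points are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

dh = np.array([0.15, 0.30, 0.45, 0.60, 0.90, 1.20])   # drop height, m
pvgrf = np.array([1.9, 2.6, 3.1, 3.5, 4.2, 4.8])      # in body weights

def sqrt_law(h, a, b):
    return a + b * np.sqrt(h)

(a, b), _ = curve_fit(sqrt_law, dh, pvgrf)
print(f"PvGRF ~ {a:.2f} + {b:.2f} * sqrt(DH)")
```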

  3. Elimination of noise peak for signal processing in Johnson noise thermometry development

    International Nuclear Information System (INIS)

    Hwang, I. G.; Moon, B. S.; Jeong, J. E.; Jeo, Y. H.; Kisner, Roger A.

    2003-01-01

    Internal and external noise are the most significant obstacles in the development of a Johnson noise thermometry system. This paper addresses the external noise elimination issue for the Johnson noise thermometry system under development in collaboration between KAERI and ORNL. Although internal random noise is canceled by the cross power spectral density (CPSD) function, a continuous wave penetrating the electronic circuit is eliminated by exploiting the difference in peaks between the Johnson signal and the external noise. The elimination logic based on the standard deviation of the CPSD and the energy leakage problem in the discrete CPSD function are discussed in this paper.
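
A toy demonstration of why the CPSD of two amplifier channels suppresses their uncorrelated internal noise while preserving the common Johnson-like signal (all signal levels below are invented):

```python
# Toy CPSD demonstration: a common signal seen by two amplifiers survives
# segment averaging, each amplifier's independent noise averages away.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(8)
fs, n = 10_000, 2 ** 16
common = rng.standard_normal(n)                # Johnson-like noise, both amps
ch1 = common + 2.0 * rng.standard_normal(n)    # amplifier 1 internal noise
ch2 = common + 2.0 * rng.standard_normal(n)    # amplifier 2 internal noise

f, Pxy = csd(ch1, ch2, fs=fs, nperseg=1024)
print("mean |CPSD| ~ PSD of the common part:", np.abs(Pxy).mean())
```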

  4. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    Full Text Available The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.

  5. NOISY WEAK-LENSING CONVERGENCE PEAK STATISTICS NEAR CLUSTERS OF GALAXIES AND BEYOND

    International Nuclear Information System (INIS)

    Fan Zuhui; Shan Huanyuan; Liu Jiayi

    2010-01-01

    , for the isothermal cluster. For the NFW cluster, Δν ∼ 0.8. The existence of noise also causes a location offset of the weak-lensing-identified main cluster peak with respect to the true center of the cluster. The offset distribution is very broad and extends to R ∼ R_c for the isothermal case. For the NFW cluster, it is relatively narrow and peaked at R ∼ 0.2 R_c. We also analyze NFW clusters of different concentrations. It is found that the more centrally concentrated the mass distribution of a cluster is, the less its weak-lensing signal is affected by noise. Incorporating these important effects and the mass function of NFW dark matter halos, we further present a model calculating the statistical abundances of total convergence peaks, true and false, over a large field beyond individual clusters. The results are in good agreement with those from numerical simulations. The model then allows us to probe cosmologies with the convergence peaks directly, without the need for expensive follow-up observations to differentiate true and false peaks.

  6. The peak in neutron powder diffraction

    International Nuclear Information System (INIS)

    Laar, B. van; Yelon, W.B.

    1984-01-01

    For the application of Rietveld profile analysis to neutron powder diffraction data a precise knowledge of the peak profile, in both shape and position, is required. The method now in use employs a Gaussian shaped profile with a semi-empirical asymmetry correction for low-angle peaks. The integrated intensity is taken to be proportional to the classical Lorentz factor calculated for the X-ray case. In this paper an exact expression is given for the peak profile based upon the geometrical dimensions of the diffractometer. It is shown that the asymmetry of observed peaks is well reproduced by this expression. The angular displacement of the experimental profile with respect to the nominal Bragg angle value is larger than expected. Values for the correction to the classical Lorentz factor for the integrated intensity are given. The exact peak profile expression has been incorporated into a Rietveld profile analysis refinement program. (Auth.)

  7. Random effects model for the reliability management of modules of a fighter aircraft

    Energy Technology Data Exchange (ETDEWEB)

    Sohn, So Young; Yoon, Kyung Bok; Chang, In Sang (Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of))

    2006-04-15

    The operational availability of fighter aircraft plays an important role in national defense. Low operational availability of fighter aircraft can cause many problems, and ROKA (Republic of Korea Air Force) needs proper strategies to improve the current practice of reliability management by accurately forecasting both MTBF (mean time between failures) and MTTR (mean time to repair). In this paper, we develop a random effects model to forecast both the MTBF and MTTR of installed modules of fighter aircraft based on their characteristics and operational conditions. The advantage of using such a random effects model is its ability to accommodate not only the individual characteristics of each module and its operational conditions but also the uncertainty caused by random error that cannot be explained by them. Our study is expected to help ROKA improve the operational availability of fighter aircraft and establish effective logistics management.

  8. Study on the Light Scattering from Random Rough Surfaces by Kirchhoff Approximation

    Directory of Open Access Journals (Sweden)

    Keding Yan

    2014-07-01

    In order to study the spatial distribution characteristics of light scattered from random rough surfaces, the linear filtering method is used to generate a series of Gaussian random rough surfaces, and the Kirchhoff approximation is used to calculate the scattered light intensity distribution from random metal and dielectric rough surfaces. Three characteristics of the scattered light intensity distribution are reviewed: the peak intensity, the distribution width, and the peak position. Numerical calculation results show that significant differences exist between the scattering characteristics of metal and dielectric surfaces, and that the scattered light intensity distribution is jointly influenced by the local slope distribution and the local reflectivity of the surface elements. The results provide a theoretical basis for research on the scattering characteristics of lidar target surfaces.
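
The linear filtering (spectral) method mentioned above can be sketched in a few lines: filter white noise with the square root of a Gaussian power spectrum and rescale to the desired rms height (all parameters here are arbitrary):

```python
# Sketch: 1D Gaussian random rough surface by spectral (linear) filtering
# of white noise; rms height and correlation length are illustrative.
import numpy as np

rng = np.random.default_rng(9)
N, dx = 1024, 0.1            # samples and spacing (arbitrary units)
h_rms, clen = 0.5, 2.0       # rms height and correlation length

kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
# Gaussian power spectrum S(k) ~ exp(-k^2 clen^2 / 4); filter = sqrt(S)
filt = np.sqrt(np.exp(-(kx * clen) ** 2 / 4))
noise = np.fft.fft(rng.standard_normal(N))
surface = np.real(np.fft.ifft(noise * filt))
surface *= h_rms / surface.std()     # normalize to the requested rms height
print("surface rms:", surface.std())
```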

  9. Simulating Urban Growth Using a Random Forest-Cellular Automata (RF-CA Model

    Directory of Open Access Journals (Sweden)

    Courage Kamusoko

    2015-04-01

    Sustainable urban planning and management require reliable land change models, which can be used to improve decision making. The objective of this study was to test a random forest-cellular automata (RF-CA) model, which combines random forest (RF) and cellular automata (CA) models. The Kappa simulation (KSimulation), figure of merit, and components of agreement and disagreement statistics were used to validate the RF-CA model. Furthermore, the RF-CA model was compared with support vector machine cellular automata (SVM-CA) and logistic regression cellular automata (LR-CA) models. Results show that the RF-CA model outperformed the SVM-CA and LR-CA models. The RF-CA model had a KSimulation accuracy of 0.51 (with a figure of merit statistic of 47%), while the SVM-CA and LR-CA models had KSimulation accuracies of 0.39 and −0.22 (with figure of merit statistics of 39% and 6%), respectively. Generally, the RF-CA model was relatively accurate at allocating “non-built-up to built-up” changes, as reflected by the correct “non-built-up to built-up” component of agreement of 15%. The performance of the RF-CA model was attributed to the relatively accurate RF transition potential maps. Therefore, this study highlights the potential of the RF-CA model for simulating urban growth.

  10. Statistical shape model with random walks for inner ear segmentation

    DEFF Research Database (Denmark)

    Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma

    2016-01-01

    is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...

  11. Conditional Random Fields versus Hidden Markov Models for activity recognition in temporal sensor data

    NARCIS (Netherlands)

    van Kasteren, T.L.M.; Noulas, A.K.; Kröse, B.J.A.; Smit, G.J.M.; Epema, D.H.J.; Lew, M.S.

    2008-01-01

    Conditional Random Fields are a discriminative probabilistic model which recently gained popularity in applications that require modeling nonindependent observation sequences. In this work, we present the basic advantages of this model over generative models and argue about its suitability in the

  12. Method and apparatus for current-output peak detection

    Science.gov (United States)

    De Geronimo, Gianluigi

    2017-01-24

    A method and apparatus for a current-output peak detector. A current-output peak detector circuit is disclosed and works in two phases. The peak detector circuit includes switches to switch the peak detector circuit from the first phase to the second phase upon detection of the peak voltage of an input voltage signal. The peak detector generates a current output with a high degree of accuracy in the second phase.

  13. Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models

    Science.gov (United States)

    Elben, A.; Vermersch, B.; Dalmonte, M.; Cirac, J. I.; Zoller, P.

    2018-02-01

    We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.

  14. Gamma-Ray Peak Integration: Accuracy and Precision

    International Nuclear Information System (INIS)

    Richard M. Lindstrom

    2000-01-01

    The accuracy of singlet gamma-ray peak areas obtained by a peak analysis program is immaterial. If the same algorithm is used for sample measurement as for calibration, and if the peak shapes are similar, then biases in the integration method cancel. Reproducibility is the only important issue. Even the uncertainty of the areas computed by the program is trivial, because the true standard uncertainty can be experimentally assessed by repeated measurements of the same source. Reproducible peak integration was important in a recent standard reference material certification task. The primary tool used for spectrum analysis was SUM, a National Institute of Standards and Technology interactive program to sum peaks and subtract a linear background, using the same channels to integrate all 20 spectra. For comparison, this work examines other peak integration programs. Unlike some published comparisons of peak performance in which synthetic spectra were used, this experiment used spectra collected for a real (though exacting) analytical project, analyzed by conventional software used in routine ways. Because both components of the 559- to 564-keV doublet are from 76As, they were integrated together with SUM. The other programs, however, deconvoluted the peaks. A sensitive test of the fitting algorithm is the ratio of the reported peak areas. In almost all cases, this ratio was much more variable than expected from the uncertainties reported by the programs. Other comparisons to be reported indicate that peak integration is still an imperfect tool in the analysis of gamma-ray spectra

  15. A semi-empirical analysis of strong-motion peaks in terms of seismic source, propagation path, and local site conditions

    Science.gov (United States)

    Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.

    1992-09-01

    A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on peak velocity, which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions recorded in Japan. In the derivation, statistical considerations guide the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables, and dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
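
The functional form described (magnitude and distance terms plus per-site dummy variables) can be sketched as an ordinary least-squares fit; the coefficients and data below are invented for illustration, not the paper's regression:

```python
# Sketch: regression of log peak velocity on magnitude, log distance, and
# per-site dummy variables; synthetic data with known coefficients.
import numpy as np

rng = np.random.default_rng(10)
n = 200
M = rng.uniform(5, 8, n)                      # magnitude
r = rng.uniform(10, 200, n)                   # hypocentral distance, km
site = rng.integers(0, 3, n)                  # three local site classes
amp = np.array([0.0, 0.4, 0.8])[site]         # site amplification (log units)
ln_pv = 1.2 * M - 1.0 * np.log(r) + amp + rng.normal(0, 0.3, n)

# design matrix: M, ln r, and one dummy column per site class
X = np.column_stack([M, np.log(r),
                     (site[:, None] == np.arange(3)).astype(float)])
coef, *_ = np.linalg.lstsq(X, ln_pv, rcond=None)
print("recovered coefficients:", coef)   # ~[1.2, -1.0, 0.0, 0.4, 0.8]
```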

  16. Analysis of the diffraction peaks in the ZrCr2 system

    International Nuclear Information System (INIS)

    Quiroga, A.A.; Esquivel, M.R.

    2009-01-01

    In this work, the crystalline structures of Cr and Zr are characterized by X-ray Diffraction (XRD). The diffraction peaks are simulated using a numerical convolution of the Gauss and Lorentz functions. The simulation of the model is verified using empirical measurements of the diffraction peaks. From these results, the microstructure parameters of Zr and Cr are obtained: crystallite size (d) and strain (s). The advances obtained are used in the design of the synthesis of AB₂ intermetallics applied to thermal compression of hydrogen (Tch). (author)
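
    The Gauss-Lorentz convolution used for the peak shapes is a Voigt profile. A minimal numerical check, assuming SciPy is available: convolve the two functions on a grid and compare with scipy.special.voigt_profile (agreement is best near the peak; the truncated Lorentzian tails contribute a small edge error).

```python
import numpy as np
from scipy.special import voigt_profile

sigma, gamma = 0.05, 0.03                # Gaussian sigma, Lorentzian HWHM
x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]

gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
lorentz = gamma / (np.pi * (x**2 + gamma**2))

# numerical Gauss * Lorentz convolution vs. the closed-form Voigt profile
voigt_num = np.convolve(gauss, lorentz, mode="same") * dx
voigt_ref = voigt_profile(x, sigma, gamma)
print("peak-region deviation:",
      np.abs(voigt_num - voigt_ref)[np.abs(x) < 1.0].max())
```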

  17. Potts Model with Invisible Colors : Random-Cluster Representation and Pirogov–Sinai Analysis

    NARCIS (Netherlands)

    Enter, Aernout C.D. van; Iacobelli, Giulio; Taati, Siamak

    We study a recently introduced variant of the ferromagnetic Potts model consisting of a ferromagnetic interaction among q “visible” colors along with the presence of r non-interacting “invisible” colors. We introduce a random-cluster representation for the model, for which we prove the existence of

  18. Combining automated peak tracking in SAR by NMR with structure-based backbone assignment from 15N-NOESY

    KAUST Repository

    Jang, Richard; Gao, Xin; Li, Ming

    2012-01-01

    Background: Chemical shift mapping is an important technique in NMR-based drug screening for identifying the atoms of a target protein that potentially bind to a drug molecule upon the molecule's introduction in increasing concentrations. The goal is to obtain a mapping of peaks with known residue assignment from the reference spectrum of the unbound protein to peaks with unknown assignment in the target spectrum of the bound protein. Although a series of perturbed spectra help to trace a path from reference peaks to target peaks, a one-to-one mapping generally is not possible, especially for large proteins, due to errors such as noise peaks, missing peaks, peaks that vanish and later reappear, overlapping peaks, and new peaks not associated with any peak in the reference. Due to these difficulties, the mapping is typically done manually or semi-automatically, which is not efficient for high-throughput drug screening. Results: We present PeakWalker, a novel peak walking algorithm for fast-exchange systems that models the errors explicitly and performs many-to-one mapping. On the proteins hBclXL, UbcH5B, and histone H1, it achieves an average accuracy of over 95% with less than 1.5 residues predicted per target peak. Given these mappings as input, we present PeakAssigner, a novel combined structure-based backbone resonance and NOE assignment algorithm that uses just 15N-NOESY, while avoiding TOCSY experiments and 13C-labeling, to resolve the ambiguities for a one-to-one mapping. On the three proteins, it achieves an average accuracy of 94% or better. Conclusions: Our mathematical programming approach for modeling chemical shift mapping as a graph problem, while modeling the errors directly, is potentially a time- and cost-effective first step for high-throughput drug screening based on limited NMR data and homologous 3D structures. 2012 Jang et al.; licensee BioMed Central Ltd.
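
    A toy flavour of the mapping step, much simpler than PeakWalker's many-to-one, error-aware model: treat reference-to-target assignment as a one-to-one linear assignment problem on (1H, 15N) chemical-shift distances. All peak positions and the dimension weighting below are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
# synthetic (1H, 15N) reference peaks and slightly perturbed target peaks
ref = rng.uniform([6.0, 105.0], [10.0, 130.0], size=(20, 2))
target = ref + rng.normal(0.0, 0.05, ref.shape)
target = target[rng.permutation(len(target))]   # unknown correspondence

# weight 15N so both dimensions contribute comparably to the distance
w = np.array([1.0, 0.15])
cost = np.linalg.norm((ref[:, None, :] - target[None, :, :]) * w, axis=2)
rows, cols = linear_sum_assignment(cost)        # one-to-one assignment
print("mean matched displacement:", cost[rows, cols].mean())
```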

  20. Peak cladding temperature in a spent fuel storage or transportation cask

    International Nuclear Information System (INIS)

    Li, J.; Murakami, H.; Liu, Y.; Gomez, P.E.A.; Gudipati, M.; Greiner, M.

    2007-01-01

    From reactor discharge to eventual disposition, spent nuclear fuel assemblies from a commercial light water reactor are typically exposed to a variety of environments under which the peak cladding temperature (PCT) is an important parameter that can affect the characteristics and behavior of the cladding and, thus, the functions of the spent fuel during storage, transportation, and disposal. Three models have been identified to calculate the peak cladding temperature of spent fuel assemblies in a storage or transportation cask: a coupled effective thermal conductivity and edge conductance model developed by Manteufel and Todreas, an effective thermal conductivity model developed by Bahney and Lotz, and a computational fluid dynamics model. These models were used to estimate the PCT for spent fuel assemblies for light water reactors under helium, nitrogen, and vacuum environments with varying decay heat loads and temperature boundary conditions. The results show that the vacuum environment is more challenging than the other gas environments in that the PCT limit is exceeded at a lower boundary temperature for a given decay heat load of the spent fuel assembly. This paper will highlight the PCT calculations, including a comparison of the PCTs obtained by different models.

  1. Randomized random walk on a random walk

    International Nuclear Information System (INIS)

    Lee, P.A.

    1983-06-01

    This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)
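
    A Monte Carlo sketch of the symmetric case, under illustrative parameters: build the substrate chain by a random walk, let the superimposed walker take a Poisson-distributed number of steps in time T, and estimate the mean-square displacement along the substrate.

```python
import numpy as np

rng = np.random.default_rng(3)
L, T, walkers = 10_001, 1000.0, 2000

# substrate chain built by a symmetric random walk
substrate = np.cumsum(rng.choice([-1, 1], size=L))
start = L // 2

msd = []
for _ in range(walkers):
    n_steps = rng.poisson(T)                # Poisson-timed steps up to T
    pos = start + rng.choice([-1, 1], size=n_steps).sum()
    pos = int(np.clip(pos, 0, L - 1))
    msd.append((substrate[pos] - substrate[start]) ** 2)

# symmetric substrate: MSD expected to grow like sqrt(T)
print("MSD on substrate at T =", T, ":", np.mean(msd))
```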

  2. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided by Extra-Trees can be given a physically meaningful interpretation.
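
    A minimal sketch of the method using scikit-learn's ExtraTreesRegressor on synthetic rainfall-streamflow data (the case-study data are not public here); the feature importances echo the variable-ranking use described above.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(4)
n = 1000
rain = rng.gamma(2.0, 5.0, n)                 # daily rainfall (mm)
rain_lag = np.roll(rain, 1)                   # previous-day rainfall
temp = rng.normal(15.0, 5.0, n)               # air temperature (deg C)
flow = 0.5 * rain + 0.3 * rain_lag - 0.05 * temp + rng.normal(0, 1, n)

X = np.column_stack([rain, rain_lag, temp])
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, flow)
for name, imp in zip(["rain", "rain_lag", "temp"],
                     model.feature_importances_):
    print(f"{name:9s} importance = {imp:.2f}")
```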

  3. Kinetics of transformations nucleated on random parallel planes: analytical modelling and computer simulation

    International Nuclear Information System (INIS)

    Rios, Paulo R; Assis, Weslley L S; Ribeiro, Tatiana C S; Villa, Elena

    2012-01-01

    In a classical paper, Cahn derived expressions for the kinetics of transformations nucleated on random planes and lines. He used those as a model for nucleation on the boundaries, edges and vertices of a polycrystal consisting of equiaxed grains. In this paper it is demonstrated that Cahn's expression for random planes may be used in situations beyond the scope envisaged in Cahn's original paper. For instance, we derived an expression for the kinetics of transformations nucleated on random parallel planes that is identical to that formerly obtained by Cahn considering random planes. Computer simulation of transformations nucleated on random parallel planes is carried out. It is shown that there is excellent agreement between simulated results and analytical solutions. Such an agreement is to be expected if both the simulation and the analytical solution are correct. (paper)

  4. Economic effects of peak oil

    International Nuclear Information System (INIS)

    Lutz, Christian; Lehr, Ulrike; Wiebe, Kirsten S.

    2012-01-01

    Assuming that global oil production has peaked, this paper uses scenario analysis to show the economic effects of a possible supply shortage and corresponding rise in oil prices in the next decade on different sectors in Germany and other major economies such as the US, Japan, China, the OPEC countries and Russia. Due to the price-inelasticity of oil demand, the supply shortage leads to a sharp increase in oil prices in the second scenario, with effects on GDP comparable in magnitude to the global financial crisis of 2008/09. Oil-exporting countries benefit from high oil prices, whereas oil-importing countries are negatively affected. Generally, the effects in the third scenario are significantly smaller than in the second, showing that energy efficiency measures and the switch to renewable energy sources decrease the countries' dependence on oil imports and hence reduce their vulnerability to oil price shocks on the world market. - Highlights: ► National and sectoral economic effects of peak oil until 2020 are modelled. ► The price elasticity of oil demand is low, resulting in high price fluctuations. ► Oil shortage strongly affects transport and indirectly all other sectors. ► Global macroeconomic effects are comparable to the 2008/2009 crisis. ► Country effects depend on oil imports and productivity, and economic structures.

  5. Multilevel random effect and marginal models

    African Journals Online (AJOL)

    injected by the candidate vaccine have a lower or higher risk for the occurrence of ... outcome relationship and test whether subjects inject- ... contains an agent that resembles a disease-causing ... to have different random effect variability at each cat- ... In the marginal models settings, the responses are ... Behavior as usual.

  6. Gravitational lensing by eigenvalue distributions of random matrix models

    Science.gov (United States)

    Martínez Alonso, Luis; Medina, Elena

    2018-05-01

    We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.

  7. Random resistor network model of minimal conductivity in graphene.

    Science.gov (United States)

    Cheianov, Vadim V; Fal'ko, Vladimir I; Altshuler, Boris L; Aleiner, Igor L

    2007-10-26

    Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.

  8. China's "Exported Carbon" Peak: Patterns, Drivers, and Implications

    Science.gov (United States)

    Mi, Zhifu; Meng, Jing; Green, Fergus; Coffman, D'Maris; Guan, Dabo

    2018-05-01

    Over the past decade, China has entered a "new normal" phase in economic development, with its role in global trade flows changing significantly. This study estimates the driving forces of Chinese export-embodied carbon emissions in the new normal phase, based on environmentally extended multiregional input-output modeling and structural decomposition analysis. We find that Chinese export-embodied CO2 emissions peaked in 2008 at a level of 1,657 million tonnes. The subsequent decline in CO2 emissions was mainly due to the changing structure of Chinese production. The peak in Chinese export-embodied emissions is encouraging from the perspective of global climate change mitigation, as it implies downward pressure on global CO2 emissions. However, more attention should focus on ensuring that countries that may partly replace China as major production bases increase their exports using low-carbon inputs.

  9. Derrida's Generalized Random Energy models; 4, Continuous state branching and coalescents

    CERN Document Server

    Bovier, A

    2003-01-01

    In this paper we conclude our analysis of Derrida's Generalized Random Energy Models (GREM) by identifying the thermodynamic limit with a one-parameter family of probability measures related to a continuous state branching process introduced by Neveu. Using a construction introduced by Bertoin and Le Gall in terms of a coherent family of subordinators related to Neveu's branching process, we show how the Gibbs geometry of the limiting Gibbs measure is given in terms of the genealogy of this process via a deterministic time-change. This construction is fully universal in that all different models (characterized by the covariance of the underlying Gaussian process) differ only through that time change, which in turn is expressed in terms of Parisi's overlap distribution. The proof uses strongly the Ghirlanda-Guerra identities that impose the structure of Neveu's process as the only possible asymptotic random mechanism.

  10. Emissions Scenarios and Fossil-fuel Peaking

    Science.gov (United States)

    Brecha, R.

    2008-12-01

    Intergovernmental Panel on Climate Change (IPCC) emissions scenarios are based on detailed energy system models in which demographics, technology and economics are used to generate projections of future world energy consumption, and therefore, of greenhouse gas emissions. Built into the assumptions for these scenarios are estimates for ultimately recoverable resources of various fossil fuels. There is a growing chorus of critics who believe that the true extent of recoverable fossil resources is much smaller than the amounts taken as a baseline for the IPCC scenarios. In the climate-optimist camp are those who contend that "peak oil" will lead to a switch to renewable energy sources, while others point out that high prices for oil caused by supply limitations could very well lead to a transition to liquid fuels that actually increase total carbon emissions. We examine a third scenario in which high energy prices, which are correlated with increasing infrastructure, exploration and development costs, conspire to limit the potential for making a switch to coal or natural gas for liquid fuels. In addition, the same increasing costs limit the potential for expansion of tar sand and shale oil recovery. In our qualitative model of the energy system, backed by data from short- and medium-term trends, we have a useful way to gain a sense of potential carbon emission bounds. A bound for 21st century emissions is investigated based on two assumptions: first, that extractable fossil-fuel resources follow the trends assumed by "peak oil" adherents, and second, that little is done in the way of climate mitigation policies. If resources, and perhaps more importantly, extraction rates, of fossil fuels are limited compared to assumptions in the emissions scenarios, a situation can arise in which emissions are supply-driven. However, we show that even in this "peak fossil-fuel" limit, carbon emissions are high enough to surpass 550 ppm or 2°C climate protection guardrails.

  11. Random broadcast on random geometric graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Elsasser, Robert [UNIV OF PADERBORN; Friedrich, Tobias [ICSI/BERKELEY; Sauerwald, Tomas [ICSI/BERKELEY

    2009-01-01

    In this work, we consider the random broadcast time on random geometric graphs (RGGs). The classic random broadcast model, also known as the push algorithm, is defined as follows: starting with one informed node, in each succeeding round every informed node chooses one of its neighbors uniformly at random and informs it. We consider the random broadcast time on RGGs in two regimes that hold with high probability: (i) the RGG is connected; (ii) the RGG contains a giant component. We show that the random broadcast time is bounded by O(√n + diam(component)), where diam(component) is the diameter of the entire graph or of the giant component, for regimes (i) and (ii), respectively. In other words, for both regimes, we derive the broadcast time to be Θ(diam(G)), which is asymptotically optimal.
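
    The push protocol is easy to simulate. A sketch using networkx, with the radius chosen above the usual connectivity threshold (falling back to the giant component otherwise); parameters are illustrative.

```python
import math
import random
import networkx as nx

random.seed(5)
n = 2000
r = math.sqrt(2.0 * math.log(n) / n)      # above the connectivity threshold
G = nx.random_geometric_graph(n, r, seed=5)
if not nx.is_connected(G):                # fall back to the giant component
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    G = nx.convert_node_labels_to_integers(G)

informed, rounds = {0}, 0
while len(informed) < G.number_of_nodes():
    # every informed node pushes to one uniformly random neighbor
    new = {random.choice(list(G[u])) for u in informed if G[u]}
    informed |= new
    rounds += 1
print(f"push broadcast: {rounds} rounds; diameter = {nx.diameter(G)}")
```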

  12. Using observation-level random effects to model overdispersion in count data in ecology and evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2014-10-01

    Overdispersion is common in models of count data in ecology and evolutionary biology, and can occur due to missing covariates, non-independent (aggregated) data, or an excess frequency of zeroes (zero-inflation). Accounting for overdispersion in such models is vital, as failing to do so can lead to biased parameter estimates, and false conclusions regarding hypotheses of interest. Observation-level random effects (OLRE), where each data point receives a unique level of a random effect that models the extra-Poisson variation present in the data, are commonly employed to cope with overdispersion in count data. However, studies investigating the efficacy of observation-level random effects as a means to deal with overdispersion are scarce. Here I use simulations to show that in cases where overdispersion is caused by random extra-Poisson noise, or aggregation in the count data, observation-level random effects yield more accurate parameter estimates compared to when overdispersion is simply ignored. Conversely, OLRE fail to reduce bias in zero-inflated data, and in some cases increase bias at high levels of overdispersion. There was a positive relationship between the magnitude of overdispersion and the degree of bias in parameter estimates. Critically, the simulations reveal that failing to account for overdispersion in mixed models can erroneously inflate measures of explained variance (r²), which may lead to researchers overestimating the predictive power of variables of interest. This work suggests that use of observation-level random effects provides a simple and robust means to account for overdispersion in count data, but also that their ability to minimise bias is not uniform across all types of overdispersion and must be applied judiciously.
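
    A minimal simulation of the data-generating process an OLRE targets: Poisson counts with an observation-level normal random effect on the log scale, which inflates the variance well above the mean. (Fitting an OLRE model would typically be done with glmer in R; only the simulation is sketched here.)

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
x = rng.normal(size=n)

# Poisson-lognormal counts: an observation-level N(0, 0.7^2) random
# effect on the log scale creates extra-Poisson variation
eta = 0.2 + 0.5 * x + rng.normal(0.0, 0.7, n)
y = rng.poisson(np.exp(eta))

print(f"mean = {y.mean():.2f}, variance = {y.var():.2f}")  # var >> mean
```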

  13. The shifting nature of vegetation controls on peak snowpack with varying slope and aspect

    Science.gov (United States)

    Biederman, J. A.; Harpold, A. A.; Broxton, P. D.; Brooks, P. D.

    2012-12-01

    The controls on peak seasonal snowpack are known to shift between forested and open environments as well as with slope and aspect. Peak snowpack is predicted well by interception models under uniformly dense canopy, while topography, wind and radiation are strong predictors in open areas. However, many basins have complex mosaics of forest canopy and small gaps, where snowpack controls involve complex interactions among climate, topography and forest structure. In this presentation we use a new fully distributed tree-scale model to investigate vegetation controls on snowpack for a range of slope and aspect, and we evaluate the energy balance in forest canopy and gap environments. The model is informed by airborne LiDAR and ground-based observations of climate, vegetation and snowpack. It represents interception, snow distribution by wind, latent and sensible heat fluxes, and radiative fluxes above and below the canopy on a 1 m square grid at an hourly time step. First, the model is minimally calibrated using continuous records of snow depth and snow water equivalent (SWE). Next, the model is evaluated using distributed observations at peak accumulation. Finally, the domain is synthetically altered to introduce ranges of slope and aspect. Northerly aspects accumulate greater peak SWE than southerly aspects (e.g. 275 mm vs. 250 mm at a slope of 28%) but show lower spatial variability (e.g. CV = 0.14 vs. CV = 0.17 at a slope of 28%). On northerly aspects, most of the snowpack remains shaded by vegetation, whereas on southerly aspects the northern portions of gaps and southern forest edges receive direct insolation during late winter. This difference in net radiation makes peak SWE in forest gaps and adjacent forest edges more sensitive to topography than SWE in areas under dense canopy. Tree-scale modeling of snow dynamics over synthetic terrain offers extensive possibilities to test interactions among vegetation and topographic controls.

  14. Partitioning into hazard subregions for regional peaks-over-threshold modeling of heavy precipitation

    Science.gov (United States)

    Carreau, J.; Naveau, P.; Neppel, L.

    2017-05-01

    The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
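
    The core of the peaks-over-threshold step is fitting a GP distribution to threshold exceedances. A sketch with scipy.stats.genpareto on synthetic precipitation, including a return-level calculation; the threshold quantile and return period are illustrative choices, and the paper's regional mixture machinery is not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)
precip = rng.gamma(0.4, 12.0, size=20_000)     # synthetic daily rain (mm)

u = np.quantile(precip, 0.95)                  # threshold
exc = precip[precip > u] - u                   # exceedances

shape, _, scale = genpareto.fit(exc, floc=0)   # location fixed at 0
p_exc = exc.size / precip.size                 # exceedance probability

m = 1000                                       # return period (observations)
ret = u + genpareto.ppf(1.0 - 1.0 / (m * p_exc), shape, scale=scale)
print(f"u = {u:.1f} mm, shape = {shape:.3f}, "
      f"{m}-obs return level = {ret:.1f} mm")
```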

  15. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  16. Hubbert's Peak -- A Physicist's View

    Science.gov (United States)

    McDonald, Richard

    2011-04-01

    Oil, as used in agriculture and transportation, is the lifeblood of modern society. It is finite in quantity and will someday be exhausted. In 1956, Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Bartlett extended this work in publications and lectures on the finite nature of oil and its production peak and depletion. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. Central to these analyses are estimates of total "oil in place" obtained from engineering studies of oil reservoirs, as this quantity determines the area under Hubbert's Peak. Knowing the production history and the total oil in place allows us to make estimates of reserves, and therefore future oil availability. We will then examine reserves data for various countries, in particular OPEC countries, and see if these data tell us anything about the future availability of oil. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
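
    The Hubbert curve is the time derivative of a logistic cumulative-production curve, so peak timing and ultimate recovery can be estimated by nonlinear least squares. A sketch on synthetic data (the true parameters are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, q_total, k, tm):
    """Hubbert curve: derivative of a logistic cumulative-production
    curve with ultimate recovery q_total, steepness k, peak year tm."""
    e = np.exp(-k * (t - tm))
    return q_total * k * e / (1.0 + e) ** 2

rng = np.random.default_rng(8)
t = np.arange(1900.0, 2011.0)
prod = hubbert(t, 220.0, 0.07, 1970.0) * (1 + rng.normal(0, 0.05, t.size))

popt, _ = curve_fit(hubbert, t, prod, p0=[200.0, 0.05, 1965.0])
print("fitted (Q_total, k, peak year):", np.round(popt, 2))
```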

  17. The geomorphic structure of the runoff peak

    Directory of Open Access Journals (Sweden)

    R. Rigon

    2011-06-01

    This paper develops a theoretical framework to investigate the core dependence of peak flows on the geomorphic properties of river basins. Based on the theory of transport by travel times, and a simple hydrodynamic characterization of floods, this new framework invokes the linearity and invariance of the hydrologic response to provide analytical and semi-analytical expressions for peak flow, time to peak, and area contributing to the peak runoff. These results are obtained for the case of a constant-intensity hyetograph using Intensity-Duration-Frequency (IDF) curves to estimate extreme flow values as a function of the rainfall return period. Results show that, with constant-intensity hyetographs, the time to peak is greater than the rainfall duration and usually shorter than the basin concentration time. Moreover, the critical storm duration is shown to be independent of the rainfall return period, as is the area contributing to the flow peak. The same results are found when the effects of hydrodynamic dispersion are accounted for. Further, it is shown that, when the effects of hydrodynamic dispersion are negligible, the basin area contributing to the peak discharge does not depend on the channel velocity, but is a geomorphic property of the basin. As an example, this framework is applied to three watersheds. In particular, the runoff peak, the critical rainfall durations and the time to peak are calculated for all links within a network to assess how they increase with basin area.

  18. Phase structure of the O(n) model on a random lattice for n > 2

    DEFF Research Database (Denmark)

    Durhuus, B.; Kristjansen, C.

    1997-01-01

    We show that coarse graining arguments invented for the analysis of multi-spin systems on a randomly triangulated surface apply also to the O(n) model on a random lattice. These arguments imply that if the model has a critical point with diverging string susceptibility, then either γ = +1/2 or there exists a dual critical point with negative string susceptibility exponent, γ̃, related to γ by γ = γ̃/(γ̃ − 1). Exploiting the exact solution of the O(n) model on a random lattice, we show that both situations are realized for n > 2 and that the possible dual pairs of string susceptibility exponents are given by (γ̃, γ) = (−1/m, 1/(m+1)), m = 2, 3, . . . We also show that at the critical points with positive string susceptibility exponent the average number of loops on the surface diverges while the average length of a single loop stays finite.

  19. Study of RNA structures with a connection to random matrix theory

    International Nuclear Information System (INIS)

    Bhadola, Pradeep; Deo, Nivedita

    2015-01-01

    This manuscript investigates the level of complexity and the thermodynamic properties of real RNA structures and compares them with those of random RNA sequences. A discussion of the similarities between the thermodynamic properties of the real structures and the nonlinear random matrix model of RNA folding is presented. The structural information contained in the PDB file is exploited to get the base pairing information. The complexity of an RNA structure is defined by a topological quantity called genus, which is calculated from the base pairing information. Thermodynamic analysis of the real structures is done numerically. The real structures have a minimum free energy which is very small compared to that of randomly generated sequences of the same length. This analysis suggests that there are specific patterns in the structures which are preserved during the evolution of the sequences and that certain sequences are discarded by the evolutionary process. Further analysis of sequences of a fixed length reveals that the RNA structures exist in ensembles, i.e. although all the sequences in the ensemble have different series of nucleotides (sequence), they fold into structures that have the same pairs of hydrogen bonding as well as the same minimum free energy. The specific heat of the RNA molecule is numerically estimated at different lengths. The specific heat curve with temperature shows a bump, and for some RNAs a double-peak behavior is observed. The same behavior is seen in the study of the random matrix model with nonlinear interaction of RNA folding. The bump in the nonlinear matrix model can be controlled by the change in the interaction strength.

  20. Climate-related variation in plant peak biomass and growth phenology across Pacific Northwest tidal marshes

    Science.gov (United States)

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.

    2018-03-01

    The interannual variability of tidal marsh plant phenology is largely unknown and may have important ecological consequences. Marsh plants are critical to the biogeomorphic feedback processes that build estuarine soils, maintain marsh elevation relative to sea level, and sequester carbon. We calculated Tasseled Cap Greenness, a metric of plant biomass, using remotely sensed data available in the Landsat archive to assess how recent climate variation has affected biomass production and plant phenology across three maritime tidal marshes in the Pacific Northwest of the United States. First, we used clipped vegetation plots at one of our sites to confirm that Tasseled Cap Greenness provided a useful measure of aboveground biomass (r² = 0.72). We then used multiple measures of biomass each growing season over 20-25 years per study site and developed models to test how peak biomass and the date of peak biomass varied with 94 climate and sea-level metrics using generalized linear models and Akaike Information Criterion (AIC) model selection. Peak biomass was positively related to total annual precipitation, while the best predictor for date of peak biomass was average growing-season temperature, with the peak occurring 7.2 days earlier per degree Celsius. Our study provides insight into how plants in maritime tidal marshes respond to interannual climate variation and demonstrates the utility of time-series remote sensing data to assess ecological responses to climate stressors.

  1. Mineral Resources: Reserves, Peak Production and the Future

    Directory of Open Access Journals (Sweden)

    Lawrence D. Meinert

    2016-02-01

    The adequacy of mineral resources in light of population growth and rising standards of living has been a concern since the time of Malthus (1798), but many studies erroneously forecast impending peak production or exhaustion because they confuse reserves with “all there is”. Reserves are formally defined as a subset of resources, and even current and potential resources are only a small subset of “all there is”. Peak production or exhaustion cannot be modeled accurately from reserves. Using copper as an example, identified resources are twice as large as the amount projected to be needed through 2050. Estimates of yet-to-be-discovered copper resources are up to 40 times larger than currently identified resources, amounts that could last for many centuries. Thus, forecasts of imminent peak production due to resource exhaustion in the next 20-30 years are not valid. Short-term supply problems may arise, however, and supply-chain disruptions are possible at any time due to natural disasters (earthquakes, tsunamis, hurricanes) or political complications. Needed to resolve these problems are education and exploration technology development, access to prospective terrain, better recycling, and better accounting of the externalities associated with production (pollution, loss of ecosystem services, and water and energy use).

  2. Splitting of the excitonic peak in quantum wells with interfacial roughness

    International Nuclear Information System (INIS)

    Castella, H.; Wilkins, J.W.

    1998-01-01

    Excitons in a quantum well depend on the interfacial roughness resulting from its growth. The interface is characterized by islands of size ξ separated by one-monolayer steps across which the confining potential decreases by V₀ for wider wells. A natural length is the localization length ξ₀ = πℏ/√(2MV₀) characterizing the minimum island size needed to confine an exciton. For small islands (ξ < ξ₀), the absorption spectrum has a single exciton peak. As the island size ξ exceeds the localization length ξ₀, the peak gradually splits into a doublet. Generally the spectra exhibit the following features: (1) the shape is very sensitive to ξ/ξ₀ and depends only weakly on the ratio of island size to exciton radius; (2) in the small-island regime (ξ < ξ₀), the asymmetric shape of the exciton peak is correctly described by a model of white-noise potential, except for the position of the peak, which still depends on the correlation length of the disorder. copyright 1998 The American Physical Society

  3. Logistic curves, extraction costs and effective peak oil

    International Nuclear Information System (INIS)

    Brecha, Robert J.

    2012-01-01

    Debates about the possibility of a near-term maximum in world oil production have become increasingly prominent over the past decade, with the focus often being on the quantification of geologically available and technologically recoverable amounts of oil in the ground. Economically, the important parameter is not a physical limit to resources in the ground, but whether market price signals and costs of extraction will indicate the efficiency of extracting conventional or nonconventional resources as opposed to making substitutions over time for other fuels and technologies. We present a hybrid approach to the peak-oil question with two models in which the use of logistic curves for cumulative production are supplemented with data on projected extraction costs and historical rates of capacity increase. While not denying the presence of large quantities of oil in the ground, even with foresight, rates of production of new nonconventional resources are unlikely to be sufficient to make up for declines in availability of conventional oil. Furthermore we show how the logistic-curve approach helps to naturally explain high oil prices even when there are significant quantities of low-cost oil yet to be extracted. - Highlights: ► Extraction cost information together with logistic curves to model oil extraction. ► Two models of extraction sequence for different oil resources. ► Importance of time-delay and extraction rate limits for new resources. ► Model results qualitatively reproduce observed extraction cost dynamics. ► Confirmation of “effective” peak oil, even though resources are in ground.

  4. Random defect lines in conformal minimal models

    International Nuclear Information System (INIS)

    Jeng, M.; Ludwig, A.W.W.

    2001-01-01

    We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ₁₂ operator), etc. We find that for the Ising model, the defect renormalizes to two decoupled half-planes without disorder, but that for all other models, the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the mth Virasoro minimal model. The decay exponents X_N = (N/2)[1 − 9(3N − 4)/(4(m+1)²)] + O((3/(m+1))³) of the Nth moment of the two-point function of φ₁₂ along the defect are obtained to 2-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)²] + O((3/(m+1))³). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m³. As a byproduct of our calculations, we also obtain to 2-loop order the exponent X̃_N = N[1 − (2/(9π²))(3N − 4)(q − 2)²] + O((q − 2)³) of the Nth moment of the energy operator in the q-state Potts model with bulk bond disorder

  5. Phase transitions in the random field Ising model in the presence of a transverse field

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, A.; Chakrabarti, B.K. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Stinchcombe, R.B. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Department of Physics, Oxford (United Kingdom)

    1996-09-07

    We have studied the phase transition behaviour of the random field Ising model in the presence of a transverse (or tunnelling) field. The mean field phase diagram has been studied in detail, and in particular the nature of the transition induced by the tunnelling (transverse) field at zero temperature. Modified hyper-scaling relation for the zero-temperature transition has been derived using the Suzuki-Trotter formalism and a modified 'Harris criterion'. Mapping of the model to a randomly diluted antiferromagnetic Ising model in uniform longitudinal and transverse field is also given. (author)

  6. Peak Oil profiles through the lens of a general equilibrium assessment

    International Nuclear Information System (INIS)

    Waisman, Henri; Rozenberg, Julie; Sassi, Olivier; Hourcade, Jean-Charles

    2012-01-01

    This paper disentangles the interactions between oil production profiles, the dynamics of oil prices and growth trends. We do so through a general equilibrium model in which Peak Oil endogenously emerges from the interplay between the geological, technical, macroeconomic and geopolitical determinants of supply and demand under non-perfect expectations. We analyze the macroeconomic effects of oil production profiles and demonstrate that Peak Oil dates that differ only slightly may lead to very different time profiles of oil prices, exportation flows and economic activity. We investigate the Middle East's trade-off between different pricing trajectories as a function of two alternative objectives (maximisation of oil revenues or households' welfare) and assess its impact on OECD growth trajectories. A sensitivity analysis highlights the respective roles of the amount of resources, inertia in the deployment of nonconventional oil and short-term oil price dynamics on Peak Oil dates and long-term oil prices. It also examines the effects of these assumptions on OECD growth and Middle East strategic tradeoffs. - Highlights: ► Geological determinants behind Hubbert curves in a general equilibrium framework. ► We endogenize the interactions between Peak Oil dates, oil prices and growth trends. ► Close Peak Oil dates lead to different trends of oil prices, exportation and growth. ► Low short-term prices benefit the long-term macroeconomy of oil exporters. ► High short-term prices hedge oil importers against economic tensions after Peak Oil.

  7. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

    The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists. Therefore, fast and high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff modelling using a genetic algorithm (GA) with a newly designed hydrological random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality number generator. The new HRNG generates random numbers based on hydrological information and provides better-quality numbers than pure software generators. The GA enhances the model calibration considerably, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement in rainfall-runoff modelling.
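
    A toy version of GA-based calibration, with an ordinary pseudo-random generator standing in for the paper's HRNG: tournament selection plus Gaussian mutation calibrates a two-parameter linear-reservoir model against synthetic observations. The model, parameters and GA settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate(params, rain):
    """Toy linear-reservoir model: a fraction c of rainfall enters
    storage S, and runoff k*S drains it each time step."""
    c, k = params
    S, flow = 0.0, np.empty_like(rain)
    for i, p in enumerate(rain):
        S += c * p
        flow[i] = k * S
        S -= flow[i]
    return flow

rain = rng.gamma(2.0, 3.0, 365)
observed = simulate((0.6, 0.2), rain) + rng.normal(0.0, 0.05, rain.size)

def fitness(pop):                      # negative mean squared error
    return np.array([-np.mean((simulate(p, rain) - observed) ** 2)
                     for p in pop])

pop = rng.uniform(0.0, 1.0, size=(40, 2))
for _ in range(60):                    # tournament selection + mutation
    f = fitness(pop)
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    winners = np.where((f[idx[:, 0]] > f[idx[:, 1]])[:, None],
                       pop[idx[:, 0]], pop[idx[:, 1]])
    pop = np.clip(winners + rng.normal(0.0, 0.02, winners.shape), 0.0, 1.0)

print("calibrated (c, k):", np.round(pop[np.argmax(fitness(pop))], 3))
```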

  8. Theoretical basis to measure the impact of short-lasting control of an infectious disease on the epidemic peak

    Directory of Open Access Journals (Sweden)

    Omori Ryosuke

    2011-01-01

    Background: While many pandemic preparedness plans have promoted disease control efforts to lower and delay an epidemic peak, analytical methods for determining the required control effort and making statistical inferences have yet to be sought. As a first step to address this issue, we present a theoretical basis on which to assess the impact of an early intervention on the epidemic peak, employing a simple epidemic model. Methods: We focus on estimating the impact of an early control effort (e.g. unsuccessful containment), assuming that the transmission rate abruptly increases when control is discontinued. We provide analytical expressions for the magnitude and time of the epidemic peak, employing approximate logistic and logarithmic-form solutions for the latter. Empirical influenza data (H1N1-2009) from Japan are analyzed to estimate the effect of the summer holiday period in lowering and delaying the peak in 2009. Results: Our model estimates that the epidemic peak of the 2009 pandemic was delayed for 21 days due to the summer holiday. The decline in the peak appears to be a nonlinear function of the control-associated reduction in the reproduction number. Peak delay is shown to depend critically on the fraction of initially immune individuals. Conclusions: The proposed modeling approaches offer methodological avenues to assess empirical data and to objectively estimate the required control effort to lower and delay an epidemic peak. Analytical findings support a critical need to conduct a population-wide serological survey as a prior requirement for estimating the time of the peak.
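
    The qualitative result is easy to reproduce with a standard SIR model in which transmission is reduced during an early control window and reverts when control is discontinued. A sketch with illustrative rates (R₀ = 3, 40% reduction for 60 days), not the paper's analytical expressions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta0, gamma, t_control, reduction):
    """SIR dynamics; transmission is cut by `reduction` until control
    is discontinued at t_control, then reverts to beta0."""
    S, I, _ = y
    beta = beta0 * (1.0 - reduction) if t < t_control else beta0
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

y0 = [0.999, 0.001, 0.0]
t = np.linspace(0.0, 300.0, 3001)
for red in (0.0, 0.4):                 # no control vs. 40% for 60 days
    sol = solve_ivp(sir, (0.0, 300.0), y0, t_eval=t,
                    args=(0.3, 0.1, 60.0, red), max_step=0.5)
    k = sol.y[1].argmax()
    print(f"reduction {red:.0%}: peak I = {sol.y[1][k]:.3f} "
          f"on day {t[k]:.0f}")
```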

  9. Simulating double-peak hydrographs from single storms over mixed-use watersheds

    Science.gov (United States)

    Yang Yang; Theodore A. Endreny; David J. Nowak

    2015-01-01

    Two-peak hydrographs after a single rain event are observed in watersheds and storms with distinct volumes contributing as fast and slow runoff. The authors developed a hydrograph model able to quantify these separate runoff volumes to help in estimation of runoff processes and residence times used by watershed managers. The model uses parallel application of two...

  10. Runners with Patellofemoral Pain Exhibit Greater Peak Patella Cartilage Stress Compared to Pain-Free Runners.

    Science.gov (United States)

    Liao, Tzu-Chieh; Keyak, Joyce H; Powers, Christopher M

    2018-02-27

    The purpose of this study was to determine whether recreational runners with patellofemoral pain (PFP) exhibit greater peak patella cartilage stress compared to pain-free runners. A secondary purpose was to determine the kinematic and/or kinetic predictors of peak patella cartilage stress during running. Twenty-two female recreational runners participated (12 with PFP and 10 pain-free controls). Patella cartilage stress profiles were quantified using subject-specific finite element models simulating the maximum knee flexion angle during the stance phase of running. Input parameters to the finite element model included subject-specific patellofemoral joint geometry, quadriceps muscle forces, and lower extremity kinematics in the frontal and transverse planes. Tibiofemoral joint kinematics and kinetics were quantified to determine the best predictor of stress using stepwise regression analysis. Compared to the pain-free runners, those with PFP exhibited greater peak hydrostatic pressure (PFP vs. control: 21.2 ± 5.6 MPa vs. 16.5 ± 4.6 MPa) and maximum shear stress (11.3 ± 4.6 MPa vs. 8.7 ± 2.3 MPa). Knee external rotation was the best predictor of peak hydrostatic pressure and peak maximum shear stress (38% and 25% of variance, respectively), followed by the knee extensor moment (21% and 25% of variance, respectively). Runners with PFP exhibit greater peak patella cartilage stress during running compared to pain-free individuals. The combination of knee external rotation and a high knee extensor moment best predicted elevated peak stress during running.

  11. A Random-Walk-Model for heavy metal particles in natural waters; Ein Random-Walk-Modell fuer Schwermetallpartikel in natuerlichen Gewaessern

    Energy Technology Data Exchange (ETDEWEB)

    Wollschlaeger, A.

    1996-12-31

    The presented particle tracking model is for the numerical calculation of heavy metal transport in natural waters. The Navier-Stokes equations are solved with the Finite Element Method. The advective movement of the particles is interpolated from the velocities on the discrete mesh. The influence of turbulence is simulated with a random-walk model in which particles are distributed according to a given probability function. Both parts are added and lead to the new particle position. The characteristics of the heavy metals are assigned to the particles as attributes. Dissolved heavy metals are transported only by the flow. Heavy metals which are bound to particulate matter have an additional settling velocity. The sorption and remobilization processes are approximated through a probability law which maintains the proportionality ratio between dissolved heavy metals and those bound to particulate matter. At the bed, heavy metals bound to particulate matter are subject to deposition and erosion processes. The model treats these processes by considering the absorption intensity of the heavy metals to the bottom sediments. Calculations of the Weser estuary show that the particle tracking model allows the simulation of heavy metal behaviour even under complex flow conditions. (orig.)
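
    The advection-plus-random-walk update described above can be sketched in a few lines. The sketch below uses a uniform flow field, a constant diffusivity, and a fixed sorbed fraction with settling, all illustrative simplifications of the model's FEM velocities and probabilistic sorption law.

```python
import numpy as np

rng = np.random.default_rng(10)
n, dt, steps = 5000, 10.0, 500
D = 0.5                                 # diffusivity (m^2/s), illustrative
u, v = 0.3, 0.05                        # uniform flow velocity (m/s)
w_s = 1e-3                              # settling velocity of bound metal

x = np.zeros(n); y = np.zeros(n); z = np.full(n, 5.0)  # 5 m water depth
bound = rng.random(n) < 0.4             # fixed sorbed fraction (no kinetics)

for _ in range(steps):
    # advection plus random-walk diffusion step, sqrt(2*D*dt) per axis
    x += u * dt + rng.normal(0.0, np.sqrt(2 * D * dt), n)
    y += v * dt + rng.normal(0.0, np.sqrt(2 * D * dt), n)
    z -= np.where(bound, w_s * dt, 0.0)  # only particle-bound metal settles
    z = np.clip(z, 0.0, 5.0)             # z = 0 is the bed (deposition)

print("deposited fraction:", float(np.mean(z == 0.0)))
```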

  12. Accounting for perception in random regret choice models: Weberian and generalized Weberian specifications

    NARCIS (Netherlands)

    Jang, S.; Rasouli, S.; Timmermans, H.J.P.

    2016-01-01

    Recently, regret-based choice models have been introduced in the travel behavior research community as an alternative to expected/random utility models. The fundamental proposition underlying regret theory is that individuals minimize the amount of regret they (are expected to) experience when

  13. Experimental Investigation of the Peak Shear Strength Criterion Based on Three-Dimensional Surface Description

    Science.gov (United States)

    Liu, Quansheng; Tian, Yongchao; Ji, Peiqi; Ma, Hao

    2018-04-01

    The three-dimensional (3D) morphology of joints is enormously important for the shear mechanical properties of rock. In this study, three-dimensional morphology scanning tests and direct shear tests are conducted to establish a new peak shear strength criterion. The test results show that (1) surface morphology and normal stress exert significant effects on peak shear strength and the distribution of the damage area, and (2) the damage area is located at the steepest zone facing the shear direction; as the normal stress increases, it extends from the steepest zone toward less steep zones. Via mechanical analysis, a new formula for the apparent dip angle is developed. The influence of the apparent dip angle and the average joint height on the potential contact area is discussed in turn. A new peak shear strength criterion, mainly applicable to specimens under compression, is established by using new roughness parameters and taking the effects of normal stress and the rock mechanical properties into account. A comparison of this newly established model with the JRC-JCS model and Grasselli's model shows that the new criterion clearly improves the fit. Compared with earlier models, the new model is simpler and more precise. All the parameters in the new model have clear physical meanings and can be determined directly from the scanned data. In addition, the indexes used in the new model are more rational.

  14. Passive radio frequency peak power multiplier

    Science.gov (United States)

    Farkas, Zoltan D.; Wilson, Perry B.

    1977-01-01

    Peak power multiplication of a radio frequency source by simultaneous charging of two high-Q resonant microwave cavities by applying the source output through a directional coupler to the cavities and then reversing the phase of the source power to the coupler, thereby permitting the power in the cavities to simultaneously discharge through the coupler to the load in combination with power from the source to apply a peak power to the load that is a multiplication of the source peak power.

  15. Effect of high-intensity training versus moderate training on peak oxygen uptake and chronotropic response in heart transplant recipients

    DEFF Research Database (Denmark)

    Dall, C H; Snoer, M; Christensen, S

    2014-01-01

    In heart transplant (HTx) recipients, there has been reluctance to recommend high-intensity interval training (HIIT) due to denervation and chronotropic impairment of the heart. We compared the effects of 12 weeks' HIIT versus continued moderate exercise (CON) on exercise capacity and chronotropic response in stable HTx recipients >12 months after transplantation in a randomized crossover trial. The study was completed by 16 HTx recipients (mean age 52 years, 75% males). Baseline peak oxygen uptake (VO2peak) was 22.9 mL/kg/min. HIIT increased VO2peak by 4.9 ± 2.7 mL/kg/min (17%) and CON by 2.6 ± 2.2 mL/kg/min (10%), the increase being significantly higher after HIIT. Following HIIT, systolic blood pressure decreased significantly (p = 0.037), with no significant change in CON (p = 0.241; between-group difference p = 0.027). Peak heart rate (HRpeak) increased significantly by 4.3 beats per minute after HIIT (p = 0…

  16. Peak load arrangements : Assessment of Nordel guidelines

    Energy Technology Data Exchange (ETDEWEB)

    2009-07-01

    Two Nordic countries, Sweden and Finland, have legislation that empowers the TSO to acquire designated peak load resources to mitigate the risk for shortage situations during the winter. In Denmark, the system operator procures resources to maintain a satisfactory level of security of supply. In Norway the TSO has set up a Regulation Power Option Market (RKOM) to secure a satisfactory level of operational reserves at all times, also in winter with high load demand. Only the arrangements in Finland and Sweden fall under the heading of Peak Load Arrangements defined in Nordel Guidelines. NordREG has been invited by the Electricity Market Group (EMG) to evaluate Nordel's proposal for 'Guidelines for transitional Peak Load Arrangements'. The EMG has also financed a study made by EC Group to support NordREG in the evaluation of the proposal. The study has been taken into account in NordREG's evaluation. In parallel to the EMG task, the Swedish regulator, the Energy Markets Inspectorate, has been given the task by the Swedish government to investigate a long term solution of the peak load issue. The Swedish and Finnish TSOs have together with Nord Pool Spot worked on finding a harmonized solution for activation of the peak load reserves in the market. An agreement accepted by the relevant authorities was reached in early January 2009, and the arrangement has been implemented since 19th January 2009. NordREG views that the proposed Nordel guidelines have served as a starting point for the presently agreed procedure. However, NordREG does not see any need to further develop the Nordel guidelines for peak load arrangements. NordREG agrees with Nordel that the market should be designed to solve peak load problems through proper incentives to market players. NordREG presumes that the relevant authorities in each country will take decisions on the need for any peak load arrangement to ensure security of supply. NordREG proposes that such decisions should be

  17. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Vol. 20, No. 2 (2009), pp. 121-131. ISSN 1471-678X. R&D Projects: GA AV ČR(CZ) IAA101120604. Institutional research plan: CEZ:AV0Z10750506. Keywords: sport statistics * scoring intensity * Cox's regression model. Subject RIV: BB - Applied Statistics, Operational Research. http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
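
    Only the bibliographic metadata survives here, but the model class is clear from the keywords: goals arrive as a point process with a time-varying scoring intensity. A minimal sketch, assuming a nonhomogeneous Poisson process with a hypothetical intensity function (not the Cox-regression specification of the paper), sampled by Lewis-Shedler thinning:

```python
import numpy as np

rng = np.random.default_rng(42)

def intensity(t):
    # hypothetical goals-per-minute rate, rising toward the end of each half
    return 0.02 + 0.0003 * (t % 45)

t_max, lam_max = 90.0, 0.04        # match length (min); upper bound on the intensity

def sample_goal_times():
    goals, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)         # candidate event at bounding rate
        if t > t_max:
            return goals
        if rng.uniform() < intensity(t) / lam_max:  # thinning step
            goals.append(t)

scores = [len(sample_goal_times()) for _ in range(10_000)]
print("mean goals per match:", np.mean(scores))
```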

  18. Peak capacity, peak-capacity production rate, and boiling point resolution for temperature-programmed GC with very high programming rates

    Science.gov (United States)

    Grall; Leonard; Sacks

    2000-02-01

    Recent advances in column heating technology have made possible very fast linear temperature programming for high-speed gas chromatography. A fused-silica capillary column is contained in a tubular metal jacket, which is resistively heated by a precision power supply. With very rapid column heating, the rate of peak-capacity production is significantly enhanced, but the total peak capacity and the boiling-point resolution (the minimum boiling-point difference required for the separation of two nonpolar compounds on a nonpolar column) are reduced relative to the more conventional heating rates used with convection-oven instruments. As temperature-programming rates increase, elution temperatures also increase, with the result that retention may become insignificant prior to elution. This results in inefficient utilization of the downstream end of the column and causes a loss in the rate of peak-capacity production. The rate of peak-capacity production is increased by the use of shorter columns and higher carrier-gas velocities. With high programming rates (100-600 degrees C/min), column lengths of 6-12 m and average linear carrier-gas velocities in the 100-150 cm/s range are satisfactory. In this study, the rate of peak-capacity production, the total peak capacity, and the boiling-point resolution are determined for C10-C28 n-alkanes using 6-18 m long columns, 50-200 cm/s average carrier-gas velocities, and 60-600 degrees C/min programming rates. It was found that with a 6-m-long, 0.25-mm i.d. column programmed at a rate of 600 degrees C/min, a maximum peak-capacity production rate of 6.1 peaks/s was obtained. A total peak capacity of about 75 peaks was produced in a 37-s separation spanning a boiling-point range from n-C10 (174 degrees C) to n-C28 (432 degrees C).
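
    As a quick consistency check on the figures above: 75 peaks in a 37-s separation implies an average production rate of about 2 peaks/s, well below the quoted instantaneous maximum of 6.1 peaks/s. A minimal sketch, assuming the usual definition of peak capacity as separation time divided by the mean base peak width:

```python
total_peak_capacity = 75          # peaks, from the abstract
run_time = 37.0                   # s, from the abstract

avg_rate = total_peak_capacity / run_time
mean_peak_width = run_time / total_peak_capacity   # assumed definition

print(f"average production rate ~ {avg_rate:.1f} peaks/s "
      f"(quoted instantaneous maximum: 6.1 peaks/s)")
print(f"implied mean base peak width ~ {mean_peak_width * 1000:.0f} ms")
```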

  19. Learning of couplings for random asymmetric kinetic Ising models revisited: random correlation matrices and learning curves

    International Nuclear Information System (INIS)

    Bachschmid-Romano, Ludovica; Opper, Manfred

    2015-01-01

    We study analytically the performance of a recently proposed algorithm for learning the couplings of a random asymmetric kinetic Ising model from finite-length trajectories of the spin dynamics. Our analysis shows the importance of the nontrivial equal-time correlations between spins induced by the dynamics for the speed of learning. These correlations become more important as the spins' stochasticity is decreased. We also analyse the deviation of the estimation error … (paper)
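
    A minimal sketch of the inference setting: simulate parallel Glauber dynamics with random asymmetric couplings, then reconstruct the couplings with the generic naive mean-field estimator J ≈ A⁻¹ D C⁻¹, where C is the equal-time covariance, D the one-step time-lagged covariance, and A = diag(1 − m²). This estimator is shown for illustration and is not necessarily the algorithm analysed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 20_000                               # spins, trajectory length (assumed)
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random asymmetric couplings

# parallel Glauber dynamics: P(s_i(t+1) = +1) = 1 / (1 + exp(-2 h_i(t)))
s = np.empty((T, N))
s[0] = rng.choice([-1.0, 1.0], N)
for t in range(T - 1):
    h = s[t] @ J.T
    s[t + 1] = np.where(rng.uniform(size=N) < 1 / (1 + np.exp(-2 * h)), 1.0, -1.0)

m = s.mean(axis=0)
ds = s - m
C = ds[:-1].T @ ds[:-1] / (T - 1)      # equal-time covariance
D = ds[1:].T @ ds[:-1] / (T - 1)       # one-step time-lagged covariance
A = np.diag(1 - m ** 2)
J_hat = np.linalg.inv(A) @ D @ np.linalg.inv(C)

print("reconstruction RMSE:", np.sqrt(np.mean((J_hat - J) ** 2)))
```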

  20. Toward single-mode random lasing within a submicrometre-sized spherical ZnO particle film

    International Nuclear Information System (INIS)

    Niyuki, Ryo; Fujiwara, Hideki; Sasaki, Keiji; Ishikawa, Yoshie; Koshizaki, Naoto; Tsuji, Takeshi

    2016-01-01

    We recently reported unique random laser action, such as quasi-single-mode and low-threshold lasing, from a film of submicrometre-sized spherical ZnO nanoparticles with polymer particles as defects. The present study demonstrates a novel approach to realizing single-mode random lasing by adjusting the sizes of the defect particles. From the dependence of the random lasing properties on defect size, we find that the average number of lasing peaks can be modified by the defect size, while other lasing properties such as lasing wavelengths and thresholds remain unchanged. These results suggest that lasing wavelengths and thresholds are determined by the resonant properties of the surrounding scatterers, while the defect size stochastically determines the number of lasing peaks. Therefore, if we optimize the sizes of the defects and scatterers, we can intentionally induce single-mode lasing even in a random structure (Fujiwara et al 2013 Appl. Phys. Lett. 102 061110). (paper)

  1. Relativistic jet feedback - II. Relationship to gigahertz peak spectrum and compact steep spectrum radio galaxies

    Science.gov (United States)

    Bicknell, Geoffrey V.; Mukherjee, Dipanjan; Wagner, Alexander Y.; Sutherland, Ralph S.; Nesvadba, Nicole P. H.

    2018-04-01

    We propose that Gigahertz Peak Spectrum (GPS) and Compact Steep Spectrum (CSS) radio sources are the signposts of relativistic jet feedback in evolving galaxies. Our simulations of relativistic jets interacting with a warm, inhomogeneous medium, utilizing cloud densities and velocity dispersions in the range derived from optical observations, show that free-free absorption can account for the ∼GHz peak frequencies and low-frequency power laws inferred from the radio observations. These new computational models replace a power-law model for the free-free optical depth with a more fundamental model involving disrupted log-normal distributions of warm gas. One feature of our new models is that at early stages the low-frequency spectrum is steep but progressively flattens as a result of a broader distribution of optical depths, suggesting that the steep low-frequency spectra discovered by Callingham et al. may be attributable to young sources. We also investigate the inverse correlation between peak frequency and size and find that the initial location on this correlation is determined by the average density of the warm ISM. The simulated sources track this correlation initially but eventually fall below it, indicating the need for a more extended ISM than presently modelled. GPS and CSS sources can potentially provide new insights into the phenomenon of AGN feedback, since their peak frequencies and spectra are indicative of the density, turbulent structure, and distribution of gas in the host galaxy.
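
    For orientation, even the simpler uniform-screen model that the paper replaces (a power-law free-free optical depth in front of a synchrotron power law) produces a GHz-scale turnover. A minimal sketch with illustrative parameters:

```python
import numpy as np

nu = np.logspace(-1.5, 1.5, 400)      # frequency grid, GHz
alpha = -0.7                          # optically thin spectral index (assumed)
nu_tau1 = 1.0                         # frequency where tau = 1, GHz (assumed)

tau = (nu / nu_tau1) ** -2.1          # free-free optical depth, tau ∝ nu^-2.1
S = nu ** alpha * (1 - np.exp(-tau)) / tau   # absorbed spectrum (arbitrary units)

print(f"spectrum peaks near {nu[np.argmax(S)]:.2f} GHz")
```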

  2. Activated aging dynamics and effective trap model description in the random energy model

    Science.gov (United States)

    Baity-Jesi, M.; Biroli, G.; Cammarota, C.

    2018-01-01

    We study the out-of-equilibrium aging dynamics of the random energy model (REM) ruled by a single-spin-flip Metropolis dynamics. We focus on the dynamical evolution taking place on time-scales diverging with the system size. Our aim is to show to what extent the activated dynamics displayed by the REM can be described in terms of an effective trap model. We identify two time regimes: the first corresponds to the process of escaping from a basin in the energy landscape and to the subsequent exploration of high-energy configurations, whereas the second corresponds to the evolution from one deep basin to another. By combining numerical simulations with analytical arguments, we show why the trap model description does not hold in the former regime but becomes exact in the latter.
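
    The trap-model picture invoked here can be illustrated with Bouchaud's trap model, where exponentially distributed trap depths produce heavy-tailed escape times, so the deepest trap visited so far dominates the elapsed time (aging). A minimal sketch with illustrative parameters, not the REM dynamics itself:

```python
import numpy as np

rng = np.random.default_rng(1)

x_param = 0.5                 # T / T_g < 1: the aging regime (assumed)
n_traps = 100_000             # number of traps visited

# escape times: tau = u^(-1/x) gives the heavy tail P(tau > t) ~ t^(-x)
taus = rng.uniform(size=n_traps) ** (-1.0 / x_param)
elapsed = taus.sum()

print("share of total time spent in the single deepest trap:",
      taus.max() / elapsed)
```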

  3. Some results of the spectra of random Schroedinger operators and their application to random point interaction models in one and three dimensions

    International Nuclear Information System (INIS)

    Kirsch, W.; Martinelli, F.

    1981-01-01

    After deriving weak conditions under which the potential of the Schroedinger operator is well defined, the authors state an ergodicity assumption on this potential which ensures that the spectrum of the operator is a fixed, non-random set. Random point interaction Hamiltonians are then considered in this framework. Finally, the authors consider a model where, for sufficiently small fluctuations around the equilibrium positions, a finite number of gaps appears. (HSI)

  4. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of the existence of intractable normalizing constants. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, a synthetic molecule network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
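
    A minimal sketch of the (non-adaptive) exchange algorithm that this sampler extends, applied to the one ERGM whose auxiliary network can be drawn exactly: the edge-count-only model, which reduces to an Erdos-Renyi graph. The intractable normalizing constants cancel in the acceptance ratio; the flat prior and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 30                                    # nodes
n_pairs = n * (n - 1) // 2
theta_true = -1.0
s_obs = rng.binomial(n_pairs, 1 / (1 + np.exp(-theta_true)))   # observed edge count

def sample_edge_count(theta):
    # exact draw of the auxiliary sufficient statistic (edges are independent)
    return rng.binomial(n_pairs, 1 / (1 + np.exp(-theta)))

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.3)
    s_aux = sample_edge_count(prop)       # auxiliary network at the proposal
    log_acc = (prop - theta) * (s_obs - s_aux)   # exchange ratio, flat prior
    if np.log(rng.uniform()) < log_acc:
        theta = prop
    chain.append(theta)

print("posterior mean of theta:", np.mean(chain[1000:]))
```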

  5. Random Walk Model for the Growth of Monolayer in Dip Pen Nanolithography

    International Nuclear Information System (INIS)

    Kim, H; Ha, S; Jang, J

    2013-01-01

    By using a simple random-walk model, we simulate the growth of a self-assembled monolayer (SAM) pattern generated in dip pen nanolithography (DPN). In this model, the SAM pattern grows mainly via the serial pushing of molecules deposited from the tip. We examine various SAM patterns, such as lines, crosses, and letters by changing the tip scan speed.
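
    A minimal sketch of serial-pushing growth of this kind on a square lattice: each deposited molecule starts at the tip position and performs a random walk until it finds an unoccupied site. This is a simplified reading of the model, with assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

L = 61                                   # lattice size (assumed)
occ = np.zeros((L, L), dtype=bool)       # occupied SAM sites
tip = (L // 2, L // 2)                   # fixed tip position
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

for _ in range(800):                     # molecules deposited from the tip
    x, y = tip
    while occ[x, y]:                     # pushed through the existing monolayer
        dx, dy = moves[rng.integers(4)]
        x, y = (x + dx) % L, (y + dy) % L
    occ[x, y] = True

xs, ys = np.nonzero(occ)
rg = np.sqrt(((xs - tip[0]) ** 2 + (ys - tip[1]) ** 2).mean())
print(f"pattern radius of gyration: {rg:.2f} lattice units")
```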

  6. Transverse spin correlations of the random transverse-field Ising model

    Science.gov (United States)

    Iglói, Ferenc; Kovács, István A.

    2018-03-01

    The critical behavior of the random transverse-field Ising model in finite-dimensional lattices is governed by infinite-disorder fixed points, several properties of which have already been calculated by the use of the strong-disorder renormalization-group (SDRG) method. Here we extend these studies and calculate the connected transverse-spin correlation function by a numerical implementation of the SDRG method in d = 1, 2, and 3 dimensions. At the critical point an algebraic decay of the form ~ r^(-η_t) is found, with a decay exponent approximately η_t ≈ 2 + 2d. In d = 1 the results are related to dimer-dimer correlations in the random antiferromagnetic XX chain and have been tested by numerical calculations using free-fermionic techniques.
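
    The 1D SDRG decimation rules are compact enough to sketch: repeatedly eliminate the strongest term in the Hamiltonian and replace it with a perturbative effective coupling. If the strongest term is a field h_i, the site is decimated and its two bonds merge into J' = J_left J_right / h_i; if it is a bond J_i, the two spins merge into a cluster with field h' = h_i h_{i+1} / J_i. A minimal illustration of the flow (standard 1D rules, not the paper's correlation-function implementation):

```python
import numpy as np

rng = np.random.default_rng(5)

# chain at criticality: fields and couplings drawn from the same distribution
J = list(rng.uniform(0.0, 1.0, 200))    # bonds
h = list(rng.uniform(0.0, 1.0, 201))    # transverse fields (one per site)

while len(h) > 2:
    if max(h) > max(J):                 # strongest term is a field: decimate site
        i = h.index(max(h))
        if 0 < i < len(h) - 1:
            J[i - 1] = J[i - 1] * J[i] / h[i]   # effective bond across the site
            del J[i]
        else:
            del J[min(i, len(J) - 1)]   # edge site: drop its single bond
        del h[i]
    else:                               # strongest term is a bond: merge spins
        i = J.index(max(J))
        h[i] = h[i] * h[i + 1] / J[i]   # effective field of the merged cluster
        del h[i + 1]
        del J[i]

print("effective fields remaining at the end of the flow:", h)
```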

  7. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  8. Experimental Study on Peak Pressure of Shock Waves in Quasi-Shallow Water

    Directory of Open Access Journals (Sweden)

    Zhenxiong Wang

    2015-01-01

    Based on the similarity laws of explosions, this research develops similarity requirements for small-scale experiments on underwater explosions and establishes a regression model for the peak pressure of underwater shock waves under experimental conditions. Small-scale experiments are carried out with two types of media at the bottom of the water and for different water depths, and the peak pressure of underwater shock waves at different measuring points is acquired. A formula consistent with the similarity law of explosions is obtained, and an analysis of its regression precision confirms its accuracy. A significance analysis indicates that, within the range of geometric parameters studied, the distance between the measuring point and the charge has the greatest influence on the peak pressure of the underwater shock wave and the water depth the least. An analysis of data from experiments with different bottom media reveals an influence on the peak pressure: the peak pressure of a shock wave in a body of water with a bottom of soft mud and rocks is about 1.33 times that of the case where the bottom material is soft mud alone.
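
    A minimal sketch of the kind of similarity-law regression involved, fitting P_peak = K (W^(1/3) / R)^α by least squares in log space on synthetic data. The constants below are the classic TNT similitude values, used purely to generate illustrative data; the paper's model additionally accounts for water depth and bottom material:

```python
import numpy as np

rng = np.random.default_rng(2)

W = rng.uniform(0.1, 1.0, 40)            # charge mass, kg (synthetic)
R = rng.uniform(1.0, 10.0, 40)           # standoff distance, m (synthetic)
K_true, a_true = 52.4, 1.13              # classic TNT constants (illustrative)
P = K_true * (W ** (1 / 3) / R) ** a_true * rng.lognormal(0.0, 0.05, 40)

# linear least squares in log space: log P = log K + alpha * log(W^(1/3) / R)
x = np.log(W ** (1 / 3) / R)
A = np.vstack([np.ones_like(x), x]).T
(logK, alpha), *_ = np.linalg.lstsq(A, np.log(P), rcond=None)

print(f"fitted K ~ {np.exp(logK):.1f} MPa, alpha ~ {alpha:.2f}")
```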

  9. Examples of mixed-effects modeling with crossed random effects and with binomial data

    NARCIS (Netherlands)

    Quené, H.; van den Bergh, H.

    2008-01-01

    Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not nested …
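
    A minimal sketch of a crossed-random-effects fit via variance components in statsmodels; the data are synthetic, column names and effect sizes are illustrative, and the paper's own worked examples may use different software:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

n_part, n_item = 20, 15
df = pd.DataFrame([(p, i) for p in range(n_part) for i in range(n_item)],
                  columns=["participant", "item"])
u_p = rng.normal(0.0, 1.0, n_part)       # participant random effects
u_i = rng.normal(0.0, 0.5, n_item)       # item random effects
df["rt"] = (500 + u_p[df["participant"]] + u_i[df["item"]]
            + rng.normal(0.0, 2.0, len(df)))

df["one"] = 1   # single dummy group, so both factors enter as crossed components
model = smf.mixedlm("rt ~ 1", df, groups="one", re_formula="0",
                    vc_formula={"participant": "0 + C(participant)",
                                "item": "0 + C(item)"})
print(model.fit().summary())
```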

  10. Depletion benchmarks calculation of random media using explicit modeling approach of RMC

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2016-01-01

    Highlights: • Explicit modeling in RMC is applied to a depletion benchmark for an HTGR fuel element. • Explicit modeling can provide detailed burnup distributions and burnup heterogeneity. • The results serve as a supplement to the HTGR fuel depletion benchmark. • A method of combining adjacent burnup regions is proposed for full-core problems. • The combination method reduces the memory footprint while keeping the computational accuracy. - Abstract: The Monte Carlo method plays an important role in the accurate simulation of random media, owing to its advantages of flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods, the Random Lattice Method, Chord Length Sampling, and an explicit modeling approach with a mesh acceleration technique, have been implemented in RMC to simulate particle transport in dispersed fuels, of which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for an HTGR fuel element, and a method of combining adjacent burnup regions is proposed and investigated. The results show that explicit modeling can provide detailed burnup distributions of individual TRISO particles, and this work serves as a supplement to the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions effectively reduces the memory footprint while keeping the computational accuracy.

  11. Peak experiences of psilocybin users and non-users.

    Science.gov (United States)

    Cummins, Christina; Lyke, Jennifer

    2013-01-01

    Maslow (1970) defined peak experiences as the most wonderful experiences of a person's life, which may include a sense of awe, well-being, or transcendence. Furthermore, recent research has suggested that psilocybin can produce experiences subjectively rated as uniquely meaningful and significant (Griffiths et al. 2006). It is therefore possible that psilocybin may facilitate or change the nature of peak experiences in users compared to non-users. This study was designed to compare the peak experiences of psilocybin users and non-users, to evaluate the frequency of peak experiences while under the influence of psilocybin, and to assess the perceived degree of alteration of consciousness during these experiences. Participants were recruited through convenience and snowball sampling from undergraduate classes and at a musical event. Participants were divided into three groups: those who reported a peak experience while under the influence of psilocybin (psilocybin peak experience: PPE), participants who had used psilocybin but reported their peak experiences did not occur while they were under the influence of psilocybin (non-psilocybin peak experience: NPPE), and participants who had never used psilocybin (non-user: NU). A total of 101 participants were asked to think about their peak experiences and complete a measure evaluating the degree of alteration of consciousness during that experience. Results indicated that 47% of psilocybin users reported their peak experience occurred while using psilocybin. In addition, there were significant differences among the three groups on all dimensions of alteration of consciousness. Future research is necessary to identify factors that influence the peak experiences of psilocybin users in naturalistic settings and contribute to the different characteristics of peak experiences of psilocybin users and non-users.

  12. Extragalactic Peaked-spectrum Radio Sources at Low Frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Callingham, J. R.; Gaensler, B. M.; Sadler, E. M.; Lenc, E. [Sydney Institute for Astronomy (SIfA), School of Physics, The University of Sydney, NSW 2006 (Australia); Ekers, R. D.; Bell, M. E. [CSIRO Astronomy and Space Science (CASS), Marsfield, NSW 2122 (Australia); Line, J. L. B.; Hancock, P. J.; Kapińska, A. D.; McKinley, B.; Procopio, P. [ARC Centre of Excellence for All-Sky Astrophysics (CAASTRO) (Australia); Hurley-Walker, N.; Tingay, S. J.; Franzen, T. M. O.; Morgan, J. [International Centre for Radio Astronomy Research (ICRAR), Curtin University, Bentley, WA 6102 (Australia); Dwarakanath, K. S. [Raman Research Institute (RRI), Bangalore 560080 (India); For, B.-Q. [International Centre for Radio Astronomy Research (ICRAR), The University of Western Australia, Crawley, WA 6009 (Australia); Hindson, L.; Johnston-Hollitt, M. [School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6140 (New Zealand); Offringa, A. R., E-mail: joseph.callingham@sydney.edu.au [Netherlands Institute for Radio Astronomy (ASTRON), Dwingeloo (Netherlands); and others

    2017-02-20

    We present a sample of 1483 sources that display spectral peaks between 72 MHz and 1.4 GHz, selected from the GaLactic and Extragalactic All-sky Murchison Widefield Array (GLEAM) survey. The GLEAM survey is the widest fractional bandwidth all-sky survey to date, ideal for identifying peaked-spectrum sources at low radio frequencies. Our peaked-spectrum sources are the low-frequency analogs of gigahertz-peaked spectrum (GPS) and compact steep spectrum (CSS) sources, which have been hypothesized to be the precursors to massive radio galaxies. Our sample more than doubles the number of known peaked-spectrum candidates, and 95% of our sample have a newly characterized spectral peak. We highlight that some GPS sources peaking above 5 GHz have had multiple epochs of nuclear activity, and we demonstrate the possibility of identifying high-redshift (z > 2) galaxies via steep optically thin spectral indices and low observed peak frequencies. The distribution of the optically thick spectral indices of our sample is consistent with past GPS/CSS samples but with a large dispersion, suggesting that the spectral peak is a product of an inhomogeneous environment that is individualistic. We find no dependence of observed peak frequency with redshift, consistent with the peaked-spectrum sample comprising both local CSS sources and high-redshift GPS sources. The 5 GHz luminosity distribution lacks the brightest GPS and CSS sources of previous samples, implying that a convolution of source evolution and redshift influences the type of peaked-spectrum sources identified below 1 GHz. Finally, we discuss sources with optically thick spectral indices that exceed the synchrotron self-absorption limit.

  13. Uncertainty of the peak flow reconstruction of the 1907 flood in the Ebro River in Xerta (NE Iberian Peninsula)

    Science.gov (United States)

    Ruiz-Bellet, Josep Lluís; Castelltort, Xavier; Balasch, J. Carles; Tuset, Jordi

    2017-02-01

    There is no clear, unified and accepted method to estimate the uncertainty of hydraulic modelling results. In historical flood reconstruction, due to the lower precision of the input data, the magnitude of this uncertainty can be high. With the objectives of estimating the peak-flow error of a typical historical flood reconstruction with the model HEC-RAS and of providing a quick, simple uncertainty assessment that an end user could easily apply, the uncertainty of the reconstructed peak flow of a major flood in the Ebro River (NE Iberian Peninsula) was calculated with a set of local sensitivity analyses on six input variables. The total peak-flow error was estimated at ±31%, and water height was found to be the most influential variable on peak flow, followed by Manning's n. However, the latter, due to its large uncertainty, was the greatest contributor to the total peak-flow error. In addition, the HEC-RAS peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation; all three methods gave similar peak flows, with Manning's equation giving almost the same result as HEC-RAS. The main conclusion is that, to ensure the lowest peak-flow error, the reliability and precision of the flood mark should be thoroughly assessed.
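
    A minimal sketch of a local one-at-a-time sensitivity analysis of this type, propagating assumed input uncertainties through Manning's equation Q = (1/n) A R^(2/3) S^(1/2); all values are illustrative and are not those of the Xerta reach:

```python
import numpy as np

def manning(n, A, R, S):
    return (1.0 / n) * A * R ** (2 / 3) * S ** 0.5

base = dict(n=0.035, A=1200.0, R=6.0, S=0.0008)   # illustrative inputs
rel_err = dict(n=0.25, A=0.05, R=0.05, S=0.10)    # assumed input uncertainties

Q0 = manning(**base)
total_sq = 0.0
for k, e in rel_err.items():
    pert = dict(base)
    pert[k] = base[k] * (1 + e)                   # perturb one input at a time
    dQ = (manning(**pert) - Q0) / Q0
    print(f"{k}: {100 * dQ:+.1f}% change in Q")
    total_sq += dQ ** 2

print(f"combined (quadrature) peak-flow error ~ ±{100 * np.sqrt(total_sq):.0f}%")
```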

  14. The dislocation-internal friction peak γ in tantalum

    International Nuclear Information System (INIS)

    Baur, J.; Benoit, W.; Schultz, H.

    1989-01-01

    Torsion-pendulum measurements were carried out on high-purity single-crystal specimens of tantalum with extremely low oxygen content. The γ2 peak, which appears close to γ when small traces of oxygen are present, was formerly explained as a ''dislocation-enhanced Snoek peak''. The γ peak recovers at the peak temperature, whereas the γ2 peak is more stable. On the basis of their results, and making use of earlier investigations by Rodrian and Schultz, the authors suggest that γ2 is a modified γ relaxation, related to screw-dislocation segments stabilized by oxygen-decorated kinks. The stability of the γ2 peak allows an accurate determination of the activation energy, found to be 1.00 ± 0.03 eV. This value is distinctly lower than the activation energy of the oxygen Snoek effect (1.10 eV) and is related here to the mechanism of ''kink-pair formation'' in screw dislocations, as is the original γ peak. The numerical value is compatible with recent values derived from flow-stress measurements. The γ2 peak shows increasing stability with increasing oxygen content, which is explained by singly and multiply decorated kinks.

  15. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    2014-01-01

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second-order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI …
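
    A minimal sketch of how a covariance function is recovered from random-regression output: with φ(t) the second-order Legendre polynomial covariates at standardized age t and K the coefficient (co)variance matrix, Cov(BW_t1, BW_t2) = φ(t1)ᵀ K φ(t2). The matrix below is an illustrative assumption, not the paper's estimate:

```python
import numpy as np
from numpy.polynomial import legendre

ages = np.linspace(70, 170, 5)                       # days on test (assumed)
x = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1   # map to [-1, 1]
# columns: Legendre polynomials L0, L1, L2 evaluated at standardized age
Phi = np.column_stack([legendre.legval(x, np.eye(3)[k]) for k in range(3)])

K = np.array([[120.0, 15.0, -5.0],                   # illustrative coefficient
              [ 15.0,  8.0,  1.0],                   # (co)variance matrix
              [ -5.0,  1.0,  2.0]])

G = Phi @ K @ Phi.T                                  # covariance function on the grid
print("genetic variance of BW along the trajectory:", np.round(np.diag(G), 1))
```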

  16. Large Deviations for the Annealed Ising Model on Inhomogeneous Random Graphs: Spins and Degrees

    Science.gov (United States)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; Hofstad, Remco van der

    2018-04-01

    We prove a large deviations principle for the total spin and the number of edges under the annealed Ising measure on generalized random graphs. We also give detailed results on how the annealing over the Ising model changes the degrees of the vertices in the graph and show how it gives rise to interesting correlated random graphs.

  17. Zero temperature landscape of the random sine-Gordon model

    International Nuclear Information System (INIS)

    Sanchez, A.; Bishop, A.R.; Cai, D.

    1997-01-01

    We present a preliminary summary of the zero-temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed using lattices of moderate size, as the behavior of the model turns out to be independent of the size above a certain length (∼128 x 128 lattices). Subsequently, we show that the behavior of the height-difference correlation function is of (log r)^2 type up to a certain correlation length (ξ ∼ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of many (contradictory) analyses.

  18. The Little-Hopfield model on a sparse random graph

    International Nuclear Information System (INIS)

    Castillo, I Perez; Skantzos, N S

    2004-01-01

    We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is a finite number and the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within numerical precision, and for sufficiently small values of the connectivity parameter, we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement.
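
    A minimal simulation sketch of the model's two ingredients, a diluted Hebbian coupling matrix on a sparse random graph with finite mean connectivity and synchronous (Little-type) zero-temperature updates; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

N, P, c = 2000, 3, 10                            # neurons, patterns, mean degree
xi = rng.choice([-1, 1], (P, N))                 # stored patterns

adj = rng.uniform(size=(N, N)) < c / N           # sparse random graph
adj = np.triu(adj, 1)
adj = adj | adj.T                                # symmetric, zero diagonal
J = adj * (xi.T @ xi) / c                        # diluted Hebb rule

s = xi[0].astype(float).copy()
flip = rng.uniform(size=N) < 0.15                # corrupt 15% of pattern 0
s[flip] *= -1

for _ in range(30):                              # Little model: parallel updates
    s = np.sign(J @ s + 1e-12)

print("overlap with pattern 0:", (s @ xi[0]) / N)
```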

  19. Measurements and simulations for peak electrical load reduction in cooling dominated climate

    International Nuclear Information System (INIS)

    Sadineni, Suresh B.; Boehm, Robert F.

    2012-01-01

    Peak electric demand due to cooling load in the Desert Southwest region of the US has been an issue for electrical energy suppliers. To address this issue, a consortium was formed between the University of Nevada Las Vegas, Pulte Homes (home builder) and NV Energy (local utility) in order to reduce the peak load by more than 65%. The implemented strategies used to accomplish that goal consist of energy efficiency in homes, onsite electricity generation through roof-integrated PV, direct load control, and battery storage at the substation level. The simulation models developed using building energy analysis software were validated against measured data. The electrical energy demand of the upgraded home during the peak period (1:00-7:00 PM) decreased by approximately 37% and 9% compared to a code-standard home of the same size, due to energy efficiency and PV generation, respectively; the total decrease in electrical demand during the peak period from these two measures is 46%. Additionally, a 2.2 °C increase in thermostat temperature, from 23.9 °C to 26.1 °C between 4:00 PM and 7:00 PM, further decreased the average demand during the peak period by 69% relative to the demand of a standard home. -- Highlights: ► A study to demonstrate peak load reductions of 65% at the substation. ► A new residential energy-efficient community named Villa Trieste is being developed. ► The peak demand from the homes decreased by 37% through energy efficiency. ► A 1.8 kWp PV system along with energy efficiency measures decreased the peak by 46%.

  20. THE LATE PEAKING AFTERGLOW OF GRB 100418A

    International Nuclear Information System (INIS)

    Marshall, F. E.; Holland, S. T.; Sakamoto, T.; Antonelli, L. A.; Burrows, D. N.; Siegel, M. H.; Covino, S.; Fugazza, D.; De Pasquale, M.; Oates, S. R.; Evans, P. A.; O'Brien, P. T.; Osborne, J. P.; Pagani, C.; Liang, E. W.; Wu, X. F.; Zhang, B.

    2011-01-01

    GRB 100418A is a long gamma-ray burst (GRB) at redshift z = 0.6235 discovered with the Swift Gamma-ray Burst Explorer with unusual optical and X-ray light curves. After an initial short-lived, rapid decline in X-rays, the optical and X-ray light curves observed with Swift are approximately flat or rising slightly out to at least ∼7 × 10^3 s after the trigger, peak at ∼5 × 10^4 s, and then follow an approximately power-law decay. Such a long optical plateau and late peaking are rarely seen in GRB afterglows. Observations with the Rapid Eye Mount telescope during a gap in the Swift coverage indicate a bright optical flare at ∼2.5 × 10^4 s. The long plateau phase of the afterglow is interpreted using either a model with continuous injection of energy into the forward shock of the burst or a model in which the jet of the burst is viewed off-axis. In both models the isotropic kinetic energy in the late afterglow after the plateau phase is ≥10^2 times the 10^51 erg of the prompt isotropic gamma-ray energy release. The energy injection model is favored because the off-axis jet model would require the intrinsic T_90 for the GRB jet viewed on-axis to be very short, ∼10 ms, and the intrinsic isotropic gamma-ray energy release and the true jet energy to be much higher than the typical values of known short GRBs. The non-detection of a jet break up to t ∼ 2 × 10^6 s indicates a jet half-opening angle of at least ∼14°, and a relatively high collimation-corrected jet energy of E_jet ≥ 10^52 erg.