Random walkers with extreme value memory: modelling the peak-end rule
Harris, Rosemary J.
2015-05-01
Motivated by the psychological literature on the ‘peak-end rule’ for remembered experience, we perform an analysis within a random walk framework of a discrete choice model where agents’ future choices depend on the peak memory of their past experiences. In particular, we use this approach to investigate whether increased noise/disruption always leads to more switching between decisions. Here extreme value theory illuminates different classes of dynamics indicating that the long-time behaviour is dependent on the scale used for reflection; this could have implications, for example, in questionnaire design.
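A toy version of such a walker is easy to write down (the switching rule, the threshold, and the Gaussian noise below are illustrative assumptions, not the paper's actual model):

```python
import random

def simulate_walker(steps=400, noise=1.0, threshold=0.5, seed=0):
    """Toy peak-memory walker: each step delivers a noisy 'experience';
    the agent remembers only the peak (maximum) experience so far and
    switches its choice whenever the current experience falls below
    minus `threshold` times that remembered peak. The rule and the
    Gaussian noise are illustrative, not the paper's specification."""
    rng = random.Random(seed)
    peak = float("-inf")
    choice, switches = 0, 0
    for _ in range(steps):
        x = rng.gauss(0.0, noise)
        if x > peak:
            peak = x                      # update extreme-value memory
        elif x < -threshold * peak:
            choice = 1 - choice           # experience far below the peak
            switches += 1
    return switches
```

Counting switches as the noise level varies is the kind of question the abstract poses; in this caricature the dependence is easy to probe empirically.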
Statistics of peaks of Gaussian random fields
International Nuclear Information System (INIS)
Bardeen, J.M.; Bond, J.R.; Kaiser, N.; Szalay, A.S. (Stanford Univ., CA; California Univ., Berkeley; Cambridge Univ., England; Fermi National Accelerator Lab., Batavia, IL)
1986-01-01
A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of upcrossing points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is developed, as are the shapes of the profiles about maxima. 67 references.
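The level-crossing machinery behind such peak densities is classical. As a one-dimensional illustration, Rice's formula gives the expected rate of up-crossings of a level u by a stationary Gaussian process (this is standard background, not code from the paper; the parameterization by the variance and the second spectral moment is the usual one):

```python
import math

def upcrossing_density(u, sigma2, lam2):
    """Rice's formula: expected number of up-crossings per unit length
    of level u by a stationary Gaussian process with variance sigma2
    and second spectral moment lam2 (the variance of the derivative).
    The density of local maxima above a high level u behaves similarly;
    this is the standard 1-D illustration of the peak-counting machinery."""
    return (math.sqrt(lam2 / sigma2) / (2.0 * math.pi)
            * math.exp(-u * u / (2.0 * sigma2)))
```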
Phase diagrams of a spin-1/2 transverse Ising model with three-peak random field distribution
International Nuclear Information System (INIS)
Bassir, A.; Bassir, C.E.; Benyoussef, A.; Ez-Zahraouy, H.
1996-07-01
The effect of the transverse magnetic field on the phase diagram structures of the Ising model in a random longitudinal magnetic field with a trimodal symmetric distribution is investigated within a finite cluster approximation. We find that an ordered phase with small magnetizations (the small ordered phase) disappears completely for a sufficiently large value of the transverse field and/or a large value of the concentration of the magnetic field disorder. Multicritical behaviour and reentrant phenomena are discussed. The regions where the tricritical behaviour, the reentrant phenomena and the small ordered phase persist are delimited as a function of the transverse field and the concentration p. Longitudinal magnetizations are also presented. (author). 33 refs, 6 figs
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
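The contrast between a single lateral Gaussian and voxel-specific Gaussians can be sketched as follows (a 1-D toy; the per-voxel sigma values are supplied by hand here rather than re-initialized on an effective surface as in the paper):

```python
import math

def lateral_fluence(x, sigma):
    # normalised 1-D Gaussian lateral fluence of a beamlet
    return (math.exp(-x * x / (2.0 * sigma * sigma))
            / (math.sqrt(2.0 * math.pi) * sigma))

def beamlet_profile(xs, sigma_single, sigma_per_voxel):
    """Single-Gaussian lateral profile versus a voxel-specific one in
    which each lateral voxel carries its own sigma. In the paper the
    per-voxel sigma is re-initialised on an effective surface in
    heterogeneous media; here the values are simply given as inputs."""
    single = [lateral_fluence(x, sigma_single) for x in xs]
    voxel = [lateral_fluence(x, s) for x, s in zip(xs, sigma_per_voxel)]
    return single, voxel
```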
OccuPeak: ChIP-Seq peak calling based on internal background modelling
de Boer, Bouke A.; van Duijvenboden, Karel; van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.
2014-01-01
ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads to account for local sequencing and GC bias. However, the
Lukyanov, Alexey; Lubchenko, Vassiliy
2017-09-01
We develop a computationally efficient algorithm for generating high-quality structures for amorphous materials exhibiting distorted octahedral coordination. The computationally costly step of equilibrating the simulated melt is relegated to a much more efficient procedure, viz., generation of a random close-packed structure, which is subsequently used to generate parent structures for octahedrally bonded amorphous solids. The sites of the so-obtained lattice are populated by atoms and vacancies according to the desired stoichiometry while allowing one to control the number of homo-nuclear and hetero-nuclear bonds and, hence, effects of the mixing entropy. The resulting parent structure is geometrically optimized using quantum-chemical force fields; by varying the extent of geometric optimization of the parent structure, one can partially control the degree of octahedrality in local coordination and the strength of secondary bonding. The present methodology is applied to the archetypal chalcogenide alloys AsxSe1-x. We find that local coordination in these alloys interpolates between octahedral and tetrahedral bonding but in a non-obvious way; it exhibits bonding motifs that are not characteristic of either extreme. We consistently recover the first sharp diffraction peak (FSDP) in our structures and argue that the corresponding mid-range order stems from the charge density wave formed by regions housing covalent and weak, secondary interactions. The number of secondary interactions is determined by a delicate interplay between octahedrality and tetrahedrality in the covalent bonding; many of these interactions are homonuclear. The present results are consistent with the experimentally observed dependence of the FSDP on arsenic content, pressure, and temperature and its correlation with photodarkening and the Boson peak. They also suggest that the position of the FSDP can be used to infer the effective particle size relevant for the configurational equilibration in
Modeling the probability distribution of peak discharge for infiltrating hillslopes
Baiamonte, Giorgio; Singh, Vijay P.
2017-07-01
Hillslope response plays a fundamental role in the prediction of peak discharge at the basin outlet. The peak discharge for the critical duration of rainfall and its probability distribution are needed for designing urban infrastructure facilities. This study derives the probability distribution, denoted as the GABS model, by coupling three models: (1) the Green-Ampt model for computing infiltration, (2) the kinematic wave model for computing the discharge hydrograph from the hillslope, and (3) the intensity-duration-frequency (IDF) model for computing the design rainfall intensity. The Hortonian mechanism for runoff generation is employed for computing the surface runoff hydrograph. Since the antecedent soil moisture condition (ASMC) significantly affects the rate of infiltration, its effect on the probability distribution of peak discharge is investigated. Application to a watershed in Sicily, Italy, shows that as the probability increases, the effect of ASMC on increasing the maximum discharge diminishes. Only for low probabilities is the critical duration of rainfall influenced by ASMC, whereas its effect on the peak discharge appears small at all probabilities. For a given set of parameters, the derived probability distribution of peak discharge is well fitted by the gamma distribution. Finally, the model was applied to a small watershed to test whether runoff coefficient tables for the rational method can be prepared in advance, and peak discharges obtained by the GABS model were compared with those measured in an experimental flume for a loamy-sand soil.
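The first ingredient of the GABS coupling, Green-Ampt infiltration, is an implicit relation that is usually solved iteratively; a minimal sketch (the fixed-point scheme and the variable names are our choices, not necessarily the authors'):

```python
import math

def green_ampt_F(t, K, psi_dtheta, tol=1e-10, iters=200):
    """Cumulative infiltration F(t) from the implicit Green-Ampt
    relation  F - S*ln(1 + F/S) = K*t,  with S = psi_dtheta (wetting-
    front suction head times moisture deficit) and K the saturated
    hydraulic conductivity, solved by fixed-point iteration."""
    F = max(K * t, 1e-9)
    for _ in range(iters):
        F_new = K * t + psi_dtheta * math.log(1.0 + F / psi_dtheta)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F
```

The iteration map is a contraction for F > 0, so it converges for any positive starting value.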
System dynamics model of Hubbert Peak for China's oil
International Nuclear Information System (INIS)
Tao Zaipu; Li Mingyu
2007-01-01
In 1956, the American geophysicist M. King Hubbert first introduced a logistic equation to estimate the peak and lifetime of US oil production. Since then, a fierce debate has ensued over the so-called Hubbert Peak and its methodology. This paper proposes to use a generic STELLA model to simulate the Hubbert Peak, particularly for Chinese oil production. The model is shown to be robust. We used three scenarios to estimate the Chinese oil peak: according to scenario 1 of this model, the Hubbert Peak for China's crude oil production appears in 2019 with a value of 199.5 million tonnes, about 1.1 times the 2005 output. Before the peak, Chinese oil output will grow by about 1-2% annually; after the peak, the output will fall. By 2040, the annual production of Chinese crude oil would be equivalent to the 1990 level. During the coming 20 years, the crude oil demand of China will probably grow at a rate of 2-3% annually, and the gap between domestic supply and total demand may exceed half of this demand.
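The logistic construction behind the Hubbert Peak can be stated compactly: cumulative production Q(t) follows a logistic curve, so annual production P(t) = dQ/dt is bell-shaped and peaks at q_max*b/4. A minimal sketch (the parameter values used in the test are illustrative, not fitted to Chinese data):

```python
import math

def hubbert_production(t, q_max, b, t_peak):
    """Annual production from the logistic (Hubbert) curve: cumulative
    production Q(t) = q_max / (1 + exp(-b*(t - t_peak))), so
    P(t) = dQ/dt = q_max*b*e / (1 + e)^2  with  e = exp(-b*(t - t_peak)).
    P is symmetric about t_peak and attains its maximum q_max*b/4 there."""
    e = math.exp(-b * (t - t_peak))
    return q_max * b * e / (1.0 + e) ** 2
```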
Group Elevator Peak Scheduling Based on Robust Optimization Model
Directory of Open Access Journals (Sweden)
ZHANG, J.
2013-08-01
Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and a recognized difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is realized without knowing the exact number of waiting passengers on each calling floor. Specifically, an energy-saving-oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the RO uncertain model cannot be solved directly, it is transformed into an RO certain model by means of elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm is proposed to solve the RO certain model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly, and group elevators are scheduled according to these solutions. Simulation results show that the method effectively improves scheduling performance in the peak pattern, realizing efficient operation of the elevator group.
Hubbert's Oil Peak Revisited by a Simulation Model
International Nuclear Information System (INIS)
Giraud, P.N.; Sutter, A.; Denis, T.; Leonard, C.
2010-01-01
As conventional oil reserves are declining, the debate on the oil production peak has become a burning issue. An increasing number of papers refer to Hubbert's peak oil theory to forecast the date of the production peak, both at regional and world levels. However, in our view, this theory lacks micro-economic foundations: notably, it does not assume that exploration and production decisions in the oil industry depend on market prices. In an attempt to overcome these shortcomings, we have built an adaptive model accounting for the behavior of one agent, standing for the competitive exploration-production industry, subject to incomplete but improving information on the remaining reserves. Our work yields challenging results on the reasons for a Hubbert-type oil peak, which lie mainly 'above the ground', both at regional and world levels, and on the shape of the production and marginal cost trajectories. (authors)
Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal
Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling
2018-05-01
When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited by its notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and cancelling power-line harmonic noise. However, the effects of random noise should not be underestimated. The common remedy for random noise is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming; moreover, stacking is insufficient against high-level random noise. Here, we propose time-frequency peak filtering for random noise attenuation, performed after the traditional de-spiking and power-line harmonic removal. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency from the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimates of the signal parameters are obtained at different SNRs. Moreover, applying the proposed method to a single recording gives better results than 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve measurement efficiency.
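The core encode-then-peak-pick step of time-frequency peak filtering can be sketched in pure Python (mu and the window length below are illustrative choices, and published TFPF uses the pseudo Wigner-Ville distribution rather than this plain windowed DFT):

```python
import cmath
import math

def tfpf(s, mu=0.3, win=33):
    """Time-frequency peak filtering, minimal sketch:
    (1) encode the noisy signal s as a unit-amplitude frequency-
    modulated signal, (2) slide a window along it, (3) at each sample
    take the peak frequency of the windowed DFT, divided by mu, as the
    filtered estimate. Illustrative only: real TFPF implementations use
    the pseudo Wigner-Ville distribution and tuned window lengths."""
    half = win // 2
    acc, z = 0.0, []
    for v in s:                                   # FM encoding
        acc += v
        z.append(cmath.exp(2j * math.pi * mu * acc))
    zp = [z[0]] * half + z + [z[-1]] * half       # edge padding
    est = []
    for i in range(len(s)):
        seg = zp[i:i + win]
        mags = []
        for k in range(win):                      # DFT magnitudes
            val = sum(seg[m] * cmath.exp(-2j * math.pi * k * m / win)
                      for m in range(win))
            mags.append(abs(val))
        k = max(range(win), key=mags.__getitem__)
        f = k / win if k <= half else k / win - 1.0   # signed frequency
        est.append(f / mu)
    return est
```

Away from the edges, a slowly varying signal is recovered to within the frequency resolution of the window.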
Institute of Scientific and Technical Information of China (English)
Li XIE; Lihua XIE
2007-01-01
We consider the stability of a random Riccati equation with a Markovian binary jump coefficient. More specifically, we are concerned with the boundedness of the solution of a random Riccati difference equation arising from Kalman filtering with measurement losses. A sufficient condition for peak covariance stability is obtained which has a simpler form and is shown to be less conservative in some cases than a very recent result in the existing literature. Furthermore, we show that a known sufficient condition is also necessary when the observability index equals one.
Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan
2016-01-01
Various peak models have been introduced to detect and analyze peaks in the time-domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that gives the most reliable peak detection performance in a particular application, and a fair comparison of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four peak models using an extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than the Acir and Liu models, which were in the range 37-52%, while showing no significant difference from the Dumpala model.
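The kind of time-domain parameters these peak models use (amplitude, width, slope) can be illustrated with a minimal feature extractor; the actual Dumpala, Acir, Liu, and Dingle definitions differ in detail, so this is only a sketch of the idea:

```python
def peak_features(x, i):
    """Extract simple time-domain features of a peak at sample i:
    amplitude above the mean of the flanking valleys, width between
    the valleys, and mean slopes on each side. Illustrative only; the
    published peak models each define their own parameter sets."""
    l = i
    while l > 0 and x[l - 1] < x[l]:           # walk down to left valley
        l -= 1
    r = i
    while r < len(x) - 1 and x[r + 1] < x[r]:  # walk down to right valley
        r += 1
    return {
        "amplitude": x[i] - 0.5 * (x[l] + x[r]),
        "width": r - l,
        "slope_left": (x[i] - x[l]) / max(i - l, 1),
        "slope_right": (x[i] - x[r]) / max(r - i, 1),
    }
```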
Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter
2016-09-01
In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges, generally highly sensitive to human-induced excitation. Due to the inherently random character of the human-induced walking load, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range; a large number of time windows are therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method evaluates these statistics from the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge; small differences in the instantaneous peak value were found for the method using second-order statistics. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure, the two methods were compared again, and the accuracy was verified. It is found that the TMD is adequately tuned, and the two methods agree well in estimating the instantaneous peak response for a strongly damped structure.
Peak-counts blood flow model-errors and limitations
International Nuclear Information System (INIS)
Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.
1984-01-01
The peak-counts model has several advantages, but its use may be limited because the venous egress may not be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, the bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride into the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds into the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4 to 6 s FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 s. A computer simulation has been carried out in which the parameters of delay time, extraction fraction, and bolus duration can be varied to assess the errors in the peak-counts model. The simulations show that the error will be greatest for short transit-time delays and for low extraction fractions.
How does economic theory explain the Hubbert peak oil model?
International Nuclear Information System (INIS)
Reynes, F.; Okullo, S.; Hofkes, M.
2010-01-01
The aim of this paper is to provide an economic foundation for bell shaped oil extraction trajectories, consistent with Hubbert's peak oil model. There are several reasons why it is important to get insight into the economic foundations of peak oil. As production decisions are expected to depend on economic factors, a better comprehension of the economic foundations of oil extraction behaviour is fundamental to predict production and price over the coming years. The investigation made in this paper helps us to get a better understanding of the different mechanisms that may be at work in the case of OPEC and non-OPEC producers. We show that profitability is the main driver behind production plans. Changes in profitability due to divergent trajectories between costs and oil price may give rise to a Hubbert production curve. For this result we do not need to introduce a demand or an exploration effect as is generally assumed in the literature.
Modeling the peak of emergence in systems: Design and katachi.
Cardier, Beth; Goranson, H T; Casas, Niccolo; Lundberg, Patric; Erioli, Alessio; Takaki, Ryuji; Nagy, Dénes; Ciavarra, Richard; Sanford, Larry D
2017-12-01
It is difficult to model emergence in biological systems using reductionist paradigms. A requirement for computational modeling is that individual entities can be recorded parametrically and related logically, but their transformation into whole systems cannot be captured this way. The problem stems from an inability to formally represent the implicit influences that inform emergent organization, such as context, shifts in causal agency or scale, and self-reference. This lack hampers biological systems modeling and its computational counterpart, indicating a need for new fundamental abstraction frameworks that support system-level characteristics. We develop an approach that formally captures these characteristics, focusing on the way they come together to enable transformation at the 'peak' of the emergent process. An example from virology is presented, in which two seemingly antagonistic systems - the herpes cold sore virus and its host - are capable of altering their basic biological objectives to achieve a new equilibrium. The usual barriers to modeling this process are overcome by incorporating mechanisms from practices centered on its emergent peak: design and katachi. In the Japanese science of form, katachi refers to the emergence of intrinsic structure from real situations, where an optimal balance between implicit influences is achieved. Design indicates how such optimization is guided by principles of flow. These practices leverage qualities of situated abstraction, which we understand through the intuitive method of physicist Kôdi Husimi. Early results indicate that this approach can capture the functional transformations of biological emergence, whilst being reasonably computable. Due to its geometric foundations and narrative-based extension to logic, the method will also generate speculative predictions. This research forms the foundations of a new biomedical modeling platform, which is discussed.
International Nuclear Information System (INIS)
Cadini, F.; De Sanctis, J.; Cherubini, A.; Zio, E.; Riva, M.; Guadagnini, A.
2012-01-01
Highlights: - Uncertainty quantification problem associated with the radionuclide migration. - Groundwater transport processes simulated within a randomly heterogeneous aquifer. - Development of an automatic sensitivity analysis for flow and transport parameters. - Proposal of a Nominal Range Sensitivity Analysis approach. - Analysis applied to the performance assessment of a nuclear waste repository. Abstract: We consider the problem of quantification of uncertainty associated with radionuclide transport processes within a randomly heterogeneous aquifer system in the context of performance assessment of a near-surface radioactive waste repository. Radionuclide migration is simulated at the repository scale through a Monte Carlo scheme. The saturated groundwater flow and transport equations are then solved at the aquifer scale for the assessment of the expected radionuclide peak concentration at a location of interest. A procedure is presented to perform the sensitivity analysis of this target environmental variable to key parameters that characterize flow and transport processes in the subsurface. The proposed procedure is exemplified through an application to a realistic case study.
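The Nominal Range Sensitivity Analysis idea is simple to state: vary one parameter at a time across its plausible range, holding the others at their nominal values, and record the induced spread in the output. A generic sketch (the interface is hypothetical; the actual analysis wraps Monte Carlo flow and transport simulations, not a closed-form model):

```python
def nominal_range_sensitivity(model, nominal, ranges):
    """Nominal Range Sensitivity Analysis sketch: perturb one parameter
    at a time across its (low, high) range, keeping the others at their
    nominal values, and record the spread of the model output. `model`
    is any callable taking keyword arguments; this interface is our
    own, not the repository study's implementation."""
    base = model(**nominal)
    spread = {}
    for name, (lo, hi) in ranges.items():
        outs = []
        for v in (lo, hi):
            args = dict(nominal)
            args[name] = v
            outs.append(model(**args))
        spread[name] = max(outs) - min(outs)
    return base, spread
```

Ranking parameters by their spread identifies which flow and transport parameters dominate the peak-concentration uncertainty.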
MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH
International Nuclear Information System (INIS)
Bard, D.; Kratochvil, J. M.; Dawson, W.
2016-01-01
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST- and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area, and evaluate how much small survey areas are affected by the differences in cosmological structure between the data and simulated volumes, due to cosmic variance.
Gigahertz-peaked Spectra Pulsars and Thermal Absorption Model
Energy Technology Data Exchange (ETDEWEB)
Kijak, J.; Basu, R.; Lewandowski, W.; Rożko, K. [Janusz Gil Institute of Astronomy, University of Zielona Góra, ul. Z. Szafrana 2, PL-65-516 Zielona Góra (Poland); Dembska, M., E-mail: jkijak@astro.ia.uz.zgora.pl [DLR Institute of Space Systems, Robert-Hooke-Str. 7 D-28359 Bremen (Germany)
2017-05-10
We present the results of our radio interferometric observations of pulsars at 325 and 610 MHz using the Giant Metrewave Radio Telescope. We used the imaging method to estimate the flux densities of several pulsars at these radio frequencies. The analysis of the shapes of the pulsar spectra allowed us to identify five new gigahertz-peaked spectra (GPS) pulsars. Using the hypothesis that the spectral turnovers are caused by thermal free–free absorption in the interstellar medium, we modeled the spectra of all known objects of this kind. Using the model, we were able to put some observational constraints on the physical parameters of the absorbing matter, which allows us to distinguish between the possible sources of absorption. We also discuss the possible effects of the existence of GPS pulsars on future search surveys, showing that the optimal frequency range for finding such objects would be from a few GHz (for regular GPS sources) to possibly 10 GHz for pulsars and radio magnetars exhibiting very strong absorption.
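The thermal free-free absorption hypothesis implies a simple spectral shape: an intrinsic power law attenuated by exp(-tau), with tau falling steeply with frequency, producing a turnover near the unit-optical-depth frequency. A sketch (the functional form is the standard one for free-free absorption; the parameter values are illustrative, not fits to the observed pulsars):

```python
import math

def gps_flux(nu_ghz, a=1.0, alpha=-1.8, nu_t=1.0):
    """Pulsar spectrum with thermal free-free absorption:
    S(nu) = a * nu^alpha * exp(-tau),  tau = (nu/nu_t)^-2.1,
    where nu_t is the frequency of unit optical depth. Below nu_t the
    absorption dominates and the spectrum turns over; above it the
    intrinsic power law (spectral index alpha) is recovered. The
    default a, alpha and nu_t are illustrative placeholders."""
    tau = (nu_ghz / nu_t) ** (-2.1)
    return a * nu_ghz ** alpha * math.exp(-tau)
```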
Scaling of peak flows with constant flow velocity in random self-similar networks
Directory of Open Access Journals (Sweden)
R. Mantilla
2011-07-01
A methodology is presented to understand the role of the statistical self-similar topology of real river networks on scaling, or power law, in peak flows for rainfall-runoff events. We created Monte Carlo generated sets of ensembles of 1000 random self-similar networks (RSNs) with geometrically distributed interior and exterior generators having parameters p_i and p_e, respectively. The parameter values were chosen to replicate the observed topology of real river networks. We calculated flow hydrographs in each of these networks by numerically solving the link-based mass and momentum conservation equation under the assumption of constant flow velocity. From these simulated RSNs and hydrographs, the scaling exponents β and φ characterizing power laws with respect to drainage area, and corresponding to the width functions and flow hydrographs respectively, were estimated. We found that, in general, φ > β, which supports a similar finding first reported for simulations in the river network of the Walnut Gulch basin, Arizona. Theoretical estimation of β and φ in RSNs is a complex open problem. Therefore, using results for a simpler problem associated with the expected width function and expected hydrograph for an ensemble of RSNs, we give heuristic arguments for theoretical derivations of the scaling exponents β^(E) and φ^(E) that depend on the Horton ratios for stream lengths and areas. These ratios in turn have a known dependence on the parameters of the geometric distributions of RSN generators. Good agreement was found between the analytically conjectured values of β^(E) and φ^(E) and the values estimated by the simulated ensembles of RSNs and hydrographs. The independence of the scaling exponents φ^(E) and φ with respect to the value of flow velocity and runoff intensity implies an interesting connection between unit hydrograph theory and flow dynamics.
Scaling of peak flows with constant flow velocity in random self-similar networks
Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.
2011-01-01
A methodology is presented to understand the role of the statistical self-similar topology of real river networks on scaling, or power law, in peak flows for rainfall-runoff events. We created Monte Carlo generated sets of ensembles of 1000 random self-similar networks (RSNs) with geometrically distributed interior and exterior generators having parameters pi and pe, respectively. The parameter values were chosen to replicate the observed topology of real river networks. We calculated flow hydrographs in each of these networks by numerically solving the link-based mass and momentum conservation equation under the assumption of constant flow velocity. From these simulated RSNs and hydrographs, the scaling exponents β and φ characterizing power laws with respect to drainage area, and corresponding to the width functions and flow hydrographs respectively, were estimated. We found that, in general, φ > β, which supports a similar finding first reported for simulations in the river network of the Walnut Gulch basin, Arizona. Theoretical estimation of β and φ in RSNs is a complex open problem. Therefore, using results for a simpler problem associated with the expected width function and expected hydrograph for an ensemble of RSNs, we give heuristic arguments for theoretical derivations of the scaling exponents β(E) and φ(E) that depend on the Horton ratios for stream lengths and areas. These ratios in turn have a known dependence on the parameters of the geometric distributions of RSN generators. Good agreement was found between the analytically conjectured values of β(E) and φ(E) and the values estimated by the simulated ensembles of RSNs and hydrographs. The independence of the scaling exponents φ(E) and φ with respect to the value of flow velocity and runoff intensity implies an interesting connection between unit hydrograph theory and flow dynamics. Our results provide a reference framework to study scaling exponents under more complex scenarios
Modeling Peak Oil and the Geological Constraints on Oil Production
Okullo, S.J.; Reynes, F.; Hofkes, M.W.
2014-01-01
We propose a model to reconcile the theory of inter-temporal non-renewable resource depletion with well-known stylized facts concerning the exploitation of exhaustible resources such as oil. Our approach introduces geological constraints into a Hotelling type extraction-exploration model. We show
Modeling peak oil and the geological constraints on oil production
Okullo, S.J.; Reynès, F.; Hofkes, M.W.
2015-01-01
We propose a model to reconcile the theory of inter-temporal non-renewable resource depletion with well-known stylized facts concerning the exploitation of exhaustible resources such as oil. Our approach introduces geological constraints into a Hotelling type extraction-exploration model. We show
Felder, Guido; Zischg, Andreas; Weingartner, Rolf
2015-04-01
Estimating peak discharges with very low probabilities is still accompanied by large uncertainties. Common estimation methods are usually based on extreme value statistics applied to observed time series or to hydrological model outputs. However, such methods assume the system to be stationary and do not specifically consider non-stationary effects. Observed time series may exclude events where peak discharge is damped by retention effects, as this process does not occur until specific thresholds, possibly beyond those of the highest measured event, are exceeded. Hydrological models can be complemented and parameterized with non-linear functions. However, in such cases calibration depends on observed data and non-stationary behaviour is not deterministically calculated. Our study discusses the option of considering retention effects on extreme peak discharges by coupling hydrological and hydraulic models. This possibility is tested by forcing the semi-distributed deterministic hydrological model PREVAH with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). The procedure ensures that the estimated extreme peak discharge does not exceed the physical limit given by the riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered.
Alternative model of random surfaces
International Nuclear Information System (INIS)
Ambartzumian, R.V.; Sukiasian, G.S.; Savvidy, G.K.; Savvidy, K.G.
1992-01-01
We analyse models of triangulated random surfaces and demand that geometrically nearby configurations of these surfaces have close actions. This principle leads us to suggest a new action, which is a modified Steiner functional. General arguments based on the Minkowski inequality show that the maximal contribution to the partition function comes from surfaces close to the sphere. (orig.)
Randomized Item Response Theory Models
Fox, Gerardus J.A.
2005-01-01
The randomized response (RR) technique is often used to obtain answers to sensitive questions, because direct questioning leads to biased results. A new method is developed to measure latent variables using the RR technique. Within the RR technique, the probability of the true response is modeled by
A fluid dynamical flow model for the central peak in the rotation curve of disk galaxies
International Nuclear Information System (INIS)
Bhattacharyya, T.; Basu, B.
1980-01-01
The rotation curve of the central region in some disk galaxies shows a linear rise, terminating at a peak (primary peak) which is then followed by a deep minimum. The curve then again rises to another peak at more or less half-way across the galactic radius. This latter peak is considered the peak of the rotation curve in all large-scale analyses of galactic structure. The primary peak is usually ignored for the purpose. In this work an attempt has been made to look at the primary peak as the manifestation of the post-explosion flow pattern of gas in the deep central region of galaxies. Solving hydrodynamical equations of motion, a flow model has been derived which imitates very closely the actually observed linear rotational velocity, followed by the falling branch of the curve to the minimum. The theoretical flow model has been compared with observed results for nine galaxies. The agreement obtained is extremely encouraging. The distance of the primary peak from the galactic centre has been shown to be correlated with the angular velocity in the linear part of the rotation curve. Here also, agreement is very good between theoretical and observed results. It is concluded that the distance of the primary peak from the centre not only speaks of the time that has elapsed since the explosion occurred in the nucleus, it also speaks of the potential capability of the nucleus of the galaxy for repeating explosions through some efficient process of mass replenishment at the core. (orig.)
Modelling of peak temperature during friction stir processing of magnesium alloy AZ91
Vaira Vignesh, R.; Padmanaban, R.
2018-02-01
Friction stir processing (FSP) is a solid state processing technique with potential to modify the properties of the material through microstructural modification. The study of heat transfer in FSP aids in the identification of defects like flash, inadequate heat input, poor material flow and mixing etc. In this paper, transient temperature distribution during FSP of magnesium alloy AZ91 was simulated using finite element modelling. The numerical model results were validated using the experimental results from the published literature. The model was used to predict the peak temperature obtained during FSP for various process parameter combinations. The simulated peak temperature results were used to develop a statistical model. The effect of process parameters namely tool rotation speed, tool traverse speed and shoulder diameter of the tool on the peak temperature was investigated using the developed statistical model. It was found that peak temperature was directly proportional to tool rotation speed and shoulder diameter and inversely proportional to tool traverse speed.
Random Intercept and Random Slope 2-Level Multilevel Models
Directory of Open Access Journals (Sweden)
Rehan Ahmad Khan
2012-11-01
Full Text Available Random intercept model and random intercept & random slope model carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teacher influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy into the model. The fanning movement of the fitted lines shows the variation of student grades around teachers.
Xing, Yafei; Macq, Benoit
2017-11-01
With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging aims to make the most of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19% in a range between -2.5% and 86%, compared with our previous profile shift (PS) method. The robustness of our method was demonstrated in a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.
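The PCA-plus-linear-model pipeline described above can be sketched generically: project profiles onto principal components, then regress the BP shift on the scores. Everything below is synthetic and low-rank for illustration; dimensions, names and the linear map are invented, not taken from the paper:

```python
import numpy as np

# Synthetic low-rank "prompt gamma profiles" with a linear shift response.
rng = np.random.default_rng(1)
n_train, n_bins, n_latent = 200, 50, 5
latent = rng.normal(size=(n_train, n_latent))
basis = rng.normal(size=(n_latent, n_bins))
w_lat = rng.normal(size=n_latent)
profiles = latent @ basis              # training profiles
shifts = latent @ w_lat                # synthetic "true" BP shifts (mm)

# PCA via SVD on centred profiles; keep enough components for ~all variance.
mean = profiles.mean(axis=0)
X = profiles - mean
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = n_latent
scores = X @ Vt[:k].T                  # PCA scores of the training set

# Ordinary least squares from PCA scores to (centred) shifts.
s_mean = shifts.mean()
coef, *_ = np.linalg.lstsq(scores, shifts - s_mean, rcond=None)

# Predict the shift for a new profile drawn from the same model.
l_new = rng.normal(size=n_latent)
x_new = l_new @ basis
pred = s_mean + (x_new - mean) @ Vt[:k].T @ coef
```

Because the synthetic data are exactly rank-5, the regression in the reduced space reproduces the true shift; on real data the retained variance (99.95% in the abstract) controls how much of the profile information survives the projection.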
Modeling superhydrophobic surfaces comprised of random roughness
Samaha, M. A.; Tafreshi, H. Vahedi; Gad-El-Hak, M.
2011-11-01
We model the performance of superhydrophobic surfaces comprised of randomly distributed roughness that resembles natural surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than ordered-microstructured fabrication. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with other theoretical and experimental studies. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. Financial support from DARPA, contract number W91CRB-10-1-0003, is acknowledged.
Dispersion-convolution model for simulating peaks in a flow injection system.
Pai, Su-Cheng; Lai, Yee-Hwong; Chiao, Ling-Yun; Yu, Tiing
2007-01-12
A dispersion-convolution model is proposed for simulating peak shapes in a single-line flow injection system. It is based on the assumption that an injected sample plug is expanded due to a "bulk" dispersion mechanism along the length coordinate, and that after traveling over a distance or a period of time, the sample zone will develop into a Gaussian-like distribution. This spatial pattern is further transformed to a temporal coordinate by a convolution process, and finally a temporal peak image is generated. The feasibility of the proposed model has been examined by experiments with various coil lengths, sample sizes and pumping rates. An empirical dispersion coefficient (D*) can be estimated by using the observed peak position, height and area (tp*, h* and At*) from a recorder. An empirical temporal shift (Φ*) can be further approximated by Φ* = D*/u², which becomes an important parameter in the restoration of experimental peaks. Also, the dispersion coefficient can be expressed as a second-order polynomial function of the pumping rate Q, for which D*(Q) = δ₀ + δ₁Q + δ₂Q². The optimal dispersion occurs at a pumping rate of Qopt = √(δ₀/δ₂). This explains the interesting "Nike-swoosh" relationship between the peak height and pumping rate. The excellent coherence of theoretical and experimental peak shapes confirms that the temporal distortion effect is the dominant reason for the peak asymmetry in flow injection analysis.
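The optimal-rate relation quoted above can be checked with a few lines of arithmetic. One way to read Qopt = √(δ₀/δ₂) is as the pumping rate minimizing dispersion per unit flow, D*(Q)/Q; the coefficients below are invented for illustration, not fitted values from the paper:

```python
import numpy as np

# Hypothetical coefficients of D*(Q) = d0 + d1*Q + d2*Q^2.
d0, d1, d2 = 4.0, 0.5, 0.25

def dispersion(Q):
    # second-order polynomial model of the empirical dispersion coefficient
    return d0 + d1 * Q + d2 * Q ** 2

# Rate minimizing D*(Q)/Q; note it is independent of d1.
Q_opt = np.sqrt(d0 / d2)   # = 2.0 / 0.5 = 4.0 for these coefficients
```

A quick check confirms that D*(Q)/Q evaluated at Q_opt is lower than at neighbouring rates, which is the shape behind the "Nike-swoosh" height-versus-rate curve.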
Smooth random change point models.
van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E
2011-03-15
Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. The Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
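A broken-stick mean function and one possible smooth relaxation of it can be written down directly. This is a generic sketch using a softplus-type bend, not the paper's exact formulation; all parameter values are invented:

```python
import numpy as np

def broken_stick(t, b0, b1, b2, tau):
    # slope b1 before the breakpoint tau, slope b1 + b2 after it
    return b0 + b1 * t + b2 * np.maximum(t - tau, 0.0)

def smooth_stick(t, b0, b1, b2, tau, eps=1.0):
    # replaces the kink at tau with a smooth transition of width ~eps
    return b0 + b1 * t + b2 * eps * np.log1p(np.exp((t - tau) / eps))

# Cognitive decline-style illustration: mild slope before tau, steep after.
t = np.linspace(-5.0, 5.0, 11)       # years relative to the change point
y_sharp = broken_stick(t, 10.0, -0.2, -0.8, 0.0)
y_smooth = smooth_stick(t, 10.0, -0.2, -0.8, 0.0, eps=0.5)
```

Far from the breakpoint the two curves coincide; near it, `eps` governs how gradual the change in slope is, which is the distinction between the broken-stick and smooth change point models discussed above. Random effects would then let `b0`, `b1`, `b2` and `tau` vary by subject.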
Prediction on the Peak of the CO2 Emissions in China Using the STIRPAT Model
Directory of Open Access Journals (Sweden)
Li Li
2016-01-01
Full Text Available Climate change has seriously threatened our economic, environmental, and social sustainability. The world has taken active measures to deal with climate change and mitigate carbon emissions. Predicting the carbon emissions peak has become a global focus, as well as a leading target for China's low-carbon development. China has promised its carbon emissions will have peaked by around 2030, with the intention of peaking earlier. Scholars have generally studied the influencing factors of carbon emissions. However, research on carbon emissions peaks is not extensive. Therefore, by setting a low scenario, a middle scenario, and a high scenario, this paper predicts China's carbon emissions peak from 2015 to 2035 based on data from 1998 to 2014 using the Stochastic Impacts by Regression on Population, Affluence, and Technology (STIRPAT) model. The results show that in the low, middle, and high scenarios China will reach its carbon emissions peak in 2024, 2027, and 2030, respectively. Thus, this paper proposes the large-scale application of technological innovation to improve energy efficiency and to optimize the energy structure and energy supply and demand. China should use industrial policy and human capital investment to stimulate the rapid development of low-carbon industries, modern agriculture, and service industries to help China reach its carbon emissions peak by around 2030 or earlier.
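The STIRPAT form behind such projections is ln I = a + b ln P + c ln A + d ln T (impact from population, affluence and technology). A toy sketch of locating a peak year under one invented scenario follows; all elasticities and growth paths are hypothetical, not the paper's fitted values:

```python
import numpy as np

years = np.arange(2015, 2036)
t = (years - 2015).astype(float)
P = 1.37 * 1.002 ** t                # population (billions): slow growth
A = 8.0 * 1.05 ** t                  # affluence proxy: steady growth
T = np.exp(-0.001 * t ** 2)          # emissions intensity: accelerating decline
a, b, c, d = 0.1, 1.0, 0.6, 1.0      # hypothetical STIRPAT elasticities

# ln I = a + b*ln P + c*ln A + d*ln T; the peak year is the argmax.
ln_I = a + b * np.log(P) + c * np.log(A) + d * np.log(T)
peak_year = int(years[np.argmax(ln_I)])
```

Whether and when a peak appears depends entirely on whether the intensity decline eventually outpaces affluence growth, which is exactly what the low/middle/high scenarios in the abstract vary.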
A quantitative trait locus mixture model that avoids spurious LOD score peaks.
Feenstra, Bjarke; Skovgaard, Ib M
2004-06-01
In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
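The mechanism behind such spurious peaks, namely that a two-component normal mixture can never fit worse than a single normal, can be demonstrated in a few lines. The grid search below stands in for EM, and everything is a toy illustration rather than the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, size=100)   # phenotypes from a SINGLE normal (no QTL)

def loglik_single(y):
    mu, sd = y.mean(), y.std()       # maximum likelihood fit of one normal
    return np.sum(-0.5 * ((y - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi)))

def loglik_mix(y, mu1, mu2, sd):
    # equal-weight two-component normal mixture with a common SD
    norm = sd * np.sqrt(2 * np.pi)
    p1 = np.exp(-0.5 * ((y - mu1) / sd) ** 2) / norm
    p2 = np.exp(-0.5 * ((y - mu2) / sd) ** 2) / norm
    return np.sum(np.log(0.5 * (p1 + p2)))

sd = y.std()
grid = np.concatenate(([y.mean()], np.linspace(-1.0, 1.0, 21)))
best = max(loglik_mix(y, m1, m2, sd) for m1 in grid for m2 in grid)
lod = (best - loglik_single(y)) / np.log(10)   # LOD-type score, always >= 0
```

Since the grid contains the degenerate case mu1 = mu2 = mean(y), the mixture likelihood dominates the single-normal likelihood even though the data contain no QTL effect, which is why low-information regions can show misleading LOD peaks.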
Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria
2017-03-01
An accurate battery model is the prerequisite for reliable state estimation of a vanadium redox battery (VRB). As the battery model parameters are time-varying with operating condition variation and battery aging, common methods in which model parameters are empirical or prescribed offline lack accuracy and robustness. To address this issue, this paper proposes to use an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both recursive least squares (RLS) and the extended Kalman filter (EKF). A performance comparison shows that RLS is superior with respect to modeling accuracy, convergence properties, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator which incorporates the constraints of the voltage limit, SOC limit and design limit of current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of prediction time horizon selection on the peak power is also analyzed.
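A generic RLS update of the kind compared in the abstract can be sketched as follows. The two-parameter linear-in-parameters model and all numbers are invented for illustration, not the paper's VRB model:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least squares step for y = theta^T x.

    lam is a forgetting factor (< 1) that lets the estimate track
    time-varying parameters, as needed for an aging battery.
    """
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct estimate with the residual
    P = (P - np.outer(k, Px)) / lam      # update inverse-correlation matrix
    return theta, P

# Noise-free synthetic identification run.
rng = np.random.default_rng(3)
true_theta = np.array([1.5, -0.3])       # hypothetical model parameters
theta = np.zeros(2)
P = np.eye(2) * 1000.0                   # large initial covariance
for _ in range(500):
    x = rng.normal(size=2)               # regressor (e.g. current, past voltage)
    y = x @ true_theta                   # measured output
    theta, P = rls_update(theta, P, x, y)
```

With persistent excitation the estimate converges to the true parameters; the abstract's EKF alternative would additionally propagate a state estimate, at higher computational cost.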
Multi-model comparison of CO2 emissions peaking in China: Lessons from CEMF01 study
Directory of Open Access Journals (Sweden)
Oleg Lugovoy
2018-03-01
Full Text Available The paper summarizes results of the China Energy Modeling Forum's (CEMF first study. Carbon emissions peaking scenarios, consistent with China's Paris commitment, have been simulated with seven national and industry-level energy models and compared. The CO2 emission trends in the considered scenarios peak from 2015 to 2030 at the level of 9–11 Gt. Sector-level analysis suggests that total emissions pathways before 2030 will be determined mainly by dynamics of emissions in the electric power industry and transportation sector. Both sectors will experience significant increase in demand, but have low-carbon alternative options for development. Based on a side-by-side comparison of modeling input and results, conclusions have been drawn regarding the sources of emissions projections differences, which include data, views on economic perspectives, or models' structure and theoretical framework. Some suggestions have been made regarding energy models' development priorities for further research. Keywords: Carbon emissions projections, Climate change, CO2 emissions peak, China's Paris commitment, Top-Down energy models, Bottom-Up energy models, Multi model comparative study, China Energy Modeling Forum (CEMF
Generalization of Random Intercept Multilevel Models
Directory of Open Access Journals (Sweden)
Rehan Ahmad Khan
2013-10-01
Full Text Available The concept of random intercept models in a multilevel model developed by Goldstein (1986) has been extended to k levels. The random variation in intercepts at the individual level is marginally split into components by incorporating higher levels of hierarchy in the single-level model. Thus, one can control the random variation in intercepts by incorporating the higher levels in the model.
Bertuzzo, E.; Mari, L.; Righetto, L.; Casagrandi, R.; Gatto, M.; Rodriguez-Iturbe, I.; Rinaldo, A.
2010-12-01
The seasonality of cholera and its relation with environmental drivers are receiving increasing interest and research efforts, yet they remain unsatisfactorily understood. A striking example is the observed annual cycle of cholera incidence in the Bengal region, which exhibits two peaks even though the main environmental drivers that have been linked to the disease (air and sea surface temperature, zooplankton density, river discharge) follow a synchronous single-peak annual pattern. A first outbreak, mainly affecting the coastal regions, occurs in spring and it is followed, after a period of low incidence during summer, by a second, usually larger, peak in autumn also involving regions situated farther inland. A hydroclimatological explanation for this unique seasonal cycle has been recently proposed: the low river spring flows favor the intrusion of brackish water (the natural environment of the causative agent of the disease) which, in turn, triggers the first outbreak. The summer rising river discharges have a temporary dilution effect and prompt the repulsion of contaminated water which lowers the disease incidence. However, the monsoon flooding, together with the induced crowding of the population and the failure of the sanitation systems, can possibly facilitate the spatial transmission of the disease and promote the autumn outbreak. We test this hypothesis using a mechanistic, spatially explicit model of cholera epidemic. The framework directly accounts for the role of the river network in transporting and redistributing cholera bacteria among human communities as well as for the annual fluctuation of the river flow. The model is forced with the actual environmental drivers of the region, namely river flow and temperature. Our results show that these two drivers, both having a single peak in the summer, can generate a double peak cholera incidence pattern. Besides temporal patterns, the model is also able to qualitatively reproduce spatial patterns characterized
Modeling of Lightning Strokes Using Two-Peaked Channel-Base Currents
Directory of Open Access Journals (Sweden)
V. Javor
2012-01-01
Full Text Available Lightning electromagnetic field is obtained by using "engineering" models of lightning return strokes and new channel-base current functions, and the results are presented in this paper. Experimentally measured channel-base currents are approximated not only with functions having two-peaked waveshapes but also with the one-peaked function usually used in the literature. These functions are simple to apply in any "engineering" or electromagnetic model. For the three "engineering" models: the transmission line model (without peak current decay), the transmission line model with linear decay, and the transmission line model with exponential decay with height, a comparison of electric and magnetic field components at different distances from the lightning channel base is presented for the case of a perfectly conducting ground. Different heights of lightning channels are also considered. These results enable analysis of the advantages/shortcomings of the return stroke models used, according to the electromagnetic field features to be achieved, as obtained by measurements.
Modeling for the management of peak loads on a radiology image management network
International Nuclear Information System (INIS)
Dwyer, S.J.; Cox, G.G.; Templeton, A.W.; Cook, L.T.; Anderson, W.H.; Hensley, K.S.
1987-01-01
The design of a radiology image management network for a radiology department can now be assisted by a queueing model. The queueing model requires that the designers specify the following parameters: the number of tasks to be accomplished (acquisition of image data, transmission of data, archiving of data, displaying and manipulation of data, and generation of hard copies); the average times to complete each task; the patient scheduled arrival times; and the number/type of computer nodes interfaced to the network (acquisition nodes, interactive diagnostic display stations, archiving nodes, hard copy nodes, and gateways to hospital systems). The outcomes from the queueing model include mean throughput data rates and identified bottlenecks, and peak throughput data rates and identified bottlenecks. This exhibit presents the queueing model and illustrates its use in managing peak loads on an image management network.
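For intuition, a single node in such a network (say, an archive server) can be treated as an M/M/1 queue; the arrival and service rates below are invented for illustration, not figures from the exhibit:

```python
# Back-of-envelope M/M/1 sketch for one network node under peak load.
lam = 8.0   # image transfers arriving per minute (hypothetical peak rate)
mu = 10.0   # transfers the node can complete per minute (hypothetical)

rho = lam / mu                      # utilization: 0.8
n_in_system = rho / (1.0 - rho)     # mean jobs at the node: ~4
wait = n_in_system / lam            # mean minutes in system (Little's law): ~0.5
```

As `rho` approaches 1, the queue length and waiting time blow up, which is exactly the bottleneck behaviour the queueing model is used to locate under peak load.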
Peaks, plateaus, canyons, and craters: The complex geometry of simple mid-domain effect models
DEFF Research Database (Denmark)
Colwell, Robert K.; Gotelli, Nicholas J.; Rahbek, Carsten
2009-01-01
We used a spreading dye algorithm to place assemblages of species of uniform range size in one-dimensional or two-dimensional bounded domains. In some models, we allowed dispersal to introduce range discontinuity. Results: As uniform range size increases from small to medium, a flat pattern of species richness is replaced by a pair of peripheral peaks, separated by a valley (one-dimensional models), or by a cratered ring (two-dimensional models) of species richness… of a uniform size generate more complex patterns, including peaks, plateaus, canyons, and craters of species richness.
Infinite Random Graphs as Statistical Mechanical Models
DEFF Research Database (Denmark)
Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria
2011-01-01
We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a ...
Blanch, E.; Altadill, D.
2009-04-01
Geomagnetic storms disturb the quiet behaviour of the ionosphere, its electron density and the electron density peak height, hmF2. Much work has been done to predict the variations of the electron density, but few efforts have been dedicated to predicting the variations of the hmF2 under disturbed helio-geomagnetic conditions. We present the results of analyses of F2-layer peak height disturbances that occurred during intense geomagnetic storms over one solar cycle. The results systematically show a significant peak height increase about 2 hours after the beginning of the main phase of the geomagnetic storm, independently of both the local time position of the station at the onset of the storm and the intensity of the storm. An additional uplift is observed in the post-sunset sector. The duration of the uplift and the height increase depend on the intensity of the geomagnetic storm, the season and the local time position of the station at the onset of the storm. An empirical model has been developed to predict the electron density peak height disturbances in response to solar wind conditions and local time, which can be used for nowcasting and forecasting the hmF2 disturbances for the middle latitude ionosphere. This is an important output for EURIPOS project operational purposes.
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
How would peak rainfall intensity affect runoff predictions using conceptual water balance models?
Directory of Open Access Journals (Sweden)
B. Yu
2015-06-01
Full Text Available Most hydrological models use continuous daily precipitation and potential evapotranspiration for streamflow estimation. With the projected increase in mean surface temperature, hydrological processes are set to intensify irrespective of the underlying changes to the mean precipitation. The effect of an increase in rainfall intensity on the long-term water balance is, however, not adequately accounted for in the commonly used hydrological models. This study follows from a previous comparative analysis of a non-stationary daily series of streamflow of a forested watershed (River Rimbaud, French Alps; area = 1.478 km²; 1966–2006). Non-stationarity in the recorded streamflow occurred as a result of a severe wildfire in 1990. Two daily models (AWBM and SimHyd) were initially calibrated for each of three distinct phases in relation to the well-documented land disturbance. At the daily and monthly timescales, both models performed satisfactorily, with the Nash–Sutcliffe coefficient of efficiency (NSE) varying from 0.77 to 0.92. When aggregated to the annual timescale, both models underestimated the flow by about 22% with a reduced NSE of about 0.71. Exploratory data analysis was undertaken to relate daily peak hourly rainfall intensity to the discrepancy between the observed and modelled daily runoff amounts. Preliminary results show that the effect of peak hourly rainfall intensity on runoff prediction is insignificant, and model performance is unlikely to improve when peak daily precipitation is included. Trend analysis indicated that the large decrease of precipitation on days when the daily precipitation amount exceeded 10–20 mm may have contributed greatly to the decrease in streamflow of this forested watershed.
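The NSE scores quoted above follow the standard definition, NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))², which is easy to compute directly (the discharge numbers below are made up):

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 for a perfect fit, 0 for predicting the mean
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [3.0, 5.0, 9.0, 4.0, 2.0]        # hypothetical daily flows
perfect = nse(obs, obs)                # 1.0
mean_only = nse(obs, [4.6] * 5)        # 0.0: no better than the mean
```

Negative values, not shown here, indicate a simulation worse than simply predicting the observed mean, which is why the 0.77-0.92 range in the abstract counts as satisfactory.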
Zhao, F.; Veldkamp, T.; Frieler, K.; Schewe, J.; Ostberg, S.; Willner, S. N.; Schauberger, B.; Gosling, S.; Mueller Schmied, H.; Portmann, F. T.; Leng, G.; Huang, M.; Liu, X.; Tang, Q.; Hanasaki, N.; Biemans, H.; Gerten, D.; Satoh, Y.; Pokhrel, Y. N.; Stacke, T.; Ciais, P.; Chang, J.; Ducharne, A.; Guimberteau, M.; Wada, Y.; Kim, H.; Yamazaki, D.
2017-12-01
Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
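The kind of peak-discharge comparison described above can be summarised with a percent-bias score per routing variant; the discharge numbers below are invented for illustration, not ISIMIP2a results:

```python
import numpy as np

# Hypothetical annual maximum daily discharges at one station (m^3/s).
obs_peaks = np.array([1200.0, 950.0, 1420.0, 1100.0])    # observations
sim_native = np.array([1500.0, 1210.0, 1820.0, 1390.0])  # GHM native routing
sim_cama = np.array([1290.0, 1010.0, 1530.0, 1160.0])    # after CaMa-Flood routing

def pbias(sim, obs):
    # percent bias: positive = overestimation of peaks
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

cama_better = abs(pbias(sim_cama, obs_peaks)) < abs(pbias(sim_native, obs_peaks))
```

In this invented example the routed peaks are damped toward the observations, mimicking the floodplain-storage effect the study reports over about two thirds of the analysed basin area.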
Wijetunge, Chalini D; Saeed, Isaam; Boughton, Berin A; Roessner, Ute; Halgamuge, Saman K
2015-01-01
Mass Spectrometry (MS) is a ubiquitous analytical tool in biological research and is used to measure the mass-to-charge ratio of bio-molecules. Peak detection is the essential first step in MS data analysis. Precise estimation of peak parameters such as peak summit location and peak area are critical to identify underlying bio-molecules and to estimate their abundances accurately. We propose a new method to detect and quantify peaks in mass spectra. It uses dual-tree complex wavelet transformation along with Stein's unbiased risk estimator for spectra smoothing. Then, a new method, based on the modified Asymmetric Pseudo-Voigt (mAPV) model and hierarchical particle swarm optimization, is used for peak parameter estimation. Using simulated data, we demonstrated the benefit of using the mAPV model over Gaussian, Lorentz and Bi-Gaussian functions for MS peak modelling. The proposed mAPV model achieved the best fitting accuracy for asymmetric peaks, with lower percentage errors in peak summit location estimation, which were 0.17% to 4.46% less than that of the other models. It also outperformed the other models in peak area estimation, delivering lower percentage errors, which were about 0.7% less than its closest competitor - the Bi-Gaussian model. In addition, using data generated from a MALDI-TOF computer model, we showed that the proposed overall algorithm outperformed the existing methods mainly in terms of sensitivity. It achieved a sensitivity of 85%, compared to 77% and 71% of the two benchmark algorithms, continuous wavelet transformation based method and Cromwell respectively. The proposed algorithm is particularly useful for peak detection and parameter estimation in MS data with overlapping peak distributions and asymmetric peaks. The algorithm is implemented using MATLAB and the source code is freely available at http://mapv.sourceforge.net.
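As a point of reference for the peak-shape discussion, one common way to write an asymmetric pseudo-Voigt (a Gaussian/Lorentzian blend whose width differs on either side of the summit) is sketched below. The paper's mAPV parameterization may differ, and all numbers are illustrative:

```python
import numpy as np

def apv(x, height, center, w_left, w_right, eta):
    # side-dependent width gives the asymmetry; eta blends the two shapes
    w = np.where(x < center, w_left, w_right)
    g = np.exp(-0.5 * ((x - center) / w) ** 2)    # Gaussian part
    l = 1.0 / (1.0 + ((x - center) / w) ** 2)     # Lorentzian part
    return height * (eta * l + (1.0 - eta) * g)

x = np.linspace(-5.0, 5.0, 1001)                  # m/z axis (arbitrary units)
y = apv(x, 10.0, 0.0, 0.8, 1.6, 0.3)              # right tail broader than left
```

Fitting such a profile to a smoothed spectrum yields the summit location and, by integration, the peak area; the paper optimizes these parameters with hierarchical particle swarm optimization rather than a fixed grid.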
A new approach for modeling the peak utility impacts from a proposed CUAC standard
Energy Technology Data Exchange (ETDEWEB)
LaCommare, Kristina Hamachi; Gumerman, Etan; Marnay, Chris; Chan, Peter; Coughlin, Katie
2004-08-01
This report describes a new Berkeley Lab approach for modeling the likely peak electricity load reductions from proposed energy efficiency programs in the National Energy Modeling System (NEMS). This method is presented in the context of the commercial unitary air conditioning (CUAC) energy efficiency standards. A previous report investigating the residential central air conditioning (RCAC) load shapes in NEMS revealed that the peak reduction results were lower than expected. This effect was believed to be due in part to the presence of the squelch, a program algorithm designed to ensure changes in the system load over time are consistent with the input historic trend. The squelch applies a system load-scaling factor that scales any differences between the end-use bottom-up and system loads to maintain consistency with historic trends. To obtain more accurate peak reduction estimates, a new approach for modeling the impact of peaky end uses in NEMS-BT has been developed. The new approach decrements the system load directly, reducing the impact of the squelch on the final results. This report also discusses a number of additional factors, in particular non-coincidence between end-use loads and system loads as represented within NEMS, and their impacts on the peak reductions calculated by NEMS. Using Berkeley Lab's new double-decrement approach reduces the conservation load factor (CLF) on an input load decrement from 25% down to 19% for a SEER 13 CUAC trial standard level, as seen in NEMS-BT output. About 4 GW more in peak capacity reduction results from this new approach as compared to Berkeley Lab's traditional end-use decrement approach, which relied solely on lowering end-use energy consumption. The new method has been fully implemented and tested in the Annual Energy Outlook 2003 (AEO2003) version of NEMS and will routinely be applied to future versions. This capability is now available for use in future end-use efficiency or other policy analysis.
The Impact of the Twin Peaks Model on the Insurance Industry
Directory of Open Access Journals (Sweden)
Daleen Millard
2017-02-01
Full Text Available Financial regulation in South Africa changes constantly. In the quest to find the ideal regulatory framework for optimal consumer protection, rules change all the time and international trends have an important influence on lawmakers nationally. The Financial Sector Regulation Bill, also known as the "Twin Peaks" Bill, is the latest invention from the table of the legislature, and some expect this Bill to have far-reaching consequences for the financial services industry. The question is, of course, whether the current dispensation will change so quickly and so dramatically that it will literally be the end of the world as we know it or whether there will be a gradual shift in emphasis away from the so-called silo regulatory approach to an approach that distinguishes between prudential regulation on the one hand and market conduct regulation on the other. A further question is whether insurance as a financial service will change dramatically in the light of the expected twin peak dispensation. The purpose of this paper is to discuss the implications of the FSR Bill for the insurance industry. Instead of analysing the Bill feature for feature, the method that will be used in this enquiry is to identify trends and issues from 2014 and to discuss whether the Twin Peaks model, once implemented, can successfully eradicate similar problems in future. The impact of Twin Peaks will of course have to be tested, but at this point in time it may be very useful to take an educated guess by using recent cases as examples. Recent cases before the courts, the Enforcement Committee and the FAIS Ombud will be discussed not only as examples of the most prevalent issues of the past year or so, but also as examples of how consumer issues and systemic risks are currently being dealt with and how this may change with the implementation of the FSR Bill.
DEFF Research Database (Denmark)
Coman, Paul Tiberiu; Veje, Christian
2013-01-01
Numerical model and analysis of peak temperature reduction in LiFePO4 battery packs using phase change materials
International Nuclear Information System (INIS)
Chevallier, Bruno; Moncomble, Jean-Eudes; Sigonney, Pierre; Vially, Rolland; Bosseboeuf, Didier; Chateau, Bertrand
2012-01-01
This article reports a workshop which addressed several energy issues, such as the objectives and constraints of energy mix scenarios, the differences between the approaches in different countries, the cost of the new technologies implemented for these purposes, how these technologies will be developed and marketed, and what the environmental and societal acceptability of these technical choices will be. Several aspects and issues have been presented and discussed in more detail: peak oil, the development of shale gases and their cost (will non-conventional hydrocarbons modify the peak oil and be socially accepted?), energy efficiency (its benefits, its reality in France and other countries, its position facing the challenge of the energy transition), and strategies in the transport sector (challenges for mobility, evolution towards a model of sustainable mobility).
Interdependent demands, regulatory constraint, and peak-load pricing. [Assessment of Bailey's model
Energy Technology Data Exchange (ETDEWEB)
Nguyen, D T; Macgregor-Reid, G J
1977-06-01
A model of a regulated firm which includes an analysis of peak-load pricing has been formulated by E. E. Bailey in which three alternative modes of regulation on a profit-maximizing firm are considered. The main conclusion reached is that under a regulation limiting the rate of return on capital investment, price reductions are received solely by peak-users and that when regulation limiting the profit per unit of output or the return on costs is imposed, there are price reductions for all users. Bailey has expressly assumed that the demands in different periods are interdependent but has somehow failed to derive the correct price and welfare implications of this empirically highly relevant assumption. Her conclusions would have been perfectly correct for marginal revenues but are quite incorrect for prices, even if her assumption that price exceeds marginal revenues in every period holds. This present paper derives fully and rigorously the implications of regulation for prices, outputs, capacity, and social welfare for a profit-maximizing firm with interdependent demands. In section II, Bailey's model is reproduced and the optimal conditions are given. In section III, it is demonstrated that under the conditions of interdependent demands assumed by Bailey herself, her often-quoted conclusion concerning the effects of the return-on-investment regulation on the off-peak price is invalid. In section IV, the effects of the return-on-investment regulation on the optimal prices, outputs, capacity, and social welfare both for the case in which the demands in different periods are substitutes and for the case in which they are complements are examined. In section V, the pricing and welfare implications of the return-on-investment regulation are compared with the two other modes of regulation considered by Bailey. Section VI is a summary of all sections. (MCW)
Random matrix model for disordered conductors
Indian Academy of Sciences (India)
In the interpretation of transport properties of mesoscopic systems, the multichannel ... One defines the random matrix model with N eigenvalues ... With heuristic arguments, using ideas pertaining to the Dyson Coulomb gas analogy, ...
The random walk model of intrafraction movement
International Nuclear Information System (INIS)
Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M
2013-01-01
The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)
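One stylized fact cited in the abstract, the growth of the standard deviation of displacements with the square root of time, is easy to reproduce in a generic simulation. This is an illustrative sketch only; the step distribution, walk count and horizon are arbitrary choices, not the paper's clinical setup:

```python
import random
import statistics

def walk_spread(n_walks=2000, n_steps=100, step_sd=1.0, seed=7):
    """Simulate 1D Gaussian random walks and return the standard deviation
    of the displacement at a quarter of the horizon and at the full horizon."""
    rng = random.Random(seed)
    quarter, full = [], []
    t_quarter = n_steps // 4
    for _ in range(n_walks):
        x = 0.0
        for t in range(1, n_steps + 1):
            x += rng.gauss(0.0, step_sd)
            if t == t_quarter:
                quarter.append(x)
        full.append(x)
    return statistics.pstdev(quarter), statistics.pstdev(full)

sd_quarter, sd_full = walk_spread()
ratio = sd_full / sd_quarter  # time grows 4x, so the spread should roughly double
```

For uncorrelated steps the spread at time t is step_sd * sqrt(t), so the ratio comes out near 2 rather than 4; the non-Gaussian corrections the paper accounts for show up in the shape of the distribution, not in this scaling.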
A New-Trend Model-Based to Solve the Peak Power Problems in OFDM Systems
Directory of Open Access Journals (Sweden)
Ashraf A. Eltholth
2008-01-01
Full Text Available The high peak to average power ratio (PAR) levels of orthogonal frequency division multiplexing (OFDM) signals attracted the attention of many researchers during the past decade. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this paper, we propose a new trend in mitigating the peak power problem in OFDM systems based on modeling the effects of clipping and amplifier nonlinearities in an OFDM system. We show that the distortion due to these effects is highly related to the dynamic range itself rather than to the clipping level or the saturation level of the nonlinear amplifier, and thus we propose two criteria to reduce the dynamic range of the OFDM signal, namely, the use of MSK modulation and the use of the Hadamard transform. Computer simulations of the OFDM system using Matlab are completely matched with the deduced model in terms of OFDM signal quality metrics such as BER, ACPR, and EVM. Simulation results also show that even though the reduction of PAR using the two proposed criteria is not significant, the reduction in the amount of distortion due to the HPA is truly remarkable.
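The PAR metric at the center of this abstract can be computed directly from the inverse DFT of one OFDM symbol. The sketch below is generic (64 QPSK subcarriers are an arbitrary choice) and does not implement the paper's MSK or Hadamard-transform criteria:

```python
import cmath
import math
import random

def papr_db(symbols):
    """Peak-to-average power ratio (in dB) of one OFDM symbol, computed
    from a naive inverse DFT of the frequency-domain subcarrier symbols."""
    n = len(symbols)
    time_samples = [
        sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
        for t in range(n)
    ]
    powers = [abs(v) ** 2 for v in time_samples]
    return 10.0 * math.log10(max(powers) / (sum(powers) / n))

# Worst case: identical symbols on all carriers add coherently -> 10*log10(N) dB.
worst = papr_db([1.0] * 64)

# A random QPSK symbol (arbitrary example) has a much lower, but still
# substantial, PAR -- the dynamic-range problem the paper aims to mitigate.
rng = random.Random(1)
typical = papr_db([complex(rng.choice([-1, 1]), rng.choice([-1, 1])) for _ in range(64)])
```

The gap between the worst case and a typical random symbol is exactly why clipping and amplifier saturation matter for OFDM but not for constant-envelope modulations.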
Entropy Characterization of Random Network Models
Directory of Open Access Journals (Sweden)
Pedro J. Zufiria
2017-06-01
Full Text Available This paper elaborates on the Random Network Model (RNM as a mathematical framework for modelling and analyzing the generation of complex networks. Such framework allows the analysis of the relationship between several network characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc. and entropy-based complexity measures, providing new insight on the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.
A theoretical model for predicting the Peak Cutting Force of conical picks
Directory of Open Access Journals (Sweden)
Gao Kuidong
2014-01-01
Full Text Available In order to predict the PCF (Peak Cutting Force) of a conical pick in the rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. The vertical fracture model of the rock cutting fragment is also established based on the maximum tensile criterion. The relation between the vertical fracture angle and associated parameters (cutting parameter and the ratio B of rock compressive strength to tensile strength) is obtained by a numerical analysis method and a polynomial regression method, and the correctness of the rock vertical fracture model is verified through experiments. The linear regression coefficient between the predicted and experimental PCF is 0.81, and a significance level less than 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and the Evans model reveals that the result of this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of the conical pick and designing the cutting mechanism.
International Nuclear Information System (INIS)
Yu, L.; Li, Y.P.; Huang, G.H.
2016-01-01
In this study, a FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS (electric power systems) while considering peak demand under uncertainty. FSSOM integrates techniques of SVR (support vector regression), Monte Carlo simulation, and FICMP (fractile interval chance-constrained mixed-integer programming). In FSSOM, uncertainties expressed as fuzzy boundary intervals and random variables can be effectively tackled. In addition, the SVR-coupled Monte Carlo technique is used for predicting the peak electricity demand. The FSSOM is applied to planning EPS for the City of Qingdao, China. Solutions of the electricity generation pattern to satisfy the city's peak demand under different probability levels and p-necessity levels have been generated. Results reveal that the city's electricity supply from renewable energies would be low (only occupying 8.3% of the total electricity generation). Compared with the energy model without considering peak demand, the FSSOM can better guarantee the city's power supply and thus reduce the system failure risk. The findings can help decision makers not only adjust the existing electricity generation/supply pattern but also coordinate the conflicting interactions among system cost, energy supply security, pollutant mitigation, and constraint-violation risk. - Highlights: • FSSOM (fuzzy-stochastic simulation-optimization model) is developed for planning EPS. • It can address uncertainties as fuzzy-boundary intervals and random variables. • FSSOM can satisfy peak-electricity demand and optimize power allocation. • Solutions under different probability levels and p-necessity levels are analyzed. • Results create a tradeoff between system cost and peak-electricity demand violation risk.
Peak Shaving Considering Streamflow Uncertainties | Iwuagwu ...
African Journals Online (AJOL)
The main thrust of this paper is peak shaving with a stochastic hydro model. In peak shaving, the amount of hydro energy scheduled may be a minimum but it serves to replace less efficient thermal units. The sample system is the Kainji hydro plant and the thermal units of the National Electric Power Authority. The random ...
Swartz, M.; Allkofer, Y.; Bortoletto, D.; Cremaldi, L.; Cucciarelli, S.; Dorokhov, A.; Hoermann, C.; Kim, D.; Konecki, M.; Kotlinski, D.; Prokofiev, Kirill; Regenfus, Christian; Rohe, T.; Sanders, D.A.; Son, S.; Speer, T.
2006-01-01
We show that doubly peaked electric fields are necessary to describe grazing-angle charge collection measurements of irradiated silicon pixel sensors. A model of irradiated silicon based upon two defect levels with opposite charge states and the trapping of charge carriers can be tuned to produce a good description of the measured charge collection profiles in the fluence range from 0.5x10^{14} Neq/cm^2 to 5.9x10^{14} Neq/cm^2. The model correctly predicts the variation in the profiles as the temperature is changed from -10C to -25C. The measured charge collection profiles are inconsistent with the linearly-varying electric fields predicted by the usual description based upon a uniform effective doping density. This observation calls into question the practice of using effective doping densities to characterize irradiated silicon.
A Generalized Random Regret Minimization Model
Chorus, C.G.
2013-01-01
This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM
Computer simulations of the random barrier model
DEFF Research Database (Denmark)
Schrøder, Thomas; Dyre, Jeppe
2002-01-01
A brief review of experimental facts regarding ac electronic and ionic conduction in disordered solids is given followed by a discussion of what is perhaps the simplest realistic model, the random barrier model (symmetric hopping model). Results from large scale computer simulations are presented...
Using computational modeling to compare X-ray tube Practical Peak Voltage for Dental Radiology
International Nuclear Information System (INIS)
Holanda Cassiano, Deisemar; Arruda Correa, Samanda Cristine; Monteiro de Souza, Edmilson; Silva, Ademir Xaxier da; Pereira Peixoto, José Guilherme; Tadeu Lopes, Ricardo
2014-01-01
The Practical Peak Voltage (PPV) has been adopted to measure the voltage applied to an X-ray tube. The PPV was recommended by the IEC document and accepted and published in the TRS no. 457 code of practice. The PPV is defined and applied to all forms of waves and is related to the spectral distribution of X-rays and to the properties of the image. The calibration of X-ray tubes was performed using the MCNPX Monte Carlo code. An X-ray tube for Dental Radiology (operated from a single-phase power supply) and an X-ray tube used as a reference (supplied from a constant potential power supply) were used in simulations across the energy range of interest of 40 kV to 100 kV. Results obtained indicated a linear relationship between the tubes involved. - Highlights: • A computational model was developed for the X-ray tube Practical Peak Voltage in Dental Radiology. • The calibration of X-ray tubes was performed using the MCNPX Monte Carlo code. • The energy range was 40-100 kV. • Results obtained indicated a linear relationship between the Dental Radiology and reference X-ray tubes
RMBNToolbox: random models for biochemical networks
Directory of Open Access Journals (Sweden)
Niemi Jari
2007-05-01
Full Text Available Abstract Background There is an increasing interest to model biochemical and cell biological networks, as well as to the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. The toolbox works in the Matlab environment, and it makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information of real biochemical networks can be used in situations where it is known. The toolbox can be easily extended. The resulting network models can be exported in the format of Systems Biology Markup Language. Conclusion While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics to the overall behavior of the network. Moreover, the construction of artificial network models provides the ground truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.
Experimental discrimination of ion stopping models near the Bragg peak in highly ionized matter
Cayzac, W.; Frank, A.; Ortner, A.; Bagnoud, V.; Basko, M. M.; Bedacht, S.; Bläser, C.; Blažević, A.; Busold, S.; Deppert, O.; Ding, J.; Ehret, M.; Fiala, P.; Frydrych, S.; Gericke, D. O.; Hallo, L.; Helfrich, J.; Jahn, D.; Kjartansson, E.; Knetsch, A.; Kraus, D.; Malka, G.; Neumann, N. W.; Pépitone, K.; Pepler, D.; Sander, S.; Schaumann, G.; Schlegel, T.; Schroeter, N.; Schumacher, D.; Seibert, M.; Tauschwitz, An.; Vorberger, J.; Wagner, F.; Weih, S.; Zobus, Y.; Roth, M.
2017-01-01
The energy deposition of ions in dense plasmas is a key process in inertial confinement fusion that determines the α-particle heating expected to trigger a burn wave in the hydrogen pellet and resulting in high thermonuclear gain. However, measurements of ion stopping in plasmas are scarce and mostly restricted to high ion velocities where theory agrees with the data. Here, we report experimental data at low projectile velocities near the Bragg peak, where the stopping force reaches its maximum. This parameter range features the largest theoretical uncertainties, and conclusive data have been missing until now. The precision of our measurements, combined with a reliable knowledge of the plasma parameters, allows us to disprove several standard models for the stopping power for beam velocities typically encountered in inertial fusion. On the other hand, our data support theories that include a detailed treatment of strong ion-electron collisions. PMID:28569766
Modeling of GE Appliances in GridLAB-D: Peak Demand Reduction
Energy Technology Data Exchange (ETDEWEB)
Fuller, Jason C.; Vyakaranam, Bharat GNVSR; Prakash Kumar, Nirupama; Leistritz, Sean M.; Parker, Graham B.
2012-04-29
The widespread adoption of demand response enabled appliances and thermostats can result in significant reduction to peak electrical demand and provide potential grid stabilization benefits. GE has developed a line of appliances that will have the capability of offering several levels of demand reduction actions based on information from the utility grid, often in the form of price. However due to a number of factors, including the number of demand response enabled appliances available at any given time, the reduction of diversity factor due to the synchronizing control signal, and the percentage of consumers who may override the utility signal, it can be difficult to predict the aggregate response of a large number of residences. The effects of these behaviors can be modeled and simulated in open-source software, GridLAB-D, including evaluation of appliance controls, improvement to current algorithms, and development of aggregate control methodologies. This report is the first in a series of three reports describing the potential of GE's demand response enabled appliances to provide benefits to the utility grid. The first report will describe the modeling methodology used to represent the GE appliances in the GridLAB-D simulation environment and the estimated potential for peak demand reduction at various deployment levels. The second and third reports will explore the potential of aggregated group actions to positively impact grid stability, including frequency and voltage regulation and spinning reserves, and the impacts on distribution feeder voltage regulation, including mitigation of fluctuations caused by high penetration of photovoltaic distributed generation and the effects on volt-var control schemes.
Maheshwari, Rajesh; Tracy, Mark; Hinder, Murray; Wright, Audrey
2017-08-01
The aim of this study was to compare mask leak with three different peak inspiratory pressure (PIP) settings during T-piece resuscitator (TPR; Neopuff) mask ventilation on a neonatal manikin model. Participants were neonatal unit staff members. They were instructed to provide mask ventilation with a TPR with three PIP settings (20, 30, 40 cm H2O) chosen in a random order. Each episode lasted 2 min with a 2-min rest period. Flow rate and positive end-expiratory pressure (PEEP) were kept constant. Airway pressure, inspiratory and expiratory tidal volumes, mask leak, respiratory rate and inspiratory time were recorded. Repeated measures analysis of variance was used for statistical analysis. A total of 12 749 inflations delivered by 40 participants were analysed. There were no statistically significant differences (P > 0.05) in the mask leak with the three PIP settings. No statistically significant differences were seen in respiratory rate and inspiratory time with the three PIP settings. There was a significant rise in PEEP as the PIP increased. Failure to achieve the desired PIP was observed, especially at the higher settings. In a neonatal manikin model, the mask leak does not vary as a function of the PIP when the flow rate is constant. With a fixed rate and inspiratory time, there seems to be a rise in PEEP with increasing PIP. © 2017 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).
Directory of Open Access Journals (Sweden)
Wenxin Niu
2014-01-01
Full Text Available Objectives. (1) To systematically review peak vertical ground reaction force (PvGRF) during two-leg drop landing from specific drop heights (DH), (2) to construct a mathematical model describing correlations between PvGRF and DH, and (3) to analyze the effects of some factors on the pooled PvGRF regardless of DH. Methods. A computerized bibliographical search was conducted to extract PvGRF data on a single foot when participants landed with both feet from various DHs. An innovative mathematical model was constructed to analyze the effects of gender, landing type, shoes, ankle stabilizers, surface stiffness and sample frequency on PvGRF based on the pooled data. Results. Pooled PvGRF and DH data of 26 articles showed that the square root function fits their relationship well. An experimental validation was also done on the regression equation for the medium frequency. The PvGRF was not significantly affected by surface stiffness, but was significantly higher in men than in women, in the platform than the suspended landing, in the barefoot than the shod condition, and in the ankle stabilizer than the control condition, and higher at higher than at lower sample frequencies. Conclusions. The PvGRF and the square root of DH showed a linear relationship. The mathematical modeling method with systematic review is helpful to analyze the influence factors during landing movement without considering DH.
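The square-root relationship reported in the Results becomes an ordinary linear regression once the drop height is square-rooted. A minimal sketch on synthetic numbers (the coefficients below are made up, not the review's pooled estimates):

```python
import math

def fit_sqrt_model(heights, forces):
    """Least-squares fit of force = a*sqrt(height) + b, i.e. simple
    linear regression after transforming the predictor."""
    xs = [math.sqrt(h) for h in heights]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(forces) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, forces))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# synthetic data generated from an exact square-root law (a=3, b=1);
# the heights are hypothetical drop heights in metres
heights = [0.2, 0.3, 0.4, 0.6, 0.8]
forces = [3.0 * math.sqrt(h) + 1.0 for h in heights]
a, b = fit_sqrt_model(heights, forces)
```

On real pooled data the fit would of course not be exact; the review's point is that the residuals are small enough for the square-root form to be useful.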
Probabilistic model for fluences and peak fluxes of solar energetic particles
International Nuclear Information System (INIS)
Nymmik, R.A.
1999-01-01
The model is intended for calculating the probability for solar energetic particles (SEP), i.e., protons and Z=2-28 ions, to have an effect on hardware and on biological and other objects in space. The model describes the probability for the ≥10 MeV/nucleon SEP fluences and peak fluxes to occur in the near-Earth space beyond the Earth magnetosphere under varying solar activity. The physical prerequisites of the model are as follows. The occurrence of SEP is a probabilistic process. The mean SEP occurrence frequency is a power-law function of solar activity (sunspot number). The SEP size (taken to be the ≥30 MeV proton fluence size) distribution is a power-law function within a 10^5-10^11 proton/cm^2 range. The SEP event particle energy spectra are described by a common function whose parameters are distributed log-normally. The SEP mean composition is energy-dependent and suffers fluctuations described by log-normal functions in separate events.
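A power-law size distribution truncated to the stated 10^5-10^11 proton/cm^2 range can be drawn from by inverse-transform sampling, which is how such a probabilistic model is typically driven in Monte Carlo risk assessments. The spectral index used here is purely illustrative, not the model's fitted value:

```python
import random

def sample_fluence(rng, gamma=1.4, f_min=1e5, f_max=1e11):
    """Draw one fluence from a truncated power law p(F) ~ F**-gamma
    by inverting the closed-form CDF of the truncated distribution."""
    lo = f_min ** (1.0 - gamma)
    hi = f_max ** (1.0 - gamma)
    u = rng.random()
    return (lo + u * (hi - lo)) ** (1.0 / (1.0 - gamma))

rng = random.Random(42)
samples = [sample_fluence(rng) for _ in range(10000)]
in_range = all(1e5 <= s <= 1e11 for s in samples)
# power laws are bottom-heavy: most events sit near the lower cutoff
frac_below_1e7 = sum(s < 1e7 for s in samples) / len(samples)
```

The heavy concentration of events at small fluences, with rare large ones, is exactly what makes the occurrence of damaging SEP events a probabilistic question rather than a deterministic forecast.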
Rahpeyma, Sahar; Halldorsson, Benedikt; Hrafnkelsson, Birgir; Jonsson, Sigurjon
2018-01-01
Knowledge of the characteristics of earthquake ground motion is fundamental for earthquake hazard assessments. Over small distances, relative to the source–site distance, where uniform site conditions are expected, the ground motion variability is also expected to be insignificant. However, despite being located on what has been characterized as a uniform lava‐rock site condition, considerable peak ground acceleration (PGA) variations were observed on stations of a small‐aperture array (covering approximately 1 km2) of accelerographs in Southwest Iceland during the Ölfus earthquake of magnitude 6.3 on May 29, 2008 and its sequence of aftershocks. We propose a novel Bayesian hierarchical model for the PGA variations accounting separately for earthquake event effects, station effects, and event‐station effects. An efficient posterior inference scheme based on Markov chain Monte Carlo (MCMC) simulations is proposed for the new model. The variance of the station effect is certainly different from zero according to the posterior density, indicating that individual station effects are different from one another. The Bayesian hierarchical model thus captures the observed PGA variations and quantifies to what extent the source and recording sites contribute to the overall variation in ground motions over relatively small distances on the lava‐rock site condition.
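The decomposition into event, station and event-station terms can be illustrated without MCMC: in a balanced synthetic data set, station effects survive averaging over events while event effects and noise cancel. This is a method-of-moments toy, not the paper's Bayesian hierarchical inference, and all variance values are invented:

```python
import random
import statistics

def station_effect_spread(n_events=200, n_stations=40,
                          sd_event=0.5, sd_station=0.2, sd_noise=0.1, seed=3):
    """Simulate log-PGA = event effect + station effect + noise, then
    recover the spread of station effects by averaging over events."""
    rng = random.Random(seed)
    ev = [rng.gauss(0.0, sd_event) for _ in range(n_events)]
    st = [rng.gauss(0.0, sd_station) for _ in range(n_stations)]
    obs = [[ev[e] + st[s] + rng.gauss(0.0, sd_noise)
            for s in range(n_stations)] for e in range(n_events)]
    # each event effect shifts all stations equally, so the spread of the
    # per-station averages estimates the station-effect standard deviation
    station_means = [statistics.mean(obs[e][s] for e in range(n_events))
                     for s in range(n_stations)]
    return statistics.pstdev(station_means)

sd_station_hat = station_effect_spread()
```

A clearly non-zero estimate here mirrors the paper's posterior finding that the station-effect variance is distinguishable from zero, i.e. that nominally identical sites respond differently.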
Bouaziz, Walid; Kanagaratnam, Lukshe; Vogel, Thomas; Schmitt, Elise; Dramé, Moustapha; Kaltenbach, Georges; Geny, Bernard; Lang, Pierre Olivier
2018-01-02
Older adults undergo a progressive decline in cardiorespiratory fitness and functional capacity. This lower peak oxygen uptake (VO2peak) level is associated with increased risk of frailty, dependency, loss of autonomy, and mortality from all causes. Regular physical activity and particularly aerobic training (AT) have been shown to contribute to better and healthy aging. We conducted a meta-analysis to measure the exact benefit of AT on VO2peak in seniors aged 70 years or older. A comprehensive, systematic database search for articles was performed in Embase, Medline, PubMed Central, Science Direct, Scopus, and Web of Science using key words. Two reviewers independently assessed interventional studies for potential inclusion. Ten randomized controlled trials (RCTs) were included totaling 348 seniors aged 70 years or older. Across the trials, no high risk of bias was measured and all considered open-label arms for controls. With significant heterogeneity between the RCTs (all p ...), the VO2peak gains in healthy and unhealthy seniors were, respectively, 1.72 (95% CI: 0.34-3.10) and 1.47 (95% CI: 0.60-2.34). This meta-analysis confirms the AT-associated benefits on VO2peak in healthy and unhealthy seniors.
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
Carreau, J.; Naveau, P.; Neppel, L.
2017-05-01
The French Mediterranean is subject to intense precipitation events occurring mostly in autumn. These can potentially cause flash floods, the main natural danger in the area. The distribution of these events follows specific spatial patterns, i.e., some sites are more likely to be affected than others. The peaks-over-threshold approach consists in modeling extremes, such as heavy precipitation, by the generalized Pareto (GP) distribution. The shape parameter of the GP controls the probability of extreme events and can be related to the hazard level of a given site. When interpolating across a region, the shape parameter should reproduce the observed spatial patterns of the probability of heavy precipitation. However, the shape parameter estimators have high uncertainty which might hide the underlying spatial variability. As a compromise, we choose to let the shape parameter vary in a moderate fashion. More precisely, we assume that the region of interest can be partitioned into subregions with constant hazard level. We formalize the model as a conditional mixture of GP distributions. We develop a two-step inference strategy based on probability weighted moments and put forward a cross-validation procedure to select the number of subregions. A synthetic data study reveals that the inference strategy is consistent and not very sensitive to the selected number of subregions. An application on daily precipitation data from the French Mediterranean shows that the conditional mixture of GPs outperforms two interpolation approaches (with constant or smoothly varying shape parameter).
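The probability-weighted-moments step of the inference strategy can be sketched for a single GP fit using the classical Hosking-Wallis estimators; the conditional-mixture and spatial parts are beyond a few lines, and the parameter values below are arbitrary test inputs, not estimates from the French Mediterranean data:

```python
import random

def gpd_pwm_fit(excesses):
    """Fit GP shape and scale to threshold excesses by probability-weighted
    moments: a_s = E[X (1-F(X))^s] estimated from ascending order statistics."""
    x = sorted(excesses)
    n = len(x)
    a0 = sum(x) / n
    a1 = sum(v * (n - 1 - i) / (n - 1) for i, v in enumerate(x)) / n
    shape = 2.0 - a0 / (a0 - 2.0 * a1)      # xi in the (1 + xi*x/sigma) form
    scale = 2.0 * a0 * a1 / (a0 - 2.0 * a1)
    return shape, scale

# excesses drawn from a known GP (shape 0.2, scale 1.0) by CDF inversion
rng = random.Random(0)
xi, sigma = 0.2, 1.0
excesses = [sigma * (rng.random() ** -xi - 1.0) / xi for _ in range(20000)]
shape_hat, scale_hat = gpd_pwm_fit(excesses)
```

A positive fitted shape corresponds to the heavy-tailed, higher-hazard subregions discussed in the abstract; the uncertainty of this estimator is what motivates the paper's partition into subregions of constant hazard level.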
Modelling and computing the peaks of carbon emission with balanced growth
International Nuclear Information System (INIS)
Chang, Shuhua; Wang, Xinyu; Wang, Zheng
2016-01-01
Highlights: • We use a more practical utility function to quantify the society’s welfare. • A so-called discontinuous Galerkin method is proposed to solve the ordinary differential equation satisfied by the consumption. • The theoretical results of the discontinuous Galerkin method are obtained. • We establish a Markov model to forecast the energy mix and the industrial structure. - Abstract: In this paper, we assume that under the balanced and optimal economic growth path, the economic growth rate is equal to the consumption growth rate, from which we can obtain the ordinary differential equation governing the consumption level by solving an optimal control problem. Then, a novel numerical method, namely a so-called discontinuous Galerkin method, is applied to solve the ordinary differential equation. The error estimation and the superconvergence estimation of this method are also performed. The model’s mechanism, which makes our assumption coherent, is that once the energy intensity is given, the economic growth is determined, followed by the GDP, the energy demand and the emissions. By applying this model to China, we obtain the conclusion that under the balanced and optimal economic growth path the CO_2 emission will reach its peak in 2030 in China, which is consistent with the U.S.-China Joint Announcement on Climate Change and with other previous scientific results.
Peak quantification in surface-enhanced laser desorption/ionization by using mixture models
Dijkstra, Martijn; Roelofsen, Han; Vonk, Roel J.; Jansen, Ritsert C.
2006-01-01
Surface-enhanced laser desorption/ionization (SELDI) time of flight (TOF) is a mass spectrometry technology for measuring the composition of a sampled protein mixture. A mass spectrum contains peaks corresponding to proteins in the sample. The peak areas are proportional to the measured
Probabilistic model for untargeted peak detection in LC-MS using Bayesian statistics
Woldegebriel, M.; Vivó-Truyols, G.
2015-01-01
We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography mass spectroscopy (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are only
Random effect selection in generalised linear models
DEFF Research Database (Denmark)
Denwood, Matt; Houe, Hans; Forkman, Björn
We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...
Vallejo, J.J.; Hejduk, M.D.; Stamey, J. D.
2015-01-01
Satellite conjunction risk is typically evaluated through the probability of collision (Pc), which considers both conjunction geometry and the uncertainties in both state estimates. Conjunction events are initially discovered through Joint Space Operations Center (JSpOC) screenings, usually seven days before Time of Closest Approach (TCA). However, the JSpOC continues to track the objects and issue conjunction updates. Changes in the state estimates and the reduced propagation time cause the Pc to change as the event develops. These changes are a combination of potentially predictable development and unpredictable changes in the state estimate covariance. An operationally useful datum is the peak Pc: if it can reasonably be inferred that the peak Pc value has passed, then risk assessment can be conducted against this peak value, and if this value is below the remediation level, then event intensity can be relaxed. Can the peak Pc location be reasonably predicted?
International Nuclear Information System (INIS)
Nomura, Yasushi
2000-01-01
In a reprocessing facility where nuclear fuel solutions are processed, one could observe a series of power peaks, with the highest peak right after a criticality accident. The criticality alarm system (CAS) is designed to detect the first power peak and warn workers near the reacting material by sounding alarms immediately. Consequently, exposure of the workers would be minimized by an immediate and effective evacuation. Therefore, in the design and installation of a CAS, it is necessary to estimate the magnitude of the first power peak and to set up the threshold point where the CAS initiates the alarm. Furthermore, it is necessary to estimate the level of potential exposure of workers in the case of accidents so as to decide the appropriateness of installing a CAS for a given compartment. A simplified evaluation model to estimate the minimum scale of the first power peak during a criticality accident is derived by theoretical considerations only, for use in the design of a CAS to set up the threshold point triggering the alarm signal. Another simplified evaluation model is derived in the same way to estimate the maximum scale of the first power peak, for use in judging the appropriateness of installing a CAS. Both models are shown to have adequate margin in predicting the minimum and maximum scale of criticality accidents by comparing their results with French CRiticality occurring ACcidentally (CRAC) experimental data
A random walk model to evaluate autism
Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.
2018-02-01
A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we study here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and the restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
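The imitation mechanism described in the abstract can be sketched as a pair of coupled elephant random walks. All parameter values below (memory parameter p, imitation probability f, fixed +1 first steps) are illustrative choices, not taken from the paper:

```python
import numpy as np

def coupled_erw(n_steps, p=0.75, f=0.5, seed=1):
    """Two elephant random walkers on the integer line.

    Walker A follows the standard ERW rule: recall a uniformly random
    past step of its own and repeat it with probability p (reverse it
    otherwise).  Walker B recalls from A's history with probability f
    (imitation) and from its own history with probability 1 - f.
    """
    rng = np.random.default_rng(seed)
    steps_a = [1]  # first steps fixed to +1 for simplicity
    steps_b = [1]
    for _ in range(n_steps - 1):
        recalled = steps_a[rng.integers(len(steps_a))]
        steps_a.append(recalled if rng.random() < p else -recalled)
        memory = steps_a if rng.random() < f else steps_b
        recalled = memory[rng.integers(len(memory))]
        steps_b.append(recalled if rng.random() < p else -recalled)
    return np.cumsum(steps_a), np.cumsum(steps_b)

xa, xb = coupled_erw(2000)
```

Estimating the Hurst exponent from the growth of the mean squared displacement of `xb` over many realizations would then reveal how the diffusion regime depends on f.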
Thailand low and equatorial F2-layer peak electron density and comparison with IRI-2007 model
Wichaipanich, N.; Supnithi, P.; Tsugawa, T.; Maruyama, T.
2012-06-01
Ionosonde measurements obtained at two Thailand ionospheric stations, namely Chumphon (10.72°N, 99.37°E, dip 3.0°N) and Chiang Mai (18.76°N, 98.93°E, dip 12.7°N), are used to examine the variation of the F2-layer peak electron density (NmF2), which is derived from the F2-layer critical frequency, foF2. Measured data from September 2004 to August 2005 (a period of low solar activity) are analyzed in terms of diurnal and seasonal variation and then compared with IRI-2007 model predictions. Our results show that, in general, the diurnal and seasonal variations of NmF2 predicted by the IRI model (URSI and CCIR options) are similar in character to the observed NmF2. Underestimation mostly occurs in all seasons except during the September equinox and the December solstice at Chumphon, and the September equinox and the March equinox at Chiang Mai, when the predictions overestimate the measurements. The best agreement between observation and prediction occurs during the pre-sunrise to post-sunrise hours. The best agreement in the %PD values of both options occurs during the March equinox, while the agreement is worst during the September equinox. The NmF2 values predicted by the CCIR option show a smaller range of deviation than those predicted by the URSI option. During post-sunset to morning hours (around 21:00-09:00 LT), the observed NmF2 at both stations are almost identical for this period of low solar activity. However, during daytime, the observed NmF2 at Chumphon is lower than that at Chiang Mai. The difference between these two stations can be explained by the equatorial ionospheric anomaly (EIA). These results are important for future improvements of the IRI model for NmF2 over Southeast Asia, especially for the areas covered by the Chumphon and Chiang Mai stations.
Random matrix models for phase diagrams
International Nuclear Information System (INIS)
Vanderheyden, B; Jackson, A D
2011-01-01
We describe a random matrix approach that can provide generic and readily soluble mean-field descriptions of the phase diagram for a variety of systems ranging from quantum chromodynamics to high-T_c materials. Instead of working from specific models, phase diagrams are constructed by averaging over the ensemble of theories that possesses the relevant symmetries of the problem. Although approximate in nature, this approach has a number of advantages. First, it can be useful in distinguishing generic features from model-dependent details. Second, it can help in understanding the 'minimal' number of symmetry constraints required to reproduce specific phase structures. Third, the robustness of predictions can be checked with respect to variations in the detailed description of the interactions. Finally, near critical points, random matrix models bear strong similarities to Ginzburg-Landau theories with the advantage of additional constraints inherited from the symmetries of the underlying interaction. These constraints can be helpful in ruling out certain topologies in the phase diagram. In this Key Issues Review, we illustrate the basic structure of random matrix models, discuss their strengths and weaknesses, and consider the kinds of system to which they can be applied.
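As a minimal illustration of "averaging over an ensemble with the relevant symmetries", one can sample the Gaussian orthogonal ensemble (the ensemble respecting time-reversal symmetry) and inspect its spectrum; this is a generic random-matrix exercise, not a model taken from the review:

```python
import numpy as np

def goe_eigenvalues(n, seed=0):
    """Sample a GOE matrix (real symmetric with Gaussian entries) and
    return its eigenvalues, scaled so that for large n the spectral
    density converges to the Wigner semicircle on [-2, 2]."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(n, n))
    h = (a + a.T) / np.sqrt(2 * n)  # symmetrize and apply semicircle scaling
    return np.linalg.eigvalsh(h)

evals = goe_eigenvalues(1000)
```

Ensemble-averaged quantities (level density, level spacings) computed from such samples are the kind of symmetry-constrained, model-independent output the review describes.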
Chen, Bihua; Yu, Tao; Ristagno, Giuseppe; Quan, Weilun; Li, Yongqin
2014-10-01
Defibrillation current has been shown to be a clinically more relevant dosing unit than energy. However, the effects of average and peak current in determining shock outcome are still undetermined. The aim of this study was to investigate the relationship between average current, peak current and defibrillation success when different biphasic waveforms were employed. Ventricular fibrillation (VF) was electrically induced in 22 domestic male pigs. Animals were then randomized to receive defibrillation using one of two different biphasic waveforms. A grouped up-and-down defibrillation threshold-testing protocol was used to maintain the average success rate in the neighborhood of 50%. In 14 animals (Study A), defibrillations were accomplished with either biphasic truncated exponential (BTE) or rectilinear biphasic waveforms. In eight animals (Study B), shocks were delivered using two BTE waveforms that had identical peak current but different waveform durations. Both average and peak currents were associated with defibrillation success when BTE and rectilinear waveforms were investigated. However, when pathway impedance was less than 90Ω for the BTE waveform, the bivariate correlation coefficient was 0.36 (p=0.001) for the average current, but only 0.21 (p=0.06) for the peak current in Study A. In Study B, a higher defibrillation success rate (67.9% vs. 38.8%) was observed for the waveform with the higher average current (14.9±2.1 A vs. 13.5±1.7 A), the peak current being unchanged. In this porcine model of VF, average current was a more adequate parameter than peak current to describe the therapeutic dosage when biphasic defibrillation waveforms were used. The institutional protocol number: P0805. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Peak-load pricing in two-step-modeling of power generation and transmission
International Nuclear Information System (INIS)
Korunig, Jens-Holger
2005-01-01
Electric power transmission and distribution networks represent a monopolistic bottleneck whose use is unavoidable. In the context of the liberalization of electricity markets, vertical separation makes it possible to create a competitive generation market alongside separate transmission and distribution networks that can be made available to all potential network users on equal terms. A private, independent network operator, however, would configure its network in the interest of profit: it would probably dimension the network smaller and would try to orient transmission prices to the actual costs. From the point of view of the system operator, as well as from an economic point of view, time-dependent transmission prices are preferable in view of periodically changing demand. The question that arises is to what extent the ownership structure affects prices and quantities for power transmission, and which ownership structure is best from an economic perspective (maximization of welfare, ...). It is to be expected that a higher degree of competition yields higher efficiency and greater social welfare. This is analysed in a two-stage model with period-dependent demand, modelling both the power generation and the transmission sector under different ownership structures. Different market types are assumed on both the generation and the distribution stage, and the resulting market outcomes are examined for their welfare effects. This requires realistic modelling above all of the network sector, which (in contrast to power generation) constitutes a natural monopoly: a natural monopoly necessarily has subadditive cost functions. The network sector (and, in a further step, also power generation) is modelled with decreasing average and marginal costs (current research task). If any behaviour is permitted, an enterprise that possesses this natural monopoly will extract monopolist
The Research of Indoor Positioning Based on Double-peak Gaussian Model
Directory of Open Access Journals (Sweden)
Lina Chen
2014-04-01
Location fingerprinting using Wi-Fi signals has been very popular and is a well-accepted indoor positioning method. The key issue of the fingerprinting approach is generating the fingerprint radio map. Limited by the practical workload, only a few samples of the received signal strength are collected at each reference point. Unfortunately, so few samples cannot accurately represent the actual distribution of the signal strength from each access point. This study finds that most Wi-Fi signals have two peaks. Based on this finding, a double-peak Gaussian algorithm is proposed to generate the fingerprint radio map. This approach requires little time to receive Wi-Fi signals, and it is easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian function and the histogram method for generating a fingerprint radio map, this method better approximates the observed signal distribution. This paper also compares the positioning accuracy using K-nearest-neighbour theory for the three radio maps; the test results show that the positioning distance error using the double-peak Gaussian function is smaller than with the other two methods.
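The "double-peak Gaussian" idea can be sketched as a two-component 1D Gaussian mixture fitted by EM. The synthetic RSS values, the min/max initialization and the iteration count below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fit_two_gaussians(x, n_iter=200):
    """EM for a two-component 1D Gaussian mixture, a sketch of modelling
    a double-peaked Wi-Fi RSS distribution at one reference point."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])       # well-separated initial means
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
# synthetic RSS samples (dBm) with two peaks around -70 and -55
rss = np.concatenate([rng.normal(-70, 2, 300), rng.normal(-55, 2, 200)])
w, mu, var = fit_two_gaussians(rss)
```

The fitted pair of means and variances is exactly the compact per-access-point summary a double-peak radio map would store instead of a raw histogram.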
Energy Technology Data Exchange (ETDEWEB)
Canters, R A M; Franckena, M; Van der Zee, J; Van Rhoon, G C, E-mail: r.canters@erasmusmc.nl [Department of Radiation Oncology, Erasmus MC Daniel den Hoed Cancer Centre, Rotterdam, PO Box 5201, 3008 AE Rotterdam (Netherlands)
2011-01-21
During deep hyperthermia treatment, patient pain complaints due to heating are common when maximizing power. Hence, there exists a good rationale to investigate whether the locations of predicted SAR peaks by hyperthermia treatment planning (HTP) are correlated with the locations of patient pain during treatment. A retrospective analysis was performed, using the treatment reports of 35 patients treated with deep hyperthermia controlled by extensive treatment planning. For various SAR indicators, the average distance from a SAR peak to a patient discomfort location was calculated, for each complaint. The investigated V_0.1closest (i.e. the part of the 0.1th SAR percentile closest to the patient complaint) performed the best, and leads to an average distance between the SAR peak and the complaint location of 3.9 cm. Other SAR indicators produced average distances that were all above 10 cm. Further, the predicted SAR peak location with V_0.1 provides a 77% match with the region of complaint. The current study demonstrates that HTP is able to provide a global indication of the regions where hotspots during treatment will most likely occur. Further development of this technology is necessary in order to use HTP as a valuable tool for objective and advanced SAR steering. The latter is especially valid for applications that enable 3D SAR steering.
Predicting the peak growth velocity in the individual child: validation of a new growth model.
Busscher, I.; Kingma, I.; de Bruin, R.; Wapstra, F.H.; Verkerke, G.J.; Veldhuizen, A.G.
2012-01-01
Predicting the peak growth velocity in an individual patient with adolescent idiopathic scoliosis is essential for determining the prognosis of the disorder and timing of the (surgical) treatment. Until the present time, no accurate method has been found to predict the timing and magnitude of the
Probabilistic Model for Untargeted Peak Detection in LC-MS Using Bayesian Statistics.
Woldegebriel, Michael; Vivó-Truyols, Gabriel
2015-07-21
We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography-mass spectroscopy (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are only affected by noise. The use of probabilities contrasts with the traditional method in which a binary answer is given, relying on a threshold. By contrast, with the Bayesian peak detection presented here, the values of probability can be further propagated into other preprocessing steps, which will increase (or decrease) the importance of chromatographic regions into the final results. The present work is based on the use of the statistical overlap theory of component overlap from Davis and Giddings (Davis, J. M.; Giddings, J. C. Anal. Chem. 1983, 55, 418-424) as prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and was able to successfully distinguish chemical noise from actual peaks without any data preprocessing.
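The point-wise Bayesian idea can be sketched as a toy two-hypothesis comparison: a "peak" likelihood versus a "noise-only" likelihood per point, combined with a prior via Bayes' rule. This is a deliberately simplified stand-in with made-up Gaussian likelihoods and a flat prior, not the statistical-overlap prior of the actual algorithm:

```python
import numpy as np

def peak_posterior(y, prior_peak=0.1, noise_sd=1.0, peak_height=5.0):
    """Posterior probability that each signal value belongs to a peak
    rather than baseline noise, under two Gaussian hypotheses.
    All parameters here are illustrative assumptions."""
    def gauss(x, mu, sd):
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    like_noise = gauss(y, 0.0, noise_sd)        # H0: baseline noise
    like_peak = gauss(y, peak_height, noise_sd)  # H1: on a peak
    num = prior_peak * like_peak
    return num / (num + (1 - prior_peak) * like_noise)

y = np.array([0.1, -0.3, 4.8, 5.2, 0.2])  # two points on a peak apex
p = peak_posterior(y)
```

The appeal noted in the abstract is visible even in the toy: the output is a probability per point that downstream preprocessing can weight by, rather than a hard thresholded yes/no.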
Sapteka, A. A. N. G.; Narottama, A. A. N. M.; Winarta, A.; Amerta Yasa, K.; Priambodo, P. S.; Putra, N.
2018-01-01
Solar energy harvested with solar panels is a renewable energy source that needs to be studied further. Unsurprisingly, the sites nearest the equator receive the highest solar energy. In this paper, a model of the electrical characteristics of 150-Watt-peak solar panels using the Boltzmann sigmoid function under various temperatures and irradiances is reported. Current, voltage, temperature and irradiance data were collected in Denpasar, a city located just south of the equator. A solar power meter was used to measure the irradiance level, while a digital thermometer was used to measure the temperature of the front and back panels. Short-circuit current and open-circuit voltage data were also collected at different temperature and irradiance levels. Statistically, the electrical characteristics of a 150-Watt-peak solar panel can be modelled using the Boltzmann sigmoid function with good fit. Therefore, it can be concluded that the Boltzmann sigmoid function may be used to determine the current and voltage characteristics of 150-Watt-peak solar panels under various temperatures and irradiances.
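The Boltzmann sigmoid I-V model can be written down directly. The parameter values below are illustrative for a nominal panel, not the values fitted to the Denpasar data:

```python
import numpy as np

def boltzmann_iv(v, i1, i2, v0, dv):
    """Boltzmann sigmoid: I(V) = i2 + (i1 - i2) / (1 + exp((V - v0)/dv)).
    For a solar panel, i1 plays the role of the short-circuit current
    and i2 -> 0 near the open-circuit voltage."""
    return i2 + (i1 - i2) / (1 + np.exp((v - v0) / dv))

# illustrative parameters only (not fitted measurement data)
v = np.linspace(0.0, 22.0, 500)          # terminal voltage sweep [V]
i = boltzmann_iv(v, i1=8.6, i2=0.0, v0=19.0, dv=1.2)
p = v * i                                 # electrical power [W]
v_mpp = v[np.argmax(p)]                   # maximum-power-point voltage
```

In practice the four sigmoid parameters would be fitted to measured I-V pairs at each temperature/irradiance level (e.g. by nonlinear least squares), and the fitted curve then yields the maximum power point as above.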
Particle filters for random set models
Ristic, Branko
2013-01-01
“Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. The resulting algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries
Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.
2012-01-01
A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order not to damage the battery by overcharging or overdischarging or
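A static, simplified version of the idea can be sketched with the classic Rint (open-circuit voltage plus internal resistance) battery model; the dynamic multi-parameter method of the paper additionally accounts for polarization dynamics and state-of-charge limits, which this sketch omits. All numbers are illustrative:

```python
def peak_discharge_power(u_oc, r0, v_min, i_max):
    """Peak discharge power under both a terminal-voltage floor and a
    current cap, for the Rint model V = u_oc - r0 * I."""
    i_volt_limited = (u_oc - v_min) / r0   # current that hits the voltage floor
    i = min(i_max, i_volt_limited)         # binding constraint wins
    v = u_oc - r0 * i                      # terminal voltage at that current
    return v * i

# e.g. a cell with u_oc = 3.7 V, r0 = 10 mOhm, floor 2.8 V, cap 200 A
p = peak_discharge_power(3.7, 0.010, 2.8, 200.0)
```

Taking the minimum over the individually limited currents is the essence of a "multi-parameter" constraint treatment; the model-based dynamic method replaces the static Rint relation with a dynamic equivalent-circuit model evaluated over a prediction horizon.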
Connectivity ranking of heterogeneous random conductivity models
Rizzo, C. B.; de Barros, F.
2017-12-01
To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state-of-the-art provides several methods to generate 2D or 3D random K-fields, such as classic multi-Gaussian fields, non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strongly correlated with the early-time arrival of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
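The graph-based computation can be sketched as a shortest-path search over the conductivity grid. A minimal Dijkstra implementation, using an illustrative per-cell resistance cost dx/K (the paper's exact edge-weight definition may differ):

```python
import heapq

def min_hydraulic_resistance(K, dx=1.0):
    """Minimum hydraulic resistance between the left and right faces of
    a 2D conductivity grid, via Dijkstra on the lattice graph.  Entering
    a cell costs dx / K[cell] -- a simplified cost for illustration."""
    rows, cols = len(K), len(K[0])
    INF = float("inf")
    dist = [[INF] * cols for _ in range(rows)]
    pq = []
    for r in range(rows):                       # source: entire left face
        dist[r][0] = dx / K[r][0]
        heapq.heappush(pq, (dist[r][0], r, 0))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c]:
            continue                            # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + dx / K[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return min(dist[r][cols - 1] for r in range(rows))  # best right-face exit

# a high-K channel (middle row) embedded in a low-K matrix
field = [[1.0, 1.0, 1.0],
         [10.0, 10.0, 10.0],
         [1.0, 1.0, 1.0]]
res = min_hydraulic_resistance(field)
```

Repeating this over many random K-field realizations gives the Monte-Carlo distribution of the minimum hydraulic resistance used to rank the field models.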
Fathallah, F A; Marras, W S; Parnianpour, M
1999-09-01
Most biomechanical assessments of spinal loading during industrial work have focused on estimating peak spinal compressive forces under static and sagittally symmetric conditions. The main objective of this study was to explore the potential of feasibly predicting three-dimensional (3D) spinal loading in industry from various combinations of trunk kinematics, kinetics, and subject-load characteristics. The study used spinal loading, predicted by a validated electromyography-assisted model, from 11 male participants who performed a series of symmetric and asymmetric lifts. Three classes of models were developed: (a) models using workplace, subject, and trunk motion parameters as independent variables (kinematic models); (b) models using workplace, subject, and measured moments variables (kinetic models); and (c) models incorporating workplace, subject, trunk motion, and measured moments variables (combined models). The results showed that peak 3D spinal loading during symmetric and asymmetric lifting was predicted equally well using all three types of regression models. Continuous 3D loading was predicted best using the combined models. When the use of such models is infeasible, the kinematic models can provide adequate predictions. Finally, lateral shear forces (peak and continuous) were consistently underestimated using all three types of models. The study demonstrated the feasibility of predicting 3D loads on the spine under specific symmetric and asymmetric lifting tasks without the need for collecting EMG information. However, further validation and development of the models should be conducted to assess and extend their applicability to lifting conditions other than those presented in this study. Actual or potential applications of this research include exposure assessment in epidemiological studies, ergonomic intervention, and laboratory task assessment.
A random matrix model of relaxation
International Nuclear Information System (INIS)
Lebowitz, J L; Pastur, L
2004-01-01
We consider a two-level system, S_2, coupled to a general n-level system, S_n, via a random matrix. We derive an integral representation for the mean reduced density matrix ρ(t) of S_2 in the limit n → ∞, and we identify a model of S_n which possesses some of the properties expected for macroscopic thermal reservoirs. In particular, it yields the Gibbs form for ρ(∞). We also consider an analog of the van Hove limit and obtain a master equation (Markov dynamics) for the evolution of ρ(t) on an appropriate time scale
A model of market power in electricity industries subject to peak load pricing
International Nuclear Information System (INIS)
Arellano, Maria-Soledad; Serra, Pablo
2007-01-01
This paper studies the exercise of market power in price-regulated electricity industries under peak-load pricing and merit order dispatching, but where investment decisions are taken by independent generating companies. Within this context, we show that producers can exercise market power by under-investing in base-load capacity, compared to the welfare-maximizing configuration. We also show that when there is free entry with an exogenous fixed entry cost that is later sunk, more intense competition results in higher welfare but fewer firms. (author)
Robustness of a Neural Network Model for Power Peak Factor Estimation in Protection Systems
International Nuclear Information System (INIS)
Souza, Rose Mary G.P.; Moreira, Joao M.L.
2006-01-01
This work presents results of a robustness verification of artificial neural network correlations that improve the real-time prediction of the power peak factor for reactor protection systems. The input variables considered in the correlations are those available in the reactor protection systems, namely, the axial power differences obtained from measured ex-core detectors and the positions of the control rods. The correlations, based on radial basis function (RBF) and multilayer perceptron (MLP) neural networks, estimate the power peak factor, without faulty signals, with average errors of 0.13%, 0.19% and 0.15%, and a maximum relative error of 2.35%. The robustness verification was performed for three different neural network correlations. The results show that they are robust against signal degradation, producing results with faulty signals with a maximum error of 6.90%. The average error associated with faulty signals for the MLP network is about half that of the RBF network, and the maximum error is about 1% smaller. These results demonstrate that the MLP neural network correlation is more robust than the RBF neural network correlation. The results also show that the input variables carry redundant information: the axial power difference signals compensate for a faulty signal in the position of a given control rod and improve the results by about 10%. The results show that the errors in the power peak factor estimation by these neural network correlations, even under faulty conditions, are smaller than those of current PWR schemes, which may have uncertainties as high as 8%. Considering the maximum relative error of 2.35%, these neural network correlations would allow decreasing the power peak factor safety margin by about 5%. Such a reduction could be used for operating the reactor at a higher power level or with more flexibility. The neural network correlation has to meet requirements of high-integrity software that performs safety-grade actions. It is shown that the
Ising model of a randomly triangulated random surface as a definition of fermionic string theory
International Nuclear Information System (INIS)
Bershadsky, M.A.; Migdal, A.A.
1986-01-01
Fermionic degrees of freedom are added to randomly triangulated planar random surfaces. It is shown that the Ising model on a fixed graph is equivalent to a certain Majorana fermion theory on the dual graph. (orig.)
Haberlandt, U.; Radtke, I.
2014-01-01
Derived flood frequency analysis allows the estimation of design floods with hydrological modeling for poorly observed basins considering change and taking into account flood protection measures. There are several possible choices regarding precipitation input, discharge output and consequently the calibration of the model. The objective of this study is to compare different calibration strategies for a hydrological model considering various types of rainfall input and runoff output data sets and to propose the most suitable approach. Event based and continuous, observed hourly rainfall data as well as disaggregated daily rainfall and stochastically generated hourly rainfall data are used as input for the model. As output, short hourly and longer daily continuous flow time series as well as probability distributions of annual maximum peak flow series are employed. The performance of the strategies is evaluated using the obtained different model parameter sets for continuous simulation of discharge in an independent validation period and by comparing the model derived flood frequency distributions with the observed one. The investigations are carried out for three mesoscale catchments in northern Germany with the hydrological model HEC-HMS (Hydrologic Engineering Center's Hydrologic Modeling System). The results show that (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated using a small sample of extreme values works quite well for the simulation of continuous time series with moderate length but not vice versa, and (III) the best performance with small uncertainty is obtained when stochastic precipitation data and the observed probability distribution of peak flows are used for model calibration. This outcome suggests to calibrate a hydrological model directly on probability distributions of observed peak flows using stochastic rainfall as input if its purpose is the
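A generic form of the third calibration strategy (calibrating against a probability distribution of observed peak flows) can be sketched as a quantile-matching objective; this is an illustrative stand-in, not the authors' exact criterion:

```python
import numpy as np

def ff_distribution_error(sim_annual_max, obs_annual_max):
    """Calibration objective comparing simulated and observed flood
    frequency curves: mean squared difference of empirical quantiles,
    compared order statistic by order statistic."""
    s = np.sort(np.asarray(sim_annual_max, dtype=float))
    o = np.sort(np.asarray(obs_annual_max, dtype=float))
    n = min(len(s), len(o))
    # compare the n largest annual maxima quantile-by-quantile
    return float(np.mean((s[-n:] - o[-n:]) ** 2))

err = ff_distribution_error([10, 40, 22, 31], [12, 38, 25, 30])
```

Minimizing such an error over the hydrological model parameters, with stochastic rainfall as input, corresponds to calibrating the model directly on the flood frequency distribution rather than on a continuous flow series.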
Gao, Jihui; Holden, Joseph; Kirkby, Mike
2014-05-01
Changes to land cover can influence the velocity of overland flow. In headwater peatlands, saturation means that overland flow is a dominant source of runoff, particularly during heavy rainfall events. Human modifications in headwater peatlands may include removal of vegetation (e.g. by erosion processes, fire, pollution, overgrazing) or pro-active revegetation of peat with sedges such as Eriophorum or mosses such as Sphagnum. How these modifications affect the river flow, and in particular the flood peak, in headwater peatlands is a key problem for land management. In particular, the impact of the spatial distribution of land cover change (e.g. different locations and sizes of land cover change area) on river flow is not clear. In this presentation a new fully distributed version of TOPMODEL, which represents the effects of distributed land cover change on river discharge, was employed to investigate land cover change impacts in three UK upland peat catchments (Trout Beck in the North Pennines, the Wye in mid-Wales and the East Dart in southwest England). Land cover scenarios with three typical land covers (i.e. Eriophorum, Sphagnum and bare peat) having different surface roughness in upland peatlands were designed for these catchments to investigate land cover impacts on river flow through simulation runs of the distributed model. As a result of hypothesis testing three land cover principles emerged from the work as follows: Principle (1): Well vegetated buffer strips are important for reducing flow peaks. A wider bare peat strip nearer to the river channel gives a higher flow peak and reduces the delay to peak; conversely, a wider buffer strip with higher density vegetation (e.g. Sphagnum) leads to a lower peak and postpones the peak. In both cases, a narrower buffer strip surrounding upstream and downstream channels has a greater effect than a thicker buffer strip just based around the downstream river network. Principle (2): When the area of change is equal
Locatelli, Luca; Gabriel, Søren; Mark, Ole; Mikkelsen, Peter Steen; Arnbjerg-Nielsen, Karsten; Taylor, Heidi; Bockhorn, Britta; Larsen, Hauge; Kjølby, Morten Just; Blicher, Anne Steensen; Binning, Philip John
2015-01-01
Stormwater management using water sensitive urban design is expected to be part of future drainage systems. This paper aims to model the combination of local retention units, such as soakaways, with subsurface detention units. Soakaways are employed to reduce (by storage and infiltration) peak and volume stormwater runoff; however, large retention volumes are required for a significant peak reduction. Peak runoff can therefore be handled by combining detention units with soakaways. This paper models the impact of retrofitting retention-detention units for an existing urbanized catchment in Denmark. The impact of retrofitting a retention-detention unit of 3.3 m³/100 m² (volume/impervious area) was simulated for a small catchment in Copenhagen using MIKE URBAN. The retention-detention unit was shown to prevent flooding from the sewer for a 10-year rainfall event. Statistical analysis of continuous simulations covering 22 years showed that annual stormwater runoff was reduced by 68-87%, and that the retention volume was on average 53% full at the beginning of rain events. The effect of different retention-detention volume combinations was simulated, and results showed that allocating 20-40% of a soakaway volume to detention would significantly increase peak runoff reduction with a small reduction in the annual runoff.
Garg, Harish Kumar; Singh, Rupinder
2017-10-01
In the present work, to widen the application domain of the fused deposition modelling (FDM) process, a Nylon6-Fe powder based composite wire has been prepared as feedstock filament. For smooth functioning of the feedstock filament without any change in the hardware or software of the commercial FDM setup, the mechanical properties of the newly prepared composite wire must be comparable to those of the existing material, i.e. ABS P-430. Keeping this in consideration, an effort has been made to model the peak elongation of the in-house developed feedstock filament comprising Nylon6 and Fe powder (prepared by a single screw extrusion process) for the commercial FDM setup. The input parameters of the single screw extruder (namely barrel temperature, die temperature, screw speed and winding machine speed) and a rheological property of the material (melt flow index) have been modelled with peak elongation as the output using response surface methodology. For validation, the peak elongation predicted by the model equation was compared with the results of actual experimentation, which showed a variation of only ±1%.
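Response surface methodology of this kind amounts to fitting a second-order polynomial in the process parameters by least squares. A minimal sketch under assumed synthetic data (the real regressors would be the extruder settings and melt flow index, and the response the measured peak elongation):

```python
import numpy as np

def fit_response_surface(X, y):
    """Ordinary-least-squares fit of a second-order response surface:
    y ~ b0 + sum_i b_i x_i + sum_{i<=j} b_ij x_i x_j."""
    X = np.asarray(X, float)
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, np.asarray(y, float), rcond=None)
    return beta

def predict(beta, x):
    """Evaluate the fitted surface at one parameter vector x
    (feature order must match fit_response_surface)."""
    x = np.asarray(x, float)
    k = x.size
    feats = [1.0, *x, *(x[i] * x[j] for i in range(k) for j in range(i, k))]
    return float(np.dot(beta, feats))
```

With enough design points, a response that is genuinely quadratic in the inputs is recovered exactly, which is the premise behind optimizing extrusion settings from a small designed experiment.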
Random defect lines in conformal minimal models
International Nuclear Information System (INIS)
Jeng, M.; Ludwig, A.W.W.
2001-01-01
We analyze the effect of adding quenched disorder along a defect line in the 2D conformal minimal models using replicas. The disorder is realized by a random applied magnetic field in the Ising model, by fluctuations in the ferromagnetic bond coupling in the tricritical Ising model and tricritical three-state Potts model (the φ_12 operator), etc. We find that for the Ising model the defect renormalizes to two decoupled half-planes without disorder, but that for all other models the defect renormalizes to a disorder-dominated fixed point. Its critical properties are studied with an expansion in ε ∝ 1/m for the mth Virasoro minimal model. The decay exponents X_N = (N/2)[1 − 9(3N−4)/(4(m+1)^2)] + O(1/(m+1)^3) of the Nth moment of the two-point function of φ_12 along the defect are obtained to 2-loop order, exhibiting multifractal behavior. This leads to a typical decay exponent X_typ = (1/2)[1 + 9/(m+1)^2] + O(1/(m+1)^3). One-point functions are seen to have a non-self-averaging amplitude. The boundary entropy is larger than that of the pure system by order 1/m^3. As a byproduct of our calculations, we also obtain to 2-loop order the exponent X̃_N = N[1 − (2/(9π^2))(3N−4)(q−2)^2] + O((q−2)^3) of the Nth moment of the energy operator in the q-state Potts model with bulk bond disorder
International Nuclear Information System (INIS)
Dunne, Lawrence J; Axelsson, Anna-Karin; Alford, Neil McN; Valant, Matjaz; Manos, George
2011-01-01
Despite considerable effort, the microscopic origin of the electrocaloric (EC) effect in ferroelectric relaxors is still intensely discussed. Ferroelectric relaxors typically display a dual-peak EC effect, whose origin is uncertain. Here we present an exact statistical mechanical matrix treatment of a lattice model of polar nanoregions forming in a neutral background and use this approach to study the characteristics of the EC effect in ferroelectric relaxors under varying electric field and pressure. The dual peaks seen in the EC properties of ferroelectric relaxors are due to the formation and ordering of polar nanoregions. The model predicts significant enhancement of the EC temperature rise with pressure which may have some contribution to the giant EC effect.
Constitutive modelling of creep-ageing behaviour of peak-aged aluminium alloy 7050
Directory of Open Access Journals (Sweden)
Yang Yo-Lun
2015-01-01
The creep-ageing behaviour of a peak-aged aluminium alloy 7050 was investigated under different stress levels at 174 °C for up to 8 h. Interrupted creep tests and tensile tests were performed to investigate the influences of creep-ageing time and applied stress on yield strength. The mechanical testing results indicate that the material exhibits an over-ageing behaviour which increases with the applied stress level during creep-ageing. As the creep-ageing time approaches 8 h, the material's yield strengths under different stress levels gradually converge, which suggests that the difference in mechanical properties under different stress conditions can be minimised. This feature can be advantageous in creep-age forming, such that uniform mechanical properties across the part area of formed components can be achieved. A set of constitutive equations was calibrated using the mechanical test results and the alloy-specific material constants were obtained. A good agreement is observed between the experimental and calibrated results.
Rae, A.; Poelchau, M.; Collins, G. S.; Timms, N.; Cavosie, A. J.; Lofi, J.; Salge, T.; Riller, U. P.; Ferrière, L.; Grieve, R. A. F.; Osinski, G.; Morgan, J. V.; Expedition 364 Science Party, I. I.
2017-12-01
Our results quantitatively describe the deviatoric stress conditions of rocks in shock, which are consistent with observations of shock deformation. Our integrated analysis provides further support for the dynamic collapse model of peak-ring formation, and places dynamic constraints on the conditions of peak-ring formation.
An Analytical Model for Spectral Peak Frequency Prediction of Substrate Noise in CMOS Substrates
DEFF Research Database (Denmark)
Shen, Ming; Mikkelsen, Jan H.
2013-01-01
This paper proposes an analytical model describing the generation of switching current noise in CMOS substrates. The model eliminates the need for SPICE simulations in existing methods by conducting a transient analysis on a generic CMOS inverter and approximating the switching current waveform us...
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
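The binomial scoring idea can be sketched directly: under a null model in which each theoretical fragment peak matches an experimental peak independently with some probability, the tail probability of seeing at least the observed number of matches gives a significance score. A toy illustration; the match probability and the −log10 scoring convention here are assumptions for the sketch, not ProVerB's actual parameterization:

```python
from math import comb, log10

def binomial_match_score(n_peaks, k_matched, p_match):
    """-log10 of the binomial tail probability of observing at least
    k_matched matches among n_peaks theoretical fragment peaks, each
    matching an experimental peak by chance with probability p_match."""
    tail = sum(comb(n_peaks, k) * p_match ** k * (1 - p_match) ** (n_peaks - k)
               for k in range(k_matched, n_peaks + 1))
    return -log10(max(tail, 1e-300))  # higher score = less likely by chance
```

For example, matching all 10 of 10 peaks when each matches by chance half the time has probability 2^-10, i.e. a score of about 3.01; incorporating peak intensities, as ProVerB does, refines this basic peak-count model.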
DEFF Research Database (Denmark)
Mantel, Claire; Søgaard, Jacob; Bech, Søren
2016-01-01
This paper investigates the impact of ambient light and peak white (maximum brightness of a display) on the perceived quality of videos displayed using local backlight dimming. Two subjective tests providing quality evaluations are presented and analyzed. The analyses of variance show significant... is computed using a model of the display. Widely used objective quality metrics are applied based on the rendering models of the videos to predict the subjective evaluations. As these predictions are not satisfying, three machine learning methods are applied: partial least square regression, elastic net...
Random matrix model of adiabatic quantum computing
International Nuclear Information System (INIS)
Mitchell, David R.; Adami, Christoph; Lue, Waynn; Williams, Colin P.
2005-01-01
We present an analysis of the quantum adiabatic algorithm for solving hard instances of 3-SAT (an NP-complete problem) in terms of random matrix theory (RMT). We determine the global regularity of the spectral fluctuations of the instantaneous Hamiltonians encountered during the interpolation between the starting Hamiltonians and the ones whose ground states encode the solutions to the computational problems of interest. At each interpolation point, we quantify the degree of regularity of the average spectral distribution via its Brody parameter, a measure that distinguishes regular (i.e., Poissonian) from chaotic (i.e., Wigner-type) distributions of normalized nearest-neighbor spacings. We find that for hard problem instances - i.e., those having a critical ratio of clauses to variables - the spectral fluctuations typically become irregular across a contiguous region of the interpolation parameter, while the spectrum is regular for easy instances. Within the hard region, RMT may be applied to obtain a mathematical model of the probability of avoided level crossings and concomitant failure rate of the adiabatic algorithm due to nonadiabatic Landau-Zener-type transitions. Our model predicts that if the interpolation is performed at a uniform rate, the average failure rate of the quantum adiabatic algorithm, when averaged over hard problem instances, scales exponentially with increasing problem size
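The Brody parameter interpolates between the two spacing statistics mentioned: β = 0 gives the Poisson distribution of regular spectra, β = 1 the Wigner surmise characteristic of chaotic ones. A sketch of the spacing density used for such fits:

```python
import math

def brody_pdf(s, beta):
    """Brody density for normalized nearest-neighbor level spacings s.
    beta=0 reproduces the Poisson case exp(-s); beta=1 the Wigner
    surmise (pi/2) * s * exp(-pi * s^2 / 4)."""
    b = math.gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    return (beta + 1.0) * b * s ** beta * math.exp(-b * s ** (beta + 1.0))
```

Fitting β to the empirical histogram of normalized spacings at each interpolation point is how the degree of spectral regularity is quantified.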
Forecasting peak asthma admissions in London: an application of quantile regression models
Soyiri, Ireneous N.; Reidpath, Daniel D.; Sarran, Christophe
2013-07-01
Asthma is a chronic condition of great public health concern globally. The associated morbidity, mortality and healthcare utilisation place an enormous burden on healthcare infrastructure and services. This study demonstrates a multistage quantile regression approach to predicting excess demand for health care services, in the form of daily asthma admissions in London, using retrospective data from the Hospital Episode Statistics, weather and air quality. Trivariate quantile regression models (QRM) of daily asthma admissions were fitted to a 14-day range of lags of environmental factors, accounting for seasonality, in a hold-in sample of the data. Representative lags were pooled to form multivariate predictive models, selected through a systematic backward stepwise reduction approach. Models were cross-validated using a hold-out sample of the data, and their respective root mean square error measures, sensitivity, specificity and predictive values compared. Two of the predictive models were able to detect extreme numbers of daily asthma admissions at sensitivity levels of 76% and 62%, as well as specificities of 66% and 76%. Their positive predictive values were slightly higher for the hold-out sample (29% and 28%) than for the hold-in model development sample (16% and 18%). QRMs can be used in multiple stages to select suitable variables to forecast extreme asthma events. The associations between asthma and environmental factors, including temperature, ozone and carbon monoxide, can be exploited in predicting future events using QRMs.
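Quantile regression rests on the asymmetric "pinball" (check) loss, which is minimized in expectation by the conditional quantile rather than the mean; that is what lets a model target the upper tail of admissions directly. A minimal sketch of the loss:

```python
def pinball_loss(y, y_hat, q):
    """Mean pinball (check) loss at quantile level q for a constant
    prediction y_hat; the empirical q-quantile minimizes this loss."""
    total = 0.0
    for yi in y:
        e = yi - y_hat
        total += q * e if e >= 0 else (q - 1) * e
    return total / len(y)
```

For a high quantile such as q = 0.9, under-prediction is penalized nine times more heavily than over-prediction, which is appropriate when the cost of missing an admissions surge dominates.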
van der Krogt, M.M.; Doorenbosch, C.A.M.; Harlaar, J.
2008-01-01
Accurate estimates of hamstrings lengths are useful, for example, to facilitate planning for surgical lengthening of the hamstrings in patients with cerebral palsy. In this study, three models used to estimate hamstrings length (M1: Delp, M2: Klein Horsman, M3: Hawkins and Hull) were evaluated. This
Simulation of a directed random-walk model: the effect of pseudo-random-number correlations
Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.
1996-01-01
We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.
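The bias mechanism depends on pair correlations in the pseudo-random stream, so a quick diagnostic is the lag-k sample autocorrelation of the generator's output. A sketch with a deliberately simple linear congruential generator (the parameters are illustrative, not those of the sequences studied in the paper):

```python
def lcg(seed, n, a=1103515245, c=12345, m=2 ** 31):
    """Minimal linear congruential generator returning n uniforms in [0, 1)."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

def lag_corr(u, k):
    """Sample autocorrelation of the sequence u at lag k -- the kind of
    pair correlation that can bias cluster Monte Carlo estimates."""
    n = len(u) - k
    mu = sum(u) / len(u)
    cov = sum((u[i] - mu) * (u[i + k] - mu) for i in range(n)) / n
    var = sum((x - mu) ** 2 for x in u) / len(u)
    return cov / var
```

A good generator shows lag correlations at the noise level ~1/sqrt(n); shift-register and other structured sequences can show systematic correlations that this statistic exposes.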
Energy Technology Data Exchange (ETDEWEB)
Török, Gabriel; Goluchová, Katerina; Urbanec, Martin, E-mail: gabriel.torok@gmail.com, E-mail: katka.g@seznam.cz, E-mail: martin.urbanec@physics.cz [Research Centre for Computational Physics and Data Processing, Institute of Physics, Faculty of Philosophy and Science, Silesian University in Opava, Bezručovo nám. 13, CZ-746, 01 Opava (Czech Republic); and others
2016-12-20
Twin-peak quasi-periodic oscillations (QPOs) are observed in the X-ray power-density spectra of several accreting low-mass neutron star (NS) binaries. In our previous work we have considered several QPO models. We have identified and explored mass–angular-momentum relations implied by individual QPO models for the atoll source 4U 1636-53. In this paper we extend our study and confront QPO models with various NS equations of state (EoS). We start with simplified calculations assuming Kerr background geometry and then present results of detailed calculations considering the influence of NS quadrupole moment (related to rotationally induced NS oblateness) assuming Hartle–Thorne spacetimes. We show that the application of concrete EoS together with a particular QPO model yields a specific mass–angular-momentum relation. However, we demonstrate that the degeneracy in mass and angular momentum can be removed when the NS spin frequency inferred from the X-ray burst observations is considered. We inspect a large set of EoS and discuss their compatibility with the considered QPO models. We conclude that when the NS spin frequency in 4U 1636-53 is close to 580 Hz, we can exclude 51 of the 90 considered combinations of EoS and QPO models. We also discuss additional restrictions that may exclude even more combinations. Namely, 13 EOS are compatible with the observed twin-peak QPOs and the relativistic precession model. However, when considering the low-frequency QPOs and Lense–Thirring precession, only 5 EOS are compatible with the model.
Dynamics of the Random Field Ising Model
Xu, Jian
The Random Field Ising Model (RFIM) is a general tool to study disordered systems. Crackling noise is generated when disordered systems are driven by external forces, spanning a broad range of sizes. Systems with different microscopic structures such as disordered magnets and Earth's crust have been studied under the RFIM. In this thesis, we investigated the domain dynamics and critical behavior in two dipole-coupled Ising ferromagnets Nd2Fe14B and LiHoxY1-xF4. With Tc well above room temperature, Nd2Fe14B has shown reversible disorder when exposed to an external transverse field and crosses between two universality classes in the strong and weak disorder limits. Besides tunable disorder, LiHoxY1-xF4 has shown quantum tunneling effects arising from quantum fluctuations, providing another mechanism for domain reversal. Universality within and beyond power law dependence on avalanche size and energy were studied in LiHo0.65Y0.35F4.
Directory of Open Access Journals (Sweden)
Yunping Qiu
2018-01-01
Identifying non-annotated peaks may have a significant impact on the understanding of biological systems. In silico methodologies have focused on ESI LC/MS/MS for identifying non-annotated MS peaks. In this study, we employed in silico methodology to develop an Isotopic Ratio Outlier Analysis (IROA) workflow using enhanced mass spectrometric data acquired with the ultra-high resolution GC-Orbitrap/MS to determine the identity of non-annotated metabolites. The higher resolution of the GC-Orbitrap/MS, together with its wide dynamic range, resulted in more IROA peak pairs detected, and increased reliability of chemical formulae generation (CFG). IROA uses two different 13C-enriched carbon sources (randomized 95% 12C and 95% 13C) to produce mirror image isotopologue pairs, whose mass difference reveals the carbon chain length (n), which aids in the identification of endogenous metabolites. Accurate m/z, n, and derivatization information are obtained from our GC/MS workflow for unknown metabolite identification, and aids in silico methodologies for identifying isomeric and non-annotated metabolites. We were able to mine more mass spectral information using the same Saccharomyces cerevisiae growth protocol (Qiu et al. Anal. Chem 2016) with the ultra-high resolution GC-Orbitrap/MS, using 10% ammonia in methane as the CI reagent gas. We identified 244 IROA peak pairs, which significantly increased IROA detection capability compared with our previous report (126 IROA peak pairs using a GC-TOF/MS machine). For 55 selected metabolites identified from matched IROA CI and EI spectra, using the GC-Orbitrap/MS vs. GC-TOF/MS, the average mass deviation for GC-Orbitrap/MS was 1.48 ppm, however, the average mass deviation was 32.2 ppm for the GC-TOF/MS machine. In summary, the higher resolution and wider dynamic range of the GC-Orbitrap/MS enabled more accurate CFG, and the coupling of accurate mass GC/MS IROA methodology with in silico fragmentation has great
Qiu, Yunping; Moir, Robyn D; Willis, Ian M; Seethapathy, Suresh; Biniakewitz, Robert C; Kurland, Irwin J
2018-01-18
Identifying non-annotated peaks may have a significant impact on the understanding of biological systems. In silico methodologies have focused on ESI LC/MS/MS for identifying non-annotated MS peaks. In this study, we employed in silico methodology to develop an Isotopic Ratio Outlier Analysis (IROA) workflow using enhanced mass spectrometric data acquired with the ultra-high resolution GC-Orbitrap/MS to determine the identity of non-annotated metabolites. The higher resolution of the GC-Orbitrap/MS, together with its wide dynamic range, resulted in more IROA peak pairs detected, and increased reliability of chemical formulae generation (CFG). IROA uses two different 13C-enriched carbon sources (randomized 95% 12C and 95% 13C) to produce mirror image isotopologue pairs, whose mass difference reveals the carbon chain length (n), which aids in the identification of endogenous metabolites. Accurate m/z, n, and derivatization information are obtained from our GC/MS workflow for unknown metabolite identification, and aids in silico methodologies for identifying isomeric and non-annotated metabolites. We were able to mine more mass spectral information using the same Saccharomyces cerevisiae growth protocol (Qiu et al. Anal. Chem 2016) with the ultra-high resolution GC-Orbitrap/MS, using 10% ammonia in methane as the CI reagent gas. We identified 244 IROA peak pairs, which significantly increased IROA detection capability compared with our previous report (126 IROA peak pairs using a GC-TOF/MS machine). For 55 selected metabolites identified from matched IROA CI and EI spectra, using the GC-Orbitrap/MS vs. GC-TOF/MS, the average mass deviation for GC-Orbitrap/MS was 1.48 ppm, however, the average mass deviation was 32.2 ppm for the GC-TOF/MS machine. In summary, the higher resolution and wider dynamic range of the GC-Orbitrap/MS enabled more accurate CFG, and the coupling of accurate mass GC/MS IROA methodology with in silico fragmentation has great potential in
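The carbon chain length n follows directly from the mass gap between the 12C- and 13C-enriched isotopologue partners, since each carbon contributes the 13C-12C mass difference. A sketch (the hard-coded mass difference is the standard monoisotopic value; real data would use the enrichment-weighted peak centroids):

```python
DELTA_13C = 1.003355  # 13C minus 12C monoisotopic mass difference, in Da

def carbon_count(mz_12c, mz_13c):
    """Estimate the carbon number n of a metabolite from the m/z gap
    between its IROA mirror-image isotopologue peak pair."""
    return round((mz_13c - mz_12c) / DELTA_13C)
```

Constraining candidate chemical formulae to this carbon count is what makes CFG from accurate mass substantially more reliable for IROA peak pairs than from m/z alone.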
Modelling of atmospheric effects on the angular distribution of a backscattering peak
International Nuclear Information System (INIS)
Powers, B.J.; Gerstl, S.A.W.
1987-01-01
For off-nadir satellite sensing of vegetative surfaces, understanding the angular distribution of the radiance exiting the atmosphere in all upward directions is of interest. Of particular interest is the discovery of those reflectance features which are invariant to atmospheric perturbations. When mono-directional radiation is incident on a vegetative scene, a characteristic angular signature called the hot-spot is produced in the solar retro-direction. The remotely sensed hot-spot is modified by atmospheric extinction of the direct and reflected solar radiation, atmospheric backscattering, and the diffuse sky irradiance incident on the surface. It is demonstrated, however, by radiative transfer calculations through model atmospheres that at least one parameter which characterizes the canopy hot-spot, namely its angular half width, is invariant to atmospheric perturbations. 7 refs., 4 figs., 1 tab
Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing
2017-07-07
With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
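The weighting step can be illustrated numerically: given depth-dose curves for pristine peaks of different ranges, solve a least-squares problem for the weights that make the summed dose flat across the target region. The Gaussian peak shape below is a stand-in assumption for illustration, not the modified weighting formula of the paper:

```python
import numpy as np

def bragg(x, R, sigma=0.3):
    """Toy pristine Bragg curve of range R (cm): a Gaussian stand-in
    for the true measured or analytic depth-dose shape."""
    return np.exp(-((x - R) ** 2) / (2.0 * sigma ** 2))

def sobp_weights(x, ranges, plateau):
    """Least-squares weights making the superposed dose ~1 on the
    plateau region (a boolean mask over the depth grid x)."""
    A = np.column_stack([bragg(x[plateau], R) for R in ranges])
    w, *_ = np.linalg.lstsq(A, np.ones(A.shape[0]), rcond=None)
    return w
```

For laser-accelerated beams the extra step is converting each weight into a number of laser shots, given the broad per-shot energy spectrum; the flat-plateau criterion itself is unchanged.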
Gao, H.; Sabo, J. L.
2016-12-01
Wetlands as the earth's kidneys provides various ecosystem services, such as absorbing pollutants, purifying freshwater, providing habitats for diverse ecosystems, sustaining species richness and biodiversity. From hydrologic perspective, wetlands can store storm-flood water in flooding seasons and release it afterwards, which will reduce flood peaks and reshape hydrograph. Therefore, as a green infrastructure and natural capital, wetlands provides a competent alternative to manage water resources in a green way, with potential to replace the widely criticized traditional gray infrastructure (i.e. dams and dikes) in certain cases. However, there are few systematic scientific tools to support our decision-making on site selection and allow us to quantitatively investigate the impacts of restored wetlands on hydrological process, not only in local scale but also in the view of entire catchment. In this study, we employed a topographic index, HAND (the Height Above the Nearest Drainage), to support our decision on potential site selection. Subsequently, a hydrological model (VIC, Variable Infiltration Capacity) was coupled with a macro-scale hydrodynamic model (CaMa-Flood, Catchment-Based Macro-scale Floodplain) to simulate the impact of wetland restoration on flood peaks and baseflow. The results demonstrated that topographic information is an essential factor to select wetland restoration location. Different reaches, wetlands area and the change of roughness coefficient should be taken into account while evaluating the impacts of wetland restoration. The simulated results also clearly illustrated that wetland restoration will increase the local storage and decrease the downstream peak flow which is beneficial for flood prevention. However, its impact on baseflow is ambiguous. Theoretically, restored wetlands will increase the baseflow due to the slower release of the stored flood water, but the increase of wetlands area may also increase the actual evaporation
Strupczewski, Witold G.; Bogdanowich, Ewa; Debele, Sisay
2016-04-01
Under Polish climate conditions the series of Annual Maxima (AM) flows are usually a mixture of peak flows of thaw- and rainfall-originated floods. The northern, lowland regions are dominated by snowmelt floods whilst in mountainous regions the proportion of rainfall floods is predominant. In many stations the majority of AM can be of snowmelt origin, but the greatest peak flows come from rainfall floods, or vice versa. In a warming climate, precipitation is less likely to occur as snowfall. A shift from a snow- towards a rain-dominated regime results in a decreasing trend in the mean and standard deviation of winter peak flows, whilst rainfall floods do not exhibit any trace of non-stationarity. That is why simple forms of trend (i.e. linear trends) are more difficult to identify in AM time series than in Seasonal Maxima (SM), usually winter season time series. Hence it is recommended to analyse trends in SM, where a trend in the standard deviation strongly influences the time-dependent upper quantiles. The uncertainty associated with extrapolation of the trend makes it necessary to apply a trend relationship whose time derivative tends to zero; e.g. we can assume that a new climate equilibrium epoch is approaching, or that the time horizon is limited by the validity of the trend model. For both winter and summer SM time series, at least three distribution functions with trend models in the location, scale and shape parameters are estimated by means of the GAMLSS package using ML techniques. The resulting trend estimates in mean and standard deviation are mutually compared to the observed trends. Then, using AIC measures as weights, a multi-model distribution is constructed for each of the two seasons separately. Further, assuming mutual independence of the seasonal maxima, an AM model with time-dependent parameters can be obtained. The use of a multi-model approach can alleviate the effects of different and often contradictory trends obtained by using and identifying
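The point about trends in the standard deviation driving the upper quantiles can be made concrete with a Gumbel (EV1) example: the p-quantile is μ − σ ln(−ln p), so a trend in the scale σ is amplified by the factor −ln(−ln p), which grows with p. A sketch with an assumed linear trend parameterization (cf. the GAMLSS-style trend models above):

```python
import math

def gumbel_quantile(p, mu, sigma):
    """p-quantile of the Gumbel (EV1) distribution with location mu
    and scale sigma."""
    return mu - sigma * math.log(-math.log(p))

def trend_quantile(p, t, mu0, mu1, sig0, sig1):
    """Time-dependent quantile under assumed linear trends in the
    location (mu0 + mu1*t) and scale (sig0 + sig1*t) parameters."""
    return gumbel_quantile(p, mu0 + mu1 * t, sig0 + sig1 * t)
```

With the location held fixed, a positive trend in the scale alone still raises the 0.99 quantile by about 4.6 units per unit of scale, which is why design quantiles react strongly to scale trends.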
International Nuclear Information System (INIS)
Lv, Song; He, Wei; Zhang, Aifeng; Li, Guiqiang; Luo, Bingqing; Liu, Xianghua
2017-01-01
Highlights: • A new CAES system for trigeneration based on electrical peak load shifting is proposed. • The theoretical models and the thermodynamic processes are established and analyzed. • The relevant parameters influencing its performance are discussed and optimized. • A novel energy and economic evaluation method is proposed to evaluate the performance of the system. - Abstract: Compressed air energy storage (CAES) has made great contributions to both electricity supply and renewable energy. In pursuit of reduced energy consumption and effective relief of pressure on the power utility, a novel trigeneration system based on CAES for cooling, heating and electricity generation by electrical peak load shifting is proposed in this paper. The cooling power is generated by the direct expansion of compressed air, and the heating power is recovered in the process of compression and storage. Based on the working principle of a typical CAES system, theoretical models of the thermodynamic system are established and the characteristics of the system are analyzed. A novel method for evaluating energy and economic performance is proposed. A case study is conducted, and the economic-social and technical feasibility of the proposed system is discussed. The results show that the trigeneration system works efficiently at relatively low pressure, and the efficiency is expected to reach about 76.3% when air is compressed and released at 15 bar. The annual monetary cost saving is about 53.9%. Moreover, general considerations about the proposed system are also presented.
Peak capacity analysis of coal power in China based on full-life cycle cost model optimization
Yan, Xiaoqing; Zhang, Jinfang; Huang, Xinting
2018-02-01
The 13th five-year plan period and the next are critical for the energy and power reform of China. In order to ease the excessive power supply, policies have been introduced by the National Energy Board, especially toward coal power capacity control. Therefore the rational construction scale and scientific development timing for coal power are of great importance and are receiving increasing attention. In this study, the comprehensive influence of coal power reduction policies is analyzed from diverse points of view. A full-life-cycle cost model of coal power is established to fully reflect the external and internal costs. This model is then introduced into an improved power planning optimization framework. The power planning and diverse production simulation scenarios show that, in order to meet the power, electricity and peak balance of the power system, China's coal power capacity will peak at 1.15-1.2 billion kilowatts around 2025. The research result is expected to be helpful to the power industry in the 14th and 15th five-year plan periods, promoting the efficiency and safety of the power system.
Force Limited Random Vibration Test of TESS Camera Mass Model
Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.
2015-01-01
The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limited vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process by which the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for proper installation of force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the root mean square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
A random regret minimization model of travel choice
Chorus, C.G.; Arentze, T.A.; Timmermans, H.J.P.
2008-01-01
This paper presents an alternative to Random Utility-Maximization models of travel choice. Our Random Regret-Minimization model is rooted in Regret Theory and provides several useful features for travel demand analysis. Firstly, it allows for the possibility that choices between travel
A Note on the Correlated Random Coefficient Model
DEFF Research Database (Denmark)
Kolodziejczyk, Christophe
In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient, which is correlated with a binary variable. We provide set-identification of the parameters of interest of the model. We also show how to reduce the bias of the estimator...
A random energy model for size dependence : recurrence vs. transience
Külske, Christof
1998-01-01
We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as
Compensatory and non-compensatory multidimensional randomized item response models
Fox, J.P.; Entink, R.K.; Avetisyan, M.
2014-01-01
Randomized response (RR) models are often used for analysing univariate randomized response data and measuring population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of the respondents. Recently, RR models have been
Olekhno, N. A.; Beltukov, Y. M.
2018-05-01
Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show a good agreement with the results of numerical simulations in a wide range of metal filling fractions 0
Haberlandt, Uwe; Wallner, Markus; Radtke, Imke
2013-04-01
Derived flood frequency analysis based on continuous hydrological modelling is very demanding regarding the required length and temporal resolution of precipitation input data. Often such flood predictions are obtained using long precipitation time series from stochastic approaches or from regional climate models as input. However, the calibration of the hydrological model is usually done using short time series of observed data. This inconsistent employment of different data types for calibration and application of a hydrological model increases its uncertainty. Here, it is proposed to calibrate a hydrological model directly on probability distributions of observed peak flows using model-based rainfall in line with its later application. Two examples are given to illustrate the idea. The first one deals with classical derived flood frequency analysis using input data from an hourly stochastic rainfall model. The second one concerns a climate impact analysis using hourly precipitation from a regional climate model. The results show that: (I) the same type of precipitation input data should be used for calibration and application of the hydrological model, (II) a model calibrated on extreme conditions works quite well for average conditions but not vice versa, (III) the calibration of the hydrological model using regional climate model data works as an implicit bias correction method and (IV) the best performance for flood estimation is usually obtained when model-based precipitation and the observed probability distribution of peak flows are used for model calibration.
Allometric modelling of peak oxygen uptake in male soccer players of 8-18 years of age
Valente-dos-Santos, Joao; Coelho-e-Silva, Manuel J.; Tavares, Oscar M.; Brito, Joao; Seabra, Andre; Rebelo, Antonio; Sherar, Lauren B.; Elferink-Gemser, Marije T.; Malina, Robert M.
Background: Peak oxygen uptake (VO2peak) is routinely scaled as mL O2 per kilogram body mass despite theoretical and statistical limitations of using ratios. Aim: To examine the contribution of maturity status and body size descriptors to age-associated inter-individual variability in VO2peak and to
Some random models in traffic science
Energy Technology Data Exchange (ETDEWEB)
Hjorth, U.
1996-06-01
We give an overview of stochastic models for the following traffic phenomena: models for traffic flow, including gaps and capacities for lanes, crossings and roundabouts; models for wanted and achieved speed distributions; mode selection models, including dispersed equilibrium models; and traffic accident models. Some statistical questions are also discussed. 60 refs, 1 tab
A Model for Random Student Drug Testing
Nelson, Judith A.; Rose, Nancy L.; Lutz, Danielle
2011-01-01
The purpose of this case study was to examine random student drug testing in one school district relevant to: (a) the perceptions of students participating in competitive extracurricular activities regarding drug use and abuse; (b) the attitudes and perceptions of parents, school staff, and community members regarding student drug involvement; (c)…
International Nuclear Information System (INIS)
Truong, Nguyen-Vu; Wang, Liuping; Wong, Peter K.C.
2008-01-01
Power demand forecasting is of vital importance to the management and planning of power system operations, which include generation, transmission and distribution, as well as the system's security analysis and economic pricing processes. This paper concerns the modelling and short-term forecasting of daily peak power demand in the state of Victoria, Australia. In this study, a two-dimensional wavelet based state dependent parameter (SDP) modelling approach is used to produce a compact mathematical model for this complex nonlinear dynamic system. In this approach, a nonlinear system is expressed by a set of linear regressive input and output terms (state variables) multiplied by the respective state dependent parameters that carry the nonlinearities in the form of 2-D wavelet series expansions. This model is identified from historical data, descriptively representing the relationship and interaction between the various components which affect the peak power demand of a given day. The identified model has been used to forecast daily peak power demand in the state of Victoria, Australia, for the period from 9 August 2007 to 24 August 2007. With a MAPE (mean absolute prediction error) of 1.9%, the results clearly demonstrate the effectiveness of the identified model. (author)
Directory of Open Access Journals (Sweden)
Qi-Min Chai
2014-12-01
China has achieved a political consensus around the need to transform its path of economic growth toward one that lowers carbon intensity and ultimately leads to reductions in carbon emissions, but there remain different views on the pathways that could achieve such a transformation. The essential question is whether radical or incremental reforms are required in the coming decades. This study explores relevant pathways in China beyond 2020, particularly modeling the major target choices for peaking China's carbon emissions around 2030, as stated in the China-US Joint Announcement, with an integrated assessment model for climate change (IAMC) based on carbon factor theory. Scenarios DGS-2020, LGS-2025, LBS-2030 and DBS-2040, derived from the historical pathways of developed countries, are developed to assess the comprehensive impacts on the economy, energy and climate security of greener development in China. The findings suggest that the period 2025-2030 is the window of opportunity to achieve a peak in carbon emissions at a level below 12 Gt CO2 and 8.5 t per capita, with reasonable trade-offs against economic growth: -0.2% annually on average and a cumulative -3% deviation from business as usual in 2030. Oil and natural gas import dependence will exceed 70% and 45% respectively, while the non-fossil shares of energy and electricity will rise above 20% and 45%. Meanwhile, the electrification level of end-use sectors will increase substantially, with the electricity-to-energy ratio approaching 50%; labor and capital productivity should double; carbon intensity should drop by 65% by 2030 compared to the 2005 level; and cumulative emission reductions are estimated at more than 20 Gt CO2 over 2015-2030.
Dempsey, David; Kelkar, Sharad; Davatzes, Nick; Hickman, Stephen H.; Moos, Daniel
2015-01-01
Creation of an Enhanced Geothermal System relies on stimulation of fracture permeability through self-propping shear failure that creates a complex fracture network with high surface area for efficient heat transfer. In 2010, shear stimulation was carried out in well 27-15 at Desert Peak geothermal field, Nevada, by injecting cold water at pressure less than the minimum principal stress. An order-of-magnitude improvement in well injectivity was recorded. Here, we describe a numerical model that accounts for injection-induced stress changes and permeability enhancement during this stimulation. In a two-part study, we use the coupled thermo-hydrological-mechanical simulator FEHM to: (i) construct a wellbore model for non-steady bottom-hole temperature and pressure conditions during the injection, and (ii) apply these pressures and temperatures as a source term in a numerical model of the stimulation. In this model, a Mohr-Coulomb failure criterion and empirical fracture permeability is developed to describe permeability evolution of the fractured rock. The numerical model is calibrated using laboratory measurements of material properties on representative core samples and wellhead records of injection pressure and mass flow during the shear stimulation. The model captures both the absence of stimulation at low wellhead pressure (WHP ≤1.7 and ≤2.4 MPa) as well as the timing and magnitude of injectivity rise at medium WHP (3.1 MPa). Results indicate that thermoelastic effects near the wellbore and the associated non-local stresses further from the well combine to propagate a failure front away from the injection well. Elevated WHP promotes failure, increases the injection rate, and cools the wellbore; however, as the overpressure drops off with distance, thermal and non-local stresses play an ongoing role in promoting shear failure at increasing distance from the well.
Analog model for quantum gravity effects: phonons in random fluids.
Krein, G; Menezes, G; Svaiter, N F
2010-09-24
We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.
A cluster expansion approach to exponential random graph models
International Nuclear Information System (INIS)
Yin, Mei
2012-01-01
The exponential family of random graphs is among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region
Premium Pricing of Liability Insurance Using Random Sum Model
Directory of Open Access Journals (Sweden)
Mujiati Dwi Kartikasari
2017-03-01
Premium pricing is one of the important activities in insurance. A non-life insurance premium is calculated from the expected value of historical claims data. The historical claims are collected so that they form a sum of a random number of independent variables, which is called a random sum. In premium pricing using a random sum, the claim frequency distribution and the claim severity distribution are combined; the combination of these distributions is called a compound distribution. Using liability claim insurance data, we analyze premium pricing with a random sum model based on a compound distribution
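The random-sum construction described above can be sketched in a few lines: a Poisson claim count combined with exponential severities gives the aggregate (compound) claim, and an expected-value premium adds a safety loading. The parameter values below are illustrative, not taken from the liability data in the study.

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's method: count uniform draws until their product falls below e^-lam."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def aggregate_claim(lam, sev_mean, rng):
    """Random sum S = X_1 + ... + X_N: Poisson claim count, exponential severities."""
    n = poisson_draw(lam, rng)
    return sum(rng.expovariate(1.0 / sev_mean) for _ in range(n))

def expected_value_premium(lam, sev_mean, loading=0.2, n_sim=100_000, seed=1):
    """Premium = (1 + loading) * Monte Carlo estimate of E[S]."""
    rng = random.Random(seed)
    mean_s = sum(aggregate_claim(lam, sev_mean, rng) for _ in range(n_sim)) / n_sim
    return (1.0 + loading) * mean_s
```

With a claim rate of 2 per period and mean severity 100, E[S] = 200 and the loaded premium is close to 240.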
Energy Technology Data Exchange (ETDEWEB)
Hsiao, C.; Mountain, D.C.; Chan, M.W.L.; Tsui, K.Y. (University of Southern California, Los Angeles (USA); McMaster Univ., Hamilton, ON (Canada); Chinese Univ. of Hong Kong, Shatin)
1989-12-01
In examining the municipal peak and kilowatt-hour demand for electricity in Ontario, the issue of homogeneity across geographic regions is explored. A common model across municipalities and geographic regions cannot be supported by the data. Various procedures are considered which deal with this heterogeneity yet reduce the multicollinearity problems associated with region-specific demand formulations. The recommended model controls for regional differences by assuming that the coefficients of regional-seasonal specific factors are fixed and different, while the coefficients of economic and weather variables for any one municipality are random draws from a common population, combining the information on all municipalities through a Bayes procedure. 8 tabs., 41 refs.
Conditional Monte Carlo randomization tests for regression models.
Parhat, Parwen; Rosenberger, William F; Diao, Guoqing
2014-08-15
We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
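A minimal design-based randomization test in the spirit described above can be sketched as follows: fit a covariate-only regression, then compare the observed between-arm difference in residuals against re-drawn randomization sequences. The data, statistic and allocation scheme below (complete randomization with a fixed allocation) are illustrative simplifications, not the paper's exact procedure.

```python
import random

def residuals(y, x):
    """Residuals from an ordinary least squares fit of y on a single covariate x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def randomization_test(y, x, t, n_draws=2000, seed=0):
    """Design-based p-value for a treatment effect: the statistic is the absolute
    difference in mean residuals between arms; the reference distribution comes
    from re-drawn complete randomizations (shuffles of the fixed allocation)."""
    rng = random.Random(seed)
    r = residuals(y, x)  # covariate-only model, treatment ignored

    def stat(assign):
        r1 = [ri for ri, ti in zip(r, assign) if ti]
        r0 = [ri for ri, ti in zip(r, assign) if not ti]
        return abs(sum(r1) / len(r1) - sum(r0) / len(r0))

    observed = stat(t)
    exceed = 0
    for _ in range(n_draws):
        perm = list(t)
        rng.shuffle(perm)
        if stat(perm) >= observed:
            exceed += 1
    return (exceed + 1) / (n_draws + 1)
```

A strong simulated treatment effect yields a small p-value; under the null the p-value is approximately uniform.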
International Nuclear Information System (INIS)
Gao, Z. |; Ren, Z.; Li, Z.; Zhu, R.
2005-01-01
A peak regulation right concept and a corresponding transaction mechanism for an electricity market were presented. The market was based on a power pool and independent system operator (ISO) model. A peak regulation right (PRR) was defined as a downward regulation capacity purchase option which allowed PRR owners to buy certain quantities of peak regulation capacity (PRC) at a specific price during a specified period from suppliers. The PRR owner also had the right to decide whether or not to buy PRC from suppliers. It was the power pool's responsibility to provide competitive and fair peak regulation trading markets to participants. The introduction of PRR allowed for unit capacity regulation. The PRR and PRC were rated by the supplier, and transactions proceeded through a bidding process. PRR suppliers obtained profits by selling PRR and PRC, and received downward regulation fees regardless of whether purchases were made. It was concluded that the peak regulation mechanism reduced the total cost of the generating system and increased the social surplus. 6 refs., 1 tab., 3 figs
The Ising model on a dynamically triangulated random surface
International Nuclear Information System (INIS)
Aleinov, I.D.; Migelal, A.A.; Zmushkow, U.V.
1990-01-01
The critical properties of the Ising model on a dynamically triangulated random surface embedded in D-dimensional Euclidean space are investigated. The strong coupling expansion method is used. The transition to the thermodynamic limit is performed by means of continued fractions
Lluís Ruiz-Bellet, Josep; Castelltort, Xavier; Carles Balasch, J.; Tuset, Jordi
2016-04-01
The estimation of the uncertainty of hydraulic modelling results has been analysed in depth, but no clear methodological procedure for its determination has been formulated when applied to historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to the total peak flow error. The uncertainty of the 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) in the town of Xerta (83,000 km2) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. In addition, in order to see to what degree the result depended on the chosen model, the HEC-RAS resulting peak flow was compared to those obtained with the 2D model Iber and with Manning's equation. The peak flow of the 1907 flood of the Ebro River in Xerta, reconstructed with HEC-RAS, was 11500 m3·s-1 and its total error was ±31%. The most influential input variable on the HEC-RAS peak flow was water height; however, the one that contributed the most to the peak flow error was Manning's n, because its uncertainty was far greater than that of water height. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12000 m3·s-1 when calculated with the 2D model Iber and 11500 m3·s-1 when calculated with Manning's equation.
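A toy local sensitivity analysis with Manning's equation, one of the comparator models above, might look like the following; the rectangular channel geometry, roughness and slope values are hypothetical, not those of the Ebro at Xerta.

```python
from math import sqrt

def manning_q(n, b, h, S):
    """Peak flow (m3/s) from Manning's equation for a wide rectangular section:
    Q = (1/n) * A * R^(2/3) * sqrt(S), with A = b*h and R = A / (b + 2h)."""
    A = b * h
    R = A / (b + 2.0 * h)
    return (1.0 / n) * A * R ** (2.0 / 3.0) * sqrt(S)

def relative_sensitivity(f, args, key, eps=0.01):
    """Dimensionless elasticity: percent change in f per percent change in args[key],
    estimated by a one-sided finite difference."""
    base = f(**args)
    bumped = dict(args)
    bumped[key] = args[key] * (1.0 + eps)
    return (f(**bumped) - base) / base / eps
```

Because Q is proportional to 1/n and to sqrt(S), the elasticities with respect to n and S are close to -1 and +0.5, while the water-height elasticity exceeds 1, which is why small errors in the flood mark and in n propagate strongly into the peak flow.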
Simulating WTP Values from Random-Coefficient Models
Maurus Rischatsch
2009-01-01
Discrete Choice Experiments (DCEs) designed to estimate willingness-to-pay (WTP) values are very popular in health economics. With increased computational power and advanced simulation techniques, random-coefficient models have gained increasing importance in applied work, as they allow for taste heterogeneity. This paper discusses the parametrical derivation of WTP values from estimated random-coefficient models and shows how these values can be simulated in cases where they do not have a kn...
Approximating prediction uncertainty for random forest regression models
John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne
2016-01-01
Machine learning approaches such as random forest have increased in use for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
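One common way to approximate prediction uncertainty for an ensemble like random forest is the spread of per-tree predictions at a query point. A minimal stand-in, a bootstrap ensemble of regression stumps rather than full random forest trees, can illustrate the idea:

```python
import random

def fit_stump(xs, ys):
    """Best single-split regression stump on 1-D data (minimum total SSE)."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for k in range(1, len(xs)):
        thr = (xs[order[k - 1]] + xs[order[k]]) / 2.0
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x < thr else mr

def ensemble_predict(xs, ys, x0, n_trees=200, seed=0):
    """Bootstrap ensemble; returns (mean prediction, across-tree standard deviation).
    The spread is a crude proxy for prediction uncertainty."""
    rng = random.Random(seed)
    n, preds = len(xs), []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        tree = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(tree(x0))
    m = sum(preds) / n_trees
    var = sum((p - m) ** 2 for p in preds) / n_trees
    return m, var ** 0.5
```

On clean step data the ensemble mean recovers the plateau values and the across-tree spread stays small; near the step, the spread widens, signalling higher uncertainty.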
A random spatial network model based on elementary postulates
Karlinger, Michael R.; Troutman, Brent M.
1989-01-01
A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.
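Since the model makes all spanning trees of a grid (conditioned on area and drainage density) equally likely, a classical way to sample a uniformly random spanning tree is Wilson's loop-erased random walk algorithm. The sketch below samples an unconditioned uniform spanning tree of a grid; it is an illustration of the underlying combinatorics, not the authors' generation procedure.

```python
import random

def wilson_ust(rows, cols, seed=0):
    """Uniform spanning tree of a rows x cols grid graph via Wilson's algorithm:
    repeated loop-erased random walks from unvisited cells to the growing tree."""
    rng = random.Random(seed)

    def nbrs(v):
        r, c = v
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield (nr, nc)

    cells = [(r, c) for r in range(rows) for c in range(cols)]
    in_tree = {cells[0]}  # arbitrary root
    edges = set()
    for start in cells:
        if start in in_tree:
            continue
        # random walk until the tree is hit; overwriting nxt[] erases loops
        nxt, v = {}, start
        while v not in in_tree:
            nxt[v] = rng.choice(list(nbrs(v)))
            v = nxt[v]
        # retrace the loop-erased path and attach it to the tree
        v = start
        while v not in in_tree:
            in_tree.add(v)
            edges.add(frozenset((v, nxt[v])))
            v = nxt[v]
    return edges
```

The result always has exactly rows*cols - 1 edges and spans every cell, so link lengths and other spatial properties can be measured on the sampled trees.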
International Nuclear Information System (INIS)
Ostorero, L.; Moderski, R.; Stawarz, L.; Diaferio, A.; Kowalska, I.; Cheung, C.C.; Kataoka, J.; Begelman, M.C.; Wagner, S.J.
2010-01-01
In a dynamical-radiative model we recently developed to describe the physics of compact, GHz-Peaked-Spectrum (GPS) sources, the relativistic jets propagate across the inner, kpc-sized region of the host galaxy, while the electron population of the expanding lobes evolves and emits synchrotron and inverse-Compton (IC) radiation. Interstellar-medium gas clouds engulfed by the expanding lobes, and photoionized by the active nucleus, are responsible for the radio spectral turnover through free-free absorption (FFA) of the synchrotron photons. The model provides a description of the evolution of the GPS spectral energy distribution (SED) with the source expansion, predicting significant and complex high-energy emission, from the X-ray to the γ-ray frequency domain. Here, we test this model with the broad-band SEDs of a sample of eleven X-ray emitting GPS galaxies with Compact-Symmetric-Object (CSO) morphology, and show that: (i) the shape of the radio continuum at frequencies lower than the spectral turnover is indeed well accounted for by the FFA mechanism; (ii) the observed X-ray spectra can be interpreted as non-thermal radiation produced via IC scattering of the local radiation fields off the lobe particles, providing a viable alternative to the thermal, accretion-disk dominated scenario. We also show that the relation between the hydrogen column densities derived from the X-ray (N_H) and radio (N_HI) data of the sources is suggestive of a positive correlation, which, if confirmed by future observations, would provide further support to our scenario of high-energy emitting lobes.
Application of random regression models to the genetic evaluation ...
African Journals Online (AJOL)
The model included fixed regression on AM (range from 30 to 138 mo) and the effect of herd-measurement date concatenation. Random parts of the model were RRM coefficients for additive and permanent environmental effects, while residual effects were modelled to account for heterogeneity of variance by AY. Estimates ...
Random regression models for detection of gene by environment interaction
Directory of Open Access Journals (Sweden)
Meuwissen Theo HE
2007-02-01
Two random regression models, where the effect of a putative QTL was regressed on an environmental gradient, are described. The first model estimates the correlation between the intercept and slope of the random regression, while the other restricts this correlation to 1 or -1, which is expected under a bi-allelic QTL model. The random regression models were compared to a model assuming no gene by environment interactions. The comparison considered the models' ability to detect QTL, to position them accurately and to detect possible QTL by environment interactions. A simulation study based on a granddaughter design was conducted, and QTL effects were assigned either independently of the environment or as a linear function of a simulated environmental gradient. It was concluded that the random regression models were suitable for detection of QTL effects, in the presence and absence of interactions with environmental gradients. Fixing the correlation between the intercept and slope of the random regression had a positive effect on power when the QTL effects re-ranked between environments.
International Nuclear Information System (INIS)
Ovchinnikov, O. S.; Jesse, S.; Kalinin, S. V.; Bintacchit, P.; Trolier-McKinstry, S.
2009-01-01
An approach for the direct identification of disorder type and strength in physical systems based on recognition analysis of hysteresis loop shape is developed. A large number of theoretical examples uniformly distributed in the parameter space of the system is generated and is decorrelated using principal component analysis (PCA). The PCA components are used to train a feed-forward neural network using the model parameters as targets. The trained network is used to analyze hysteresis loops for the investigated system. The approach is demonstrated using a 2D random-bond-random-field Ising model, and polarization switching in polycrystalline ferroelectric capacitors.
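The recognition pipeline above, generating loops across parameter space, decorrelating them with PCA, then learning the inverse map, can be caricatured in a few lines. The tanh loop family, the power-iteration PCA and the use of the first component score in place of a trained neural network are all illustrative simplifications, not the paper's physical model.

```python
import math
import random

def loop(coercive, n=64):
    """Synthetic hysteresis loop: two tanh branches switching at +/- the coercive field."""
    fields = [4.0 * i / (n - 1) - 2.0 for i in range(n)]
    up = [math.tanh(3.0 * (h - coercive)) for h in fields]
    down = [math.tanh(3.0 * (h + coercive)) for h in fields]
    return up + down

def top_component(data, iters=200, seed=0):
    """First principal direction of mean-centred data, via power iteration on X^T X."""
    rng = random.Random(seed)
    d = len(data[0])
    mean = [sum(row[j] for row in data) / len(data) for j in range(d)]
    X = [[row[j] - mean[j] for j in range(d)] for row in data]
    v = [rng.random() for _ in range(d)]
    for _ in range(iters):
        s = [sum(xi * vi for xi, vi in zip(x, v)) for x in X]              # X v
        w = [sum(s[i] * X[i][j] for i in range(len(X))) for j in range(d)]  # X^T (X v)
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return mean, v

def score(row, mean, v):
    """Projection of one loop onto the leading principal component."""
    return sum((a - b) * c for a, b, c in zip(row, mean, v))
```

For loops generated on a uniform parameter grid, the first component score tracks the generating parameter almost linearly, which is what makes the subsequent regression (a neural network in the paper) feasible.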
A generalized model via random walks for information filtering
International Nuclear Information System (INIS)
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-01-01
There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid of degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.
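A concrete member of the degree-weighted random walk family generalized above is the hybrid of mass diffusion (ProbS) and heat conduction (HeatS) on the user-object bipartite network; a small sketch with a hypothetical network follows.

```python
def hybrid_scores(adj, user, lam=0.5):
    """One step of resource diffusion on a user-object bipartite graph.
    adj maps user -> set of collected objects. lam = 1 recovers mass
    diffusion (ProbS); lam = 0 recovers heat conduction (HeatS)."""
    users = list(adj)
    objs = sorted({o for owned in adj.values() for o in owned})
    k_obj = {o: sum(o in adj[u] for u in users) for o in objs}
    scores = {}
    for i in objs:
        if i in adj[user]:
            continue  # recommend only objects the user has not collected
        total = 0.0
        for j in adj[user]:
            # resource passed through common neighbours, weighted by user degree
            overlap = sum(1.0 / len(adj[u]) for u in users
                          if i in adj[u] and j in adj[u])
            total += overlap / (k_obj[i] ** (1.0 - lam) * k_obj[j] ** lam)
        scores[i] = total
    return scores
```

Objects sharing more (low-degree) common collectors with the target user's items score higher; tuning lam shifts the balance between popular and niche recommendations, which is the kind of degree-information strategy discussed in the abstract.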
Money Creation in a Random Matching Model
Alexei Deviatov
2006-01-01
I study money creation in versions of the Trejos-Wright (1995) and Shi (1995) models with indivisible money and individual holdings bounded at two units. I work with the same class of policies as in Deviatov and Wallace (2001), who study money creation in that model. However, I consider an alternative notion of implementability: the ex ante pairwise core. I compute a set of numerical examples to determine whether money creation is beneficial. I find beneficial effects of money creation if indiv...
Random effects models in clinical research
Cleophas, T. J.; Zwinderman, A. H.
2008-01-01
BACKGROUND: In clinical trials a fixed effects research model assumes that the patients selected for a specific treatment have the same true quantitative effect and that the differences observed are residual error. If, however, we have reasons to believe that certain patients respond differently
Money creation process in a random redistribution model
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
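The random-exchange-with-debt dynamics can be illustrated with a minimal simulation; the update rule and parameters below are a simplified sketch, not the paper's exact kinetics. The net quantity (positive holdings minus total debt) is conserved, so any growth in money holdings is matched by debt creation when agents at or below zero balance pay by borrowing.

```python
import random

def exchange_economy(n_agents=500, steps=50_000, debt_limit=5, seed=42):
    """Random pairwise exchange of one unit of money per step. Payers may go
    into debt down to -debt_limit; money is created whenever a payer with a
    non-positive balance borrows to pay. Returns (total money, total debt)."""
    rng = random.Random(seed)
    money = [1] * n_agents  # everyone starts with one unit and no debt
    for _ in range(steps):
        payer = rng.randrange(n_agents)
        receiver = rng.randrange(n_agents)
        if payer == receiver:
            continue
        if money[payer] > -debt_limit:  # can still pay, possibly by borrowing
            money[payer] -= 1
            money[receiver] += 1
    total_debt = -sum(m for m in money if m < 0)
    total_money = sum(m for m in money if m > 0)
    return total_money, total_debt
```

After many exchanges both totals grow above the initial endowment, but their difference stays fixed at the initial money stock, mirroring the conservation law behind the paper's money-transfer matrix analysis.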
Utility based maintenance analysis using a Random Sign censoring model
International Nuclear Information System (INIS)
Andres Christen, J.; Ruggeri, Fabrizio; Villa, Enrique
2011-01-01
Industrial systems subject to failures are usually inspected when there are evident signs of an imminent failure. Maintenance is therefore performed at a random time, somehow dependent on the failure mechanism. A competing risk model, namely a Random Sign model, is considered to relate failure and maintenance times. We propose a novel Bayesian analysis of the model and apply it to actual data from a water pump in an oil refinery. The design of an optimal maintenance policy is then discussed under a formal decision theoretic approach, analyzing the goodness of the current maintenance policy and making decisions about the optimal maintenance time.
(Non-) Gibbsianness and Phase Transitions in Random Lattice Spin Models
Külske, C.
1999-01-01
We consider disordered lattice spin models with finite-volume Gibbs measures µ_Λ[η](dσ). Here σ denotes a lattice spin variable and η a lattice random variable with product distribution P describing the quenched disorder of the model. We ask: when will the joint measures lim_{Λ↑Z^d} P(dη)µ_Λ[η](dσ) be
Shape Modelling Using Markov Random Field Restoration of Point Correspondences
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen
2003-01-01
A method for building statistical point distribution models is proposed. The novelty in this paper is the adaption of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...
Simulating intrafraction prostate motion with a random walk model
Directory of Open Access Journals (Sweden)
Tobias Pommer, PhD
2017-07-01
Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.
Single-cluster dynamics for the random-cluster model
Deng, Y.; Qian, X.; Blöte, H.W.J.
2009-01-01
We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those
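For the integer case q = 2 (the Ising model), the Wolff single-cluster update that this algorithm generalizes can be sketched as follows. The lattice size, temperature, and function name are illustrative assumptions, and the noninteger-q machinery of the paper is not reproduced here.

```python
import math, random

def wolff_ising(L=16, beta=0.5, sweeps=200, seed=1):
    """Wolff single-cluster updates for the 2D Ising model (q=2 Potts),
    the integer-q special case of the random-cluster algorithm."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    p_add = 1.0 - math.exp(-2.0 * beta)        # bond-activation probability
    for _ in range(sweeps):
        i, j = rng.randrange(L), rng.randrange(L)
        s0 = spin[i][j]
        stack, cluster = [(i, j)], {(i, j)}
        while stack:                           # grow cluster of aligned spins
            x, y = stack.pop()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nx % L, ny % L        # periodic boundaries
                if (nx, ny) not in cluster and spin[nx][ny] == s0 \
                        and rng.random() < p_add:
                    cluster.add((nx, ny))
                    stack.append((nx, ny))
        for x, y in cluster:                   # flip the whole cluster at once
            spin[x][y] = -s0
    return spin
```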
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data became popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only temporal random effects, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in the fitting of crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
A note on moving average models for Gaussian random fields
DEFF Research Database (Denmark)
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
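In one dimension, a moving average (kernel-smoothed) Gaussian field is simply white noise convolved with a kernel. The sketch below uses a Gaussian kernel in place of the paper's power kernel, with assumed length and bandwidth; the function name is hypothetical.

```python
import math, random

def moving_average_field(n=200, bandwidth=5.0, seed=2):
    """1D moving-average Gaussian random field: white noise convolved
    with a (circular) Gaussian kernel, normalized to unit variance."""
    rng = random.Random(seed)
    noise = [rng.gauss(0.0, 1.0) for _ in range(n)]
    half = int(3 * bandwidth)                       # truncate kernel at 3 sigma
    kernel = [math.exp(-0.5 * (k / bandwidth) ** 2)
              for k in range(-half, half + 1)]
    norm = math.sqrt(sum(w * w for w in kernel))    # unit marginal variance
    kernel = [w / norm for w in kernel]
    field = []
    for i in range(n):                              # circular convolution
        s = sum(kernel[k + half] * noise[(i + k) % n]
                for k in range(-half, half + 1))
        field.append(s)
    return field
```

Nearby values of the smoothed field are strongly correlated, with a correlation range set by the kernel bandwidth.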
Peak Oil, Peak Coal and Climate Change
Murray, J. W.
2009-05-01
Research on future climate change is driven by the family of scenarios developed for the IPCC assessment reports. These scenarios create projections of future energy demand using different story lines consisting of government policies, population projections, and economic models. None of these scenarios consider resources to be limiting. In many of these scenarios oil production is still increasing to 2100. Resource limitation (in a geological sense) is a real possibility that needs more serious consideration. The concept of 'Peak Oil' has been discussed since M. King Hubbert proposed in 1956 that US oil production would peak in 1970. His prediction was accurate. This concept is about production rate not reserves. For many oil producing countries (and all OPEC countries) reserves are closely guarded state secrets and appear to be overstated. Claims that the reserves are 'proven' cannot be independently verified. Hubbert's Linearization Model can be used to predict when half the ultimate oil will be produced and what the ultimate total cumulative production (Qt) will be. US oil production can be used as an example. This conceptual model shows that 90% of the ultimate US oil production (Qt = 225 billion barrels) will have occurred by 2011. This approach can then be used to suggest that total global production will be about 2200 billion barrels and that the half way point will be reached by about 2010. This amount is about 5 to 7 times less than assumed by the IPCC scenarios. The decline of Non-OPEC oil production appears to have started in 2004. Of the OPEC countries, only Saudi Arabia may have spare capacity, but even that is uncertain, because of lack of data transparency. The concept of 'Peak Coal' is more controversial, but even the US National Academy Report in 2007 concluded only a small fraction of previously estimated reserves in the US are actually minable reserves and that US reserves should be reassessed using modern methods. British coal production can be
The hard-core model on random graphs revisited
International Nuclear Information System (INIS)
Barbier, Jean; Krzakala, Florent; Zhang, Pan; Zdeborová, Lenka
2013-01-01
We revisit the classical hard-core model, also known as the independent set problem and dual to the vertex cover problem, where one puts particles with a first-neighbor hard-core repulsion on the vertices of a random graph. Although the cases of random graphs with small and with very large average degrees are quite well understood, they yield qualitatively different results and our aim here is to reconcile these two cases. We revisit results that can be obtained using the (heuristic) cavity method and show that it provides a closed-form conjecture for the exact density of the densest packing on random regular graphs with degree K ≥ 20, and that for K > 16 the nature of the phase transition is the same as for large K. This also shows that the hard-core model is the simplest mean-field lattice model for structural glasses and jamming.
Lamplighter model of a random copolymer adsorption on a line
Directory of Open Access Journals (Sweden)
L.I. Nazarov
2014-09-01
Full Text Available We present a model of an AB-diblock random copolymer sequential self-packaging with local quenched interactions on a one-dimensional infinite sticky substrate. It is assumed that the A-A and B-B contacts are favorable, while A-B are not. The position of a newly added monomer is selected in view of the local contact energy minimization. The model demonstrates a self-organization behavior with the nontrivial dependence of the total energy E (the number of unfavorable contacts) on the number of chain monomers N: E ~ N^(3/4) for a quenched random equally probable distribution of A- and B-monomers along the chain. The model is treated by mapping it onto the "lamplighter" random walk and the diffusion-controlled chemical reaction of X+X → 0 type with the subdiffusive motion of reagents.
Some Limits Using Random Slope Models to Measure Academic Growth
Directory of Open Access Journals (Sweden)
Daniel B. Wright
2017-11-01
Full Text Available Academic growth is often estimated using a random slope multilevel model with several years of data. However, if there are few time points, the estimates can be unreliable. While using random slope multilevel models can lower the variance of the estimates, these procedures can produce more highly erroneous estimates—zero and negative correlations with the true underlying growth—than using ordinary least squares estimates calculated for each student or school individually. An example is provided where schools with increasing graduation rates are estimated to have negative growth and vice versa. The estimation is worse when the underlying data are skewed. It is recommended that there are at least six time points for estimating growth if using a random slope model. A combination of methods can be used to avoid some of the aberrant results if it is not possible to have six or more time points.
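The per-school ordinary least squares alternative mentioned above is straightforward to compute. A minimal sketch, assuming yearly scores at equally spaced time points (the function name is an illustration, not the article's code):

```python
def ols_slope(ys):
    """Ordinary least-squares growth slope for one school's yearly scores,
    with years coded 0, 1, 2, ... (equally spaced time points)."""
    n = len(ys)
    xs = range(n)
    xbar, ybar = (n - 1) / 2, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return sxy / sxx
```

With few time points this per-unit estimate is noisy, but unlike a shrunken random slope it cannot flip the sign of a clearly increasing trend, which is the failure mode the article describes.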
The random field Blume-Capel model revisited
Santos, P. V.; da Costa, F. A.; de Araújo, J. M.
2018-04-01
We have revisited the mean-field treatment for the Blume-Capel model under the presence of a discrete random magnetic field as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H - T and D - T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.
Effects of random noise in a dynamical model of love
Energy Technology Data Exchange (ETDEWEB)
Xu Yong, E-mail: hsux3@nwpu.edu.cn [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China); Gu Rencai; Zhang Huiqing [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China)]
2011-07-15
Highlights: > We model the complexity and unpredictability of psychology as Gaussian white noise. > The stochastic system of love is considered, including bifurcation and chaos. > We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper aims to investigate the stochastic model of love and the effects of random noise. We first revisit the deterministic model of love, and some basic properties are presented, such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation due to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes of the top Lyapunov exponent and the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are presented to verify the proposed results, and good agreement is found. The dual roles of the random noise, namely suppressing and inducing chaos, are also revealed.
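An Euler-Maruyama integration of a linear two-agent love model with parametric (multiplicative) Gaussian noise, in the spirit of the stochastic model above. The coefficients, and the restriction to two agents rather than the paper's love triangle, are illustrative assumptions.

```python
import math, random

def love_sde(a=-0.2, b=1.0, c=-1.0, d=-0.1, sigma=0.3,
             dt=0.01, steps=5000, seed=11):
    """Euler-Maruyama integration of a generic two-agent love model:
        dR = (a R + b J) dt + sigma R dW   (parametric noise on R)
        dJ = (c R + d J) dt
    Coefficients are illustrative, not the paper's values."""
    rng = random.Random(seed)
    r, j = 1.0, 0.0
    sq = math.sqrt(dt)                 # Wiener increment scale
    path = []
    for _ in range(steps):
        dw = rng.gauss(0.0, sq)
        r, j = (r + (a * r + b * j) * dt + sigma * r * dw,
                j + (c * r + d * j) * dt)
        path.append((r, j))
    return path
```

With these damped-oscillator coefficients the deterministic system spirals to the origin; varying sigma lets one probe how noise alters the long-run behavior.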
Using Random Forest Models to Predict Organizational Violence
Levine, Burton; Bobashev, Georgly
2012-01-01
We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minority at Risk Organizational Behavior (MAROS) dataset. The MAROS data is longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit; an example of predicting violence for an organization; and finally, a summary of the forest in a "meta-tree,"
Samaha, Mohamed A.; Tafreshi, Hooman Vahedi; Gad-el-Hak, Mohamed
2011-01-01
Previous studies dedicated to modeling drag reduction and stability of the air-water interface on superhydrophobic surfaces were conducted for microfabricated coatings produced by placing hydrophobic microposts/microridges arranged on a flat surface in aligned or staggered configurations. In this paper, we model the performance of superhydrophobic surfaces comprised of randomly distributed roughness (e.g., particles or microposts) that resembles natural superhydrophobic surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than microfabrication, making the technology more practical for large submerged bodies such as submarines and ships. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with theoretical and experimental studies reported in the literature. In particular, our simulation results are compared with the work of Sbragaglia and Prosperetti, and good agreement has been observed for gas fractions up to about 0.9. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. This effect peaks at about 30% as the gas fraction increases to 0.98. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. It was found that at a given maximum allowable pressure, surfaces with random post distribution produce less drag reduction than those made up of
Factorisations for partition functions of random Hermitian matrix models
International Nuclear Information System (INIS)
Jackson, D.M.; Visentin, T.I.
1996-01-01
The partition function Z_N for Hermitian-complex matrix models can be expressed as an explicit integral over R^N, where N is a positive integer. Such an integral also occurs in connection with random surfaces and models of two dimensional quantum gravity. We show that Z_N can be expressed as the product of two partition functions, evaluated at translated arguments, for another model, giving an explicit connection between the two models. We also give an alternative computation of the partition function for the φ^4-model. The approach is an algebraic one and holds for the functions regarded as formal power series in the appropriate ring. (orig.)
Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
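The reduced-variance criterion mentioned above compares the variance of window counts to their mean (the Fano factor); for a homogeneous Poisson process this ratio is 1, and nonclassical point processes deviate from it. A sketch with assumed rate and window parameters:

```python
import random

def fano_factor(rate=5.0, window=1.0, n_windows=5000, seed=3):
    """Counting statistics of a homogeneous Poisson process: generate
    exponential interarrival times, count events per window, and return
    the Fano factor (variance/mean of the counts), which should be ~1."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_windows):
        t, k = 0.0, 0
        while True:
            t += rng.expovariate(rate)      # next interarrival time
            if t > window:
                break
            k += 1
        counts.append(k)
    mean = sum(counts) / n_windows
    var = sum((c - mean) ** 2 for c in counts) / n_windows
    return var / mean
```

Sub-Poissonian (nonclassical) statistics would show up here as a Fano factor below 1.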
Statistical shape model with random walks for inner ear segmentation
DEFF Research Database (Denmark)
Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma
2016-01-01
is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...
Asthma Self-Management Model: Randomized Controlled Trial
Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo
2016-01-01
Information for patients provided by the pharmacist is reflected in adhesion to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…
The dilute random field Ising model by finite cluster approximation
International Nuclear Information System (INIS)
Benyoussef, A.; Saber, M.
1987-09-01
Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs
International Nuclear Information System (INIS)
Bachschmid-Romano, Ludovica; Opper, Manfred
2015-01-01
We study analytically the performance of a recently proposed algorithm for learning the couplings of a random asymmetric kinetic Ising model from finite length trajectories of the spin dynamics. Our analysis shows the importance of the nontrivial equal time correlations between spins induced by the dynamics for the speed of learning. These correlations become more important as the spin's stochasticity is decreased. We also analyse the deviation of the estimation error.
Evolution of the concentration PDF in random environments modeled by global random walk
Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter
2013-04-01
The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and
Quantum random oracle model for quantum digital signature
Shang, Tao; Lei, Qi; Liu, Jianwei
2016-10-01
The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.
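The classical analogue of a random oracle is usually formalized by lazy sampling: each fresh query is answered with a uniformly random digest, and repeated queries are replayed consistently. A minimal sketch (the class name and digest length are assumptions; the quantum-query aspects of the QRO model are not captured here):

```python
import os

class RandomOracle:
    """Classical random oracle via lazy sampling: a fresh uniform digest
    is drawn on the first query of a message and cached, so repeated
    queries get consistent answers."""

    def __init__(self, out_bytes=32):
        self.out_bytes = out_bytes
        self.table = {}                      # message -> cached digest

    def query(self, msg: bytes) -> bytes:
        if msg not in self.table:
            self.table[msg] = os.urandom(self.out_bytes)
        return self.table[msg]
```

Security proofs in this model let the analyst observe or program the oracle's answers; the quantum version must additionally answer superposition queries, which is what the paper's QRO formalizes.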
Investigating Facebook Groups through a Random Graph Model
Dinithi Pallegedara; Lei Pan
2014-01-01
Facebook disseminates messages for billions of users everyday. Though there are log files stored on central servers, law enforcement agencies outside of the U.S. cannot easily acquire server log files from Facebook. This work models Facebook user groups by using a random graph model. Our aim is to facilitate detectives quickly estimating the size of a Facebook group with which a suspect is involved. We estimate this group size according to the number of immediate friends and the number of ext...
Stochastic geometry, spatial statistics and random fields models and algorithms
2015-01-01
Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.
An Ensemble Model for Co-Seismic Landslide Susceptibility Using GIS and Random Forest Method
Directory of Open Access Journals (Sweden)
Suchita Shrestha
2017-11-01
Full Text Available The Mw 7.8 Gorkha earthquake of 25 April 2015 triggered thousands of landslides in the central part of the Nepal Himalayas. The main goal of this study was to generate an ensemble-based map of co-seismic landslide susceptibility in Sindhupalchowk District using model comparison and combination strands. A total of 2194 co-seismic landslides were identified and were randomly split into 1536 (~70%) to train the model, and the remaining 658 (~30%) for validation. Frequency ratio, evidential belief function, and weight of evidence methods were applied and compared using 11 different causative factors (peak ground acceleration, epicenter proximity, fault proximity, geology, elevation, slope, plan curvature, internal relief, drainage proximity, stream power index, and topographic wetness index) to prepare the landslide susceptibility map. An ensemble of random forest was then used to overcome the various prediction limitations of the individual models. The success rates and prediction capabilities were critically compared using the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. By synthesizing the results of the various models into a single score, the ensemble model improved accuracy and provided considerably more realistic prediction capacities (91%) than the frequency ratio (81.2%), evidential belief function (83.5%), and weight of evidence (80.1%) methods.
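The frequency ratio method named above scores each class of a causative factor by the ratio of its share of landslide cells to its share of all cells; values above 1 indicate higher susceptibility. A minimal sketch with hypothetical names and toy data:

```python
from collections import Counter

def frequency_ratio(class_of, landslide_cells, all_cells):
    """Frequency ratio per factor class: (landslide cells in class /
    all landslide cells) divided by (cells in class / all cells)."""
    slide = Counter(class_of[c] for c in landslide_cells)
    total = Counter(class_of[c] for c in all_cells)
    n_slide, n_total = len(landslide_cells), len(all_cells)
    return {cls: (slide.get(cls, 0) / n_slide) / (total[cls] / n_total)
            for cls in total}
```

Summing the per-class ratios across all 11 factors for each map cell gives a simple susceptibility index of the kind the individual models produce before ensembling.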
Simulating intrafraction prostate motion with a random walk model.
Pommer, Tobias; Oh, Jung Hun; Munck Af Rosenschöld, Per; Deasy, Joseph O
2017-01-01
Prostate motion during radiation therapy (ie, intrafraction motion) can cause unwanted loss of radiation dose to the prostate and increased dose to the surrounding organs at risk. A compact but general statistical description of this motion could be useful for simulation of radiation therapy delivery or margin calculations. We investigated whether prostate motion could be modeled with a random walk model. Prostate motion recorded during 548 radiation therapy fractions in 17 patients was analyzed and used for input in a random walk prostate motion model. The recorded motion was categorized on the basis of whether any transient excursions (ie, rapid prostate motion in the anterior and superior direction followed by a return) occurred in the trace; transient motion was separately modeled as a large step in the anterior/superior direction followed by a returning large step. Random walk simulations were conducted with and without added artificial transient motion, using either motion data from all observed traces or only traces without transient excursions as model input, respectively. A general estimate of motion was derived with reasonable agreement between simulated and observed traces, especially during the first 5 minutes of the excursion-free simulations. Simulated and observed diffusion coefficients agreed within 0.03, 0.2 and 0.3 mm²/min in the left/right, superior/inferior, and anterior/posterior directions, respectively. A rapid increase in variance at the start of observed traces was difficult to reproduce and seemed to represent the patient's need to adjust before treatment. This could be estimated somewhat using artificial transient motion. Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during
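A minimal random walk sketch in the spirit of the model above: independent Gaussian steps per axis, with step variance 2D·dt for diffusion coefficient D. The coefficient values below are only illustrative magnitudes taken from the abstract's agreement bounds, the function name is hypothetical, and transient excursions are not modeled.

```python
import random

def simulate_motion(minutes=5.0, dt_s=1.0, d_lr=0.03, d_si=0.2, d_ap=0.3,
                    seed=7):
    """Random-walk sketch of intrafraction prostate motion.  Per-axis
    diffusion coefficients are in mm^2/min (illustrative values only);
    each step adds a Gaussian displacement with variance 2*D*dt."""
    rng = random.Random(seed)
    dt_min = dt_s / 60.0
    sigmas = [(2.0 * d * dt_min) ** 0.5 for d in (d_lr, d_si, d_ap)]
    pos, trace = [0.0, 0.0, 0.0], []
    for _ in range(int(minutes * 60 / dt_s)):
        for ax, s in enumerate(sigmas):
            pos[ax] += rng.gauss(0.0, s)    # independent step per axis
        trace.append(tuple(pos))
    return trace
```

The position variance of such a walk grows linearly in time, which is the diffusive behavior the study fits to the observed traces.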
Modeling of chromosome intermingling by partially overlapping uniform random polygons.
Blackstone, T; Scharein, R; Borgo, B; Varela, R; Diao, Y; Arsuaga, J
2011-03-01
During the early phase of the cell cycle the eukaryotic genome is organized into chromosome territories. The geometry of the interface between any two chromosomes remains a matter of debate and may have important functional consequences. The Interchromosomal Network model (introduced by Branco and Pombo) proposes that territories intermingle along their periphery. In order to partially quantify this concept we here investigate the probability that two chromosomes form an unsplittable link. We use the uniform random polygon as a crude model for chromosome territories and we model the interchromosomal network as the common spatial region of two overlapping uniform random polygons. This simple model allows us to derive some rigorous mathematical results as well as to perform computer simulations easily. We find that the probability that a uniform random polygon of length n that partially overlaps a fixed polygon forms a link with it is bounded below by 1 − O(1/√n). We use numerical simulations to estimate the dependence of the linking probability of two uniform random polygons (of lengths n and m, respectively) on the amount of overlapping. The degree of overlapping is parametrized by a parameter ε such that ε = 0 indicates no overlapping and ε = 1 indicates total overlapping. We propose that this dependence relation may be modeled as f(ε, m, n) = [Formula: see text]. Numerical evidence shows that this model works well when ε is relatively large (ε ≥ 0.5). We then use these results to model the data published by Branco and Pombo and observe that for the amount of overlapping observed experimentally the URPs have a non-zero probability of forming an unsplittable link.
A generalized model via random walks for information filtering
Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng
2016-08-01
There could exist a simple general mechanism lurking beneath the collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can recover the collaborative filtering and interdisciplinary physics approaches, as well as many of their extensions. Furthermore, we analyze the generalized model with single and hybrid degree information in the random walk process on bipartite networks, and propose a possible strategy that uses the hybrid degree information for objects of different popularity to improve the precision of the recommendation.
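The two-step random walk on a bipartite network (mass diffusion, often called ProbS) that this family of models generalizes can be sketched as follows; the dictionary-based network representation and the function name are assumptions.

```python
def probs_scores(adj, user):
    """Two-step random-walk (mass diffusion / ProbS) scores on a
    user-object bipartite network given as {user: set(objects)}.
    Returns recommendation scores for the target user's uncollected objects."""
    collected = adj[user]
    # step 1: each collected object spreads a unit of resource evenly
    # over the users holding it
    resource = {}
    for obj in collected:
        holders = [u for u, objs in adj.items() if obj in objs]
        for u in holders:
            resource[u] = resource.get(u, 0.0) + 1.0 / len(holders)
    # step 2: each user redistributes its resource evenly over its objects
    scores = {}
    for u, r in resource.items():
        k = len(adj[u])
        for obj in adj[u]:
            scores[obj] = scores.get(obj, 0.0) + r / k
    return {o: s for o, s in scores.items() if o not in collected}
```

Weighting these two steps by different powers of the user and object degrees is exactly the kind of hybrid degree information the generalized model explores.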
Creating, generating and comparing random network models with NetworkRandomizer.
Tosadori, Gabriele; Bestvina, Ivan; Spoto, Fausto; Laudanna, Carlo; Scardoni, Giovanni
2016-01-01
Biological networks are becoming a fundamental tool for the investigation of high-throughput data in several fields of biology and biotechnology. With the increasing amount of information, network-based models are gaining more and more interest and new techniques are required in order to mine the information and to validate the results. To fill the validation gap we present an app, for the Cytoscape platform, which aims at creating randomised networks and randomising existing, real networks. Since there is a lack of tools that allow performing such operations, our app aims at enabling researchers to exploit different, well known random network models that could be used as a benchmark for validating real, biological datasets. We also propose a novel methodology for creating random weighted networks, i.e. the multiplication algorithm, starting from real, quantitative data. Finally, the app provides a statistical tool that compares real versus randomly computed attributes, in order to validate the numerical findings. In summary, our app aims at creating a standardised methodology for the validation of the results in the context of the Cytoscape platform.
Janssen, Dirk P
2012-03-01
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F(1) and F(2)) has become the standard. A simple and elegant solution using mixed-model analysis has been available for 15 years, and recent improvements in statistical software have made mixed-model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
Scaling of coercivity in a 3d random anisotropy model
Energy Technology Data Exchange (ETDEWEB)
Proctor, T.C., E-mail: proctortc@gmail.com; Chudnovsky, E.M., E-mail: EUGENE.CHUDNOVSKY@lehman.cuny.edu; Garanin, D.A.
2015-06-15
The random-anisotropy Heisenberg model is numerically studied on lattices containing over ten million spins. The study is focused on hysteresis and metastability due to topological defects, and is relevant to magnetic properties of amorphous and sintered magnets. We are interested in the limit when ferromagnetic correlations extend beyond the size of the grain inside which the magnetic anisotropy axes are correlated. In that limit the coercive field computed numerically roughly scales as the fourth power of the random anisotropy strength and as the sixth power of the grain size. Theoretical arguments are presented that provide an explanation of numerical results. Our findings should be helpful for designing amorphous and nanosintered materials with desired magnetic properties. - Highlights: • We study the random-anisotropy model on lattices containing up to ten million spins. • Irreversible behavior due to topological defects (hedgehogs) is elucidated. • Hysteresis loop area scales as the fourth power of the random anisotropy strength. • In nanosintered magnets the coercivity scales as the sixth power of the grain size.
Modeling random combustion of lycopodium particles and gas
Directory of Open Access Journals (Sweden)
M Bidabadi
2016-06-01
Full Text Available The random-combustion modeling of lycopodium particles has been studied by many authors. In this paper, we extend this model and also develop a different method by analyzing the effect of randomly distributed sources of combustible mixture. The flame structure is assumed to consist of a preheat-vaporization zone, a reaction zone and finally a post-flame zone. We divide the preheat zone into different sections and assume that the particle distribution across these sections is genuinely random. Meanwhile, it is presumed that the fuel particles vaporize first to yield gaseous fuel; in other words, most of the fuel particles are vaporized by the end of the preheat zone. The Zel'dovich number is assumed to be large; therefore, the reaction term in the preheat zone is negligible. In this work, the effect of the random distribution of particles in the preheat zone on combustion characteristics, such as burning velocity and flame temperature, is obtained for different particle radii.
Emergent randomness in the Jaynes-Cummings model
International Nuclear Information System (INIS)
Garraway, B M; Stenholm, S
2008-01-01
We consider the well-known Jaynes-Cummings model and ask if it can display randomness. As a solvable Hamiltonian system, it does not display chaotic behaviour in the ordinary sense. Here, however, we look at the distribution of values taken up during the total time evolution. This evolution is determined by the eigenvalues distributed as the square roots of integers and leads to a seemingly erratic behaviour. That this may display a random Gaussian value distribution is suggested by an exactly provable result by Kac. In order to reach our conclusion we use the Kac model to develop tests for the emergence of a Gaussian. Even if the consequent double limits are difficult to evaluate numerically, we find definite indications that the Jaynes-Cummings case also produces a randomness in its value distributions. Numerical methods do not establish such a result beyond doubt, but our conclusions are definite enough to strongly suggest an unexpected randomness emerging in a dynamic time evolution.
A Fay-Herriot Model with Different Random Effect Variances
Czech Academy of Sciences Publication Activity Database
Hobza, Tomáš; Morales, D.; Herrador, M.; Esteban, M.D.
2011-01-01
Roč. 40, č. 5 (2011), s. 785-797 ISSN 0361-0926 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : small area estimation * Fay-Herriot model * Linear mixed model * Labor Force Survey Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.274, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/hobza-a%20fay-herriot%20model%20with%20different%20random%20effect%20variances.pdf
Random-growth urban model with geographical fitness
Kii, Masanobu; Akimoto, Keigo; Doi, Kenji
2012-12-01
This paper formulates a random-growth urban model with a notion of geographical fitness. Using techniques of complex-network theory, we study our system as a type of preferential-attachment model with fitness, and we analyze its macro behavior to clarify the properties of the city-size distributions it predicts. First, restricting the geographical fitness to take positive values and using a continuum approach, we show that the city-size distributions predicted by our model asymptotically approach Pareto distributions with coefficients greater than unity. Then, allowing the geographical fitness to take negative values, we perform local coefficient analysis to show that the predicted city-size distributions can deviate from Pareto distributions, as is often observed in actual city-size distributions. As a result, the model we propose can generate a generic class of city-size distributions, including but not limited to Pareto distributions. For applications to city-population projections, our simple model requires randomness only when new cities are created, not during their subsequent growth. This property leads to smooth trajectories of city population growth, in contrast to other models using Gibrat’s law. In addition, a discrete form of our dynamical equations can be used to estimate past city populations based on present-day data; this fact allows quantitative assessment of the performance of our model. Further study is needed to determine appropriate formulas for the geographical fitness.
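The growth mechanism can be sketched as fitness-weighted preferential attachment. The linear attachment kernel (size × positive fitness) and all parameter values below are illustrative assumptions; the paper's exact kernel and its negative-fitness extension are not reproduced here:

```python
import random

# Sketch of random urban growth with fitness-weighted preferential
# attachment: each new unit of population joins city i with probability
# proportional to sizes[i] * fitness[i] (positive fitness assumed).

def grow_cities(n_cities, n_steps, fitness, seed=0):
    rng = random.Random(seed)
    sizes = [1] * n_cities  # every city starts with one unit of population
    for _ in range(n_steps):
        weights = [s * f for s, f in zip(sizes, fitness)]
        total = sum(weights)
        r = rng.uniform(0, total)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                sizes[i] += 1
                break
        else:
            sizes[-1] += 1  # guard against floating-point rounding

    return sizes

sizes = grow_cities(n_cities=5, n_steps=100, fitness=[1.0, 1.0, 2.0, 1.0, 1.0])
```

Note the randomness here acts at every growth step; in the paper's model, randomness enters only when new cities are created, which is what yields smooth growth trajectories.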
Least squares estimation in a simple random coefficient autoregressive model
DEFF Research Database (Denmark)
Johansen, S; Lange, T
2013-01-01
The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by y_t = s_t ρ y_{t−1} + ε_t, t = 1,…,n, where s_t is an i.i.d. binary variable with p...... we prove the curious result that View the MathML source. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of View the MathML source and View the MathML source and hence the limit of View the MathML source...
The transverse spin-1 Ising model with random interactions
Energy Technology Data Exchange (ETDEWEB)
Bouziane, Touria [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco)], E-mail: touria582004@yahoo.fr; Saber, Mohammed [Department of Physics, Faculty of Sciences, University of Moulay Ismail, B.P. 11201 Meknes (Morocco); Dpto. Fisica Aplicada I, EUPDS (EUPDS), Plaza Europa, 1, San Sebastian 20018 (Spain)
2009-01-15
The phase diagrams of the transverse spin-1 Ising model with random interactions are investigated using a new technique in the effective field theory that employs a probability distribution within the framework of the single-site cluster theory based on the use of exact Ising spin identities. A model is adopted in which the nearest-neighbor exchange couplings are independent random variables distributed according to the law P(J_ij) = p δ(J_ij − J) + (1 − p) δ(J_ij − αJ). General formulae, applicable to lattices with coordination number N, are given. Numerical results are presented for a simple cubic lattice. The possible reentrant phenomenon displayed by the system due to the competitive effects between exchange interactions occurs for the appropriate range of the parameter α.
Random unitary evolution model of quantum Darwinism with pure decoherence
Balanesković, Nenad
2015-10-01
We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of environment E that allow information about an open system S of interest to be stored with maximal efficiency.
Gravitational lensing by eigenvalue distributions of random matrix models
Martínez Alonso, Luis; Medina, Elena
2018-05-01
We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
Random resistor network model of minimal conductivity in graphene.
Cheianov, Vadim V; Fal'ko, Vladimir I; Altshuler, Boris L; Aleiner, Igor L
2007-10-26
Transport in undoped graphene is related to percolating current patterns in the networks of n- and p-type regions reflecting the strong bipolar charge density fluctuations. Finite transparency of the p-n junctions is vital in establishing the macroscopic conductivity. We propose a random resistor network model to analyze scaling dependencies of the conductance on the doping and disorder, the quantum magnetoresistance and the corresponding dephasing rate.
Levy Random Bridges and the Modelling of Financial Information
Hoyle, Edward; Hughston, Lane P.; Macrina, Andrea
2009-01-01
The information-based asset-pricing framework of Brody, Hughston and Macrina (BHM) is extended to include a wider class of models for market information. In the BHM framework, each asset is associated with a collection of random cash flows. The price of the asset is the sum of the discounted conditional expectations of the cash flows. The conditional expectations are taken with respect to a filtration generated by a set of "information processes". The information processes carry imperfect inf...
Social aggregation in pea aphids: experiment and random walk modeling.
Directory of Open Access Journals (Sweden)
Christa Nilsen
Full Text Available From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
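The individual-level rules can be sketched as a two-state correlated random walk. In the sketch below the state-transition probabilities and turning-angle spread are fixed constants, whereas in the paper they depend on nearest-neighbor distance, so this is only an illustrative skeleton of the fitted model:

```python
import math
import random

# Minimal correlated-random-walk sketch of one aphid: a two-state
# (moving/stationary) Markov chain with Gaussian turning-angle noise while
# moving. Parameter values are illustrative, not fitted to the data.

def simulate_aphid(n_steps, speed, turn_sd, p_stop, p_go, seed=0):
    rng = random.Random(seed)
    x = y = 0.0
    heading = 0.0
    moving = True
    path = [(x, y)]
    for _ in range(n_steps):
        if moving:
            heading += rng.gauss(0.0, turn_sd)   # correlated turning
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            if rng.random() < p_stop:
                moving = False
        else:
            if rng.random() < p_go:
                moving = True
        path.append((x, y))
    return path

# With no turning noise and no stopping, the motion is purely ballistic,
# matching the paper's description of essentially isolated aphids.
path = simulate_aphid(n_steps=50, speed=1.0, turn_sd=0.0, p_stop=0.0, p_go=1.0)
```

Making `turn_sd` and `p_stop` increasing functions of proximity to the nearest neighbor would reproduce the aggregation mechanism described above.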
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits the expressive power of ERGMs. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
Directory of Open Access Journals (Sweden)
Tomaž Podobnikar
2012-03-01
Full Text Available The detection of peaks (summits) as the upper parts of mountains and the delineation of their shape is commonly confirmed by inspections carried out by mountaineers. In this study the complex task of peak detection and shape delineation is solved by autometric methodological procedures, more precisely, by developing relatively simple but innovative image-processing and spatial-analysis techniques (e.g., developing inventive variables using an annular moving window) in the remote sensing and GIS domains. The techniques have been integrated into automated morphometric methodological procedures. The concepts of peaks and their shapes (sharp, blunt, oblong, circular and conical) were parameterized based on topographic and morphologic criteria. A geomorphologically high-quality DEM was used as a fundamental dataset. The results, detected peaks with delineated shapes, have been integratively enriched with numerous independent datasets (e.g., with triangulated spot heights) and information (e.g., etymological information), and mountaineering criteria have been implemented to improve the judgments. This holistic approach has proved the applicability of both highly standardized and universal parameters for the geomorphologically diverse Kamnik Alps case study area. Possible applications of this research are numerous, e.g., comprehensive quality control of a DEM or significantly improved models for spatial planning purposes.
High-temperature series expansions for random Potts models
Directory of Open Access Journals (Sweden)
M.Hellmund
2005-01-01
Full Text Available We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2) and 4-state Potts models in three dimensions up to order 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.
On a Stochastic Failure Model under Random Shocks
Cha, Ji Hwan
2013-02-01
In most conventional settings, the events caused by an external shock are initiated at the moments of its occurrence. In this paper, we study a new class of shock models, where each shock from a nonhomogeneous Poisson process can trigger a failure of a system not immediately, as in classical extreme shock models, but with a delay of some random time. We derive the corresponding survival and failure rate functions. Furthermore, we study the limiting behaviour of the failure rate function where it is applicable.
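The delayed-shock idea is easy to explore by Monte Carlo. The sketch below simplifies the paper's setting in two ways that are assumptions of this illustration: the shock process is homogeneous rather than nonhomogeneous, and the delays are exponential; rates are invented:

```python
import random

# Monte Carlo sketch of a delayed-shock model: shocks arrive as a Poisson
# process, and each shock kills the system only after an independent
# exponential delay. The empirical survival function is estimated below.

def failure_time(shock_rate, delay_rate, horizon, rng):
    t, first_failure = 0.0, float("inf")
    while t < horizon:
        t += rng.expovariate(shock_rate)          # next shock arrival
        if t >= horizon:
            break
        # this shock causes failure at t + (random delay)
        first_failure = min(first_failure, t + rng.expovariate(delay_rate))
    return first_failure

def survival_curve(times, n_runs=2000, seed=0):
    rng = random.Random(seed)
    fails = [failure_time(1.0, 0.5, horizon=max(times) + 1.0, rng=rng)
             for _ in range(n_runs)]
    return [sum(f > t for f in fails) / n_runs for t in times]

S = survival_curve([0.0, 1.0, 2.0, 4.0])
```

By construction the estimated survival function starts at 1 and is non-increasing, the qualitative behaviour the derived survival function must share.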
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
Exergy and Exergoeconomic Model of a Ground-Based CAES Plant for Peak-Load Energy Production
Directory of Open Access Journals (Sweden)
Giampaolo Manfrida
2013-02-01
Full Text Available Compressed Air Energy Storage is recognized as a promising technology for applying energy storage to grids which are increasingly challenged by the growing contribution of renewables such as solar or wind energy. The paper proposes a medium-size ground-based CAES system, based on pressurized vessels and on a multiple-stage arrangement of compression and expansion machinery; the system includes recovery of heat from the intercoolers, its storage as sensible heat in two separate (hot/cold) water reservoirs, and regenerative reheat of the expansions. The CAES plant parameters were adapted to the requirements of existing equipment (compressors, expanders and heat exchangers). A complete exergy analysis of the plant was performed. Most component cost data were procured from the market, by requesting specific quotations from industrial providers. It is thus possible to calculate the final cost of the electricity unit (kWh) produced under peak-load mode, and to identify the relative contribution of the two relevant cost groups: capital and component inefficiencies.
Discrete random walk models for space-time fractional diffusion
International Nuclear Information System (INIS)
Gorenflo, Rudolf; Mainardi, Francesco; Moretti, Daniele; Pagnini, Gianni; Paradisi, Paolo
2002-01-01
A physical-mathematical approach to anomalous diffusion may be based on generalized diffusion equations (containing derivatives of fractional order in space or/and time) and related random walk models. By the space-time fractional diffusion equation we mean an evolution equation obtained from the standard linear diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order α ∈ (0,2] and skewness θ (|θ| ≤ min{α, 2−α}), and the first-order time derivative with a Caputo derivative of order β ∈ (0,1]. Such an evolution equation implies for the flux a fractional Fick's law which accounts for spatial and temporal non-locality. The fundamental solution (for the Cauchy problem) of the fractional diffusion equation can be interpreted as a probability density evolving in time of a peculiar self-similar stochastic process that we view as a generalized diffusion process. By adopting appropriate finite-difference schemes of solution, we generate models of random walk discrete in space and time suitable for simulating random variables whose spatial probability density evolves in time according to this fractional diffusion equation.
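As a sanity check on the random-walk picture, the ordinary limit (α = 2, β = 1) can be simulated directly: a simple ±1 walk has mean-squared displacement growing linearly in time, the standard-diffusion benchmark from which the fractional cases deviate. This is only an illustration of the non-fractional baseline, not of the paper's fractional schemes:

```python
import random

# Mean-squared displacement of a simple +/-1 random walk: for ordinary
# diffusion, MSD(n) ~ n (in lattice units), i.e. MSD/n ~ 1.

def msd_of_walks(n_steps, n_walkers, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        pos = 0
        for _ in range(n_steps):
            pos += 1 if rng.random() < 0.5 else -1
        total += pos * pos
    return total / n_walkers

msd = msd_of_walks(n_steps=100, n_walkers=2000)
ratio = msd / 100  # close to 1 for ordinary diffusion
```

A space-fractional walk (heavy-tailed step lengths) would instead show superlinear growth of the displacement scale, which is what the finite-difference schemes in the paper are designed to capture.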
Random matrices and the six-vertex model
Bleher, Pavel
2013-01-01
This book provides a detailed description of the Riemann-Hilbert approach (RH approach) to the asymptotic analysis of both continuous and discrete orthogonal polynomials, and applications to random matrix models as well as to the six-vertex model. The RH approach was an important ingredient in the proofs of universality in unitary matrix models. This book gives an introduction to the unitary matrix models and discusses bulk and edge universality. The six-vertex model is an exactly solvable two-dimensional model in statistical physics, and thanks to the Izergin-Korepin formula for the model with domain wall boundary conditions, its partition function matches that of a unitary matrix model with nonpolynomial interaction. The authors introduce in this book the six-vertex model and include a proof of the Izergin-Korepin formula. Using the RH approach, they explicitly calculate the leading and subleading terms in the thermodynamic asymptotic behavior of the partition function of the six-vertex model with domain wa...
Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology
Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…
Kitzman, Dalane W.; Brubaker, Peter; Morgan, Timothy; Haykowsky, Mark; Hundley, Gregory; Kraus, William E.; Eggebeen, Joel; Nicklas, Barbara J.
2016-01-01
Importance More than 80% of patients with heart failure with preserved ejection fraction (HFPEF), the most common form of HF among older persons, are overweight/obese. Exercise intolerance is the primary symptom of chronic HFPEF and a major determinant of reduced quality-of-life (QOL). Objective To determine whether caloric restriction (Diet) or aerobic exercise training (Exercise) improves exercise capacity and QOL in obese older HFPEF patients. Design Randomized, attention-controlled, 2x2 factorial trial conducted from February 2009 to November 2014. Setting Urban academic medical center. Participants 100 older (67±5 years) obese (BMI=39.3±5.6 kg/m2) women (n=81) and men (n=19) with chronic, stable HFPEF enrolled from 577 patients initially screened (366 excluded by inclusion/exclusion criteria, 31 for other reasons, 80 declined participation). Twenty-six participants were randomized to Exercise alone, 24 to Diet alone, 25 to Diet+Exercise, and 25 to Control; 92 completed the trial. Interventions 20 weeks of Diet and/or Exercise; Attention Control consisted of telephone calls every 2 weeks. Main Outcomes and Measures Exercise capacity measured as peak oxygen consumption (VO2, ml/kg/min; primary outcome) and QOL measured by the Minnesota Living with HF Questionnaire (MLHF) total score (co-primary outcome; score range: 0–105, higher scores indicate worse HF-related QOL). Results By main effects analysis, peak VO2 was increased significantly by both interventions: Exercise main effect 1.2 ml/kg/min (95%CI: 0.7,1.7) and Diet main effect 1.3 ml/kg/min (95%CI: 0.8,1.8). The effect of Exercise+Diet was additive (complementary) for peak VO2 (joint effect 2.5 ml/kg/min). The change in MLHF total score was non-significant with Exercise (main effect −1 unit; 95%CI: −8,5; p=0.70) and with Diet (main effect −6 units; 95%CI: −12,1; p=0.078). The change in peak VO2 was positively correlated with the change in percent lean body mass (r=0.32; p=0.003) and the change in thigh muscle
Universality of correlation functions in random matrix models of QCD
International Nuclear Information System (INIS)
Jackson, A.D.; Sener, M.K.; Verbaarschot, J.J.M.
1997-01-01
We demonstrate the universality of the spectral correlation functions of a QCD inspired random matrix model that consists of a random part having the chiral structure of the QCD Dirac operator and a deterministic part which describes a schematic temperature dependence. We calculate the correlation functions analytically using the technique of Itzykson-Zuber integrals for arbitrary complex supermatrices. An alternative exact calculation for arbitrary matrix size is given for the special case of zero temperature, and we reproduce the well-known Laguerre kernel. At finite temperature, the microscopic limit of the correlation functions is calculated in the saddle-point approximation. The main result of this paper is that the microscopic universality of correlation functions is maintained even though unitary invariance is broken by the addition of a deterministic matrix to the ensemble. (orig.)
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
Prediction of Geological Subsurfaces Based on Gaussian Random Field Models
Energy Technology Data Exchange (ETDEWEB)
Abrahamsen, Petter
1997-12-31
During the sixties, random functions became practical tools for predicting ore reserves with associated precision measures in the mining industry. This was the start of the geostatistical methods called kriging. These methods are used, for example, in petroleum exploration. This thesis reviews the possibilities for using Gaussian random functions in modelling of geological subsurfaces. It develops methods for including many sources of information and observations for precise prediction of the depth of geological subsurfaces. The simple properties of Gaussian distributions make it possible to calculate optimal predictors in the mean square sense. This is done in a discussion of kriging predictors. These predictors are then extended to deal with several subsurfaces simultaneously. It is shown how additional velocity observations can be used to improve predictions. The use of gradient data and even higher order derivatives are also considered and gradient data are used in an example. 130 refs., 44 figs., 12 tabs.
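The exact-interpolation property of kriging predictors can be illustrated with a minimal 1-D simple-kriging sketch (zero-mean field, Gaussian covariance). The covariance parameters and data below are invented for illustration, not taken from the thesis:

```python
import math

# Minimal 1-D simple kriging: predictor weights solve K w = k0, and the
# prediction is the weighted sum of observations. At a data location the
# prediction reproduces the observation exactly (zero nugget).

def cov(h, sigma2=1.0, L=1.0):
    return sigma2 * math.exp(-(h / L) ** 2)  # Gaussian covariance model

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def simple_krige(xs, zs, x0):
    K = [[cov(xi - xj) for xj in xs] for xi in xs]
    k0 = [cov(xi - x0) for xi in xs]
    w = solve(K, k0)
    return sum(wi * zi for wi, zi in zip(w, zs))

xs, zs = [0.0, 1.0, 2.5], [0.3, -0.1, 0.7]
z_hat = simple_krige(xs, zs, 1.0)  # prediction at an observed location
```

Extending the right-hand side with cross-covariances is how additional data such as velocity or gradient observations enter the same linear-prediction machinery.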
Pervasive randomness in physics: an introduction to its modelling and spectral characterisation
Howard, Roy
2017-10-01
An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
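One of the processes listed above, the random telegraph signal, is straightforward to simulate: a ±1 process that switches sign at the events of a Poisson process. The rate and sampling step below are illustrative choices:

```python
import random

# Sampled random telegraph signal: state flips between +1 and -1 at the
# event times of a Poisson process with rate `lam`.

def telegraph_samples(lam, dt, n, seed=0):
    rng = random.Random(seed)
    state, t_next = 1, rng.expovariate(lam)  # first switching time
    out = []
    for i in range(n):
        t = i * dt
        while t >= t_next:                   # apply all switches up to time t
            state = -state
            t_next += rng.expovariate(lam)
        out.append(state)
    return out

x = telegraph_samples(lam=2.0, dt=0.01, n=5000)
```

For this process the autocorrelation decays as exp(−2λ|τ|), so its power spectral density is Lorentzian, a standard example when introducing spectral characterisation.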
Statistical Downscaling of Temperature with the Random Forest Model
Directory of Open Access Journals (Sweden)
Bo Pang
2017-01-01
Full Text Available The issues with downscaling the outputs of a global climate model (GCM) to a regional scale that are appropriate to hydrological impact studies are investigated using the random forest (RF) model, which has been shown to be superior for large dataset analysis and variable importance evaluation. The RF is proposed for downscaling daily mean temperature in the Pearl River basin in southern China. Four downscaling models were developed and validated by using the observed temperature series from 61 national stations and large-scale predictor variables derived from the National Center for Environmental Prediction–National Center for Atmospheric Research reanalysis dataset. The proposed RF downscaling model was compared to multiple linear regression, artificial neural network, and support vector machine models. Principal component analysis (PCA) and partial correlation analysis (PAR) were used in the predictor selection for the other models for a comprehensive study. It was shown that the model efficiency of the RF model was higher than that of the other models according to five selected criteria. By evaluating the predictor importance, the RF could choose the best predictor combination without using PCA and PAR. The results indicate that the RF is a feasible tool for the statistical downscaling of temperature.
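The bagging-plus-trees idea behind an RF can be sketched with a toy bagged-decision-stump ensemble. This is a stand-in for illustration only; the study itself uses a full random forest on reanalysis predictors, and all data below are invented:

```python
import random

# Toy bagged-decision-stump ensemble: each "tree" is a single best split on
# one predictor, fitted to a bootstrap sample; predictions are averaged.

def fit_stump(xs, ys):
    """Best single-split regressor on one predictor (minimises SSE)."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for j in range(1, len(xs)):
        left = [ys[order[i]] for i in range(j)]
        right = [ys[order[i]] for i in range(j, len(xs))]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left)
               + sum((v - mr) ** 2 for v in right))
        thr = (xs[order[j - 1]] + xs[order[j]]) / 2
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x <= thr else mr

def bagged_predict(xs, ys, x_new, n_trees=25, seed=0):
    rng = random.Random(seed)
    preds = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in xs]   # bootstrap sample
        stump = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(stump(x_new))
    return sum(preds) / len(preds)

# Invented step-function data: large-scale predictor x, local temperature y.
xs = [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]
ys = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
pred_cold = bagged_predict(xs, ys, -1.0)
pred_warm = bagged_predict(xs, ys, 1.0)
```

A real RF adds deeper trees, many predictors, and random feature subsetting; the variable-importance scores the paper exploits come from how much each predictor's splits reduce error across the ensemble.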
Randomizing growing networks with a time-respecting null model
Ren, Zhuo-Ming; Mariani, Manuel Sebastian; Zhang, Yi-Cheng; Medo, Matúš
2018-05-01
Complex networks are often used to represent systems that are not static but grow with time: People make new friendships, new papers are published and refer to the existing ones, and so forth. To assess the statistical significance of measurements made on such networks, we propose a randomization methodology—a time-respecting null model—that preserves both the network's degree sequence and the time evolution of individual nodes' degree values. By preserving the temporal linking patterns of the analyzed system, the proposed model is able to factor out the effect of the system's temporal patterns on its structure. We apply the model to the citation network of Physical Review scholarly papers and the citation network of US movies. The model reveals that the two data sets are strikingly different with respect to their degree-degree correlations, and we discuss the important implications of this finding on the information provided by paradigmatic node centrality metrics such as indegree and Google's PageRank. The randomization methodology proposed here can be used to assess the significance of any structural property in growing networks, which could bring new insights into the problems where null models play a critical role, such as the detection of communities and network motifs.
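The null-model idea can be sketched for a growing citation network: each arriving node keeps its out-degree, but its citations are redrawn uniformly among the nodes that already existed at its arrival time. Note this sketch relaxes the published model's additional constraint of preserving each node's in-degree time evolution:

```python
import random

# Time-respecting randomization sketch: preserves arrival order and each
# node's out-degree, and guarantees citations only point to earlier nodes.

def time_respecting_randomize(arrivals, seed=0):
    """arrivals: list of (node, cited_nodes) in arrival order."""
    rng = random.Random(seed)
    existing, randomized = [], []
    for node, cited in arrivals:
        if existing:
            new_cited = rng.sample(existing, k=min(len(cited), len(existing)))
        else:
            new_cited = []
        randomized.append((node, new_cited))
        existing.append(node)
    return randomized

arrivals = [("a", []), ("b", ["a"]), ("c", ["a", "b"]), ("d", ["b", "c"])]
null = time_respecting_randomize(arrivals)
```

Comparing a metric (e.g. indegree or PageRank rankings) between the real network and an ensemble of such randomizations is how the temporal patterns' contribution to structure is factored out.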
Genetic evaluation of European quails by random regression models
Directory of Open Access Journals (Sweden)
Flaviana Miranda Gonçalves
2012-09-01
Full Text Available The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different Legendre polynomial orders, for the estimation of (co)variances in quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted to Legendre polynomial functions (orders ranging from 2 to 6) to determine which model had the best fit to describe the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variances and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve and changed from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights and age classes were always high and positive, except for birth weight. The sixth-order Legendre polynomial, along with the residual variance divided into six classes, was the best fit for the growth rate curve of meat quails; therefore, they should be considered for breeding evaluation processes by random regression models.
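The Legendre basis underlying such random regression models is easy to construct: ages are standardized to [−1, 1] and the polynomials follow the Bonnet recurrence. The orders and ages below are illustrative; the genetic evaluation itself is not shown:

```python
# Legendre polynomial basis for a random regression growth model:
# P_0(x) = 1, P_1(x) = x, (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x).

def legendre(order, x):
    p_prev, p = 1.0, x
    if order == 0:
        return p_prev
    for n in range(1, order):
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

def standardize(age, age_min=1.0, age_max=42.0):
    """Map an age in [age_min, age_max] onto the interval [-1, 1]."""
    return -1.0 + 2.0 * (age - age_min) / (age_max - age_min)

# Basis row for a quail weighed at 21 days, polynomial orders 0..3.
x = standardize(21.0)
row = [legendre(k, x) for k in range(4)]
```

In the mixed-model equations, each animal's random regression coefficients multiply such a basis row, so the fitted (co)variances become smooth functions of age.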
Czuba, Christiana; Czuba, Jonathan A.; Gendaszek, Andrew S.; Magirl, Christopher S.
2010-01-01
The Cedar River in Washington State originates on the western slope of the Cascade Range and provides the City of Seattle with most of its drinking water, while also supporting a productive salmon habitat. Water-resource managers require detailed information on how best to manage high-flow releases from Chester Morse Lake, a large reservoir on the Cedar River, during periods of heavy precipitation to minimize flooding, while mitigating negative effects on fish populations. Instream flow-management practices include provisions for adaptive management to promote and maintain healthy aquatic habitat in the river system. The current study is designed to understand the linkages between peak flow characteristics, geomorphic processes, riverine habitat, and biological responses. Specifically, two-dimensional hydrodynamic modeling is used to simulate and quantify the effects of the peak-flow magnitude, duration, and frequency on the channel morphology and salmon-spawning habitat. Two study reaches, representative of the typical geomorphic and ecologic characteristics of the Cedar River, were selected for the modeling. Detailed bathymetric data, collected with a real-time kinematic global positioning system and an acoustic Doppler current profiler, were combined with a LiDAR-derived digital elevation model in the overbank area to develop a computational mesh. The model is used to simulate water velocity, benthic shear stress, flood inundation, and morphologic changes in the gravel-bedded river under the current and alternative flood-release strategies. Simulations of morphologic change and salmon-redd scour by floods of differing magnitude and duration enable water-resource managers to incorporate model simulation results into adaptive management of peak flows in the Cedar River. PDF version of a presentation on hydrodynamic modelling in the Cedar River in Washington state. Presented at the American Geophysical Union Fall Meeting 2010.
Huizinga, Richard J.
2014-01-01
Streamflow data, basin characteristics, and rainfall data from 39 streamflow-gaging stations for urban areas in and adjacent to Missouri were used by the U.S. Geological Survey in cooperation with the Metropolitan Sewer District of St. Louis to develop an initial abstraction and constant loss model (a time-distributed basin-loss model) and a gamma unit hydrograph (GUH) for urban areas in Missouri. Study-specific methods to determine peak streamflow and flood volume for a given rainfall event also were developed.
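A gamma unit hydrograph of the kind fitted in such studies can be sketched as a dimensionless gamma-shaped curve scaled to unit runoff volume. The functional form q(t) ∝ (t/t_peak)^α · exp(α(1 − t/t_peak)) is a generic member of the gamma-UH family, used here for illustration; the study's fitted parameters are not reproduced.

```python
import math

def gamma_unit_hydrograph(t_peak, alpha, dt=0.1, t_end=None):
    """Ordinates of a gamma unit hydrograph, peaking at t_peak with
    shape parameter alpha, normalized so that sum(q)*dt == 1
    (unit runoff volume). Generic sketch of the gamma-UH family."""
    if t_end is None:
        t_end = 10.0 * t_peak
    times = [i * dt for i in range(int(t_end / dt) + 1)]
    q = [(t / t_peak) ** alpha * math.exp(alpha * (1.0 - t / t_peak))
         for t in times]
    volume = sum(q) * dt
    return times, [v / volume for v in q]
```

Convolving these ordinates with the rainfall excess from a loss model (such as the initial abstraction and constant loss model above) yields the simulated direct-runoff hydrograph.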
Lurton, Thibaut; Jégou, Fabrice; Berthet, Gwenaël; Renard, Jean-Baptiste; Clarisse, Lieven; Schmidt, Anja; Brogniez, Colette; Roberts, Tjarda J.
2018-03-01
Volcanic eruptions impact climate through the injection of sulfur dioxide (SO2), which is oxidized to form sulfuric acid aerosol particles that can enhance the stratospheric aerosol optical depth (SAOD). Besides large-magnitude eruptions, moderate-magnitude eruptions such as Kasatochi in 2008 and Sarychev Peak in 2009 can have a significant impact on stratospheric aerosol and hence climate. However, uncertainties remain in quantifying the atmospheric and climatic impacts of the 2009 Sarychev Peak eruption due to limitations in previous model representations of volcanic aerosol microphysics and particle size, whilst biases have been identified in satellite estimates of post-eruption SAOD. In addition, the 2009 Sarychev Peak eruption co-injected hydrogen chloride (HCl) alongside SO2, whose potential stratospheric chemistry impacts have not been investigated to date. We present a study of the stratospheric SO2-particle-HCl processing and impacts following the Sarychev Peak eruption, using the Community Earth System Model version 1.0 (CESM1) Whole Atmosphere Community Climate Model (WACCM) - Community Aerosol and Radiation Model for Atmospheres (CARMA) sectional aerosol microphysics model (with no a priori assumption on particle size). The Sarychev Peak 2009 eruption injected 0.9 Tg of SO2 into the upper troposphere and lower stratosphere (UTLS), enhancing the aerosol load in the Northern Hemisphere. The post-eruption evolution of the volcanic SO2 in space and time is well reproduced by the model when compared to Infrared Atmospheric Sounding Interferometer (IASI) satellite data. Co-injection of 27 Gg HCl causes a lengthening of the SO2 lifetime and a slight delay in the formation of aerosols, and acts to enhance the destruction of stratospheric ozone and mono-nitrogen oxides (NOx) compared to the simulation with volcanic SO2 only. We therefore highlight the need to account for volcanic halogen chemistry when simulating the impact of eruptions such as Sarychev on
Directory of Open Access Journals (Sweden)
T. Lurton
2018-03-01
Full Text Available Volcanic eruptions impact climate through the injection of sulfur dioxide (SO2), which is oxidized to form sulfuric acid aerosol particles that can enhance the stratospheric aerosol optical depth (SAOD). Besides large-magnitude eruptions, moderate-magnitude eruptions such as Kasatochi in 2008 and Sarychev Peak in 2009 can have a significant impact on stratospheric aerosol and hence climate. However, uncertainties remain in quantifying the atmospheric and climatic impacts of the 2009 Sarychev Peak eruption due to limitations in previous model representations of volcanic aerosol microphysics and particle size, whilst biases have been identified in satellite estimates of post-eruption SAOD. In addition, the 2009 Sarychev Peak eruption co-injected hydrogen chloride (HCl) alongside SO2, whose potential stratospheric chemistry impacts have not been investigated to date. We present a study of the stratospheric SO2–particle–HCl processing and impacts following the Sarychev Peak eruption, using the Community Earth System Model version 1.0 (CESM1) Whole Atmosphere Community Climate Model (WACCM) – Community Aerosol and Radiation Model for Atmospheres (CARMA) sectional aerosol microphysics model (with no a priori assumption on particle size). The Sarychev Peak 2009 eruption injected 0.9 Tg of SO2 into the upper troposphere and lower stratosphere (UTLS), enhancing the aerosol load in the Northern Hemisphere. The post-eruption evolution of the volcanic SO2 in space and time is well reproduced by the model when compared to Infrared Atmospheric Sounding Interferometer (IASI) satellite data. Co-injection of 27 Gg HCl causes a lengthening of the SO2 lifetime and a slight delay in the formation of aerosols, and acts to enhance the destruction of stratospheric ozone and mono-nitrogen oxides (NOx) compared to the simulation with volcanic SO2 only. We therefore highlight the need to account for volcanic halogen chemistry when simulating the impact of eruptions
Zero temperature landscape of the random sine-Gordon model
International Nuclear Information System (INIS)
Sanchez, A.; Bishop, A.R.; Cai, D.
1997-01-01
We present a preliminary summary of the zero temperature properties of the two-dimensional random sine-Gordon model of surface growth on disordered substrates. We found that the properties of this model can be accurately computed by using lattices of moderate size, as the behavior of the model turns out to be independent of size above a certain length (∼128 × 128 lattices). Subsequently, we show that the behavior of the height-difference correlation function is of (log r)² type up to a certain correlation length (ξ ∼ 20), which rules out predictions of log r behavior for all temperatures obtained by replica-variational techniques. Our results open the way to a better understanding of the complex landscape presented by this system, which has been the subject of many (contradictory) analyses.
Exponential random graph models for networks with community structure.
Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian
2013-09-01
Although the community structure organization is an important characteristic of real-world networks, most of the traditional network models fail to reproduce this feature. Therefore, the models are useless as benchmark graphs for testing community detection algorithms. They are also inadequate for predicting various properties of real networks. With this paper we intend to fill the gap. We develop an exponential random graph approach to networks with community structure. To this end, we mainly build upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.
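The degree-corrected blockmodel mentioned above can be sampled directly: the number of edges between nodes i and j is Poisson with mean θ_i θ_j ω_{g_i g_j}. This sketch follows the Karrer-Newman-style Poisson formulation, which is related to but distinct from the paper's exponential random graph treatment; names and parameters are illustrative.

```python
import math, random

def dc_sbm_sample(theta, groups, omega, seed=0):
    """Sample a multigraph from a degree-corrected blockmodel:
    edge multiplicity between i and j is Poisson with mean
    theta_i * theta_j * omega[g_i][g_j]. Illustrative sketch."""
    rng = random.Random(seed)
    n = len(theta)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            lam = theta[i] * theta[j] * omega[groups[i]][groups[j]]
            # Poisson draw by inversion (adequate for small means)
            k, u = 0, rng.random()
            p = math.exp(-lam)
            c = p
            while u > c:
                k += 1
                p *= lam / k
                c += p
            if k > 0:
                edges.append((i, j, k))   # (endpoints, multiplicity)
    return edges
```

The parameters θ control node degrees while ω controls the density of links within and between blocks, which is what lets the model serve as a benchmark with planted communities.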
The Little-Hopfield model on a sparse random graph
International Nuclear Information System (INIS)
Castillo, I Perez; Skantzos, N S
2004-01-01
We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is a finite number and the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within the accuracy of numerical precision and for sufficiently small values of the connectivity parameter, we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement.
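The microscopic rule of the Little-Hopfield model is a fully parallel spin update with Hebbian couplings restricted to the edges of the sparse graph. A minimal sketch (function name and normalization are illustrative):

```python
def little_hopfield_step(state, neighbors, patterns, c_mean):
    """One synchronous (Little-Hopfield) update: all spins are updated
    in parallel with Hebbian couplings J_ij = (1/c) * sum_mu xi_i xi_j
    restricted to the graph's edges. Illustrative sketch of the rule."""
    def local_field(i):
        h = 0.0
        for j in neighbors[i]:
            J = sum(p[i] * p[j] for p in patterns) / c_mean
            h += J * state[j]
        return h
    return [1 if local_field(i) >= 0 else -1 for i in range(len(state))]
```

With a single stored pattern, the pattern itself is a fixed point of this map whenever every neuron has at least one neighbor, which is the retrieval property the phase diagram quantifies at finite connectivity and loading.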
Pedestrian Walking Behavior Revealed through a Random Walk Model
Directory of Open Access Journals (Sweden)
Hui Xiong
2012-01-01
Full Text Available This paper applies the method of continuous-time random walks to pedestrian flow simulation. In the model, pedestrians can walk forward or backward and turn left or right if there is no block. The velocities of pedestrian flow moving forward or diffusing are governed by coefficients. The waiting time preceding each jump is assumed to follow an exponential distribution. To solve the model, which reduces to a second-order two-dimensional partial differential equation, a high-order compact scheme with the alternating direction implicit method is employed. In the numerical experiments, the walking domain of the first scenario is two-dimensional with two entrances and one exit, and that of the second is two-dimensional with one entrance and one exit. The flows in both scenarios are one-way. Numerical results show that the model can be used for pedestrian flow simulation.
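The microscopic picture behind such a continuous-time random walk can be sketched for a single pedestrian: exponential waiting times between jumps and a lattice step that is biased toward the forward direction. The parameter values and function name are illustrative; the paper itself works with the macroscopic PDE limit rather than this particle simulation.

```python
import random

def ctrw_pedestrian(t_max, rate=1.0, p_forward=0.5, seed=42):
    """Continuous-time random walk for one pedestrian on a 2-D lattice:
    exponential waiting times (parameter `rate`) between jumps, with a
    bias toward the forward (+x) direction. Illustrative sketch."""
    rng = random.Random(seed)
    t, x, y = 0.0, 0, 0
    path = [(t, x, y)]
    while True:
        t += rng.expovariate(rate)        # exponential waiting time
        if t > t_max:
            break
        u = rng.random()
        p_rest = (1.0 - p_forward) / 3.0
        if u < p_forward:
            x += 1                        # forward
        elif u < p_forward + p_rest:
            x -= 1                        # backward
        elif u < p_forward + 2 * p_rest:
            y += 1                        # turn left
        else:
            y -= 1                        # turn right
        path.append((t, x, y))
    return path
```

Averaging many such trajectories recovers the drift-diffusion behavior that the second-order PDE describes at the level of pedestrian densities.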
Random isotropic one-dimensional XY-model
Gonçalves, L. L.; Vieira, A. P.
1998-01-01
The 1D isotropic s = 1/2 XY model (N sites), with random exchange interaction in a transverse random field, is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).
SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation
International Nuclear Information System (INIS)
Yao, W; Farr, J
2015-01-01
Purpose: To develop a random walk model algorithm for calculating proton dose with balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scatter (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation and memory. Thus, our RW model adopts a spatial distribution derived from the angular one to accelerate the computation and to decrease the memory usage. From the physics and comparison with the MC simulations, we have determined and analytically expressed those critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse square law, which can significantly reduce the computation burden and memory, (2) the non-Gaussian spatial distribution after MCS, and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2 mm/10% criterion. For a highly heterogeneous phantom consisting of water embedded with a 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3 mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
International Nuclear Information System (INIS)
Lee, Sang Yong; Chung, Bob Dong; Lee, Young Jin; Park, Chan Eok; Lee, Guy Hyung; Choi, Chul Jin
1994-06-01
This research aims to develop a reliable, advanced system thermal-hydraulic computer code and to quantify the uncertainties of the code in order to introduce the best-estimate methodology of ECCS for LBLOCA. Although RELAP5/MOD3.1, one of the best-estimate codes, was introduced from the USNRC, several deficiencies were found in its reflood model, and some improvements have been made. The improvements consist of modification of the reflood wall heat transfer package and adjustment of the drop size in the dispersed flow regime. Time smoothing of wall vaporization and a level tracking model are also added to eliminate the pressure spike and level oscillation. For the verification of the improved model and quantification of the associated uncertainty, the FLECHT-SEASET data were used and the upper limit of uncertainty at the 95% confidence level is evaluated. (Author) 30 refs., 49 figs., 2 tabs
Energy Technology Data Exchange (ETDEWEB)
Lee, Sang Yong; Chung, Bob Dong; Lee, Young Jin; Park, Chan Eok; Lee, Guy Hyung; Choi, Chul Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
1994-06-01
This research aims to develop a reliable, advanced system thermal-hydraulic computer code and to quantify the uncertainties of the code in order to introduce the best-estimate methodology of ECCS for LBLOCA. Although RELAP5/MOD3.1, one of the best-estimate codes, was introduced from the USNRC, several deficiencies were found in its reflood model, and some improvements have been made. The improvements consist of modification of the reflood wall heat transfer package and adjustment of the drop size in the dispersed flow regime. Time smoothing of wall vaporization and a level tracking model are also added to eliminate the pressure spike and level oscillation. For the verification of the improved model and quantification of the associated uncertainty, the FLECHT-SEASET data were used and the upper limit of uncertainty at the 95% confidence level is evaluated. (Author) 30 refs., 49 figs., 2 tabs.
The random cluster model and a new integration identity
International Nuclear Information System (INIS)
Chen, L C; Wu, F Y
2005-01-01
We evaluate the free energy of the random cluster model at its critical point for 0 < q < 4, when (1/π) cos⁻¹(√q/2) is a rational number. As a by-product, our consideration leads to a closed-form evaluation of the integral (1/4π²) ∫₀^{2π} dΘ ∫₀^{2π} dΦ ln[A + B + C − A cosΘ − B cosΦ − C cos(Θ+Φ)] = −ln(2S) + (2/π)[Ti₂(AS) + Ti₂(BS) + Ti₂(CS)], which arises in lattice statistics, where A, B, C ≥ 0, S = 1/√(AB + BC + CA), and Ti₂ is the inverse tangent integral.
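The integration identity above can be checked numerically. The sketch below evaluates the double integral on a midpoint grid (which avoids the integrable logarithmic singularity at the origin) and the inverse tangent integral Ti₂(x) = ∫₀^x arctan(t)/t dt by simple quadrature; grid sizes are illustrative.

```python
import math

def ti2(x, n=20000):
    """Inverse tangent integral Ti2(x) = integral_0^x arctan(t)/t dt
    (midpoint rule; the integrand tends to 1 at t = 0)."""
    h = x / n
    return h * sum(math.atan((k + 0.5) * h) / ((k + 0.5) * h) for k in range(n))

def lhs(A, B, C, n=400):
    """(1/4 pi^2) * double integral of ln[A+B+C - A cosT - B cosP - C cos(T+P)]
    over [0, 2pi]^2, midpoint grid so the log singularity at (0,0) is never hit."""
    h = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * h
        for j in range(n):
            ph = (j + 0.5) * h
            total += math.log(A + B + C - A * math.cos(th)
                              - B * math.cos(ph) - C * math.cos(th + ph))
    return total * h * h / (4.0 * math.pi ** 2)

def rhs(A, B, C):
    """Closed form: -ln(2S) + (2/pi)*[Ti2(AS) + Ti2(BS) + Ti2(CS)]."""
    S = 1.0 / math.sqrt(A * B + B * C + C * A)
    return -math.log(2.0 * S) + (2.0 / math.pi) * (
        ti2(A * S) + ti2(B * S) + ti2(C * S))
```

For C = 0 and A = B, both sides reduce to ln A + 4G/π − ln 2 (G being Catalan's constant), which is the familiar square-lattice spanning-tree constant, a useful consistency check.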
Universality in random-walk models with birth and death
International Nuclear Information System (INIS)
Bender, C.M.; Boettcher, S.; Meisinger, P.N.
1995-01-01
Models of random walks are considered in which walkers are born at one site and die at all other sites. Steady-state distributions of walkers exhibit dimensionally dependent critical behavior as a function of the birth rate. Exact analytical results for a hyperspherical lattice yield a second-order phase transition with a nontrivial critical exponent for all positive dimensions D≠2, 4. Numerical studies of hypercubic and fractal lattices indicate that these exact results are universal. This work elucidates the adsorption transition of polymers at curved interfaces. copyright 1995 The American Physical Society
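A one-dimensional version of this birth-death walk can be iterated directly from its master equation: walkers are injected at the origin each step, die with a fixed probability at every other site, and hop symmetrically. All parameter values below are illustrative; the exact results in the record concern hyperspherical lattices.

```python
def steady_state(L=20, birth=1.0, death=0.1, iters=4000):
    """Iterate the master equation for walkers on a 1-D lattice [-L, L]
    (index L is the origin): birth at the origin every step, death with
    probability `death` at all other sites, symmetric hopping. Walkers
    stepping off the ends are lost. Illustrative 1-D sketch."""
    n = [0.0] * (2 * L + 1)
    for _ in range(iters):
        new = [0.0] * (2 * L + 1)
        for x in range(2 * L + 1):
            surv = 1.0 if x == L else 1.0 - death   # no death at origin
            if x > 0:
                new[x - 1] += 0.5 * surv * n[x]
            if x < 2 * L:
                new[x + 1] += 0.5 * surv * n[x]
        new[L] += birth                              # birth at the origin
        n = new
    return n
```

In 1-D the stationary density decays geometrically away from the birth site, n(x) ∝ r^|x| with r < 1; the phase transition discussed in the record appears when the birth rate competes with the death rate as the dimension varies.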
Permeability of model porous medium formed by random discs
Gubaidullin, A. A.; Gubkin, A. S.; Igoshin, D. E.; Ignatev, P. A.
2018-03-01
A two-dimensional model of a porous medium with a skeleton of randomly located overlapping discs is proposed. The geometry and computational grid are built in the open package Salome. The flow of a Newtonian liquid in the longitudinal and transverse directions is calculated and its flow rate is determined. The numerical solution of the Navier-Stokes equations for a given pressure drop at the boundaries of the area is realized in the open package OpenFOAM. The calculated value of the flow rate is used to determine the permeability coefficient on the basis of Darcy's law. To evaluate the representativeness of the computational domain, the permeability coefficients in the longitudinal and transverse directions are compared.
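The post-processing step described above, recovering the permeability from a computed flow rate via Darcy's law Q = k·A·Δp/(μ·L), is a one-liner; units and the function name are illustrative.

```python
def darcy_permeability(flow_rate, viscosity, length, area, dp):
    """Permeability k from Darcy's law Q = k*A*dp/(mu*L): post-processing
    of the computed flow rate Q for a given pressure drop dp (SI units)."""
    return flow_rate * viscosity * length / (area * dp)
```

Computing k separately from the longitudinal and transverse flow solutions, as in the record, gives a direct check of the isotropy (and thus representativeness) of the random disc realization.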
Interpreting parameters in the logistic regression model with random effects
DEFF Research Database (Denmark)
Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben
2000-01-01
interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects
Geometric Models for Isotropic Random Porous Media: A Review
Directory of Open Access Journals (Sweden)
Helmut Hermann
2014-01-01
Full Text Available Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.
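For the penetrable-sphere (Poisson grain) model mentioned above, the porosity has the closed form φ = exp(−ρ·(4/3)πR³) for intensity ρ and sphere radius R. The sketch below checks this against a Monte Carlo estimate; box size, sample count, and parameter values are illustrative.

```python
import math, random

def boolean_porosity_mc(rho, R, box=8.0, samples=4000, seed=3):
    """Monte Carlo porosity of the penetrable-sphere (Boolean) model:
    spheres of radius R with Poisson(rho)-distributed centres; the box
    is padded by R so spheres centred just outside still cover points."""
    rng = random.Random(seed)
    lo, hi = -R, box + R
    mean = rho * (hi - lo) ** 3
    n_spheres, t = 0, rng.expovariate(1.0)   # Poisson count via exponential gaps
    while t < mean:
        n_spheres += 1
        t += rng.expovariate(1.0)
    centres = [(rng.uniform(lo, hi), rng.uniform(lo, hi), rng.uniform(lo, hi))
               for _ in range(n_spheres)]
    outside = 0
    for _ in range(samples):
        px, py, pz = (rng.uniform(0, box) for _ in range(3))
        if all((px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2 > R * R
               for cx, cy, cz in centres):
            outside += 1
    return outside / samples
```

The same point-sampling machinery extends to the surface area and correlation functions that the review uses as inputs to effective medium theory.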
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly choose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
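The logit model used as a benchmark above assigns choice probabilities by a softmax over deterministic utilities, P(i) = exp(V_i)/Σ_j exp(V_j). A minimal sketch (the utilities are illustrative; the study estimates them from attribute data):

```python
import math

def logit_choice_probs(utilities):
    """Multinomial logit choice probabilities P(i) = exp(V_i)/sum_j exp(V_j),
    the classic random utility benchmark compared against MDFT."""
    m = max(utilities)                        # subtract max for numerical stability
    w = [math.exp(v - m) for v in utilities]
    z = sum(w)
    return [x / z for x in w]
```

Because these probabilities depend only on each option's own utility, the logit model cannot produce the context effects (attraction, similarity, compromise) that Study 2 used to discriminate it from MDFT.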
Scott, Daniel G.; Evans, Jessica
2010-01-01
This paper emerges from the continued analysis of data collected in a series of international studies concerning Childhood Peak Experiences (CPEs) based on developments in understanding peak experiences in Maslow's hierarchy of needs initiated by Dr Edward Hoffman. Bridging from the series of studies, Canadian researchers explore collected…
Gaussian random bridges and a geometric model for information equilibrium
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T , 0) -bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
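The anticipative decomposition mentioned above (a random variable plus a Gaussian bridge that vanishes at time T) can be illustrated with the classic information-process form ξ_t = σtZ + β_t, where Z is the signal and β a Brownian bridge. This is a sketch of the general idea only; the paper's GRBs are a broader class, and the specific form below is an assumption.

```python
import random

def information_process(T=1.0, sigma=1.0, n=1000, seed=7):
    """Simulate xi_t = sigma*t*Z + beta_t on [0, T], where Z is a standard
    normal signal and beta_t = W_t - (t/T)*W_T is a Brownian bridge that
    vanishes at both 0 and T. Illustrative sketch of a noisy information
    process with a 'random variable plus Gaussian bridge' decomposition."""
    rng = random.Random(seed)
    dt = T / n
    Z = rng.gauss(0.0, 1.0)                       # the signal
    w = [0.0]
    for _ in range(n):                            # Brownian path increments
        w.append(w[-1] + rng.gauss(0.0, dt ** 0.5))
    xi = [sigma * (i * dt) * Z + (w[i] - (i * dt / T) * w[-1])
          for i in range(n + 1)]
    return Z, xi
```

At t = T the bridge noise vanishes and the process reveals the signal exactly, which is what makes such processes natural models of information that becomes fully known at a horizon.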
Analysis of the main dosimetric peak of Al2O3:C compounds with a model of interacting traps
International Nuclear Information System (INIS)
Ortega, F.; Marcazzó, J.; Molina, P.; Santiago, M.; Lester, M.; Henniger, J.; Caselli, E.
2013-01-01
The glow curve of Al 2 O 3 :C compounds has been analyzed by employing a model consisting of two active traps, thermally disconnected traps and one recombination centre. The analysis takes into account interaction among traps and the thermal quenching of the thermoluminescent emission. - Highlights: • Glow curves of Al 2 O 3 :C for two doses have been analysed taking into account interactions among traps. • The system of differential equations describing the kinetics has been uncoupled. • The new system of equations takes into account equations without derivatives. • The algorithm used will not become stiff. • The kinetics parameters obtained do not depend on the dose
DEFF Research Database (Denmark)
Raalskov, Jesper; Warming-Rasmussen, Bent
The peak interview is a particularly effective method for making unconscious human resources conscious. The focus person (the interviewee) is interviewed about a self-chosen personal success experience. The therapist/coach (the interviewer) asks about the process that led to this success. This uncovers...... the focus person wishes to take up (new goals or new processes). This working paper describes what is meant by a peak interview, the theoretical foundation of the peak interview, and the methodology for conducting a trustful and effective peak interview.
Madeiro, João P V; Nicolson, William B; Cortez, Paulo C; Marques, João A L; Vázquez-Seisdedos, Carlos R; Elangovan, Narmadha; Ng, G Andre; Schlindwein, Fernando S
2013-08-01
This paper presents an innovative approach for T-wave peak detection and subsequent T-wave end location in 12-lead paced ECG signals based on a mathematical model of a skewed Gaussian function. Following the stage of QRS segmentation, we establish search windows using a number of the earliest intervals between each QRS offset and subsequent QRS onset. Then, we compute a template based on a Gaussian function, modified by a mathematical procedure to insert asymmetry, which models the T-wave. Cross-correlation and an approach based on the computation of the trapezium's area are used to locate, respectively, the peak and end point of each T-wave throughout the whole raw ECG signal. For evaluating purposes, we used a database of high resolution 12-lead paced ECG signals, recorded from patients with ischaemic cardiomyopathy (ICM) in the University Hospitals of Leicester NHS Trust, UK, and the well-known QT database. The average T-wave detection rates, sensitivity and positive predictivity, were both equal to 99.12%, for the first database, and, respectively, equal to 99.32% and 99.47%, for QT database. The average time errors computed for T-wave peak and T-wave end locations were, respectively, -0.38±7.12 ms and -3.70±15.46 ms, for the first database, and 1.40±8.99 ms and 2.83±15.27 ms, for QT database. The results demonstrate the accuracy, consistency and robustness of the proposed method for a wide variety of T-wave morphologies studied. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
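The template-matching core of such a detector can be sketched with a simple asymmetric Gaussian (different widths on either side of the peak, one of several possible skewing procedures; the paper's exact skewing differs) and a brute-force cross-correlation search:

```python
import math

def skewed_gaussian(t, mu, sigma_l, sigma_r):
    """Asymmetric Gaussian template: width sigma_l left of the peak mu,
    sigma_r to its right. A simple illustrative way to skew the template."""
    s = sigma_l if t < mu else sigma_r
    return math.exp(-((t - mu) ** 2) / (2.0 * s * s))

def detect_peak(signal, template):
    """Return the lag maximizing the cross-correlation of `signal` with
    `template` (template assumed shorter than signal)."""
    n, m = len(signal), len(template)
    best_lag, best = 0, float("-inf")
    for lag in range(n - m + 1):
        c = sum(signal[lag + k] * template[k] for k in range(m))
        if c > best:
            best, best_lag = c, lag
    return best_lag
```

The T-wave peak estimate is then the best lag plus the template's own peak offset; the end point requires the separate trapezium-area step described in the record.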
Joint modeling of ChIP-seq data via a Markov random field model
Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C
Chromatin ImmunoPrecipitation-sequencing (ChIP-seq) experiments have now become routine in biology for the detection of protein-binding sites. In this paper, we present a Markov random field model for the joint analysis of multiple ChIP-seq experiments. The proposed model naturally accounts for
Moyer, R.D.
A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and a back diode combination in a detector circuit as the only high speed elements. The high speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.
Bayesian Hierarchical Random Effects Models in Forensic Science
Directory of Open Access Journals (Sweden)
Colin G. G. Aitken
2018-04-01
Full Text Available Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
Bayesian Hierarchical Random Effects Models in Forensic Science.
Aitken, Colin G G
2018-01-01
Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
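The simplest univariate analogue of Lindley's hierarchical random effects model has a closed form: with within-source variance σ², and source means distributed N(μ, τ²), the likelihood ratio for two measurements y1, y2 compares a same-source bivariate normal (correlated through the common source mean) against the product of independent marginals. This sketch is a textbook-style simplification, not the SAILR implementation.

```python
import math

def norm_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def likelihood_ratio(y1, y2, mu, sigma2, tau2):
    """Univariate normal-normal likelihood ratio (same source vs different
    sources): sigma2 = within-source variance, (mu, tau2) = between-source
    mean and variance. Illustrative version of the hierarchical model."""
    v = sigma2 + tau2
    rho = tau2 / v               # correlation induced by the shared source mean
    a, b = y1 - mu, y2 - mu
    q = (a * a - 2.0 * rho * a * b + b * b) / (v * (1.0 - rho ** 2))
    num = math.exp(-q / 2.0) / (2.0 * math.pi * v * math.sqrt(1.0 - rho ** 2))
    den = norm_pdf(y1, mu, v) * norm_pdf(y2, mu, v)   # independent sources
    return num / den
```

Measurements that are close relative to the between-source spread yield LR > 1 (support for common source); well-separated ones yield LR < 1.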
Percolation for a model of statistically inhomogeneous random media
International Nuclear Information System (INIS)
Quintanilla, J.; Torquato, S.
1999-01-01
We study clustering and percolation phenomena for a model of statistically inhomogeneous two-phase random media, including functionally graded materials. This model consists of inhomogeneous fully penetrable (Poisson distributed) disks and can be constructed for any specified variation of volume fraction. We quantify the transition zone in the model, defined by the frontier of the cluster of disks which are connected to the disk-covered portion of the model, by defining the coastline function and correlation functions for the coastline. We find that the behavior of these functions becomes largely independent of the specific choice of grade in volume fraction as the separation of length scales becomes large. We also show that the correlation function behaves in a manner similar to that of fractal Brownian motion. Finally, we study fractal characteristics of the frontier itself and compare to similar properties for two-dimensional percolation on a lattice. In particular, we show that the average location of the frontier appears to be related to the percolation threshold for homogeneous fully penetrable disks. copyright 1999 American Institute of Physics
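Inhomogeneous fully penetrable disks with a prescribed volume-fraction gradient can be generated by thinning a homogeneous Poisson process: candidates at the maximal intensity are kept with probability λ(x)/λ_max. The function name and parameter values are illustrative.

```python
import random

def graded_poisson_points(lam_max, width, height, grade, seed=11):
    """Inhomogeneous Poisson point process by thinning: candidates at
    intensity lam_max are kept with probability grade(x)/lam_max (which
    must be <= 1), yielding disk centres for a graded penetrable-disk
    model. Illustrative sketch."""
    rng = random.Random(seed)
    mean = lam_max * width * height
    n, t = 0, rng.expovariate(1.0)      # Poisson candidate count
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    points = []
    for _ in range(n):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if rng.random() < grade(x) / lam_max:   # thinning step
            points.append((x, y))
    return points
```

Covering each kept centre with a disk of fixed radius then realizes the graded two-phase medium whose frontier (coastline) statistics the paper analyzes.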
Hedonic travel cost and random utility models of recreation
Energy Technology Data Exchange (ETDEWEB)
Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies
1998-07-09
Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.
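The random utility side can be sketched as a multinomial logit over sites; the coefficients and site data below are invented for illustration, not estimates from the study:

```python
import math

def logit_choice_probs(utilities):
    """Multinomial logit: P(site j) = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities)                       # subtract max for numerical stability
    expv = [math.exp(v - m) for v in utilities]
    total = sum(expv)
    return [e / total for e in expv]

def site_utility(quality, travel_cost, beta_q=1.0, beta_c=0.05):
    """Deterministic part of utility: quality is valued, travel cost is
    penalised. Coefficients are hypothetical, not fitted values."""
    return beta_q * quality - beta_c * travel_cost

# Hypothetical (quality, travel miles) for three wilderness trails.
sites = [(3.0, 10.0), (4.0, 40.0), (2.0, 5.0)]
probs = logit_choice_probs([site_utility(q, c) for q, c in sites])
```

In a hedonic travel cost model the same utility foundation would instead be estimated from the implicit price of quality revealed by chosen travel costs.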
Jacques, Alain
2016-12-01
The dislocation-based modeling of the high-temperature creep of two-phased single-crystal superalloys requires input data beyond strain vs time curves. This may be obtained by use of in situ experiments combining high-temperature creep tests with high-resolution synchrotron three-crystal diffractometry. Such tests give access to changes in phase volume fractions and to the average components of the stress tensor in each phase as well as the plastic strain of each phase. Further progress may be obtained by a new method making intensive use of the Fast Fourier Transform, and first modeling the behavior of a representative volume of material (stress fields, plastic strain, dislocation densities…), then simulating directly the corresponding diffraction peaks, taking into account the displacement field within the material, chemical variations, and beam coherence. Initial tests indicate that the simulated peak shapes are close to the experimental ones and are quite sensitive to the details of the microstructure and to dislocation densities at interfaces and within the soft γ phase.
Droplet localization in the random XXZ model and its manifestations
Elgart, A.; Klein, A.; Stolz, G.
2018-01-01
We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al (2018 J. Funct. Anal. (in press)). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb-Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.
[Critique of the additive model of the randomized controlled trial].
Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine
2008-01-01
Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity to take these factors into account for given symptoms or pathologies, as well as the problem of the "specific" effect.
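The additivity assumption under discussion can be written schematically (notation mine, not from the article):

```latex
% Additive model assumed by the placebo-controlled design:
\[
\mathbb{E}[Y_{\mathrm{drug}}] \;=\; \mu_{\mathrm{specific}} + \mu_{\mathrm{placebo}},
\qquad
\mathbb{E}[Y_{\mathrm{placebo}}] \;=\; \mu_{\mathrm{placebo}}
\]
% With a drug-placebo interaction term \gamma, the trial contrast gives
\[
\mathbb{E}[Y_{\mathrm{drug}}] - \mathbb{E}[Y_{\mathrm{placebo}}]
  \;=\; \mu_{\mathrm{specific}} + \gamma,
\]
% so attributing the whole between-group difference to the drug assumes \gamma = 0.
```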
Stochastic equilibria of an asset pricing model with heterogeneous beliefs and random dividends
Zhu, M.; Wang, D.; Guo, M.
2011-01-01
We investigate dynamical properties of a heterogeneous agent model with random dividends and further study the relationship between dynamical properties of the random model and those of the corresponding deterministic skeleton, which is obtained by setting the random dividends as their constant mean
Multiscale model of short cracks in a random polycrystalline aggregate
International Nuclear Information System (INIS)
Simonovski, I.; Cizelj, L.; Petric, Z.
2006-01-01
A plane-strain finite element crystal plasticity model of a microstructurally small stationary crack emanating from a surface grain in a 316L stainless steel is proposed. The model, consisting of 212 randomly shaped, sized and oriented grains, is loaded monotonically in uniaxial tension to a maximum load of 1.12 Rp0.2 (280 MPa). The influence that a random grain structure imposes on a Stage I crack is assessed by calculating the crack tip opening (CTOD) and sliding displacements (CTSD) for single crystal as well as for polycrystal models, considering also different crystallographic orientations. In the single crystal case the CTOD and CTSD may differ by more than one order of magnitude. Near the crack tip slip is activated on all the slip planes, whereas only two are active in the rest of the model. The maximum CTOD is directly related to the maximal Schmid factors. For the more complex polycrystal cases it is shown that certain crystallographic orientations result in a cluster of soft grains around the crack-containing grain. In these cases the crack tip can become part of the localized strain, resulting in a large CTOD value. This effect, resulting from the overall grain orientations and sizes, can have a greater impact on the CTOD than the local grain orientation. On the other hand, when a localized soft response forms away from the crack, the localized strain does not affect the crack tip directly, resulting in a small CTOD value. The resulting difference in CTOD can be up to a factor of 4, depending upon the crystallographic set. Grains as far away as six crack lengths significantly influence the crack tip parameters. It was also found that a larger crack-containing grain tends to increase the CTOD. Finally, a smaller-than-expected drop in the CTOD (12.7%) was obtained as the crack approached the grain boundary. This could be due to the assumption of an unchanged crack direction, the purely monotonic loading, and the simplified grain boundary modelling. (author)
Energy Technology Data Exchange (ETDEWEB)
Xu, Zhijie; Tartakovsky, Alexandre M.
2017-09-01
This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow.
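The hierarchical decomposition and the form of the upscaled equation can be sketched as follows (schematic notation mine; the paper's equations carry additional stochastic terms):

```latex
% Hierarchical representation: concentration = cross-sectional average + variation
\[
c(x, y, t) \;=\; \bar{c}(x, t) + c'(x, y, t)
\]
% Upscaled one-dimensional advection-dispersion-type equation for the average:
\[
\frac{\partial \bar{c}}{\partial t}
  + v_{\mathrm{eff}}\,\frac{\partial \bar{c}}{\partial x}
  \;=\; D_{\mathrm{eff}}\,\frac{\partial^{2} \bar{c}}{\partial x^{2}}
\]
```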
Yanamadala, J; Noetscher, G M; Makarov, S N; Pascual-Leone, A
2017-07-01
Transcranial magnetic stimulation (TMS) for treatment of depression during pregnancy is an appealing alternative to fetus-threatening drugs. However, no studies to date have evaluated the safety of TMS for a pregnant patient and her fetus. A full-body FEM model of a pregnant woman with about 100 tissue parts has been developed specifically for the present study. This model allows accurate computation of the induced electric field in every tissue given different locations of a figure-eight coil, a biphasic pulse, common TMS pulse durations, and different values of the TMS intensity measured in SMT (Standard Motor Threshold) units. Our simulation results estimate the maximum peak values of the electric field in the fetal area for every fetal tissue separately and for a TMS intensity of one SMT unit.
Measurement model choice influenced randomized controlled trial results.
Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos
2016-11-01
In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented, a real-life RCT, and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, where IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
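The core point — that response patterns with the same sum score need not carry the same information — can be sketched with a toy two-parameter logistic IRT model. The item parameters below are invented, and the grid-search MLE is a deliberately crude stand-in for the estimation methods an RCT analysis would actually use:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic IRT model (the Rasch model fixes all a = 1)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(responses, discr, diff, grid_step=0.01):
    """Crude grid-search maximum-likelihood ability estimate on [-4, 4]."""
    best_theta, best_ll = None, -float("inf")
    for k in range(int(8.0 / grid_step) + 1):
        theta = -4.0 + k * grid_step
        ll = 0.0
        for y, a, b in zip(responses, discr, diff):
            p = p_correct(theta, a, b)
            ll += math.log(p if y else 1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

# Hypothetical items of increasing difficulty b and discrimination a.
discr = [0.5, 1.0, 2.0]
diff = [-1.0, 0.0, 1.0]
easy_only = [1, 0, 0]   # sum score 1: solved only the easy item
hard_only = [0, 0, 1]   # sum score 1: solved only the hard, sharp item
theta_easy = mle_theta(easy_only, discr, diff)
theta_hard = mle_theta(hard_only, discr, diff)
```

Both patterns have the same sum score, yet the IRT ability estimates differ substantially — the information a sum-score analysis discards.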
Asthma - make peak flow a habit; Reactive airway disease - peak flow; Bronchial asthma - peak flow ... 2014:chap 55. National Asthma Education and Prevention Program website. How to use a peak flow meter. ...
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data
Models for randomly distributed nanoscopic domains on spherical vesicles
Anghel, Vinicius N. P.; Bolmatov, Dima; Katsaras, John
2018-06-01
The existence of lipid domains in the plasma membrane of biological systems has proven controversial, primarily due to their nanoscopic size—a length scale difficult to interrogate with most commonly used experimental techniques. Scattering techniques have recently proven capable of studying nanoscopic lipid domains populating spherical vesicles. However, the development of analytical methods capable of predicting and analyzing domain pair correlations from such experiments has not kept pace. Here, we developed models for the random distribution of monodisperse, circular nanoscopic domains averaged on the surface of a spherical vesicle. Specifically, the models take into account (i) intradomain correlations corresponding to form factors and interdomain correlations corresponding to pair distribution functions, and (ii) the analytical computation of interdomain correlations for cases of two and three domains on a spherical vesicle. In the case of more than three domains, these correlations are treated either by Monte Carlo simulations or by spherical analogs of the Ornstein-Zernike and Percus-Yevick (PY) equations. Importantly, the spherical analog of the PY equation works best in the case of nanoscopic size domains, a length scale that is mostly inaccessible by experimental approaches such as, for example, fluorescent techniques and optical microscopies. The analytical form factors and structure factors of nanoscopic domains populating a spherical vesicle provide a new and important framework for the quantitative analysis of experimental data from commonly studied phase-separated vesicles used in a wide range of biophysical studies.
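A minimal Monte Carlo sketch of the many-domain case: domain centres placed uniformly and independently on the unit sphere (ignoring the excluded-area effects the paper's models handle), with the geodesic angle as the pair separation:

```python
import math
import random

def random_unit_vector(rng):
    """Uniform point on the unit sphere via normalised Gaussians."""
    while True:
        v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-12:
            return tuple(x / n for x in v)

def angular_distance(u, v):
    """Great-circle (geodesic) angle between two domain centres."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

def pair_angle_samples(n_domains, n_configs, rng):
    """Monte Carlo samples of centre-to-centre angles for n_domains
    placed independently on a spherical vesicle."""
    angles = []
    for _ in range(n_configs):
        centres = [random_unit_vector(rng) for _ in range(n_domains)]
        for i in range(n_domains):
            for j in range(i + 1, n_domains):
                angles.append(angular_distance(centres[i], centres[j]))
    return angles

rng = random.Random(3)
angles = pair_angle_samples(3, 500, rng)
```

A histogram of these angles approximates the pair distribution function; the paper's spherical PY analog replaces this sampling with an analytical closure.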
Gordiyenko, G. I.; Yakovets, A. F.
2017-07-01
The ionospheric F2 peak parameters recorded by a ground-based ionosonde at the midlatitude station Alma-Ata [43.25N, 76.92E] were compared with those obtained using the latest version of the IRI model (http://omniweb.gsfc.nasa.gov/vitmo/iri2012_vitmo.html). It was found that for the Alma-Ata (Kazakhstan) location, the IRI2012 model describes well the morphology of seasonal and diurnal variations of the ionospheric critical frequency (foF2) and peak density height (hmF2) monthly medians. The model errors in the median foF2 prediction (percentage deviations between the median foF2 values and their model predictions) were found to vary approximately in the range from about -20% to 34% and showed a stable overestimation in the median foF2 values for daytime in January and July and underestimation for day- and nighttime hours in the equinoctial months. The comparison between the ionosonde hmF2 and IRI results clearly showed that the IRI overestimates the nighttime hmF2 values for March and September months, and the difference is up to 30 km. The daytime Alma-Ata hmF2 data were found to be close to the IRI predictions (deviations are approximately ±10-15 km) in winter and equinoctial months, except in July when the observed hmF2 values were much greater (by approximately 50-200 km). The comparison between the Alouette foF2 data and IRI predictions showed mixed results. In particular, the Alouette foF2 data showed a tendency to be overestimated for daytime in winter months similar to the ionosonde data; however, the overestimated foF2 values for nighttime in the autumn equinox were in disagreement with the ionosonde observations. There were large deviations between the observed hmF2 values and their model predictions. The largest deviations were found during winter and summer (up to -90 km). The comparison of the Alouette II electron density profiles with those predicted by the adapted IRI2012 model in the altitude range hmF2 of the satellite position showed a great
Automated asteroseismic peak detections
García Saravia Ortiz de Montellano, Andrés; Hekker, S.; Themeßl, N.
2018-05-01
Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible in a power density spectrum. Identification of oscillation modes is usually done by visual inspection that is time-consuming and has a degree of subjectivity. Here, we present a peak-detection algorithm especially suited for the detection of solar-like oscillations. It reliably characterizes the solar-like oscillations in a power density spectrum and estimates their parameters without human intervention. Furthermore, we provide a metric to characterize the false positive and false negative rates to provide further information about the reliability of a detected oscillation mode or the significance of a lack of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler.
Premium Pricing of Liability Insurance Using Random Sum Model
Kartikasari, Mujiati Dwi
2017-01-01
Premium pricing is one of important activities in insurance. Nonlife insurance premium is calculated from expected value of historical data claims. The historical data claims are collected so that it forms a sum of independent random number which is called random sum. In premium pricing using random sum, claim frequency distribution and claim severity distribution are combined. The combination of these distributions is called compound distribution. By using liability claim insurance data, we ...
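The random-sum (compound) construction can be sketched as follows; the Poisson frequency, exponential severity, and 20% loading are illustrative assumptions, not the paper's fitted distributions:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for a Poisson-distributed claim count N."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_aggregate_claims(lam, mean_severity, n_sims, rng):
    """Random-sum aggregate claims: S = X_1 + ... + X_N,
    with N ~ Poisson(lam) and X_i ~ Exponential(mean_severity)."""
    totals = []
    for _ in range(n_sims):
        n = poisson_sample(lam, rng)
        totals.append(sum(rng.expovariate(1.0 / mean_severity)
                          for _ in range(n)))
    return totals

rng = random.Random(42)
totals = simulate_aggregate_claims(lam=2.0, mean_severity=100.0,
                                   n_sims=20000, rng=rng)
pure_premium = sum(totals) / len(totals)   # estimate of E[S] = lam * E[X]
loaded_premium = 1.2 * pure_premium        # illustrative 20% safety loading
```

For a compound Poisson model, E[S] = λ·E[X] = 2 × 100 = 200, which the Monte Carlo estimate should approach.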
Critical Behavior of the Annealed Ising Model on Random Regular Graphs
Can, Van Hao
2017-11-01
In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors have defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a non-standard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of random regular graphs.
Annealed central limit theorems for the Ising model on random graphs
Giardinà, C.; Giberti, C.; van der Hofstad, R.W.; Prioriello, M.L.
2016-01-01
The aim of this paper is to prove central limit theorems with respect to the annealed measure for the magnetization rescaled by √N of Ising models on random graphs. More precisely, we consider the general rank-1 inhomogeneous random graph (or generalized random graph), the 2-regular configuration
Multilevel random effect and marginal models
African Journals Online (AJOL)
Multilevel random effect and marginal models for longitudinal data ... and random effect models that take the correlation among measurements of the same subject ... comparing the level of redness, pain and irritability ... clinical trial evaluating the safety profile of a new .... likelihood-based methods to compare models and.
International Nuclear Information System (INIS)
Anderson, Vitas
2003-01-01
The aim of this study is to examine the scale and significance of differences in peak specific energy absorption rate (SAR) in the brains of children and adults exposed to radiofrequency emissions from mobile phones. Estimates were obtained by the method of multipole analysis of a three-layered (scalp/cranium/brain) spherical head exposed to a nearby 0.4λ dipole at 900 MHz. A literature review of head parameters that influence SAR induction revealed strong indirect evidence, based on total body water content, that there are no substantive age-related changes in tissue conductivity after the first year of life. However, it was also found that the thickness of the ear, scalp and cranium decreases on average with decreasing age, though individual variability within any age group is very high. The model analyses revealed that compared to an average adult, the peak brain 10 g averaged SAR in mean 4, 8, 12 and 16 year olds (yo) is increased by a factor of 1.31, 1.23, 1.15 and 1.07, respectively. However, contrary to the expectations of a recent prominent expert review, the UK Stewart Report, the relatively small scale of these increases does not warrant any special precautionary measures for child mobile phone users since: (a) SAR testing protocols as contained in the CENELEC (2001) standard provide an additional safety margin which ensures that allowable localized SAR limits are not exceeded in the brain; (b) the maximum worst case brain temperature rise (∼0.13 to 0.14 degrees C for an average 4 yo) in child users of mobile phones is well within safe levels and normal physiological parameters; and (c) the range of age average increases in children is less than the expected range of variation seen within the adult population.
MODELING URBAN DYNAMICS USING RANDOM FOREST: IMPLEMENTING ROC AND TOC FOR MODEL EVALUATION
Directory of Open Access Journals (Sweden)
M. Ahmadlou
2016-06-01
The importance of spatial accuracy of land use/cover change maps necessitates the use of high-performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This originates from the strength of these techniques, which powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate the spatial patterns of urban change. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the curve of 82.48% for the ROC, which indicates very good performance and highlights its appropriateness for simulating urban growth.
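The ROC evaluation step can be illustrated with a small rank-based AUC computation (the labels and suitability scores below are invented, and this is not the paper's code):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the probability that a randomly chosen changed cell is scored above
    a randomly chosen unchanged one (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical suitability scores vs. observed urban change (1 = changed).
observed = [0, 0, 1, 0, 1, 1]
suitability = [0.1, 0.4, 0.35, 0.2, 0.8, 0.9]
auc = roc_auc(observed, suitability)
```

An AUC of 0.8248 as reported above means a changed cell outranks an unchanged one about 82% of the time; TOC extends this by also tracking the totals behind each threshold.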
PEAK SHAVING CONSIDERING STREAMFLOW UNCERTAINTIES
African Journals Online (AJOL)
user
The random nature of the system load is re-organized by using a Markov load model. The results include a ... has received a considerable attention among optimisation problems. ... the dynamic programming theory used in this work is given ...
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] apart from 0 indicates evidence of the variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.
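For the random intercept special case, the proportion of variance explained by the random effects can be sketched with moment-based one-way ANOVA estimators (a simplification; the paper derives the coefficient's exact formulas, which I do not reproduce here):

```python
import random

def random_intercept_r2(groups):
    """Moment-based estimate of sigma_u^2 / (sigma_u^2 + sigma_e^2)
    for a balanced random intercept model, using one-way ANOVA
    mean squares (assumes equal group sizes)."""
    k = len(groups)
    n = len(groups[0])
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    sigma_u2 = max(0.0, (msb - msw) / n)
    return sigma_u2 / (sigma_u2 + msw)

# Simulated data with true sigma_u = 2 and sigma_e = 1, so the true
# proportion is 4 / (4 + 1) = 0.8.
rng = random.Random(7)
groups = []
for _ in range(200):
    u = rng.gauss(0.0, 2.0)
    groups.append([u + rng.gauss(0.0, 1.0) for _ in range(10)])
r2 = random_intercept_r2(groups)
```

A value near 0.8 indicates strong random effects, as discussed above; near 0 the model would collapse to standard linear regression.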
Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope
Energy Technology Data Exchange (ETDEWEB)
Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)
2014-11-15
In order to improve the atomic spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weakly stable characteristics of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of random ASG drift is taken as the study object; it is acquired by analyzing and preprocessing the measured ASG data. The linear section of the model is established with the AR technique. The nonlinear section is then built with GP, and GA is used to optimize the coefficients of the mathematical expression obtained by GP in order to reach a more accurate model. The simulation results indicate that this hybrid model effectively reflects the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique alone, the random drift is reduced by a further 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.
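The AR "linear section" of such a hybrid model can be sketched with a least-squares AR(1) fit to a synthetic drift-like series (the actual model uses higher-order AR terms plus the GP/GA-optimized nonlinear part, which this omits):

```python
import random

def fit_ar1(series):
    """Least-squares estimate of the coefficient a in the AR(1) model
    y[t] = a * y[t-1] + e[t] — the linear section of a drift model."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# Synthetic drift-like series with true coefficient 0.8.
rng = random.Random(0)
y = [0.0]
for _ in range(2000):
    y.append(0.8 * y[-1] + rng.gauss(0.0, 0.1))
a_hat = fit_ar1(y)
# The residuals are what the GP + GA nonlinear section would then model.
residuals = [y[t] - a_hat * y[t - 1] for t in range(1, len(y))]
```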
International Nuclear Information System (INIS)
Courtin, E.; Grund, K.; Traub, S.; Zeeb, H.
1975-01-01
The peak-reading detector circuit detects the instants at which peaks of a given polarity occur in signal sequences whose extreme values, time intervals, and curve shapes vary. Such signal sequences appear when measuring the fetal heart rate from amplitude-modulated ultrasonic, electrocardiogram, and blood pressure signals. In order to prevent undesired emission of output signals from, e.g., disturbing intermediate extreme values, the circuit consists of the series connection of a circuit simulating an ideal diode, a storage unit, a discriminator for the direction of the charging current, a time-delay circuit, and an electronic switch lying in the discharging circuit of the storage unit. The time-delay circuit causes a preliminary maximum value to be stored and used for emission of the output signal only after a certain delay. If a larger extreme value occurs during the delay time, the preliminary maximum value is cleared and the delay time starts running anew. (DG/PB) [de
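The delayed-confirmation logic described above can be sketched in software terms (a simplified discrete-time model of the analog circuit, assuming positive-polarity peaks and a fixed sample-count delay):

```python
def confirmed_peaks(samples, hold):
    """Emit the index of a candidate maximum only after `hold` further
    samples arrive without exceeding it. A larger value clears the
    candidate and restarts the delay, so disturbing intermediate
    extremes never produce an output."""
    peaks = []
    candidate_value = float("-inf")
    candidate_index = None
    count = 0
    for i, v in enumerate(samples):
        if v > candidate_value:
            # New preliminary maximum: store it and restart the delay.
            candidate_value, candidate_index, count = v, i, 0
        else:
            count += 1
            if count == hold:
                # Delay elapsed with no larger value: confirm the peak.
                peaks.append(candidate_index)
                candidate_value, candidate_index, count = float("-inf"), None, 0
    return peaks
```

In the sequence `[0, 1, 3, 2, 2.5, 5, 1, 0, 0, 0]` with `hold=3`, the intermediate maximum 3 is superseded by 5 before its delay elapses, so only the peak at index 5 is reported.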
Palumbo, Ashley M.; Head, James W.; Wordsworth, Robin D.
2018-01-01
The nature of the Late Noachian climate of Mars remains one of the outstanding questions in the study of the evolution of martian geology and climate. Despite abundant evidence for flowing water (valley networks and open/closed basin lakes), climate models have had difficulties reproducing mean annual surface temperatures (MAT) > 273 K in order to generate the 'warm and wet' climate conditions presumed to be necessary to explain the observed fluvial and lacustrine features. Here, we consider a 'cold and icy' climate scenario, characterized by MAT ∼225 K and snow and ice distributed in the southern highlands, and ask: Does the formation of the fluvial and lacustrine features require continuous 'warm and wet' conditions, or could seasonal temperature variation in a 'cold and icy' climate produce sufficient summertime ice melting and surface runoff to account for the observed features? To address this question, we employ the 3D Laboratoire de Météorologie Dynamique global climate model (LMD GCM) for early Mars and (1) analyze peak annual temperature (PAT) maps to determine where on Mars temperatures exceed freezing in the summer season, (2) produce temperature time series at three valley network systems and compare the duration of the time during which temperatures exceed freezing with seasonal temperature variations in the Antarctic McMurdo Dry Valleys (MDV) where similar fluvial and lacustrine features are observed, and (3) perform a positive-degree-day analysis to determine the annual volume of meltwater produced through this mechanism, estimate the necessary duration that this process must repeat to produce sufficient meltwater for valley network formation, and estimate whether runoff rates predicted by this mechanism are comparable to those required to form the observed geomorphology of the valley networks. When considering an ambient CO2 atmosphere, characterized by MAT ∼225 K, we find that: (1) PAT can exceed the melting point of water (>273 K) in
Recent developments in exponential random graph (p*) models for social networks
Robins, Garry; Snijders, Tom; Wang, Peng; Handcock, Mark; Pattison, Philippa
This article reviews new specifications for exponential random graph models proposed by Snijders et al. [Snijders, T.A.B., Pattison, P., Robins, G.L., Handcock, M., 2006. New specifications for exponential random graph models. Sociological Methodology] and demonstrates their improvement over
Knighton, James; Steinschneider, Scott; Walter, M. Todd
2017-12-01
There is a chronic disconnection among purely probabilistic flood frequency analysis of flood hazards, flood risks, and hydrological flood mechanisms, which hampers our ability to assess future flood impacts. We present a vulnerability-based approach to estimating riverine flood risk that accommodates a more direct linkage between decision-relevant metrics of risk and the dominant mechanisms that cause riverine flooding. We adapt the conventional peaks-over-threshold (POT) framework to be used with extreme precipitation from different climate processes and rainfall-runoff-based model output. We quantify the probability that at least one adverse hydrologic threshold, potentially defined by stakeholders, will be exceeded within the next N years. This approach allows us to consider flood risk as the summation of risk from separate atmospheric mechanisms, and supports a more direct mapping between hazards and societal outcomes. We perform this analysis within a bottom-up framework to consider the relevance and consequences of information, with varying levels of credibility, on changes to atmospheric patterns driving extreme precipitation events. We demonstrate our proposed approach using a case study for Fall Creek in Ithaca, NY, USA, where we estimate the risk of stakeholder-defined flood metrics from three dominant mechanisms: summer convection, tropical cyclones, and spring rain and snowmelt. Using downscaled climate projections, we determine how flood risk associated with a subset of mechanisms may change in the future, and the resultant shift to annual flood risk. The flood risk approach we propose can provide powerful new insights into future flood threats.
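The aggregation step can be sketched as follows, assuming threshold exceedances from each mechanism arrive as independent Poisson processes so that rates add; the per-mechanism rates below are hypothetical, not the Fall Creek estimates.

```python
import math

# If exceedances from each flood mechanism are independent Poisson
# arrivals, the chance of at least one exceedance within N years follows
# from the summed rates. Rates here are illustrative placeholders.
rates = {
    "summer_convection": 0.05,   # exceedances per year
    "tropical_cyclone": 0.02,
    "rain_and_snowmelt": 0.03,
}

def prob_at_least_one(rates, n_years):
    """P(at least one threshold exceedance in n_years) = 1 - exp(-sum(rates) * n)."""
    total_rate = sum(rates.values())
    return 1.0 - math.exp(-total_rate * n_years)

p30 = prob_at_least_one(rates, 30)   # risk over a 30-year horizon
```

Changing one mechanism's rate (e.g. under a climate projection) and recomputing shows how a shift in a single driver propagates to the annual risk.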
International Nuclear Information System (INIS)
Kang, Li; Tang, Sanyi
2016-01-01
Highlights: • Discrete single-species and multiple-species models with random perturbation are proposed. • The complex dynamics and interesting bifurcation behaviour have been investigated. • The reverse effects of random perturbation on discrete systems have been discussed and revealed. • The main results can be applied to pest control and resource management. - Abstract: Natural species are likely to present several interesting and complex phenomena under random perturbations, as has been confirmed by simple mathematical models. The important questions are: how do random perturbations influence the dynamics of discrete population models with multiple steady states or multiple species interactions, and do random perturbations affect single-species and multiple-species models differently? To address these questions, we propose a discrete single-species model with two stable equilibria and a host-parasitoid model with Holling-type functional responses. The main results indicate that, compared with the classical Ricker model under the same random perturbation, the random perturbation does not change the number of blurred orbits of the single-species model with two stable steady states, but it can strengthen their stability. However, extensive numerical investigations show that the random perturbation does not influence the complexity of the host-parasitoid models compared with the unperturbed models, while it does double the period of periodic orbits. All of this confirms that the random perturbation has opposite effects on the dynamics of discrete single- and multiple-population models, which could be applied in practice, including pest control and resource management.
Bayesian analysis for exponential random graph models using the adaptive exchange sampler
Jin, Ick Hoon; Liang, Faming; Yuan, Ying
2013-01-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the existence of intractable normalizing constants. In this paper, we
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Square-lattice random Potts model: criticality and pitchfork bifurcation
International Nuclear Information System (INIS)
Costa, U.M.S.; Tsallis, C.
1983-01-01
Within a real-space renormalization group framework based on self-dual clusters, the criticality of the quenched bond-mixed q-state Potts ferromagnet on the square lattice is discussed. On qualitative grounds it is exhibited that the crossover from the pure fixed point to the random one occurs, as q increases, through a pitchfork bifurcation; the relationship with the Harris criterion is analyzed. On quantitative grounds, high-precision numerical values are presented for the critical temperatures corresponding to various concentrations of the coupling constants J1 and J2, and various ratios J1/J2. The pure, random and crossover critical exponents are discussed as well. (Author)
Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows
DEFF Research Database (Denmark)
Karacaören, Burak; Janss, Luc; Kadarmideen, Haja
Data were obtained from dairy cows stationed at the ETH Zurich research farm for maximum milking speed. The main aims of this paper are (a) to evaluate whether the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions than the random walk model. However, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications, the random walk model could give online prediction of breeding values; hence, without waiting for whole-lactation records, genetic evaluation could be made as daily or monthly data become available.
Multiscale peak detection in wavelet space.
Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng
2015-12-07
Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, which can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD) by taking full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It can achieve a high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset as well as the Romanian database of Raman spectra; MSPD is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than the MassSpecWavelet and MALDIquant methods. Superior results on Raman spectra suggest that MSPD is a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at .
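A simplified sketch of the multiscale idea, keeping only maxima that persist across smoothing scales. This is a stand-in using box smoothing, not the CWT ridge/valley/zero-crossing machinery of MSPD itself; the widths and tolerance are illustrative.

```python
import numpy as np

def local_maxima(y):
    """Indices of strict interior local maxima."""
    return np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1

def multiscale_peaks(signal, widths=(3, 5, 9), tol=2):
    """Keep maxima that persist (within +/- tol samples) across all scales."""
    scale_maxima = []
    for w in widths:
        kernel = np.ones(w) / w                    # box smoother at this scale
        smooth = np.convolve(signal, kernel, mode="same")
        scale_maxima.append(local_maxima(smooth))
    peaks = []
    for p in scale_maxima[0]:
        if all(np.any(np.abs(m - p) <= tol) for m in scale_maxima[1:]):
            peaks.append(int(p))
    return np.array(peaks)

# Two Gaussian peaks plus noise as a toy test signal
x = np.arange(200)
clean = np.exp(-0.5 * ((x - 60) / 4) ** 2) + 0.8 * np.exp(-0.5 * ((x - 140) / 5) ** 2)
rng = np.random.default_rng(1)
noisy = clean + 0.05 * rng.standard_normal(x.size)
found = multiscale_peaks(noisy)
```

Requiring persistence across scales plays the same role as following ridges in wavelet space: noise maxima drift or vanish as the scale grows, while genuine peaks stay put.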
Directory of Open Access Journals (Sweden)
Matthias An der Heiden
BACKGROUND: On June 11, 2009, the World Health Organization declared phase 6 of the novel influenza A/H1N1 pandemic. Although by the end of September 2009, the novel virus had been reported from all continents, the impact in most countries of the northern hemisphere has been limited. The return of the virus in a second wave would encounter populations that are still non-immune and not yet vaccinated. We modelled the effect of control strategies to reduce the spread, with the goal of deferring the epidemic wave in a country where it is detected at a very early stage. METHODOLOGY/PRINCIPAL FINDINGS: We constructed a deterministic SEIR model using the age distribution and size of the population of Germany, based on the observed number of imported cases and the early findings for the epidemiologic characteristics described by Fraser (Science, 2009). We propose a two-step control strategy with an initial effort to trace, quarantine, and selectively give prophylactic treatment to contacts of the first 100 to 500 cases. In the second step, the same measures are focused on the households of the next 5,000 to 10,000 cases. As a result, the peak of the epidemic could be delayed up to 7.6 weeks if up to 30% of cases are detected. However, the cumulative attack rates would not change. Necessary doses of antivirals would be less than the number of treatment courses for 0.1% of the population. In a sensitivity analysis, both the case detection rate and the variation of R0 have major effects on the resulting delay. CONCLUSIONS/SIGNIFICANCE: Control strategies that reduce the spread of the disease during the early phase of a pandemic wave may lead to a substantial delay of the epidemic. Since prophylactic treatment is only offered to the contacts of the first 10,000 cases, the amount of antivirals needed is still very limited.
Random Modeling of Daily Rainfall and Runoff Using a Seasonal Model and Wavelet Denoising
Directory of Open Access Journals (Sweden)
Chien-ming Chou
2014-01-01
Instead of Fourier smoothing, this study applied wavelet denoising to acquire the smooth seasonal mean and corresponding perturbation term from daily rainfall and runoff data in traditional seasonal models, which use seasonal means for hydrological time series forecasting. The denoised rainfall and runoff time series data were regarded as the smooth seasonal mean. The probability distribution of the percentage coefficients can be obtained from calibrated daily rainfall and runoff data. For validated daily rainfall and runoff data, percentage coefficients were randomly generated according to the probability distribution and the law of linear proportion. Multiplying the generated percentage coefficient by the smooth seasonal mean resulted in the corresponding perturbation term. Random modeling of daily rainfall and runoff can be obtained by adding the perturbation term to the smooth seasonal mean. To verify the accuracy of the proposed method, daily rainfall and runoff data for the Wu-Tu watershed were analyzed. The analytical results demonstrate that wavelet denoising enhances the precision of daily rainfall and runoff modeling of the seasonal model. In addition, the wavelet denoising technique proposed in this study can obtain the smooth seasonal mean of rainfall and runoff processes and is suitable for modeling actual daily rainfall and runoff processes.
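The denoising step can be illustrated with a one-level Haar wavelet soft threshold. This is a minimal sketch under assumed choices (Haar basis, single level, threshold of 1.0); the study's actual wavelet and threshold are not specified here.

```python
import numpy as np

def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising: soft-threshold the detail coefficients."""
    s = np.asarray(signal, dtype=float)
    if s.size % 2:                      # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # smooth (approximation) part
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # fluctuation (detail) part
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)  # inverse transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out[:len(signal)]

# Seasonal mean plus noise, mimicking a daily hydrological series
t = np.arange(365)
seasonal = 5.0 + 3.0 * np.sin(2 * np.pi * t / 365)
rng = np.random.default_rng(2)
noisy = seasonal + rng.standard_normal(t.size)
denoised = haar_denoise(noisy, threshold=1.0)
```

The denoised series plays the role of the smooth seasonal mean, and the residual `noisy - denoised` plays the role of the perturbation term.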
A queueing model with randomized depletion of inventory
Albrecher, H.-J.; Boxma, O.J.; Essifi, R.; Kuijstermans, A.C.M.
2015-01-01
In this paper we study an M/M/1 queue, where the server continues to work during idle periods and builds up inventory. This inventory is used for new arriving service requirements, but it is completely emptied at random epochs of a Poisson process, whose rate depends on the current level of the
Positive random fields for modeling material stiffness and compliance
DEFF Research Database (Denmark)
Hasofer, Abraham Michael; Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob
1998-01-01
Positive random fields with known marginal properties and known correlation function are not numerous in the literature. The most prominent example is the lognormal field, for which the complete distribution is known and for which the reciprocal field is also lognormal. It is of interest to supp...
International Nuclear Information System (INIS)
Arellano, M. Soledad; Serra, Pablo
2007-01-01
This article extends the traditional electricity peak-load pricing model to include transmission costs. In the context of a two-node, two-technology electric power system, where suppliers face inelastic demand, we show that when the marginal plant is located at the energy-importing center, generators located away from that center should pay the marginal capacity transmission cost; otherwise, consumers should bear this cost through capacity payments. Since electric power transmission is a natural monopoly, marginal-cost pricing does not fully cover costs. We propose distributing the revenue deficit among users in proportion to the surplus they derive from the service priced at marginal cost. (Author)
Studies in astronomical time series analysis: Modeling random processes in the time domain
Scargle, J. D.
1979-01-01
Random process models phased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
Activated aging dynamics and effective trap model description in the random energy model
Baity-Jesi, M.; Biroli, G.; Cammarota, C.
2018-01-01
We study the out-of-equilibrium aging dynamics of the random energy model (REM) ruled by a single spin-flip Metropolis dynamics. We focus on the dynamical evolution taking place on time-scales diverging with the system size. Our aim is to show to what extent the activated dynamics displayed by the REM can be described in terms of an effective trap model. We identify two time regimes: the first one corresponds to the process of escaping from a basin in the energy landscape and to the subsequent exploration of high energy configurations, whereas the second one corresponds to the evolution from one deep basin to another. By combining numerical simulations with analytical arguments we show why the trap model description does not hold in the first regime but becomes exact in the second.
Numerical Simulation of Entropy Growth for a Nonlinear Evolutionary Model of Random Markets
Directory of Open Access Journals (Sweden)
Mahdi Keshtkar
2016-01-01
In this communication, the generalized continuous economic model for random markets is revisited. In this model, agents trade in pairs and exchange their money in a random and conservative way. They display the exponential wealth distribution as asymptotic equilibrium, independently of the effectiveness of the transactions and of the limitation of the total wealth. In the current work, the entropy of this model is defined and some theorems on the entropy growth of this evolutionary problem are given. Furthermore, the entropy increase is verified by simulation on several numerical examples.
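A minimal simulation of the pairwise conservative exchange, assuming the common uniform-split trading rule (each pair pools its money and splits it at a uniformly random fraction); the agent and trade counts are illustrative.

```python
import numpy as np

# Conservative random-exchange market: total wealth is conserved trade by
# trade, and the wealth distribution relaxes toward an exponential
# (Boltzmann-Gibbs-like) equilibrium, so the binned entropy grows.
rng = np.random.default_rng(3)
n_agents, n_trades = 1000, 100_000
wealth = np.ones(n_agents)              # everyone starts with one unit

for _ in range(n_trades):
    i, j = rng.integers(n_agents), rng.integers(n_agents)
    if i == j:
        continue
    pool = wealth[i] + wealth[j]
    eps = rng.random()                  # uniformly random split of the pool
    wealth[i], wealth[j] = eps * pool, (1.0 - eps) * pool

def shannon_entropy(w, bins=50):
    """Entropy of the binned wealth distribution."""
    counts, _ = np.histogram(w, bins=bins)
    p = counts[counts > 0] / w.size
    return float(-np.sum(p * np.log(p)))
```

Starting from the zero-entropy delta distribution (all agents equal), the binned entropy after equilibration is strictly positive, matching the entropy-growth theorems discussed above.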
Wang, Wei; Griswold, Michael E
2016-11-30
The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
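The Gauss-Hermite quadrature step used to integrate over a normal random effect can be sketched as follows. The lognormal check and the logistic link are illustrations of the marginalization machinery, not the paper's Tobit likelihood.

```python
import numpy as np

def marginalize(f, sigma, n_nodes=30):
    """Approximate E[f(b)] for b ~ N(0, sigma^2) with Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    # Change of variables b = sqrt(2) * sigma * node turns the Gaussian
    # expectation into the weighted sum of the Hermite rule.
    return float(np.sum(weights * f(np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi))

sigma = 0.7
approx = marginalize(np.exp, sigma)        # E[exp(b)], checkable in closed form
exact = float(np.exp(sigma ** 2 / 2.0))    # lognormal mean: exp(sigma^2 / 2)

# The same machinery marginalizes a nonlinear mean, e.g. a logistic link
# with a hypothetical fixed effect of 0.5:
marginal_mean = marginalize(lambda b: 1.0 / (1.0 + np.exp(-(0.5 + b))), sigma)
```

The lognormal case has a closed form, so it serves as a quick accuracy check on the quadrature before applying it to link functions without one.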
van Kasteren, T.L.M.; Noulas, A.K.; Kröse, B.J.A.; Smit, G.J.M.; Epema, D.H.J.; Lew, M.S.
2008-01-01
Conditional Random Fields are a discriminative probabilistic model which recently gained popularity in applications that require modeling nonindependent observation sequences. In this work, we present the basic advantages of this model over generative models and argue about its suitability in the
Large Deviations for the Annealed Ising Model on Inhomogeneous Random Graphs: Spins and Degrees
Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; Hofstad, Remco van der
2018-04-01
We prove a large deviations principle for the total spin and the number of edges under the annealed Ising measure on generalized random graphs. We also give detailed results on how the annealing over the Ising model changes the degrees of the vertices in the graph and show how it gives rise to interesting correlated random graphs.
DEFF Research Database (Denmark)
Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne
2014-01-01
Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI...
The Random Walk Model Based on Bipartite Network
Directory of Open Access Journals (Sweden)
Zhang Man-Dun
2016-01-01
With the continuing development of electronic commerce and the growth of network information, citizens are increasingly likely to be overwhelmed by information. Though traditional information retrieval technology can relieve the information overload to some extent, it cannot offer a targeted, personalized service based on users' interests and activities. In this context, recommendation algorithms arose. In this paper, building on conventional recommendation, we study the scheme of random walks on a bipartite network and its application. We put forward a similarity measurement based on implicit feedback, in which an uneven character vector (the weight of an item in the system) is introduced. We propose an improved random walk pattern which makes use of partial or incomplete neighbor information to create recommendations. Finally, an experiment on a real data set shows that recommendation accuracy and practicality are improved, confirming the validity of the experimental results.
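A sketch of two-step random-walk (mass-diffusion) scoring on a toy bipartite user-item matrix. The paper's weighted similarity and partial-neighbor refinements are not reproduced; this shows only the baseline walk that they build on.

```python
import numpy as np

# Toy user-item adjacency matrix: rows are users, columns are items.
A = np.array([
    [1, 1, 0, 0],   # user 0 collected items 0 and 1
    [1, 0, 1, 0],   # user 1
    [0, 1, 1, 1],   # user 2
], dtype=float)

def recommend_scores(A, user):
    """Two-step random-walk (mass diffusion) scores for one user's unseen items."""
    item_deg = A.sum(axis=0)            # how many users hold each item
    user_deg = A.sum(axis=1)            # how many items each user holds
    resource = A[user].copy()           # unit resource on the user's items
    # Step 1: each item spreads its resource equally to the users holding it
    to_users = A @ (resource / item_deg)
    # Step 2: each user spreads what it received back to its items
    scores = A.T @ (to_users / user_deg)
    scores[A[user] > 0] = 0.0           # ignore items already collected
    return scores

scores = recommend_scores(A, user=0)
best = int(np.argmax(scores))           # top recommendation for user 0
```

For user 0, item 2 is reachable through two co-consumers (users 1 and 2) while item 3 only through one, so item 2 scores higher.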
Numerical modelling of random walk one-dimensional diffusion
International Nuclear Information System (INIS)
Vamos, C.; Suciu, N.; Peculea, M.
1996-01-01
The evolution of a particle which moves on a discrete one-dimensional lattice according to a random walk law approximates the diffusion process better the smaller the steps of the spatial lattice and of time are. For a sufficiently large assembly of particles one can assume that their relative frequency at lattice knots approximates the distribution function of the diffusion process. This assumption has been tested by simulating on computer two analytical solutions of the diffusion equation: the Brownian motion and the steady-state linear distribution. To evaluate quantitatively the similarity between the numerical and analytical solutions we have used a norm given by the absolute value of the difference of the two solutions. Also, a diffusion coefficient at each lattice knot and moment of time has been calculated, using the numerical solution both from the diffusion equation and from the particle flux given by Fick's law. The difference between the diffusion coefficient of the analytical solution and the spatial-lattice mean coefficient of the numerical solution constitutes another quantitative indication of the similarity of the two solutions. The results obtained show that the approximation depends first on the number of particles at each knot of the spatial lattice. In conclusion, the random walk is a microscopic process of the molecular dynamics type which permits simulation of diffusion processes with a given precision. The numerical method presented in this work may be useful both in the analysis of real experiments and for theoretical studies.
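A minimal sketch of the walker ensemble, checking the mean squared displacement against the diffusion prediction 2Dt with D = dx^2/(2 dt); with unit space and time steps, D = 0.5 and the MSD after n steps should be close to n. Walker and step counts are illustrative.

```python
import numpy as np

# Many independent walkers take unit steps left or right; their empirical
# mean squared displacement (MSD) approximates the diffusive law 2*D*t.
rng = np.random.default_rng(4)
n_walkers, n_steps = 20_000, 400

steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
positions = steps.cumsum(axis=1)        # trajectory of every walker

msd = float(np.mean(positions[:, -1] ** 2))   # should be close to n_steps
```

As the abstract notes, the quality of the approximation is governed by the ensemble size: the relative error of the MSD estimate shrinks roughly as the inverse square root of the number of walkers.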
Leumann, Andre; Fortuna, Rafael; Leonard, Tim; Valderrabano, Victor; Herzog, Walter
2015-01-01
The menisci are thought to modulate load transfer and to absorb shocks in the knee joint. No study has experimentally measured the meniscal functions in the intact, in vivo joint loaded by physiologically relevant muscular contractions. Right knee joints of seven New Zealand white rabbits were loaded using isometric contractions of the quadriceps femoris muscles controlled by femoral nerve stimulation. Isometric knee extensor torques at the maximal and two submaximal force levels were performed at knee angles of 70°, 90°, 110°, and 130°. Patellofemoral and tibiofemoral contact areas and pressure distributions were measured using Fuji Presensor film inserted above and below the menisci and also with the menisci removed. Meniscectomy was associated with a decrease in tibiofemoral contact area ranging from 30 to 70% and a corresponding increase in average contact pressures. Contact areas measured below the menisci were consistently larger than those measured on top of the menisci. Contact areas in the patellofemoral joint (PFJ), and peak pressures in tibiofemoral and PFJs, were not affected by meniscectomy. Contact areas and peak pressures in all joints depended crucially on knee joint angle and quadriceps force: The more flexed the knee joint was, the larger were the contact areas and the higher were the peak pressures. In agreement with the literature, removal of the menisci was associated with significant decreases in tibiofemoral contact area and corresponding increases in average contact pressures, but surprisingly, peak pressures remained unaffected, indicating that the function of the menisci is to distribute loads across a greater contact area.
DEFF Research Database (Denmark)
Kaplan, Sigal; Prato, Carlo Giacomo
2012-01-01
This study explores the plausibility of regret minimization as behavioral paradigm underlying the choice of crash avoidance maneuvers. Alternatively to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers ...
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains depending on various peak features from several models. However, there is no study that provides the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate from the results in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
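A minimal standard-PSO kernel on a toy objective. The RA-PSO variant and the EEG feature-selection objective are not reproduced here; the sphere function and all hyperparameters are stand-ins to show the optimizer's update rule.

```python
import numpy as np

rng = np.random.default_rng(5)

def pso(objective, dim=5, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO: inertia plus pulls toward personal and global bests."""
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

best_x, best_val = pso(lambda x: np.sum(x ** 2))   # toy sphere objective
```

For feature selection, the same kernel is typically applied to binary or thresholded positions, with the objective replaced by cross-validated classifier accuracy.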
International Nuclear Information System (INIS)
Lutz, Christian; Lehr, Ulrike; Wiebe, Kirsten S.
2012-01-01
Assuming that global oil production has peaked, this paper uses scenario analysis to show the economic effects of a possible supply shortage and corresponding rise in oil prices in the next decade on different sectors in Germany and other major economies such as the US, Japan, China, the OPEC or Russia. Due to the price-inelasticity of oil demand the supply shortage leads to a sharp increase in oil prices in the second scenario, with high effects on GDP comparable to the magnitude of the global financial crisis in 2008/09. Oil exporting countries benefit from high oil prices, whereas oil importing countries are negatively affected. Generally, the effects in the third scenario are significantly smaller than in the second, showing that energy efficiency measures and the switch to renewable energy sources decrease the countries' dependence on oil imports and hence reduce their vulnerability to oil price shocks on the world market. - Highlights: ► National and sectoral economic effects of peak oil until 2020 are modelled. ► The price elasticity of oil demand is low, resulting in high price fluctuations. ► Oil shortage strongly affects transport and indirectly all other sectors. ► Global macroeconomic effects are comparable to the 2008/2009 crisis. ► Country effects depend on oil imports and productivity, and economic structures.
Parametric level correlations in random-matrix models
International Nuclear Information System (INIS)
Weidenmueller, Hans A
2005-01-01
We show that parametric level correlations in random-matrix theories are closely related to a breaking of the symmetry between the advanced and the retarded Green functions. The form of the parametric level correlation function is the same as for the disordered case considered earlier by Simons and Altshuler and is given by the graded trace of the commutator of the saddle-point solution with the particular matrix that describes the symmetry breaking in the actual case of interest. The strength factor differs from the case of disorder. It is determined solely by the Goldstone mode. It is essentially given by the number of levels that are strongly mixed as the external parameter changes. The factor can easily be estimated in applications
Directory of Open Access Journals (Sweden)
Hongqiang Liu
2016-06-01
A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify which model best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that should warrant more attention and be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.
DEFF Research Database (Denmark)
Ruban, Andrei; Simak, S.I.; Shallcross, S.
2003-01-01
We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys, as well as the dilute limit of Au-rich CuAu alloys, shows that the model yields a quantitatively accurate description of the relaxation energies in these systems. Finally, we discuss the bond length distribution in random alloys.
A dynamic random effects multinomial logit model of household car ownership
DEFF Research Database (Denmark)
Bue Bjørner, Thomas; Leth-Petersen, Søren
2007-01-01
Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed both to true state dependence and to unobserved heterogeneity (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated.
A spatial error model with continuous random effects and an application to growth convergence
Laurini, Márcio Poletti
2017-10-01
We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β -convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
Zero-inflated count models for longitudinal measurements with heterogeneous random effects.
Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M
2017-08-01
Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
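The zero-inflation mechanism itself is easy to illustrate: a structural zero occurs with some probability, otherwise an ordinary Poisson count is drawn. A toy simulation (parameter values are arbitrary, and this sketch omits the random effects):

```python
import math
import random

def sample_zip(pi_zero, lam, rng):
    """Zero-inflated Poisson: a structural zero with probability pi_zero,
    otherwise a Poisson(lam) count (Knuth's inversion sampler)."""
    if rng.random() < pi_zero:
        return 0
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
pi_zero, lam = 0.3, 2.0
counts = [sample_zip(pi_zero, lam, rng) for _ in range(20000)]
zero_frac = counts.count(0) / len(counts)
poisson_zero = math.exp(-lam)                       # zero probability without inflation
expected_zero = pi_zero + (1 - pi_zero) * poisson_zero
```

The observed zero fraction exceeds what a plain Poisson model can produce, which is exactly the excess-zero feature these models target.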
A simple model of global cascades on random networks
Watts, Duncan J.
2002-04-01
The origin of large but rare cascades that are triggered by small initial shocks is a phenomenon that manifests itself as diversely as cultural fads, collective action, the diffusion of norms and innovations, and cascading failures in infrastructure and organizational networks. This paper presents a possible explanation of this phenomenon in terms of a sparse, random network of interacting agents whose decisions are determined by the actions of their neighbors according to a simple threshold rule. Two regimes are identified in which the network is susceptible to very large cascades (herein called global cascades) that occur very rarely. When cascade propagation is limited by the connectivity of the network, a power law distribution of cascade sizes is observed, analogous to the cluster size distribution in standard percolation theory and avalanches in self-organized criticality. But when the network is highly connected, cascade propagation is limited instead by the local stability of the nodes themselves, and the size distribution of cascades is bimodal, implying a more extreme kind of instability that is correspondingly harder to anticipate. In the first regime, where the distribution of network neighbors is highly skewed, it is found that the most connected nodes are far more likely than average nodes to trigger cascades, but not in the second regime. Finally, it is shown that heterogeneity plays an ambiguous role in determining a system's stability: increasingly heterogeneous thresholds make the system more vulnerable to global cascades; but an increasingly heterogeneous degree distribution makes it less vulnerable.
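The threshold rule can be sketched in a few lines. The graph size, mean degree, and threshold values below are illustrative choices, not the paper's; the qualitative contrast (low thresholds permit global cascades, high thresholds confine the shock) is the point:

```python
import random

def cascade_size(n, z, phi, seed_node=0, rng=None):
    """Threshold cascade on an Erdos-Renyi graph with mean degree z: an
    inactive node activates once the active fraction of its neighbours
    exceeds the threshold phi (Watts's rule)."""
    rng = rng or random.Random(0)
    p = z / (n - 1)
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    active = {seed_node}
    changed = True
    while changed:                       # iterate the rule to a fixed point
        changed = False
        for v in range(n):
            if v in active or not nbrs[v]:
                continue
            if sum(u in active for u in nbrs[v]) / len(nbrs[v]) > phi:
                active.add(v)
                changed = True
    return len(active)

# Low thresholds allow a single seed to trigger a global cascade;
# high thresholds confine the shock to the seed's immediate vicinity.
big = max(cascade_size(300, 4, 0.05, seed_node=s, rng=random.Random(1)) for s in range(5))
small = cascade_size(300, 4, 0.60, rng=random.Random(1))
```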
International Nuclear Information System (INIS)
Ziegler, W. H.; Campbell, C. J.; Zagar, J.J.
2009-01-01
Oil and gas were formed under exceptional conditions in the geological past, meaning that they are subject to natural depletion, such that the past growth in production must give way to decline. Although depletion is a simple concept to grasp, public data on the resource base are extremely unreliable due to ambiguous definitions and lax reporting. The oil industry is reluctant to admit to an onset of decline carrying obvious adverse financial consequences. There are several different categories of oil and gas, from tar sands to deep water fields, each with specific characteristics that need to be evaluated. It is important to build a global model on a country by country basis in order that anomalous statistics may be identified and evaluated. Such a study suggests that the world faces the onset of decline, with far-reaching consequences given the central role of oil-based energy. It is accordingly an important subject deserving detailed consideration by policy makers. (author)
A cellular automata model of traffic flow with variable probability of randomization
International Nuclear Information System (INIS)
Zheng Wei-Fan; Zhang Ji-Ye
2015-01-01
Research on the stochastic behavior of traffic flow is important to understand the intrinsic evolution rules of a traffic system. By introducing an interactional potential of vehicles into the randomization step, an improved cellular automata traffic flow model with variable probability of randomization is proposed in this paper. In the proposed model, the driver is affected by the interactional potential of the vehicles before him, and his decision-making process is related to that potential. Compared with the traditional cellular automata model, the proposed model better captures the driver's random decision-making, which in actual traffic depends on the vehicles and traffic conditions ahead. From the improved model, the fundamental diagram (flow-density relationship) is obtained, and the detailed high-density traffic phenomenon is reproduced through numerical simulation. (paper)
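A Nagel-Schreckenberg-style sketch of the idea: the randomization probability is no longer constant but rises as the gap to the car ahead shrinks, standing in for the "interactional potential" of nearby vehicles. The functional form of p(gap) below is our own assumption, chosen only to make p increase as the headway decreases:

```python
import random

def nasch_step(pos, vel, L, vmax=5, p0=0.25, rng=random):
    """One parallel update of a Nagel-Schreckenberg ring road in which the
    randomization probability depends on the gap to the car ahead."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % L
        v = min(vel[i] + 1, vmax, gap)                # accelerate, then brake to the gap
        p = min(1.0, p0 * (1.0 + 2.0 / (gap + 1)))    # variable randomization probability
        if v > 0 and rng.random() < p:
            v -= 1                                    # random slowdown
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % L
    return new_pos, new_vel

rng = random.Random(9)
L = 100
pos = list(range(0, L, 5))        # 20 cars, evenly spaced on the ring
vel = [0] * len(pos)
for _ in range(50):
    pos, vel = nasch_step(pos, vel, L, rng=rng)
```

Since each car brakes to its gap before moving, the parallel update is collision-free, as in the standard model.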
Comparisons of methods for calculating retention and separation of chromatographic peaks
International Nuclear Information System (INIS)
Pauls, R.E.; Rogers, L.B.
1976-09-01
The accuracy and precision of calculating retention times from means and peak maxima have been examined using an exponentially modified Gaussian as a model for tailed chromatographic peaks. At different levels of random noise, retention times could be determined with nearly the same precision using either the mean or maximum. However, the accuracies and precisions of the maxima were affected by the number of points used in the digital smooth and by the number of points recorded per unit of standard deviation. For two peaks of similar shape, consistency in the selection of points should usually permit differences in retention to be determined accurately and with approximately the same precision using maxima, means, or half-heights on the leading side of the peak.
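The exponentially modified Gaussian (EMG) peak model can be written down directly; the parameter values below are arbitrary. For a tailed peak the first moment (the mean, at μ + τ) sits behind the peak maximum, which is why the two retention-time estimates differ:

```python
import math

def emg(t, mu=10.0, sigma=1.0, tau=2.0):
    """Exponentially modified Gaussian, a standard model for tailed peaks."""
    z = (sigma / tau - (t - mu) / sigma) / math.sqrt(2.0)
    return (0.5 / tau) * math.exp(sigma**2 / (2.0 * tau**2) - (t - mu) / tau) * math.erfc(z)

dt = 0.01
ts = [i * dt for i in range(3000)]               # time axis 0 .. 30
ys = [emg(t) for t in ts]
area = sum(ys) * dt
mean_t = sum(t * y for t, y in zip(ts, ys)) * dt / area     # retention time from the mean
max_t = ts[max(range(len(ys)), key=ys.__getitem__)]         # retention time from the maximum
```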
International Nuclear Information System (INIS)
Rios, Paulo R; Assis, Weslley L S; Ribeiro, Tatiana C S; Villa, Elena
2012-01-01
In a classical paper, Cahn derived expressions for the kinetics of transformations nucleated on random planes and lines. He used those as a model for nucleation on the boundaries, edges and vertices of a polycrystal consisting of equiaxed grains. In this paper it is demonstrated that Cahn's expression for random planes may be used in situations beyond the scope envisaged in Cahn's original paper. For instance, we derived an expression for the kinetics of transformations nucleated on random parallel planes that is identical to that formerly obtained by Cahn considering random planes. Computer simulation of transformations nucleated on random parallel planes is carried out. It is shown that there is excellent agreement between simulated results and analytical solutions. Such an agreement is to be expected if both the simulation and the analytical solution are correct. (paper)
Directory of Open Access Journals (Sweden)
Gabriel Recchia
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
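Both binding operators can be demonstrated on random vectors. A minimal sketch (the dimensionality and similarity thresholds are illustrative): circular convolution binds a pair and circular correlation approximately decodes it, while permutation binding applies fixed random permutations to the two items and decoding inverts one of them.

```python
import math
import random

rng = random.Random(1)
n = 256

def rand_vec():
    """Random vector with expected unit length (elements ~ N(0, 1/n))."""
    return [rng.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]

def cconv(a, b):
    """Circular convolution: the binding operator of holographic representations."""
    return [sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)]

def ccorr(a, c):
    """Circular correlation: the approximate inverse used for decoding."""
    return [sum(a[k] * c[(i + k) % n] for k in range(n)) for i in range(n)]

def cos(u, v):
    return sum(x * y for x, y in zip(u, v)) / math.sqrt(
        sum(x * x for x in u) * sum(y * y for y in v))

a, b, unrelated = rand_vec(), rand_vec(), rand_vec()

# Convolution binding: decode b from the bound pair given the cue a.
decoded_conv = ccorr(a, cconv(a, b))

# Permutation binding: store pi_a(a) + pi_b(b); decode b by inverting pi_b.
pi_a = rng.sample(range(n), n)
pi_b = rng.sample(range(n), n)
trace = [a[pi_a[i]] + b[pi_b[i]] for i in range(n)]
inv_b = [0] * n
for i, p in enumerate(pi_b):
    inv_b[p] = i
decoded_perm = [trace[inv_b[j]] for j in range(n)]   # = b plus permuted-a noise
```

In both cases the decoded vector is far more similar to the stored item than to an unrelated vector, which is the property the comparisons in the paper quantify at scale.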
Statistics of peaks in cosmological nonlinear density fields
International Nuclear Information System (INIS)
Suginohara, Tatsushi; Suto, Yasushi.
1990-06-01
Distribution of the high-density peaks in the universe is examined using N-body simulations. Nonlinear evolution of the underlying density field significantly changes the statistical properties of the peaks, compared with the analytic results valid for the random Gaussian field. In particular, the abundances and correlations of the initial density peaks are discussed in the context of biased galaxy formation theory. (author)
Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E
2014-06-01
Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.
Random fluid limit of an overloaded polling model
M. Frolkova (Masha); S.G. Foss (Sergey); A.P. Zwart (Bert)
2014-01-01
In the present paper, we study the evolution of an overloaded cyclic polling model that starts empty. Exploiting a connection with multitype branching processes, we derive fluid asymptotics for the joint queue length process. Under passage to the fluid dynamics, the server switches
Random fluid limit of an overloaded polling model
M. Frolkova (Masha); S.G. Foss (Sergey); A.P. Zwart (Bert)
2013-01-01
In the present paper, we study the evolution of an overloaded cyclic polling model that starts empty. Exploiting a connection with multitype branching processes, we derive fluid asymptotics for the joint queue length process. Under passage to the fluid dynamics, the server switches
Multilevel random effect and marginal models for longitudinal data ...
African Journals Online (AJOL)
The models were applied to data obtained from a phase-III clinical trial on a new meningococcal vaccine. The goal is to investigate whether children injected by the candidate vaccine have a lower or higher risk for the occurrence of specific adverse events than children injected with licensed vaccine, and if so, to quantify the ...
Susceptibility and magnetization of a random Ising model
Energy Technology Data Exchange (ETDEWEB)
Kumar, D; Srivastava, V [Roorkee Univ. (India). Dept. of Physics
1977-08-01
The susceptibility of a bond disordered Ising model is calculated by configurationally averaging an Ornstein-Zernike type of equation for the two spin correlation function. The equation for the correlation function is derived using a diagrammatic method due to Englert. The averaging is performed using bond CPA. The magnetization is also calculated by averaging in a similar manner a linearised molecular field equation.
Spectra of Anderson type models with decaying randomness
Indian Academy of Sciences (India)
Our models include potentials decaying in all directions in which case ..... the free operators with some uniform bounds of low moments of the measure µ weighted ..... We have the following inequality coming out of Cauchy–Schwarz and Fubini, ... The required statement on the limit follows if we now show that the quantity in ...
Restoration of dimensional reduction in the random-field Ising model at five dimensions
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D -2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D =5 . We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D ≤ 5 with those of the pure Ising ferromagnet at D - 2 dimensions, finding equality at all studied dimensions.
A single-level random-effects cross-lagged panel model for longitudinal mediation analysis.
Wu, Wei; Carroll, Ian A; Chen, Po-Yi
2017-12-06
Cross-lagged panel models (CLPMs) are widely used to test mediation with longitudinal panel data. One major limitation of the CLPMs is that the model effects are assumed to be fixed across individuals. This assumption is likely to be violated (i.e., the model effects are random across individuals) in practice. When this happens, the CLPMs can potentially yield biased parameter estimates and misleading statistical inferences. This article proposes a model named a random-effects cross-lagged panel model (RE-CLPM) to account for random effects in CLPMs. Simulation studies show that the RE-CLPM outperforms the CLPM in recovering the mean indirect and direct effects in a longitudinal mediation analysis when random effects exist in the population. The performance of the RE-CLPM is robust to a certain degree, even when the random effects are not normally distributed. In addition, the RE-CLPM does not produce harmful results when the model effects are in fact fixed in the population. Implications of the simulation studies and potential directions for future research are discussed.
Joseph, Joshua W; Novack, Victor; Wong, Matthew L; Nathanson, Larry A; Sanchez, Leon D
2017-08-01
Emergency medicine residents need to be staffed in a way that balances operational needs with their educational experience. Key to developing an optimal schedule is knowing a resident's expected productivity, a poorly understood metric. We sought to measure how a resident's busiest (peak) workload affects their overall productivity for the shift. We conducted a retrospective, observational study of resident productivity at an urban, tertiary care center with a 3-year Accreditation Council for Graduate Medical Education-approved emergency medicine training program, with 55,000 visits annually. We abstracted resident productivity data from a database of patient assignments from July 1, 2010 to June 20, 2015, utilizing a generalized estimation equation method to evaluate physician shifts. Our primary outcome measure was the total number of patients seen by a resident over a shift. The secondary outcome was the number of patients seen excluding those in the peak hour. A total of 14,361 shifts were evaluated. Multivariate analysis showed that the total number of patients seen was significantly associated with the number of patients seen during the peak hour, level of training, and the timing of the shift, but most prominently, lower variance in patients seen per hour (coefficient of variation). A resident's peak productivity can be a strong predictor of their overall productivity, but the substantial negative effect of variability favors a steadier pace. This suggests that resident staffing and patient assignments should generally be oriented toward a more consistent workload, an effect that should be further investigated with attending physicians. Copyright © 2017 Elsevier Inc. All rights reserved.
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model in which nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
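The pipeline described (transition matrix, steady-state distribution, entropy of that distribution) can be sketched directly. The site names and probabilities below are made up for illustration, not taken from the autopsy data:

```python
import math

# Illustrative 4-site metastasis model: entry P[i][j] is the probability of
# progression from site i to site j (each row sums to one).
sites = ["primary", "lung", "liver", "bone"]
P = [
    [0.2, 0.4, 0.3, 0.1],
    [0.1, 0.5, 0.2, 0.2],
    [0.1, 0.3, 0.4, 0.2],
    [0.1, 0.3, 0.3, 0.3],
]

def steady_state(P, iters=500):
    """Power iteration pi <- pi P; the fixed point is the stationary
    distribution, which in the paper corresponds to the autopsy data."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy(dist):
    """Shannon entropy of a metastatic site distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

pi = steady_state(P)
H = entropy(pi)
```

Higher entropy of the stationary distribution corresponds to metastases spread more evenly across sites, the property used to rank cancers in the study.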
Janssen, Hans-Karl; Stenull, Olaf
2004-02-01
We investigate corrections to scaling induced by irrelevant operators in randomly diluted systems near the percolation threshold. The specific systems that we consider are the random resistor network and a class of continuous spin systems, such as the x-y model. We focus on a family of least irrelevant operators and determine the corrections to scaling that originate from this family. Our field theoretic analysis carefully takes into account that irrelevant operators mix under renormalization. It turns out that long standing results on corrections to scaling are respectively incorrect (random resistor networks) or incomplete (continuous spin systems).
Bayesian Peak Picking for NMR Spectra
Cheng, Yichen
2014-02-01
Protein structure determination is a very important topic in structural genomics, which helps in understanding a variety of biological functions such as protein-protein interactions, protein-DNA interactions and so on. Nowadays, nuclear magnetic resonance (NMR) is often used to determine the three-dimensional structures of proteins in vivo. This study aims to automate the peak picking step, the most important and tricky step in NMR structure determination. We propose to model the NMR spectrum by a mixture of bivariate Gaussian densities and use the stochastic approximation Monte Carlo algorithm as the computational tool to solve the problem. Under the Bayesian framework, the peak picking problem is cast as a variable selection problem. The proposed method can automatically distinguish true peaks from false ones without preprocessing the data. To the best of our knowledge, this is the first effort in the literature that tackles the peak picking problem for NMR spectrum data using a Bayesian method.
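The spectrum model itself is easy to picture: a 2D surface built from a mixture of Gaussian peaks. The sketch below builds such a synthetic spectrum and picks peaks naively as thresholded grid local maxima; the Bayesian variable-selection and stochastic approximation Monte Carlo machinery of the paper is not reproduced here, and all parameters are illustrative (isotropic peaks for simplicity):

```python
import math

def gauss2d(x, y, mx, my, s):
    return math.exp(-((x - mx) ** 2 + (y - my) ** 2) / (2.0 * s * s))

# Synthetic spectrum: a mixture of two Gaussian peaks.
true_peaks = [(2.0, 3.0), (6.0, 6.0)]

def spectrum(x, y):
    return sum(gauss2d(x, y, mx, my, 0.5) for mx, my in true_peaks)

# Naive peak picking: local maxima on a grid, above an intensity threshold.
step, size = 0.1, 90
grid = [[spectrum(i * step, j * step) for j in range(size)] for i in range(size)]
found = []
for i in range(1, size - 1):
    for j in range(1, size - 1):
        v = grid[i][j]
        if v > 0.5 and all(v >= grid[i + di][j + dj]
                           for di in (-1, 0, 1) for dj in (-1, 0, 1)):
            found.append((round(i * step, 1), round(j * step, 1)))
```

On noisy real spectra this naive rule produces many false positives, which is precisely the problem the Bayesian mixture formulation is designed to handle.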
Application of the load flow and random flow models for the analysis of power transmission networks
International Nuclear Information System (INIS)
Zio, Enrico; Piccinelli, Roberta; Delfanti, Maurizio; Olivieri, Valeria; Pozzi, Mauro
2012-01-01
In this paper, the classical load flow (LF) model and the random flow (RF) model are considered for analyzing the performance of power transmission networks. The analysis concerns both the system performance and the importance of the different system elements; the latter is computed by power flow and random walk betweenness centrality measures. A network system from the literature is analyzed, representing a simple electrical power transmission network. The results obtained highlight the differences between the LF “global approach” to flow dispatch and the RF “local approach” of randomized node-to-node load transfer. Furthermore, the LF model is computationally cheaper than the RF model, but convergence problems may arise in the LF calculation.
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors.
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d=b^{2}/N=α^{2}/N, for large N matrix dimensionality. As d increases, there is a transition from Poisson to classical random matrix statistics.
Directory of Open Access Journals (Sweden)
Hideki Katagiri
2017-10-01
This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.
Eggert, G M; Zimmer, J G; Hall, W J; Friedman, B
1991-01-01
This randomized controlled study compared two types of case management for skilled nursing level patients living at home: the centralized individual model and the neighborhood team model. The team model differed from the individual model in that team case managers performed client assessments, care planning, some direct services, and reassessments; they also had much smaller caseloads and were assigned a specific catchment area. While patients in both groups incurred very high estimated healt...
Model of Random Polygon Particles for Concrete and Mesh Automatic Subdivision
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
In order to study the constitutive behavior of concrete at the mesoscopic level, a new method is proposed in this paper. The method uses random polygon particles to simulate the full grading of crushed aggregates in concrete. Based on computational geometry, we carry out automatic generation of a triangular finite element mesh for the random polygon particle model of concrete. The finite element mesh generated in this paper is also applicable to many other numerical methods.
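A minimal sketch of how random polygon particles can be generated (one random radius per angular sector, giving an irregular but non-self-intersecting outline); the construction and parameter values are our own illustrative choices, not the paper's algorithm:

```python
import math
import random

def random_polygon(n, rmin=0.5, rmax=1.0, rng=random):
    """Random star-shaped polygon: one vertex per angular sector with a random
    radius, yielding an irregular particle outline in counter-clockwise order."""
    pts = []
    for i in range(n):
        a = 2.0 * math.pi * (i + rng.random()) / n   # jittered angle in sector i
        r = rng.uniform(rmin, rmax)                  # random radius
        pts.append((r * math.cos(a), r * math.sin(a)))
    return pts

def polygon_area(poly):
    """Shoelace formula; positive for counter-clockwise vertex order."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return 0.5 * s

rng = random.Random(7)
particle = random_polygon(8, rng=rng)
```

Each such polygon could then be handed to a mesh generator for triangulation, as the paper does for the aggregate phase.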
Covariance of random stock prices in the Stochastic Dividend Discount Model
Agosto, Arianna; Mainini, Alessandra; Moretto, Enrico
2016-01-01
Dividend discount models have been developed in a deterministic setting. Some authors (Hurley and Johnson, 1994 and 1998; Yao, 1997) have introduced randomness in terms of stochastic growth rates, delivering closed-form expressions for the expected value of stock prices. This paper extends such previous results by determining a formula for the covariance between random stock prices when the dividends' rates of growth are correlated. The formula is eventually applied to real market data.
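A Monte Carlo sketch of the underlying effect (not the paper's closed-form formula): when the dividend growth rates of two stocks are correlated period by period, the resulting stock prices have positive covariance. The two-point growth distribution, discount rate, and horizon below are arbitrary illustrative choices:

```python
import random

def price(gs, d0=1.0, k=0.10):
    """Present value of dividends growing at the realized per-period rates gs
    (a finite-horizon version of the stochastic dividend discount model)."""
    p, dividend, disc = 0.0, d0, 1.0
    for g in gs:
        dividend *= 1.0 + g
        disc /= 1.0 + k
        p += dividend * disc
    return p

def price_covariance(rho, n=4000, horizon=120, rng=None):
    """Sample covariance of two stock prices whose growth rates coincide with
    probability rho in every period (two-point growth distribution)."""
    rng = rng or random.Random(3)
    lo, hi = 0.02, 0.06
    pa, pb = [], []
    for _ in range(n):
        ga, gb = [], []
        for _ in range(horizon):
            a = lo if rng.random() < 0.5 else hi
            b = a if rng.random() < rho else (lo if rng.random() < 0.5 else hi)
            ga.append(a)
            gb.append(b)
        pa.append(price(ga))
        pb.append(price(gb))
    ma, mb = sum(pa) / n, sum(pb) / n
    return sum((x - ma) * (y - mb) for x, y in zip(pa, pb)) / n

cov_corr = price_covariance(0.9)    # strongly correlated growth rates
cov_indep = price_covariance(0.0)   # independent growth rates
```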
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Phase transitions in the random field Ising model in the presence of a transverse field
Energy Technology Data Exchange (ETDEWEB)
Dutta, A.; Chakrabarti, B.K. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Stinchcombe, R.B. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India); Department of Physics, Oxford (United Kingdom)
1996-09-07
We have studied the phase transition behaviour of the random field Ising model in the presence of a transverse (or tunnelling) field. The mean field phase diagram has been studied in detail, and in particular the nature of the transition induced by the tunnelling (transverse) field at zero temperature. Modified hyper-scaling relation for the zero-temperature transition has been derived using the Suzuki-Trotter formalism and a modified 'Harris criterion'. Mapping of the model to a randomly diluted antiferromagnetic Ising model in uniform longitudinal and transverse field is also given. (author)
Equilibrium in a random viewer model of television broadcasting
DEFF Research Database (Denmark)
Hansen, Bodil Olai; Keiding, Hans
2014-01-01
The authors considered a model of commercial television market with advertising with probabilistic viewer choice of channel, where private broadcasters may coexist with a public television broadcaster. The broadcasters influence the probability of getting viewer attention through the amount...... number of channels. The authors derive properties of equilibrium in an oligopolistic market with private broadcasters and show that the number of firms has a negative effect on overall advertising and viewer satisfaction. If there is a public channel that also sells advertisements but does not maximize...... profits, this will have a positive effect on advertiser and viewer satisfaction....
Silkworm cocoons inspire models for random fiber and particulate composites
Energy Technology Data Exchange (ETDEWEB)
Fujia, Chen; Porter, David; Vollrath, Fritz [Department of Zoology, University of Oxford, Oxford OX1 3PS (United Kingdom)
2010-10-15
The bioengineering design principles evolved in silkworm cocoons make them ideal natural prototypes and models for structural composites. Cocoons depend for their stiffness and strength on the connectivity of bonding between their constituent materials of silk fibers and sericin binder. Strain-activated mechanisms for loss of bonding connectivity in cocoons can be translated directly into a surprisingly simple yet universal set of physically realistic as well as predictive quantitative structure-property relations for a wide range of technologically important fiber and particulate composite materials.
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
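The normalising idea can be illustrated in isolation: apply a Box-Cox transform to skewed effect estimates and check that the skewness shrinks. Here log-normal draws stand in for observed treatment effect estimates, and λ = 0 (the log limit) is the transform that normalises them exactly; all values are illustrative:

```python
import math
import random

def boxcox(x, lam):
    """Box-Cox transform for positive x; the lam = 0 case is the log limit."""
    return math.log(x) if lam == 0.0 else (x ** lam - 1.0) / lam

def skewness(xs):
    """Sample skewness: third central moment over the cubed standard deviation."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

rng = random.Random(11)
# Skewed stand-ins for observed treatment effect estimates: log-normal draws.
effects = [math.exp(rng.gauss(0.0, 0.8)) for _ in range(5000)]
transformed = [boxcox(x, 0.0) for x in effects]
```

In the paper λ is estimated within a Bayesian model rather than fixed, so the transform adapts to the observed departure from normality.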
A random effects meta-analysis model with Box-Cox transformation
Directory of Open Access Journals (Sweden)
Yusuke Yamaguchi
2017-07-01
Background: In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods: We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. Results: A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
Potts Model with Invisible Colors : Random-Cluster Representation and Pirogov–Sinai Analysis
Enter, Aernout C.D. van; Iacobelli, Giulio; Taati, Siamak
We study a recently introduced variant of the ferromagnetic Potts model consisting of a ferromagnetic interaction among q “visible” colors along with the presence of r non-interacting “invisible” colors. We introduce a random-cluster representation for the model, for which we prove the existence of
P2 : A random effects model with covariates for directed graphs
van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.
A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.
Random Walk Model for the Growth of Monolayer in Dip Pen Nanolithography
International Nuclear Information System (INIS)
Kim, H; Ha, S; Jang, J
2013-01-01
By using a simple random-walk model, we simulate the growth of a self-assembled monolayer (SAM) pattern generated in dip pen nanolithography (DPN). In this model, the SAM pattern grows mainly via the serial pushing of molecules deposited from the tip. We examine various SAM patterns, such as lines, crosses, and letters by changing the tip scan speed.
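The serial-pushing picture can be sketched in a few lines. This is a hypothetical square-lattice version, not the authors' exact rules: each deposited molecule starts under the tip and random-walks over occupied sites until it reaches a vacancy, which it then fills.

```python
import random

random.seed(1)

def grow_pattern(n_molecules, deposit=(0, 0)):
    """Grow a monolayer by serial pushing: each molecule starts at the
    tip position and random-walks until it finds an unoccupied site."""
    occupied = set()
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_molecules):
        x, y = deposit
        while (x, y) in occupied:
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
        occupied.add((x, y))
    return occupied

pattern = grow_pattern(200)
print(len(pattern))  # one occupied site per deposited molecule
```

Moving the deposit point along a path instead of keeping it fixed produces line- and letter-shaped patterns analogous to those examined in the paper.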
Examples of mixed-effects modeling with crossed random effects and with binomial data
Quené, H.; van den Bergh, H.
2008-01-01
Psycholinguistic data are often analyzed with repeated-measures analyses of variance (ANOVA), but this paper argues that mixed-effects (multilevel) models provide a better alternative method. First, models are discussed in which the two random factors of participants and items are crossed, and not
Thiene, M.; Boeri, M.; Chorus, C.G.
2011-01-01
This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the
A binomial random sum of present value models in investment analysis
Βουδούρη, Αγγελική; Ντζιαχρήστος, Ευάγγελος
1997-01-01
Stochastic present value models have been widely adopted in financial theory and practice and play a very important role in capital budgeting and profit planning. The purpose of this paper is to introduce a binomial random sum of stochastic present value models and offer an application in investment analysis.
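A minimal Monte Carlo sketch of a binomial random sum of present values, under assumptions of our own choosing (fixed cash flow, fixed discount rate): the number of payments N is binomial, and the present value is the discounted sum of the first N payments.

```python
import random

random.seed(9)

def binomial_random_sum_pv(n, p, cash=100.0, rate=0.05, trials=20000):
    """Monte Carlo mean of PV = sum_{k=1}^{N} cash/(1+rate)^k with
    N ~ Binomial(n, p). Closed form for comparison:
    E[PV] = cash * v/(1-v) * (1 - (1 - p + p*v)**n), v = 1/(1+rate)."""
    total = 0.0
    for _ in range(trials):
        N = sum(random.random() < p for _ in range(n))
        total += sum(cash / (1 + rate) ** k for k in range(1, N + 1))
    return total / trials

est = binomial_random_sum_pv(10, 0.5)
print(round(est, 2))  # analytic value is about 428.3
```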
The limiting behavior of the estimated parameters in a misspecified random field regression model
DEFF Research Database (Denmark)
Dahl, Christian Møller; Qin, Yu
This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection of n...
Jang, S.; Rasouli, S.; Timmermans, H.J.P.
2016-01-01
Recently, regret-based choice models have been introduced in the travel behavior research community as an alternative to expected/random utility models. The fundamental proposition underlying regret theory is that individuals minimize the amount of regret they (are expected to) experience when
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
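The Fourier-synthesis idea, random phases on a prescribed flat amplitude spectrum followed by an inverse FFT, can be sketched as follows. The spectrum shape and normalisation are illustrative choices, not the exact implementation assessed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def fourier_white_noise(n, rng):
    """Synthesize an approximately Gaussian white noise sequence from a
    flat amplitude spectrum with uniformly random phases."""
    m = (n - 1) // 2  # number of independent positive frequencies
    phases = rng.uniform(0.0, 2.0 * np.pi, m)
    spec = np.zeros(n, dtype=complex)
    spec[1:m + 1] = np.exp(1j * phases)
    if n % 2 == 0:
        spec[n // 2] = 1.0  # real Nyquist bin for even n
    # Hermitian symmetry so the inverse transform is real.
    spec[-m:] = np.conj(spec[1:m + 1][::-1])
    x = np.fft.ifft(spec).real
    return x / x.std()

x = fourier_white_noise(601, rng)  # 601 matches the sequence length used
print(round(x.mean(), 6), round(x.std(), 6))
```

Because the zero-frequency bin is left empty, the sequence has zero mean by construction, and the flat amplitude spectrum makes successive samples uncorrelated.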
Assessment of end-use electricity consumption and peak demand by Townsville's housing stock
International Nuclear Information System (INIS)
Ren, Zhengen; Paevere, Phillip; Grozev, George; Egan, Stephen; Anticev, Julia
2013-01-01
We have developed a comprehensive model to estimate annual end-use electricity consumption and peak demand of housing stock, considering occupants' use of air conditioning systems and major appliances. The model was applied to analyse private dwellings in Townsville, Australia's largest tropical city. For the financial year (FY) 2010–11 the predicted results agreed with the actual electricity consumption with an error less than 10% for cooling thermostat settings at the standard setting temperature of 26.5 °C and at 1.0 °C higher than the standard setting. The greatest difference in monthly electricity consumption in the summer season between the model and the actual data decreased from 21% to 2% when the thermostat setting was changed from 26.5 °C to 27.5 °C. Our findings also showed that installation of solar panels in Townsville houses could reduce electricity demand from the grid and would have a minor impact on the yearly peak demand. A key new feature of the model is that it can be used to predict the probability distribution of energy demand considering (a) that appliances may be used randomly and (b) the way people use thermostats. The peak demand for the FY estimated from the probability distribution tracked the actual peak demand at the 97% confidence level. - Highlights: • We developed a model to estimate housing stock energy consumption and peak demand. • Appliances used randomly and thermostat settings for space cooling were considered. • On-site installation of solar panels was also considered. • Its results agree well with the actual electricity consumption and peak demand. • It shows the model could provide the probability distribution of electricity demand
Statistical Shape Modelling and Markov Random Field Restoration (invited tutorial and exercise)
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen
This tutorial focuses on statistical shape analysis using point distribution models (PDM) which is widely used in modelling biological shape variability over a set of annotated training data. Furthermore, Active Shape Models (ASM) and Active Appearance Models (AAM) are based on PDMs and have proven...... deformation field between shapes. The tutorial demonstrates both generative active shape and appearance models, and MRF restoration on 3D polygonized surfaces. ''Exercise: Spectral-Spatial classification of multivariate images'' From annotated training data this exercise applies spatial image restoration...... using Markov random field relaxation of a spectral classifier. Keywords: the Ising model, the Potts model, stochastic sampling, discriminant analysis, expectation maximization....
Huang, Lei
2015-01-01
To address the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied in modeling applications for gyro random noise where a fast and accurate ARMA modeling method is required. PMID:26437409
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials
Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José
2018-01-01
In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E, MDs and MDe, including the random intercepts of the lines with the GK method, had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023
A random walk model for evaluating clinical trials involving serial observations.
Hopper, J L; Young, G P
1988-05-01
For clinical trials where the variable of interest is ordered and categorical (for example, disease severity, symptom scale), and where measurements are taken at intervals, it might be possible to achieve greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine, with inference based on standard likelihood theory. In general the model can allow for randomly censored data, incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and applications to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.
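A maximum-likelihood fit of this kind of model can be sketched as follows. The specifics (five severity categories, reflecting boundaries, a single up/down step probability per arm) are illustrative assumptions, not the paper's exact parameterisation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
K = 5  # ordered severity categories 0..4

def simulate(p_up, p_down, n_patients=50, n_visits=6):
    """Simulate each patient's severity as a random walk with
    reflecting boundaries at the lowest and highest categories."""
    paths = np.zeros((n_patients, n_visits), dtype=int)
    paths[:, 0] = rng.integers(0, K, n_patients)
    for t in range(1, n_visits):
        u = rng.random(n_patients)
        step = np.where(u < p_up, 1, np.where(u < p_up + p_down, -1, 0))
        paths[:, t] = np.clip(paths[:, t - 1] + step, 0, K - 1)
    return paths

def transition_matrix(p_up, p_down):
    P = np.zeros((K, K))
    for s in range(K):
        P[s, min(s + 1, K - 1)] += p_up
        P[s, max(s - 1, 0)] += p_down
        P[s, s] += 1.0 - p_up - p_down
    return P

def neg_log_lik(theta, paths):
    p_up, p_down = theta
    if p_up <= 0 or p_down <= 0 or p_up + p_down >= 1:
        return np.inf
    P = transition_matrix(p_up, p_down)
    return -sum(np.log(P[a, b])
                for path in paths for a, b in zip(path[:-1], path[1:]))

paths = simulate(0.2, 0.4)
res = minimize(neg_log_lik, x0=[0.3, 0.3], args=(paths,),
               method="Nelder-Mead")
print(res.x)  # maximum likelihood estimates of (p_up, p_down)
```

The fitted step probabilities play the role of the paper's "rate of, and variability in, improvement" for a treatment arm.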
Reike, Dennis; Schwarz, Wolf
2016-01-01
The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
Best, Krista L; Desharnais, Guylaine; Boily, Jeanette; Miller, William C; Camp, Pat G
2012-11-16
Pressure ulcers pose significant negative individual consequences and financial burden on the healthcare system. Prolonged sitting in High Fowler's position (HF) is common clinical practice for older adults who spend extended periods of time in bed. While HF aids in digestion and respiration, being placed in HF may increase perceived discomfort and risk of pressure ulcers due to increased pressure magnitude at the sacral and gluteal regions. It is likely that shearing forces could also contribute to risk of pressure ulcers in HF. The purpose of this study was to evaluate the effect of a low-tech and time-efficient Trunk Release Maneuver (TRM) on sacral and gluteal pressure, trunk displacement and perceived discomfort in ambulatory older adults. A randomized controlled trial was used. We recruited community-living adults who were 60 years of age and older using posters, newspaper advertisements and word-of-mouth. Participants were randomly allocated to either the intervention or control group. The intervention group (n = 59) received the TRM, while the control group (n = 58) maintained the standard HF position. The TRM group had significantly lower mean (SD) PPI values post-intervention compared to the control group, 59.6 (30.7) mmHg and 79.9 (36.5) mmHg respectively (p = 0.002). There was also a significant difference in trunk displacement between the TRM and control groups, +3.2 mm and -5.8 mm respectively (p = 0.005). There were no significant differences in perceived discomfort between the groups. The TRM was effective for reducing pressure in the sacral and gluteal regions and for releasing the trunk at the point of contact between the skin and the support surface, but did not have an effect on perceived discomfort. The TRM is a simple method of repositioning which may have important clinical application for the prevention of pressure ulcers that may occur as a result of HF.
Local properties of the large-scale peaks of the CMB temperature
Energy Technology Data Exchange (ETDEWEB)
Marcos-Caballero, A.; Martínez-González, E.; Vielva, P., E-mail: marcos@ifca.unican.es, E-mail: martinez@ifca.unican.es, E-mail: vielva@ifca.unican.es [Instituto de Física de Cantabria, CSIC-Universidad de Cantabria, Avda. de los Castros s/n, 39005 Santander (Spain)
2017-05-01
In the present work, we study the largest structures of the CMB temperature measured by Planck in terms of the most prominent peaks on the sky, which, in particular, are located in the southern galactic hemisphere. Besides these large-scale features, the well-known Cold Spot anomaly is included in the analysis. All these peaks would contribute significantly to some of the CMB large-scale anomalies, such as the parity and hemispherical asymmetries, the dipole modulation, the alignment between the quadrupole and the octopole, or, in the case of the Cold Spot, the non-Gaussianity of the field. The analysis of the peaks is performed by using their multipolar profiles, which characterize the local shape of the peaks in terms of the discrete Fourier transform of the azimuthal angle. In order to quantify the local anisotropy of the peaks, the distribution of the phases of the multipolar profiles is studied by using the Rayleigh random walk methodology. Finally, a direct analysis of the 2-dimensional field around the peaks is performed in order to take into account the effect of the galactic mask. The results of the analysis conclude that, once the peak amplitude and its first and second order derivatives at the centre are conditioned, the rest of the field is compatible with the standard model. In particular, it is observed that the Cold Spot anomaly is caused by the large value of curvature at the centre.
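The Rayleigh random walk test of phase uniformity mentioned above can be sketched in a few lines: each phase contributes a unit step in the complex plane, and a resultant much longer than expected under uniformity signals phase alignment. The sample sizes and dispersion below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def rayleigh_stat(phases):
    """n * R^2, where R is the mean resultant length of unit steps with
    the given phases; under the null of uniform phases this statistic is
    asymptotically chi-squared with 2 degrees of freedom (scaled)."""
    n = len(phases)
    R = abs(np.exp(1j * phases).sum()) / n
    return n * R**2

uniform = rng.uniform(0, 2 * np.pi, 500)              # isotropic case
aligned = rng.normal(0.0, 0.3, 500) % (2 * np.pi)     # clustered phases

print(rayleigh_stat(uniform), rayleigh_stat(aligned))
```

The clustered sample yields a far larger statistic, which is how local anisotropy of a peak would show up in this style of analysis.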
Phase structure of the O(n) model on a random lattice for n > 2
DEFF Research Database (Denmark)
Durhuus, B.; Kristjansen, C.
1997-01-01
We show that coarse graining arguments invented for the analysis of multi-spin systems on a randomly triangulated surface apply also to the O(n) model on a random lattice. These arguments imply that if the model has a critical point with diverging string susceptibility, then either γ = +1/2 or there exists a dual critical point with negative string susceptibility exponent, γ̃, related to γ by γ = γ̃/(γ̃ − 1). Exploiting the exact solution of the O(n) model on a random lattice we show that both situations are realized for n > 2 and that the possible dual pairs of string susceptibility exponents are given by (γ̃, γ) = (−1/m, 1/(m+1)), m = 2, 3, . . . We also show that at the critical points with positive string susceptibility exponent the average number of loops on the surface diverges while the average length of a single loop stays finite.
Randomly dispersed particle fuel model in the PSG Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Leppaenen, J.
2007-01-01
High-temperature gas-cooled reactor fuels are composed of thousands of microscopic fuel particles, randomly dispersed in a graphite matrix. The modelling of such geometry is complicated, especially using continuous-energy Monte Carlo codes, which are unable to apply any deterministic corrections in the calculation. This paper presents the geometry routine developed for modelling randomly dispersed particle fuels using the PSG Monte Carlo reactor physics code. The model is based on the delta-tracking method, and it takes into account the spatial self-shielding effects and the random dispersion of the fuel particles. The calculation routine is validated by comparing the results to reference MCNP4C calculations using uranium and plutonium based fuels. (authors)
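The delta-tracking method that PSG is based on (often called Woodcock tracking) can be sketched as follows. The one-dimensional cross-section profile and the rates are purely illustrative, not PSG's geometry routine: flight lengths are sampled from a majorant cross section, and a collision is accepted with probability proportional to the local cross section.

```python
import random

random.seed(7)

def sigma_t(x):
    """Total cross section along a 1-D track: a hypothetical profile with
    dense 'fuel particle' cells embedded in a weakly absorbing matrix."""
    return 5.0 if int(x) % 4 == 0 else 0.5

SIGMA_MAJ = 5.0  # majorant cross section, >= sigma_t everywhere

def sample_collision(x0):
    """Woodcock delta-tracking: sample flight lengths from the majorant,
    then accept a real collision with probability sigma_t/sigma_maj
    (rejected collisions are 'virtual' and the flight continues)."""
    x = x0
    while True:
        x += random.expovariate(SIGMA_MAJ)
        if random.random() < sigma_t(x) / SIGMA_MAJ:
            return x

dists = [sample_collision(0.0) for _ in range(1000)]
mean_path = sum(dists) / len(dists)
print(round(mean_path, 3))
```

The key property, and the reason the method suits randomly dispersed particle fuels, is that no surface-crossing logic is needed: the material is only queried at tentative collision sites.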
Random cyclic constitutive models of 0Cr18Ni10Ti pipe steel
International Nuclear Information System (INIS)
Zhao Yongxiang; Yang Bing
2004-01-01
An experimental study is performed on the random cyclic constitutive relations of a new pipe stainless steel, 0Cr18Ni10Ti, by an incremental strain-controlled fatigue test. In the test, it is verified that the random cyclic constitutive relations, like the widely recognized random cyclic strain-life relations, are an intrinsic fatigue phenomenon of engineering materials. Extrapolating the previous work by Zhao et al, probability-based constitutive models are constructed, respectively, on the bases of the Ramberg-Osgood equation and its modified form. The scattering regularity and amount of the test data are taken into account. The models consist of the survival probability-strain-life curves, the confidence strain-life curves, and the survival probability-confidence-strain-life curves. The availability and feasibility of the models have been indicated by analysis of the present test data.
Self-dual random-plaquette gauge model and the quantum toric code
Takeda, Koujin; Nishimori, Hidetoshi
2004-05-01
We study the four-dimensional Z2 random-plaquette lattice gauge theory as a model of topological quantum memory, the toric code in particular. In this model, the procedure of quantum error correction works properly in the ordered (Higgs) phase, and phase boundary between the ordered (Higgs) and disordered (confinement) phases gives the accuracy threshold of error correction. Using self-duality of the model in conjunction with the replica method, we show that this model has exactly the same mathematical structure as that of the two-dimensional random-bond Ising model, which has been studied very extensively. This observation enables us to derive a conjecture on the exact location of the multicritical point (accuracy threshold) of the model, pc=0.889972…, and leads to several nontrivial results including bounds on the accuracy threshold in three dimensions.
Ferrimagnetic Properties of Bond Dilution Mixed Blume-Capel Model with Random Single-Ion Anisotropy
International Nuclear Information System (INIS)
Liu Lei; Yan Shilei
2005-01-01
We study the ferrimagnetic properties of spin-1/2 and spin-1 systems by means of the effective field theory. The system is considered in the framework of the bond dilution mixed Blume-Capel model (BCM) with random single-ion anisotropy. The investigation of phase diagrams and magnetization curves indicates the existence of induced magnetic ordering and single or multiple compensation points. Special emphasis is placed on the influence of bond dilution and random single-ion anisotropy on normal or induced magnetic ordering states and single or multiple compensation points. Normal magnetic ordering states take on new phase diagrams with increasing randomness (bond and anisotropy), while anisotropy-induced magnetic ordering states always occur regardless of whether the concentration of anisotropy is large or small. The existence and disappearance of compensation points depend strongly on bond dilution and random single-ion anisotropy. Some of these results have not been revealed in previous papers or predicted by the Néel theory of ferrimagnetism.
International Nuclear Information System (INIS)
Perez, J.F.; Pontin, L.F.; Segundo, J.A.B.
1985-01-01
Using a method proposed by van Hemmen, the free energy of the Curie-Weiss version of the site-dilute antiferromagnetic Ising model is computed in the presence of a uniform magnetic field. The solution displays an exact correspondence between this model and the Curie-Weiss version of the Ising model in the presence of a random magnetic field. The phase diagrams are discussed and a tricritical point is shown to exist. (Author) [pt
Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-01
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
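A minimal zero-intelligence sketch of the model's ingredients: limit orders arrive on a price grid, market orders consume the best opposite quote, and cancellations remove a random resting order. The event probabilities, grid, and seeding below are illustrative choices, not the paper's calibration; a faithful implementation would use continuous-time Poisson clocks per price level.

```python
import random

random.seed(5)

def zi_book(n_events, L=100, r_limit=0.5, r_market=0.25):
    """Zero-intelligence order book sketch. Each event is a limit order,
    a market order, or a cancellation; new limit orders are placed so the
    book never crosses (bids strictly below the best ask, asks strictly
    above the best bid)."""
    bids, asks = [50], [51]  # seed the book with one quote per side
    mids = []
    for _ in range(n_events):
        u = random.random()
        buy_side = random.random() < 0.5
        if u < r_limit:
            best_bid, best_ask = max(bids), min(asks)
            if buy_side:   # new bid strictly below the best ask
                bids.append(random.randint(best_ask - L, best_ask - 1))
            else:          # new ask strictly above the best bid
                asks.append(random.randint(best_bid + 1, best_bid + L))
        elif u < r_limit + r_market:
            if buy_side and len(asks) > 1:
                asks.remove(min(asks))   # buy market order lifts the ask
            elif not buy_side and len(bids) > 1:
                bids.remove(max(bids))   # sell market order hits the bid
        else:
            book = bids if buy_side else asks
            if len(book) > 1:            # cancel a random resting order
                book.pop(random.randrange(len(book)))
        mids.append((max(bids) + min(asks)) / 2)
    return mids

mids = zi_book(5000)
print(mids[0], min(mids), max(mids))
```

Even with completely random order flow, the mid-price series diffuses and develops temporal structure, which is the qualitative point the paper quantifies with dimensional analysis and mean-field theory.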
Random Walk Model for Cell-To-Cell Misalignments in Accelerator Structures
International Nuclear Information System (INIS)
Stupakov, Gennady
2000-01-01
Due to manufacturing and construction errors, cells in accelerator structures can be misaligned relative to each other. As a consequence, the beam generates a transverse wakefield even when it passes through the structure on axis. The most important effect is the long-range transverse wakefield that deflects the bunches and causes growth of the bunch train projected emittance. In this paper, the effect of the cell-to-cell misalignments is evaluated using a random walk model that assumes that each cell is shifted by a random step relative to the previous one. The model is compared with measurements of a few accelerator structures
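The random walk model's central consequence is easy to see numerically: if each cell is shifted by an independent random step relative to the previous one, the offset of cell n is a cumulative sum, so its RMS misalignment grows like the square root of n. The step size and ensemble size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

n_cells, n_structures, step_rms = 100, 2000, 1.0  # step RMS in arbitrary units

# Each cell is shifted by an independent random step relative to the
# previous one, so cell offsets are a cumulative sum (a random walk).
steps = rng.normal(0.0, step_rms, size=(n_structures, n_cells))
offsets = np.cumsum(steps, axis=1)

# RMS misalignment of cell n grows like sqrt(n) for a random walk:
# compare the empirical ratio at cells 25 and 100 with sqrt(25/100) = 0.5.
rms = offsets.std(axis=0)
print(rms[24] / rms[99], (25 / 100) ** 0.5)
```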
A simulation-based goodness-of-fit test for random effects in generalized linear mixed models
DEFF Research Database (Denmark)
Waagepetersen, Rasmus
2006-01-01
The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...
Electric peak power forecasting by year 2025
International Nuclear Information System (INIS)
Alsayegh, O.A.; Al-Matar, O.A.; Fairouz, F.A.; Al-Mulla Ali, A.
2005-01-01
Peak power demand in Kuwait up to the year 2025 was predicted using an artificial neural network (ANN) model. The aim of the study was to investigate the effect of air conditioning (A/C) units on long-term power demand. Five socio-economic factors were selected as inputs for the simulation: (1) gross national product, (2) population, (3) number of buildings, (4) imports of A/C units, and (5) index of industrial production. The study used socio-economic data from 1978 to 2000. Historical data of the first 10 years of the studied time period were used to train the ANN. The electrical network was then simulated to forecast peak power for the following 11 years. The calculated error was then used for years in which power consumption data were not available. The study demonstrated that average peak power rates increased by 4100 MW every 5 years. Various scenarios related to changes in population, the number of buildings, and the quantity of A/C units were then modelled to estimate long-term peak power demand. Results of the study demonstrated that population had the strongest impact on future power demand, while the number of buildings had the smallest impact. It was concluded that peak power growth can be controlled through the use of different immigration policies, increased A/C efficiency, and the use of vertical housing. 7 refs., 2 tabs., 6 figs
Generalized linear models with random effects unified analysis via H-likelihood
Lee, Youngjo; Pawitan, Yudi
2006-01-01
Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...
Simulating Urban Growth Using a Random Forest-Cellular Automata (RF-CA) Model
Directory of Open Access Journals (Sweden)
Courage Kamusoko
2015-04-01
Sustainable urban planning and management require reliable land change models, which can be used to improve decision making. The objective of this study was to test a random forest-cellular automata (RF-CA) model, which combines random forest (RF) and cellular automata (CA) models. The Kappa simulation (KSimulation), figure of merit, and components of agreement and disagreement statistics were used to validate the RF-CA model. Furthermore, the RF-CA model was compared with support vector machine cellular automata (SVM-CA) and logistic regression cellular automata (LR-CA) models. Results show that the RF-CA model outperformed the SVM-CA and LR-CA models. The RF-CA model had a KSimulation accuracy of 0.51 (with a figure of merit statistic of 47%), while the SVM-CA and LR-CA models had KSimulation accuracies of 0.39 and −0.22 (with figure of merit statistics of 39% and 6%), respectively. Generally, the RF-CA model was relatively accurate at allocating “non-built-up to built-up” changes, as reflected by the correct “non-built-up to built-up” component of agreement of 15%. The performance of the RF-CA model was attributed to the relatively accurate RF transition potential maps. Therefore, this study highlights the potential of the RF-CA model for simulating urban growth.
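The CA allocation step driven by RF transition potential maps can be sketched as follows; the Moore-neighbourhood rule and the grids are hypothetical stand-ins for the paper's calibrated model:

```python
import numpy as np

def ca_step(built, potential, n_convert):
    """One cellular-automata allocation step: convert the n_convert
    non-built-up cells with the highest transition potential that
    border at least one built-up cell (Moore neighbourhood)."""
    padded = np.pad(built, 1)
    # count built-up neighbours of every interior cell
    neigh = sum(np.roll(np.roll(padded, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1)
                if (i, j) != (0, 0))[1:-1, 1:-1]
    candidates = (~built.astype(bool)) & (neigh > 0)
    scores = np.where(candidates, potential, -np.inf)
    flat = np.argsort(scores, axis=None)[::-1][:n_convert]
    flat = flat[scores.ravel()[flat] > -np.inf]  # skip non-candidates
    out = built.copy()
    out[np.unravel_index(flat, built.shape)] = 1
    return out
```

In the RF-CA combination, `potential` would come from the random forest's predicted transition probabilities rather than being supplied directly.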
DEFF Research Database (Denmark)
Vansteelandt, S.; Martinussen, Torben; Tchetgen, E. J Tchetgen
2014-01-01
We consider additive hazard models (Aalen, 1989) for the effect of a randomized treatment on a survival outcome, adjusting for auxiliary baseline covariates. We demonstrate that the Aalen least-squares estimator of the treatment effect parameter is asymptotically unbiased, even when the hazard's dependence on time or on the auxiliary covariates is misspecified, and even away from the null hypothesis of no treatment effect. We furthermore show that adjustment for auxiliary baseline covariates does not change the asymptotic variance of the estimator of the effect of a randomized treatment. We conclude that, in view of its robustness against model misspecification, Aalen least-squares estimation is attractive for evaluating treatment effects on a survival outcome in randomized experiments, and that the primary reasons to consider baseline covariate adjustment in such settings could be interest in subgroup...
Generalized Whittle-Matern random field as a model of correlated fluctuations
International Nuclear Information System (INIS)
Lim, S C; Teo, L P
2009-01-01
This paper considers a generalization of the Gaussian random field with covariance function of the Whittle-Matern family. Such a random field can be obtained as the solution to the fractional stochastic differential equation with two fractional orders. Asymptotic properties of the covariance functions belonging to this generalized Whittle-Matern family are studied, which are used to deduce the sample path properties of the random field. The Whittle-Matern field has been widely used in modeling geostatistical data such as sea beam data, wind speed, field temperature and soil data. In this paper we show that the generalized Whittle-Matern field provides a more flexible model for wind speed data
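For reference, the ordinary Whittle-Matern covariance admits a closed form at smoothness ν = 3/2 (used here to avoid Bessel-function machinery; the paper's two-fractional-order generalization is not reproduced):

```python
import numpy as np

def matern32(r, sigma2=1.0, ell=1.0):
    """Matern covariance with smoothness nu = 3/2:
        C(r) = sigma2 * (1 + sqrt(3) r / ell) * exp(-sqrt(3) r / ell),
    so C(0) = sigma2 and C decays monotonically with distance r."""
    a = np.sqrt(3.0) * np.asarray(r, dtype=float) / ell
    return sigma2 * (1.0 + a) * np.exp(-a)
```

General ν replaces the polynomial factor with a modified Bessel function K_ν; the length scale `ell` plays the same role in both.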
Universality for 1d Random Band Matrices: Sigma-Model Approximation
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W×W random Gaussian blocks (parametrized by j, k ∈ Λ = [1, n]^d ∩ Z^d) with a fixed entry variance J_{jk} = δ_{j,k} W^{-1} + β Δ_{j,k} W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function, similar to Efetov's one. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, for β ≫ n, is determined by the classical Wigner-Dyson statistics.
Random effects model for the reliability management of modules of a fighter aircraft
Energy Technology Data Exchange (ETDEWEB)
Sohn, So Young [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: sohns@yonsei.ac.kr; Yoon, Kyung Bok [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: ykb@yonsei.ac.kr; Chang, In Sang [Department of Computer Science and Industrial Systems Engineering, Yonsei University, Shinchondong 134, Seoul 120-749 (Korea, Republic of)]. E-mail: isjang@yonsei.ac.kr
2006-04-15
The operational availability of fighter aircraft plays an important role in national defense. Low operational availability of fighter aircraft can cause many problems, and ROKAF (Republic of Korea Air Force) needs proper strategies to improve the current practice of reliability management by accurately forecasting both MTBF (mean time between failures) and MTTR (mean time to repair). In this paper, we develop a random effects model to forecast both MTBF and MTTR of installed modules of fighter aircraft based on their characteristics and operational conditions. An advantage of using such a random effects model is its ability to accommodate not only the individual characteristics of each module and its operational conditions but also the uncertainty caused by random error that cannot be explained by them. Our study is expected to contribute to ROKAF in improving the operational availability of fighter aircraft and establishing effective logistics management.
Modeling and understanding of effects of randomness in arrays of resonant meta-atoms
DEFF Research Database (Denmark)
Tretyakov, Sergei A.; Albooyeh, Mohammad; Alitalo, Pekka
2013-01-01
In this review presentation we will discuss approaches to modeling and understanding electromagnetic properties of 2D and 3D lattices of small resonant particles (meta-atoms) in transition from regular (periodic) to random (amorphous) states. Nanostructured metasurfaces (2D) and metamaterials (3D) are arrangements of optically small but resonant particles (meta-atoms). We will present our results on analytical modeling of metasurfaces with periodical and random arrangements of electrically and magnetically resonant meta-atoms with identical or random sizes, both for the normal and oblique-angle excitations. We show how the electromagnetic response of metasurfaces is related to the statistical parameters of the structure. Furthermore, we will discuss the phenomenon of anti-resonance in extracted effective parameters of metamaterials and clarify its relation to the periodicity (or amorphous nature...
Analytical connection between thresholds and immunization strategies of SIS model in random networks
Zhou, Ming-Yang; Xiong, Wen-Man; Liao, Hao; Wang, Tong; Wei, Zong-Wen; Fu, Zhong-Qian
2018-05-01
Devising effective strategies for hindering the propagation of viruses and protecting the population against epidemics is critical for public security and health. Despite a number of studies devoted to this topic based on the susceptible-infected-susceptible (SIS) model, we still lack a general framework for comparing different immunization strategies in completely random networks. Here, we address this problem by suggesting a novel method based on heterogeneous mean-field theory for the SIS model. Our method builds the relationship between the epidemic thresholds and different immunization strategies in completely random networks. In addition, we provide an analytical argument that the targeted large-degree strategy achieves the best performance in random networks with arbitrary degree distribution. Moreover, experimental results demonstrate the effectiveness of the proposed method in both artificial and real-world networks.
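The heterogeneous mean-field threshold underlying such comparisons is λ_c = ⟨k⟩/⟨k²⟩. A toy comparison of targeted immunization against the baseline threshold, as a crude sketch that ignores edge-deletion effects on surviving nodes, is:

```python
import numpy as np

def sis_threshold(degrees):
    """Heterogeneous mean-field SIS epidemic threshold
    lambda_c = <k> / <k^2> for a given degree sequence."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

def targeted_immunization(degrees, frac):
    """Remove the top `frac` fraction of nodes by degree and return
    the threshold computed from the remaining degree sequence (the
    rewiring of surviving nodes' edges is deliberately ignored)."""
    k = np.sort(np.asarray(degrees, dtype=float))
    keep = k[: int(round(len(k) * (1.0 - frac)))]
    return sis_threshold(keep)
```

Because ⟨k²⟩ is dominated by the largest degrees, removing hubs raises λ_c the most, which is the intuition behind the targeted large-degree result.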
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures for model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
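The two model classes discussed, autoregressive and moving average, can be illustrated with minimal simulators (parameter values are arbitrary):

```python
import numpy as np

def simulate_ar1(phi, n, sigma=1.0, seed=0):
    """AR(1) process: x[t] = phi * x[t-1] + e[t], e ~ N(0, sigma^2).
    Lag-1 autocorrelation is approximately phi."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

def simulate_ma1(theta, n, sigma=1.0, seed=1):
    """MA(1) process: x[t] = e[t] + theta * e[t-1].
    Lag-1 autocorrelation is theta / (1 + theta^2); higher lags vanish."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, n)
    return e[1:] + theta * e[:-1]
```

The AR model has infinite memory of past shocks, while the MA model forgets after one lag; combinations of the two give the ARMA family the paper builds on.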
Scanlan, Tara K; Russell, David G; Magyar, T Michelle; Scanlan, Larry A
2009-12-01
The Sport Commitment Model was further tested using the Scanlan Collaborative Interview Method to examine its generalizability to New Zealand's elite female amateur netball team, the Silver Ferns. Results supported or clarified Sport Commitment Model predictions, revealed avenues for model expansion, and elucidated the functions of perceived competence and enjoyment in the commitment process. A comparison and contrast of the in-depth interview data from the Silver Ferns with previous interview data from a comparable elite team of amateur male athletes allowed assessment of model external validity, tested the generalizability of the underlying mechanisms, and separated gender differences from discrepancies that simply reflected team or idiosyncratic differences.
International Nuclear Information System (INIS)
Helene, O.A.M.
1982-08-01
The determination of the upper limit of a peak area in multichannel spectra, with a known significance level, is discussed. This problem is especially important when the peak area is masked by statistical fluctuations of the background. The problem is solved exactly and, thus, the results are valid in experiments with a small number of events. The results are submitted to a Monte Carlo test and applied to the 92Nb beta decay. (Author) [pt
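A standard frequentist construction of such an upper limit, assuming Poisson-distributed counts over a known expected background (the textbook version, not necessarily the report's exact solution), is:

```python
import math

def poisson_cdf(n, lam):
    """P(N <= n) for N ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.95, step=0.01):
    """Smallest signal s such that observing n_obs or fewer counts has
    probability at most 1 - cl under mean bkg + s; found by a simple
    scan, adequate for the small-count regime the report addresses."""
    s = 0.0
    while poisson_cdf(n_obs, bkg + s) > 1.0 - cl:
        s += step
    return s
```

For example, with zero observed counts and zero background the 95% limit is near 3 events, the familiar "rule of three".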
The peak in anomalous magnetic viscosity
International Nuclear Information System (INIS)
Collocott, S.J.; Watterson, P.A.; Tan, X.H.; Xu, H.
2014-01-01
Anomalous magnetic viscosity, where the magnetization as a function of time is non-monotonic (it increases, reaches a peak, and then decreases), is observed on recoil lines in bulk amorphous ferromagnets for certain magnetic prehistories. A simple geometrical approach based on the motion of the state line on the Preisach plane gives a theoretical framework for interpreting this non-monotonic behaviour and explains the origin of the peak. This approach yields an expression for the time taken to reach the peak as a function of the applied (or holding) field. The theory is applied to experimental data for bulk amorphous ferromagnet alloys of composition Nd60−xFe30Al10Dyx, x = 0, 1, 2, 3 and 4, and it gives a reasonable description of the observed behaviour. The role played by other key magnetic parameters, such as the intrinsic coercivity and the fluctuation field, is also discussed. When the non-monotonic behaviour of the magnetization of a number of alloys is viewed in the context of the model, features of universal behaviour emerge that are independent of alloy composition. - Highlights: • Development of a simple geometrical model, based on the Preisach model, which gives a complete explanation of the peak in the magnetic viscosity. • The geometrical approach is extended by considering equations that govern the motion of the state line. • The model is used to deduce the relationship between the holding field and the time taken to reach the peak. • The model is tested against experimental results for a range of Nd-Fe-Al-Dy bulk amorphous ferromagnets. • There is good agreement between the model and the experimental data.
Directory of Open Access Journals (Sweden)
Xavier A. Harrison
2014-10-01
Overdispersion is common in models of count data in ecology and evolutionary biology, and can occur due to missing covariates, non-independent (aggregated) data, or an excess frequency of zeroes (zero-inflation). Accounting for overdispersion in such models is vital, as failing to do so can lead to biased parameter estimates and false conclusions regarding hypotheses of interest. Observation-level random effects (OLRE), where each data point receives a unique level of a random effect that models the extra-Poisson variation present in the data, are commonly employed to cope with overdispersion in count data. However, studies investigating the efficacy of observation-level random effects as a means to deal with overdispersion are scarce. Here I use simulations to show that in cases where overdispersion is caused by random extra-Poisson noise, or by aggregation in the count data, observation-level random effects yield more accurate parameter estimates than when overdispersion is simply ignored. Conversely, OLRE fail to reduce bias in zero-inflated data, and in some cases increase bias at high levels of overdispersion. There was a positive relationship between the magnitude of overdispersion and the degree of bias in parameter estimates. Critically, the simulations reveal that failing to account for overdispersion in mixed models can erroneously inflate measures of explained variance (r²), which may lead researchers to overestimate the predictive power of variables of interest. This work suggests that use of observation-level random effects provides a simple and robust means to account for overdispersion in count data, but also that their ability to minimise bias is not uniform across all types of overdispersion and they must be applied judiciously.
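The kind of extra-Poisson variation that OLRE target can be generated, and its overdispersion verified, in a few lines (parameter values are illustrative, and fitting an OLRE model itself would require a GLMM library):

```python
import numpy as np

rng = np.random.default_rng(1)

def overdispersed_counts(n, mu=2.0, sd=1.0):
    """Counts with extra-Poisson noise: y ~ Poisson(exp(log(mu) + b))
    with b ~ N(0, sd^2) drawn independently per observation -- i.e. an
    observation-level random effect in the data-generating sense.
    The marginal variance then exceeds the marginal mean."""
    b = rng.normal(0.0, sd, n)
    return rng.poisson(np.exp(np.log(mu) + b))
```

A pure Poisson sample has variance roughly equal to its mean; the lognormal mixing above inflates the variance well beyond it, which is exactly the diagnostic for overdispersion.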
Multilevel random effect and marginal models
African Journals Online (AJOL)
injected by the candidate vaccine have a lower or higher risk for the occurrence of ... outcome relationship and test whether subjects inject- ... contains an agent that resembles a disease-causing ... to have different random effect variability at each cat- ... In the marginal models settings, the responses are ... Behavior as usual.
Kottonau, Johannes
2011-01-01
Effectively teaching the concepts of osmosis to college-level students is a major obstacle in biological education. Therefore, a novel computer model is presented that allows students to observe the random nature of particle motion simultaneously with the seemingly directed net flow of water across a semipermeable membrane during osmotic…
First steps towards a state classification in the random-field Ising model
International Nuclear Information System (INIS)
Basso, Vittorio; Magni, Alessandro; Bertotti, Giorgio
2006-01-01
The properties of locally stable states of the random-field Ising model are studied. A map is defined for the dynamics driven by the field starting from a locally stable state. The fixed points of the map are connected with the limit hysteresis loops that appear in the classification of the states
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
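The unmasking step that underlies such RR models can be illustrated with the classic Warner design (a simplification of the paper's multivariate Dirichlet-multinomial setting; `p_truth` is a hypothetical design parameter):

```python
def rr_prevalence(p_yes, p_truth=0.75):
    """Warner-design moment estimator: each respondent answers the
    sensitive question truthfully with probability p_truth and answers
    the opposite question otherwise, so
        P(yes) = p_truth * pi + (1 - p_truth) * (1 - pi).
    Inverting this for the sensitive-trait prevalence pi gives the
    estimator below (requires p_truth != 0.5)."""
    return (p_yes - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)
```

Individual answers stay masked because any single "yes" may refer to either question; only the aggregate rate identifies the prevalence.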
Theoretical model of the density of states of random binary alloys
International Nuclear Information System (INIS)
Zekri, N.; Brezini, A.
1991-09-01
A theoretical formulation of the density of states for random binary alloys is examined based on a mean field treatment. The present model includes both diagonal and off-diagonal disorder and also short-range order. Extensive results are reported for various concentrations and compared to other calculations. (author). 22 refs, 6 figs
A random regression model in analysis of litter size in pigs | Luković ...
African Journals Online (AJOL)
Dispersion parameters for number of piglets born alive (NBA) were estimated using a random regression model (RRM). Two data sets of litter records from the Nemščak farm in Slovenia were used for analyses. The first dataset (DS1) included records from the first to the sixth parity. The second dataset (DS2) was extended ...
International Nuclear Information System (INIS)
Kaplan, T.; Gray, L.J.
1984-01-01
The self-consistent approximation of Kaplan, Leath, Gray, and Diehl is applied to models for substitutional random alloys with muffin-tin potentials. The particular advantage of this approximation is that, in addition to including cluster scattering, the muffin-tin potentials in the alloy can depend on the occupation of the surrounding sites (i.e., environmental disorder is included)
International Nuclear Information System (INIS)
Perez Curbelo, J.; Rosales, J.; Garcia, L.; Garcia, C.; Brayner, C.
2013-01-01
The pebble bed nuclear reactor is one of the main candidates for the next generation of nuclear power plants. In pebble bed type HTRs, the fuel is contained within graphite pebbles in the form of TRISO particles, which form a randomly packed bed inside a graphite-walled cylindrical cavity. Pebble bed reactors (PBRs) offer the opportunity to meet sustainability requirements such as nuclear safety, economic competitiveness, proliferation resistance and a minimal production of radioactive waste. In order to simulate PBRs correctly, the double heterogeneity of the system must be considered: randomly located pebbles in the core and randomly located TRISO particles within the fuel pebbles. These features are often neglected because they are difficult to model with the MCNP code, the main reason being that there is a limited number of cells and surfaces that can be defined. In this study, a computational tool was developed that produces a new geometrical model of fuel pebbles for neutronic calculations with the MCNPX code. The heterogeneity of the system is considered, including the randomly located TRISO particles inside the pebbles. Four proposed fuel pebble models were compared with respect to their effective multiplication factors and energy release profiles: Homogeneous Pebble, Five-Zone Homogeneous Pebble, Detailed Geometry, and Randomly Detailed Geometry. (Author)
Integrals of random fields treated by the model correction factor method
DEFF Research Database (Denmark)
Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der
2002-01-01
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...
Generalized Dynamic Panel Data Models with Random Effects for Cross-Section and Time
Mesters, G.; Koopman, S.J.
2014-01-01
An exact maximum likelihood method is developed for the estimation of parameters in a nonlinear non-Gaussian dynamic panel data model with unobserved random individual-specific and time-varying effects. We propose an estimation procedure based on the importance sampling technique. In particular, a
Reduction of the number of parameters needed for a polynomial random regression test-day model
Pool, M.H.; Meuwissen, T.H.E.
2000-01-01
Legendre polynomials were used to describe the (co)variance matrix within a random regression test-day model. The goodness of fit depended on the polynomial order of fit, i.e., the number of parameters to be estimated per animal, but is limited by computing capacity. Two aspects: incomplete lactation
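The Legendre basis used in such test-day models can be built directly; the day range below is an illustrative lactation window, not necessarily the one used in the study:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_design(days, order, dmin=5, dmax=305):
    """Design matrix whose columns are Legendre polynomials P_0..P_order
    evaluated at test days mapped linearly from [dmin, dmax] to [-1, 1],
    as is standard in random regression test-day models. Each animal's
    curve is then a linear combination of these columns, so the number
    of parameters per animal is order + 1."""
    t = 2.0 * (np.asarray(days, dtype=float) - dmin) / (dmax - dmin) - 1.0
    return np.stack([legendre.legval(t, np.eye(order + 1)[j])
                     for j in range(order + 1)], axis=1)
```

Raising `order` adds one column (one parameter per animal) at a time, which is exactly the fit-versus-computing-capacity trade-off the abstract describes.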
DEFF Research Database (Denmark)
Petersen, Jørgen Holm
2016-01-01
This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...
Comparing Fuzzy Sets and Random Sets to Model the Uncertainty of Fuzzy Shorelines
Dewi, Ratna Sari; Bijker, Wietske; Stein, Alfred
2017-01-01
This paper addresses uncertainty modelling of shorelines by comparing fuzzy sets and random sets. Both methods quantify extensional uncertainty of shorelines extracted from remote sensing images. Two datasets were tested: pan-sharpened Pleiades with four bands (Pleiades) and pan-sharpened Pleiades
Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors
International Nuclear Information System (INIS)
Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z
2015-01-01
Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real-world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG-based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)
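The effect described, patient-to-patient variability in random error widening the population-coverage margin, can be sketched as follows; the IG parameters and the van Herk-style recipe 2.5Σ + 0.7σ are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def margins(n_patients=10000, Sigma=2.0, a=4.0, b=12.0):
    """Draw per-patient random-error SDs sigma_p with
    sigma_p^2 ~ InverseGamma(a, b), apply a van Herk-style per-patient
    margin 2.5*Sigma + 0.7*sigma_p, and return (i) the margin needed to
    cover 90% of patients and (ii) the naive margin obtained by plugging
    the mean sigma into the same recipe for everyone."""
    sigma2 = b / rng.gamma(a, 1.0, n_patients)   # inverse-gamma draws
    sigma = np.sqrt(sigma2)
    per_patient = 2.5 * Sigma + 0.7 * sigma
    return (float(np.quantile(per_patient, 0.90)),
            float(2.5 * Sigma + 0.7 * sigma.mean()))
```

Because the inverse gamma is right-skewed, the 90th-percentile margin exceeds the mean-plug-in margin, which is the qualitative reason the IG model mandates wider margins.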
International Nuclear Information System (INIS)
Mishchenko, Michael I.; Dlugach, Janna M.; Yurkin, Maxim A.; Bi, Lei; Cairns, Brian; Liu, Li; Panetta, R. Lee; Travis, Larry D.; Yang, Ping; Zakharova, Nadezhda T.
2016-01-01
A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development
Random regression models for daily feed intake in Danish Duroc pigs
DEFF Research Database (Denmark)
Strathe, Anders Bjerring; Mark, Thomas; Jensen, Just
The objective of this study was to develop random regression models and estimate covariance functions for daily feed intake (DFI) in Danish Duroc pigs. A total of 476201 DFI records were available on 6542 Duroc boars between 70 and 160 days of age. The data originated from the National test station......-year-season, permanent, and animal genetic effects. The functional form was based on Legendre polynomials. A total of 64 models for random regressions were initially ranked by BIC to identify the approximate order for the Legendre polynomials using AI-REML. The parsimonious model included Legendre polynomials of 2nd...... order for genetic and permanent environmental curves and a heterogeneous residual variance, allowing the daily residual variance to change along the age trajectory due to scale effects. The parameters of the model were estimated in a Bayesian framework, using the RJMC module of the DMU package, where...
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
Peak Dose Assessment for Proposed DOE-PPPO Authorized Limits
International Nuclear Information System (INIS)
Maldonado, Delis
2012-01-01
The Oak Ridge Institute for Science and Education (ORISE), a U.S. Department of Energy (DOE) prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct a peak dose assessment in support of the Authorized Limits Request for Solid Waste Disposal at Landfill C-746-U at the Paducah Gaseous Diffusion Plant (DOE-PPPO 2011a). The peak doses were calculated based on the DOE-PPPO Proposed Single Radionuclides Soil Guidelines and the DOE-PPPO Proposed Authorized Limits (AL) Volumetric Concentrations available in DOE-PPPO 2011a. This work is provided as an appendix to the Dose Modeling Evaluations and Technical Support Document for the Authorized Limits Request for the C-746-U Landfill at the Paducah Gaseous Diffusion Plant, Paducah, Kentucky (ORISE 2012). The receptors evaluated in ORISE 2012 were selected by the DOE-PPPO for the additional peak dose evaluations. These receptors included a Landfill Worker, Trespasser, Resident Farmer (onsite), Resident Gardener, Recreational User, Outdoor Worker and an Offsite Resident Farmer. The RESRAD (Version 6.5) and RESRAD-OFFSITE (Version 2.5) computer codes were used for the peak dose assessments. Deterministic peak dose assessments were performed for all the receptors and a probabilistic dose assessment was performed only for the Offsite Resident Farmer at the request of the DOE-PPPO. In a deterministic analysis, a single input value results in a single output value. In other words, a deterministic analysis uses single parameter values for every variable in the code. By contrast, a probabilistic approach assigns parameter ranges to certain variables, and the code randomly selects the values for each variable from the parameter range each time it calculates the dose (NRC 2006). The receptor scenarios, computer codes and parameter input files were previously used in ORISE 2012. A few modifications were made to the parameter input files as appropriate for this effort. Some of these changes
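The deterministic versus probabilistic contrast described in this record can be sketched in a few lines. This is a toy dose model with made-up parameter ranges, not the RESRAD or RESRAD-OFFSITE implementations:

```python
import random

def dose(concentration, intake_rate, dose_factor):
    # Toy dose model: dose = concentration x intake rate x conversion factor
    # (illustrative only; real codes chain many pathway-specific factors).
    return concentration * intake_rate * dose_factor

# Deterministic analysis: a single input value per parameter -> one output.
d_det = dose(concentration=2.0, intake_rate=0.5, dose_factor=1.3)

# Probabilistic analysis: parameter ranges are assigned and a value is
# sampled from each range on every realization (here 10 000 realizations).
random.seed(1)
doses = sorted(
    dose(2.0, random.uniform(0.2, 0.8), random.uniform(1.0, 1.6))
    for _ in range(10_000)
)
peak95 = doses[int(0.95 * len(doses))]  # 95th-percentile dose
```

A deterministic run answers with one number; the probabilistic run yields a distribution of doses from which percentiles such as `peak95` can be reported.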
A random point process model for the score in sport matches
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2009-01-01
Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf
Peak Oil and other threatening peaks-Chimeras without substance
International Nuclear Information System (INIS)
Radetzki, Marian
2010-01-01
The Peak Oil movement has widely spread its message about an impending peak in global oil production, caused by an inadequate resource base. On closer scrutiny, the underlying analysis is inconsistent, void of a theoretical foundation and without support in empirical observations. Global oil resources are huge and expanding, and pose no threat to continuing output growth within an extended time horizon. In contrast, temporary or prolonged supply crunches are indeed plausible, even likely, on account of growing resource nationalism denying access to efficient exploitation of the existing resource wealth.
Electricity Portfolio Management: Optimal Peak / Off-Peak Allocations
Huisman, Ronald; Mahieu, Ronald; Schlichter, Felix
2007-01-01
Electricity purchasers manage a portfolio of contracts in order to purchase the expected future electricity consumption profile of a company or a pool of clients. This paper proposes a mean-variance framework to address the concept of structuring the portfolio and focuses on how to allocate optimal positions in peak and off-peak forward contracts. It is shown that the optimal allocations are based on the difference in risk premiums per unit of day-ahead risk as a measure of relati...
Ultrasonic Transducer Peak-to-Peak Optical Measurement
Directory of Open Access Journals (Sweden)
Pavel Skarvada
2012-01-01
Possible optical setups for measurement of the peak-to-peak value of an ultrasonic transducer are described in this work. The Michelson interferometer with a calibrated nanopositioner in the reference path and a laser Doppler vibrometer were used for the basic measurement of vibration displacement. A Langevin-type ultrasonic transducer is used for the purposes of Electro-Ultrasonic Nonlinear Spectroscopy (EUNS). The parameters of the produced mechanical vibration have to be well known for EUNS. Moreover, monitoring of the mechanical vibration frequency shift with a mass load and the sample-transducer coupling is important for EUNS measurement.
Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R
2016-10-01
Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters, 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
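The two-parameter structure described in this record (one delay discounting parameter, one choice variability parameter) can be sketched with a hyperbolic value function and a logistic choice rule. The parameter names and the logistic form are illustrative assumptions, not necessarily the authors' exact specification:

```python
import math

def hyperbolic_value(amount, delay, k):
    # Hyperbolic discounting: subjective value of a reward after a delay.
    return amount / (1.0 + k * delay)

def p_choose_delayed(amount_now, amount_later, delay, k, theta):
    # Random utility choice rule: the probability of taking the delayed
    # option is logistic in the discounted value difference; theta governs
    # choice variability (small theta = noisy, near-random choices).
    dv = hyperbolic_value(amount_later, delay, k) - amount_now
    return 1.0 / (1.0 + math.exp(-theta * dv))

def log_likelihood(data, k, theta):
    # Every observed choice contributes to the fit (1 = delayed chosen),
    # unlike indifference-point methods, which discard most responses.
    ll = 0.0
    for now, later, delay, chose_delayed in data:
        p = p_choose_delayed(now, later, delay, k, theta)
        ll += math.log(p if chose_delayed else 1.0 - p)
    return ll
```

Maximizing `log_likelihood` over (k, theta), or placing priors on them, gives the maximum-likelihood and Bayesian estimates the abstract refers to.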
Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.
Vickers, D; Smith, P
1985-01-01
In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.
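An accumulator process of the kind discussed in this record can be sketched as a race between two evidence counters; the parameters (hit probability 0.6, criterion 5) are illustrative, and exact predictions would follow from the negative-binomial race structure rather than simulation:

```python
import random

def accumulator_trial(p_a=0.6, criterion=5, rng=random):
    # Each observation increments counter A with probability p_a, else
    # counter B; a response is emitted when either counter first reaches
    # the criterion. Returns (response, number of observations taken).
    a = b = n = 0
    while a < criterion and b < criterion:
        n += 1
        if rng.random() < p_a:
            a += 1
        else:
            b += 1
    return ("A" if a >= criterion else "B", n)

random.seed(7)
trials = [accumulator_trial() for _ in range(5000)]
accuracy = sum(resp == "A" for resp, _ in trials) / len(trials)
mean_n = sum(n for _, n in trials) / len(trials)
```

Confidence in such models is often tied to the difference between the two counter totals at the moment of response.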
Autcha Araveeporn
2013-01-01
This paper compares a Least-Squared Random Coefficient Autoregressive (RCA) model with a Least-Squared RCA model based on Autocorrelated Errors (RCA-AR). We looked at only the first order models, denoted RCA(1) and RCA(1)-AR(1). The efficiency of the Least-Squared method was checked by applying the models to Brownian motion and Wiener process, and the efficiency followed closely the asymptotic properties of a normal distribution. In a simulation study, we compared the performance of RCA(1) an...
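A first-order RCA process and its least-squared coefficient estimate can be sketched as follows (hypothetical parameter values; stationarity requires phi^2 + sd_b^2 < 1):

```python
import random

def simulate_rca1(n, phi, sd_b, sd_e, rng):
    # RCA(1): x_t = (phi + b_t) * x_{t-1} + e_t, where b_t and e_t are
    # independent zero-mean normals; the AR coefficient is itself random.
    x = [0.0]
    for _ in range(n):
        coeff = phi + rng.gauss(0.0, sd_b)
        x.append(coeff * x[-1] + rng.gauss(0.0, sd_e))
    return x

rng = random.Random(42)
x = simulate_rca1(20_000, phi=0.3, sd_b=0.2, sd_e=1.0, rng=rng)

# Least-squared estimate of phi: regress x_t on x_{t-1} without intercept.
# The random coefficient inflates the residual noise, but the estimator
# remains consistent for the mean coefficient phi.
num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
phi_hat = num / den
```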
International Nuclear Information System (INIS)
Liu Lianshou; Zhang Yang; Wu Yuanfang
1996-01-01
The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently in use have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which gives a good scaling property also for a continuously varying scale. It turns out that the strip integral has a good scaling property provided the integral regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
A theory of solving TAP equations for Ising models with general invariant random matrices
DEFF Research Database (Denmark)
Opper, Manfred; Çakmak, Burak; Winther, Ole
2016-01-01
We consider the problem of solving TAP mean field equations by iteration for Ising models with coupling matrices that are drawn at random from general invariant ensembles. We develop an analysis of iterative algorithms using a dynamical functional approach that in the thermodynamic limit yields...... the iteration dependent on a Gaussian distributed field only. The TAP magnetizations are stable fixed points if a de Almeida–Thouless stability criterion is fulfilled. We illustrate our method explicitly for coupling matrices drawn from the random orthogonal ensemble....
Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan
2016-07-01
The accurate identification of encrypted data stream helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on randomness characteristics of encrypted data stream. We use a l1-norm regularized logistic regression to improve sparse representation of randomness features and Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
Generalized random walk algorithm for the numerical modeling of complex diffusion processes
Vamos, C; Vereecken, H
2003-01-01
A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
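The key step, scattering all particles of a node at once via a Bernoulli repartition instead of moving them one by one, can be sketched for simple 1-D diffusion. The `binomial` helper below is a naive Bernoulli sum kept for self-containment; an efficient binomial sampler is what makes the memory and time savings real:

```python
import random
from collections import Counter

def binomial(n, p, rng):
    # Naive binomial sample as a sum of Bernoulli trials (a stand-in for a
    # proper constant-time binomial sampler).
    return sum(rng.random() < p for _ in range(n))

def grw_step(counts, rng):
    # One global-random-walk step: the n particles at each node are split
    # with a single Binomial(n, 1/2) draw for the right-movers, instead of
    # n individual jumps.
    new = Counter()
    for node, n in counts.items():
        right = binomial(n, 0.5, rng)
        new[node + 1] += right
        new[node - 1] += n - right
    return new

rng = random.Random(0)
counts = Counter({0: 100_000})        # all particles start at the origin
for _ in range(50):
    counts = grw_step(counts, rng)

total = sum(counts.values())
mean = sum(node * k for node, k in counts.items()) / total
var = sum(k * (node - mean) ** 2 for node, k in counts.items()) / total
# For simple diffusion the spread matches the individual-walker result:
# variance grows like the number of steps.
```

Memory scales with the number of occupied grid nodes rather than with the number of particles.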
Generalized random walk algorithm for the numerical modeling of complex diffusion processes
International Nuclear Information System (INIS)
Vamos, Calin; Suciu, Nicolae; Vereecken, Harry
2003-01-01
A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested
GLOBAL RANDOM WALK SIMULATIONS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS OF PASSIVE TRANSPORT MODELS
Directory of Open Access Journals (Sweden)
Nicolae Suciu
2011-07-01
The Global Random Walk algorithm (GRW) performs a simultaneous tracking on a fixed grid of huge numbers of particles at costs comparable to those of a single-trajectory simulation by the traditional Particle Tracking (PT) approach. Statistical ensembles of GRW simulations of a typical advection-dispersion process in groundwater systems with randomly distributed spatial parameters are used to obtain reliable estimations of the input parameters for the upscaled transport model and of their correlations, input-output correlations, as well as full probability distributions of the input and output parameters.
Anderson localization through Polyakov loops: Lattice evidence and random matrix model
International Nuclear Information System (INIS)
Bruckmann, Falk; Schierenberg, Sebastian; Kovacs, Tamas G.
2011-01-01
We investigate low-lying fermion modes in SU(2) gauge theory at temperatures above the phase transition. Both staggered and overlap spectra reveal transitions from chaotic (random matrix) to integrable (Poissonian) behavior accompanied by an increasing localization of the eigenmodes. We show that the latter are trapped by local Polyakov loop fluctuations. Islands of such ''wrong'' Polyakov loops can therefore be viewed as defects leading to Anderson localization in gauge theories. We find strong similarities in the spatial profile of these localized staggered and overlap eigenmodes. We discuss possible interpretations of this finding and present a sparse random matrix model that reproduces these features.
International Nuclear Information System (INIS)
Kirsch, W.; Martinelli, F.
1981-01-01
After the derivation of weak conditions under which the potential for the Schroedinger operator is well defined, the authors state an ergodicity assumption of this potential which ensures that the spectrum of this operator is a fixed non-random set. Then random point interaction Hamiltonians are considered in this framework. Finally the authors consider a model where for sufficiently small fluctuations around the equilibrium positions a finite number of gaps appears. (HSI)
Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile
Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.
2012-09-01
Most superdiffusive Non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question, by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half the present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
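The memory mechanism described in this record can be sketched as follows: at each step the walker recalls one past step, drawn with Gaussian weights centred at half its current age, and repeats it with probability p or reverses it otherwise (p and the relative width are illustrative values, not the paper's):

```python
import math, random

def gaussian_memory_walk(n_steps, p=0.7, width=0.25, rng=random):
    # Non-Markovian walk with a Gaussian memory profile: past step k is
    # recalled with weight exp(-(k - t/2)^2 / (2 * (width*t)^2)), i.e.
    # memory is centred at time t/2 and its standard deviation grows
    # linearly as the walker ages.
    steps = [1 if rng.random() < 0.5 else -1]     # first step unbiased
    for t in range(1, n_steps):
        sigma = max(width * t, 1e-9)
        weights = [math.exp(-((k - t / 2) ** 2) / (2 * sigma ** 2))
                   for k in range(t)]
        recalled = steps[rng.choices(range(t), weights=weights)[0]]
        steps.append(recalled if rng.random() < p else -recalled)
    return steps

rng = random.Random(3)
steps = gaussian_memory_walk(400, rng=rng)
position = sum(steps)
```

Per the abstract, a large `width` makes the model behave like the Elephant walk, while a small `width`, concentrating recall near t/2 and forgetting the initial times, reproduces Alzheimer-walk behaviour.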
International Nuclear Information System (INIS)
Morioka, Noboru; Kato, Yasuji; Yokoi, M.
1975-01-01
The output peaking factor often plays an important role in the safety and operation of nuclear reactors. The meaning of the peaking factor of PWRs is categorized into two features: the peaking factor in the core (FQ-core) and the peaking factor on the basis of accident analysis (FQ-limit). FQ-core is the actual peaking factor realized in the nuclear core during normal operation, and FQ-limit should be evaluated from loss-of-coolant accidents and other abnormal conditions. If FQ-core is lower than FQ-limit, the reactor may be operated at full load, but if FQ-core is larger than FQ-limit, the reactor output should be kept below FQ-limit. FQ-core has two kinds of values: one based on the nuclear design and one actually measured in reactor operation, named FQ-core-design and FQ-core-measured respectively. The numerical evaluation of FQ-core-design is as follows: the three-dimensional FQ-core-design is synthesized from the horizontal (X-Y) FQ-core value and the vertical FQ-core value; the former is calculated with the ASSY-CORE code, and the latter with a one-dimensional diffusion code. For the evaluation of FQ-core-measured, on-site data observation from the nuclear reactor instrumentation or off-site data observation is used. (Iwase, T.)
Assessing robustness of designs for random effects parameters for nonlinear mixed-effects models.
Duffull, Stephen B; Hooker, Andrew C
2017-12-01
Optimal designs for nonlinear models are dependent on the choice of parameter values. Various methods have been proposed to provide designs that are robust to uncertainty in the prior choice of parameter values. These methods are generally based on estimating the expectation of the determinant (or a transformation of the determinant) of the information matrix over the prior distribution of the parameter values. For high dimensional models this can be computationally challenging. For nonlinear mixed-effects models the question arises as to the importance of accounting for uncertainty in the prior value of the variances of the random effects parameters. In this work we explore the influence of the variance of the random effects parameters on the optimal design. We find that the method for approximating the expectation and variance of the likelihood is of potential importance for considering the influence of random effects. The most common approximation to the likelihood, based on a first-order Taylor series approximation, yields designs that are relatively insensitive to the prior value of the variance of the random effects parameters and under these conditions it appears to be sufficient to consider uncertainty on the fixed-effects parameters only.
The spatial resolution of epidemic peaks.
Directory of Open Access Journals (Sweden)
Harriet L Mills
2014-04-01
The emergence of novel respiratory pathogens can challenge the capacity of key health care resources, such as intensive care units, that are constrained to serve only specific geographical populations. An ability to predict the magnitude and timing of peak incidence at the scale of a single large population would help to accurately assess the value of interventions designed to reduce that peak. However, current disease-dynamic theory does not provide a clear understanding of the relationship between: epidemic trajectories at the scale of interest (e.g. city); population mobility; and higher-resolution spatial effects (e.g. transmission within small neighbourhoods). Here, we used a spatially-explicit stochastic meta-population model of arbitrary spatial resolution to determine the effect of resolution on model-derived epidemic trajectories. We simulated an influenza-like pathogen spreading across theoretical and actual population densities and varied our assumptions about mobility using Latin-Hypercube sampling. Even though, by design, cumulative attack rates were the same for all resolutions and mobilities, peak incidences were different. Clear thresholds existed for all tested populations, such that models with resolutions lower than the threshold substantially overestimated population-wide peak incidence. The effect of resolution was most important in populations which were of lower density and lower mobility. With the expectation of accurate spatial incidence datasets in the near future, our objective was to provide a framework for how to use these data correctly in a spatial meta-population model. Our results suggest that there is a fundamental spatial resolution for any pathogen-population pair. If underlying interactions between pathogens and spatially heterogeneous populations are represented at this resolution or higher, accurate predictions of peak incidence for city-scale epidemics are feasible.
How to use your peak flow meter
Alternative names: peak flow meter - how to use; asthma - peak flow meter; reactive airway disease - peak flow meter; bronchial asthma - peak flow meter. References (truncated): ... 2014:chap 55; National Asthma Education and Prevention Program website, How to use a peak flow meter. ...
New constraints on modelling the random magnetic field of the MW
Energy Technology Data Exchange (ETDEWEB)
Beck, Marcus C.; Nielaba, Peter [Department of Physics, University of Konstanz, Universitätsstr. 10, D-78457 Konstanz (Germany); Beck, Alexander M.; Dolag, Klaus [University Observatory Munich, Scheinerstr. 1, D-81679 Munich (Germany); Beck, Rainer [Max Planck Institute for Radioastronomy, Auf dem Hügel 69, D-53121 Bonn (Germany); Strong, Andrew W., E-mail: marcus.beck@uni-konstanz.de, E-mail: abeck@usm.uni-muenchen.de, E-mail: rbeck@mpifr-bonn.mpg.de, E-mail: dolag@usm.uni-muenchen.de, E-mail: aws@mpe.mpg.de, E-mail: peter.nielaba@uni-konstanz.de [Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, D-85748 Garching (Germany)
2016-05-01
We extend the description of the isotropic and anisotropic random component of the small-scale magnetic field within the existing magnetic field model of the Milky Way from Jansson and Farrar, by including random realizations of the small-scale component. Using a magnetic-field power spectrum with Gaussian random fields, the NE2001 model for the thermal electrons and the Galactic cosmic-ray electron distribution from the current GALPROP model we derive full-sky maps for the total and polarized synchrotron intensity as well as the Faraday rotation-measure distribution. While previous work assumed that small-scale fluctuations average out along the line-of-sight or only computed ensemble averages of random fields, we show that these fluctuations need to be carefully taken into account. Comparing with observational data we obtain not only good agreement with 408 MHz total and WMAP7 22 GHz polarized intensity emission maps, but also an improved agreement with Galactic foreground rotation-measure maps and power spectra, whose amplitude and shape strongly depend on the parameters of the random field. We demonstrate that a correlation length of ≈22 pc (5 pc being a 5σ lower limit) is needed to match the slope of the observed power spectrum of Galactic foreground rotation-measure maps. Using multiple realizations allows us also to infer errors on individual observables. We find that previously-used amplitudes for random and anisotropic random magnetic field components need to be rescaled by factors of ≈0.3 and 0.6 to account for the new small-scale contributions. Our model predicts a rotation measure of −2.8±7.1 rad/m{sup 2} and 4.4±11. rad/m{sup 2} for the north and south Galactic poles respectively, in good agreement with observations. Applying our model to deflections of ultra-high-energy cosmic rays we infer a mean deflection of ≈3.5±1.1 degree for 60 EeV protons arriving from CenA.
Tests of peak flow scaling in simulated self-similar river networks
Menabde, M.; Veitzer, S.; Gupta, V.; Sivapalan, M.
2001-01-01
The effect of linear flow routing incorporating attenuation and network topology on peak flow scaling exponent is investigated for an instantaneously applied uniform runoff on simulated deterministic and random self-similar channel networks. The flow routing is modelled by a linear mass conservation equation for a discrete set of channel links connected in parallel and series, and having the same topology as the channel network. A quasi-analytical solution for the unit hydrograph is obtained in terms of recursion relations. The analysis of this solution shows that the peak flow has an asymptotically scaling dependence on the drainage area for deterministic Mandelbrot-Vicsek (MV) and Peano networks, as well as for a subclass of random self-similar channel networks. However, the scaling exponent is shown to be different from that predicted by the scaling properties of the maxima of the width functions. © 2001 Elsevier Science Ltd. All rights reserved.
Statistics of Microstructure, Peak Stress and Interface Damage in Fiber Reinforced Composites
DEFF Research Database (Denmark)
Kushch, Volodymyr I.; Shmegera, Sergii V.; Mishnaevsky, Leon
2009-01-01
This paper addresses an effect of the fiber arrangement and interactions on the peak interface stress statistics in a fiber reinforced composite material (FRC). The method we apply combines the multipole expansion technique with the representative unit cell model of composite bulk, which is able...... to simulate both the uniform and clustered random fiber arrangements. By averaging over a number of numerical tests, the empirical probability functions have been obtained for the nearest neighbor distance and the peak interface stress. It is shown that the considered statistical parameters are rather...... sensitive to the fiber arrangement, particularly cluster formation. An explicit correspondence between them has been established and an analytical formula linking the microstructure and peak stress statistics in FRCs has been suggested. Application of the statistical theory of extreme values to the local...
The Schwinger Dyson equations and the algebra of constraints of random tensor models at all orders
International Nuclear Information System (INIS)
Gurau, Razvan
2012-01-01
Random tensor models for a generic complex tensor generalize matrix models in arbitrary dimensions and yield a theory of random geometries. They support a 1/N expansion dominated by graphs of spherical topology. Their Schwinger Dyson equations, generalizing the loop equations of matrix models, translate into constraints satisfied by the partition function. The constraints have been shown, in the large N limit, to close a Lie algebra indexed by colored rooted D-ary trees yielding a first generalization of the Virasoro algebra in arbitrary dimensions. In this paper we complete the Schwinger Dyson equations and the associated algebra at all orders in 1/N. The full algebra of constraints is indexed by D-colored graphs, and the leading order D-ary tree algebra is a Lie subalgebra of the full constraints algebra.
Bayesian analysis for exponential random graph models using the adaptive exchange sampler
Jin, Ick Hoon
2013-01-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the existence of intractable normalizing constants. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Effect of disorder on condensation in the lattice gas model on a random graph.
Handford, Thomas P; Dear, Alexander; Pérez-Reche, Francisco J; Taraskin, Sergei N
2014-07-01
The lattice gas model of condensation in a heterogeneous pore system, represented by a random graph of cells, is studied using an exact analytical solution. A binary mixture of pore cells with different coordination numbers is shown to exhibit two phase transitions as a function of chemical potential in a certain temperature range. Heterogeneity in interaction strengths is demonstrated to reduce the critical temperature and, for large enough degrees of disorder, divides the cells into ones which are either on average occupied or unoccupied. Despite treating the pore space loops in a simplified manner, the random-graph model provides a good description of condensation in porous structures containing loops. This is illustrated by considering capillary condensation in a structural model of mesoporous silica SBA-15.
Directory of Open Access Journals (Sweden)
Huibing Hao
2015-01-01
The light emitting diode (LED) lamp has attracted increasing interest in the field of lighting systems due to its low energy and long lifetime. For different functions (i.e., illumination and color), it may have two or more performance characteristics. When the multiple performance characteristics are dependent, it creates a challenging problem to accurately analyze the system reliability. In this paper, we assume that the system has two performance characteristics, and each performance characteristic is governed by a random effects Gamma process where the random effects can capture the unit to unit differences. The dependency of performance characteristics is described by a Frank copula function. Via the copula function, the reliability assessment model is proposed. Considering the model is so complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example about actual LED lamps data is given to demonstrate the usefulness and validity of the proposed model and method.
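Frank-copula dependence of the kind used in this record can be sketched by conditional-inversion sampling; theta = 8 is an illustrative value for strong positive dependence, and in the paper's setting each uniform would then be mapped through the marginal distribution of its random effects Gamma-process increment:

```python
import math, random

def frank_conditional(u, p, theta):
    # Sample v given u from a Frank copula by inverting the conditional
    # CDF C(v | u); p is an independent uniform draw (theta != 0).
    num = p * (math.exp(-theta) - 1.0)
    den = p + (1.0 - p) * math.exp(-theta * u)
    return -math.log(1.0 + num / den) / theta

rng = random.Random(0)
theta = 8.0
us, vs = [], []
for _ in range(2000):
    u = rng.random()
    us.append(u)
    vs.append(frank_conditional(u, rng.random(), theta))

# Sample correlation of the two uniform margins: positive theta induces
# positive dependence between the two performance characteristics.
n = len(us)
mu, mv = sum(us) / n, sum(vs) / n
su = math.sqrt(sum((a - mu) ** 2 for a in us) / n)
sv = math.sqrt(sum((b - mv) ** 2 for b in vs) / n)
r = sum((a - mu) * (b - mv) for a, b in zip(us, vs)) / (n * su * sv)
```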
A special covariance structure for random coefficient models with both between and within covariates
International Nuclear Information System (INIS)
Riedel, K.S.
1990-07-01
We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients, β_k, can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
Estimating required information size by quantifying diversity in random-effects model meta-analyses
DEFF Research Database (Denmark)
Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper
2009-01-01
… an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D²) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D² is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and […], and is interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D² ≥ I², for all meta-analyses. CONCLUSION: We conclude that D² seems a better alternative than I² to consider model variation in any random-effects model meta-analysis.
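The diversity measure can be illustrated with a small sketch. This assumes standard inverse-variance pooling and the DerSimonian-Laird estimator for the between-trial variance (the abstract does not specify the estimator): D² is the relative variance reduction when moving from the random-effects to the fixed-effect pooled estimate.

```python
def meta_variances(effects, variances):
    """Fixed-effect and DerSimonian-Laird random-effects pooled-estimate
    variances, plus the diversity D2 and inconsistency I2 statistics."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu_fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - mu_fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)        # DL between-trial variance
    v_fixed = 1.0 / sw
    v_random = 1.0 / sum(1.0 / (v + tau2) for v in variances)
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    d2 = (v_random - v_fixed) / v_random if v_random > 0 else 0.0
    return v_fixed, v_random, d2, i2

# Toy data: per-trial effect estimates and within-trial variances
vf, vr, d2, i2 = meta_variances([0.1, 0.3, 0.5, 0.2], [0.01, 0.02, 0.015, 0.01])
print(d2, i2)   # for any meta-analysis, D2 >= I2 as the abstract states
```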
Application of random number generators in genetic algorithms to improve rainfall-runoff modelling
Chlumecký, Martin; Buchtele, Josef; Richta, Karel
2017-10-01
The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists; fast, high-quality model calibration is therefore a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff modelling using a genetic algorithm (GA) with a newly prepared concept of a random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality random number generator. The new HRNG generates random numbers based on hydrological information and provides better numbers than pure software generators. The GA enhances the model calibration very well, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results that we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement in rainfall-runoff modelling.
Peak effect in twinned superconductors
International Nuclear Information System (INIS)
Larkin, A.I.; Marchetti, M.C.; Vinokur, V.M.
1995-01-01
A sharp maximum in the critical current J c as a function of temperature just below the melting point of the Abrikosov flux lattice has recently been observed in both low- and high-temperature superconductors. This peak effect is strongest in twinned crystals for fields aligned with the twin planes. We propose that this peak signals the breakdown of the collective pinning regime and the crossover to strong pinning of single vortices on the twin boundaries. This crossover is very sharp and can account for the steep drop of the differential resistivity observed in experiments. copyright 1995 The American Physical Society
Random regret-based discrete-choice modelling: an application to healthcare.
de Bekker-Grob, Esther W; Chorus, Caspar G
2013-07-01
A new modelling approach for analysing data from discrete-choice experiments (DCEs) has recently been developed in transport economics based on the notion of regret minimization-driven choice behaviour. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects, while being as parsimonious and formally tractable as the RUM approach. Our objectives were to introduce the RRM modelling approach to healthcare-related decisions, and to investigate its usefulness in this domain. Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model outperformed the RUM model (p < […]), while the trade-offs implied by the two models can vary substantially. Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential of offering new and policy-relevant insights for health researchers and policy makers.
SHER: A Colored Petri Net Based Random Mobility Model for Wireless Communications
Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal
2015-01-01
In wireless network research, simulation is the most important technique for investigating and validating network behaviour. Wireless networks typically consist of mobile hosts; the degree of validation is therefore influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part of the system, and the key role of a mobility model is to mimic real-life travelling patterns. The performance of routing protocols and mobility management strategies, e.g. paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits the sociological behaviour of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause-time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, and users can extract meaningful information with a single mouse click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and demanding activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behaviour and rectify the […]
Depletion benchmarks calculation of random media using explicit modeling approach of RMC
International Nuclear Information System (INIS)
Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan
2016-01-01
Highlights: • Explicit modeling of RMC is applied to depletion benchmark for HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of adjacent burnup regions combination is proposed for full-core problems. • The combination method can reduce memory footprint, keeping the computing accuracy. - Abstract: The Monte Carlo method plays an important role in accurate simulation of random media, owing to its advantages of flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods, including the Random Lattice Method, Chord Length Sampling and an explicit modeling approach with a mesh acceleration technique, have been implemented in RMC to simulate particle transport in dispersed fuels, in which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for the HTGR fuel element, and the method of combining adjacent burnup regions has been proposed and investigated. The results show that the explicit modeling can provide detailed burnup distributions of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.
Czech Academy of Sciences Publication Activity Database
Papáček, Š.; Matonoha, Ctirad; Štumbauer, V.; Štys, D.
2012-01-01
Roč. 82, č. 10 (2012), s. 2022-2032 ISSN 0378-4754. [Modelling 2009. IMACS Conference on Mathematical Modelling and Computational Methods in Applied Sciences and Engineering /4./. Rožnov pod Radhoštěm, 22.06.2009-26.06.2009] Grant - others:CENAKVA(CZ) CZ.1.05/2.1.00/01.0024; GA JU(CZ) 152//2010/Z Institutional research plan: CEZ:AV0Z10300504 Keywords : multiscale modelling * distributed parameter system * boundary value problem * random walk * photosynthetic factory Subject RIV: EI - Biotechnology ; Bionics Impact factor: 0.836, year: 2012
Hubbert's Peak -- A Physicist's View
McDonald, Richard
2011-04-01
Oil, as used in agriculture and transportation, is the lifeblood of modern society. It is finite in quantity and will someday be exhausted. In 1956, Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Bartlett extended this work in publications and lectures on the finite nature of oil and its production peak and depletion. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. Central to these analyses are estimates of total 'oil in place' obtained from engineering studies of oil reservoirs, as this quantity determines the area under Hubbert's Peak. Knowing the production history and the total oil in place allows us to make estimates of reserves, and therefore future oil availability. We will then examine reserves data for various countries, in particular OPEC countries, and see if these data tell us anything about the future availability of oil. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
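Hubbert's theory models cumulative production Q(t) as a logistic curve, so the production rate dQ/dt is its bell-shaped derivative and the total resource is the area under that curve. A minimal sketch with illustrative, not fitted, parameter values:

```python
import math

def hubbert_production(t, q_total, k, t_peak):
    """Production rate dQ/dt for logistic cumulative production
    Q(t) = q_total / (1 + exp(-k*(t - t_peak))).
    q_total: ultimate recoverable resource, k: growth rate, t_peak: peak year."""
    x = math.exp(-k * (t - t_peak))
    return q_total * k * x / (1.0 + x) ** 2

# Illustrative (hypothetical) parameters, not a fit to real reserve data:
q_total, k, t_peak = 2000.0, 0.05, 2005   # Gbbl, 1/yr, year
peak_rate = hubbert_production(t_peak, q_total, k, t_peak)
print(peak_rate)   # k * q_total / 4 = 25.0 at the peak year
```

The curve is symmetric about t_peak, which is why estimates of total oil in place pin down the timing of the peak once the rising half of the production history is known.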
International Nuclear Information System (INIS)
Zhang Yu; Wang Guangyi; Lu Xinmiao; Hu Yongcai; Xu Jiangtao
2016-01-01
The random telegraph signal noise in the pixel source-follower MOSFET is the principal component of the noise in a CMOS image sensor under low light. In this paper, a physical and statistical model of the random telegraph signal noise in the pixel source follower, based on the binomial distribution, is set up. The number of electrons captured or released by the oxide traps per unit time is described by random variables which obey a binomial distribution. As a result, the output states and the corresponding probabilities of the first and second samples of the correlated double sampling circuit are acquired, and the standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source-follower MOSFET have been simulated, and the simulation results show that the proposed model has statistical characteristics similar to existing models under the effect of the channel length and the density of the oxide traps. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. (paper)
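The two-sample correlated double sampling (CDS) picture described above can be sketched with a toy Monte Carlo. This is not the authors' binomial model: it assumes a single two-level trap per source follower with hypothetical capture/emission probabilities, and reports the distribution of the CDS difference over many pixels.

```python
import random

def simulate_cds_rts(n_pixels, p_capture, p_emit, amplitude, steps_between):
    """Toy RTS + CDS Monte Carlo: one oxide trap per source follower toggles
    between occupied/empty each time step; the CDS output is the difference of
    two samples separated by steps_between steps."""
    outputs = []
    for _ in range(n_pixels):
        # start from the trap's steady-state occupation probability
        occupied = random.random() < p_capture / (p_capture + p_emit)
        s1 = amplitude if occupied else 0.0
        for _ in range(steps_between):
            if occupied and random.random() < p_emit:
                occupied = False
            elif not occupied and random.random() < p_capture:
                occupied = True
        s2 = amplitude if occupied else 0.0
        outputs.append(s2 - s1)        # CDS removes the static offset
    return outputs

random.seed(1)
out = simulate_cds_rts(100_000, 0.02, 0.02, 1.0, 10)
mean = sum(out) / len(out)
var = sum((x - mean) ** 2 for x in out) / len(out)
print(mean, var ** 0.5)   # near-zero mean; spread set by the toggle probability
```

The CDS difference takes values -A, 0, +A; the histogram of these outputs is the kind of noise histogram the abstract refers to.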
International Nuclear Information System (INIS)
Fairchild, A J; Chirayath, V A; Gladen, R W; Chrysler, M D; Koymen, A R; Weiss, A H
2017-01-01
In this paper, we present results of numerical modelling of the University of Texas at Arlington's time-of-flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using the SIMION® 8.1 Ion and Electron Optics Simulator. The time-of-flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low-energy positrons with the sample surface. We have used SIMION® 8.1 to calculate the time-of-flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron-induced electron emission process, and have thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated by positrons incident with energies from 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed. (paper)
Energy Technology Data Exchange (ETDEWEB)
Barenboim, G.; Bernabeu, J.; Vives, O. [Universitat de Valencia, Departament de Fisica Teorica, Burjassot (Spain); Universitat de Valencia-CSIC, Parc Cientific U.V., IFIC, Paterna (Spain); Mitsou, V.A.; Romero, E. [Universitat de Valencia-CSIC, Parc Cientific U.V., IFIC, Paterna (Spain)
2016-02-15
Recently the ATLAS experiment announced a 3σ excess at the Z-peak consisting of 29 pairs of leptons together with two or more jets, E_T^miss > 225 GeV and H_T > 600 GeV, to be compared with 10.6 ± 3.2 expected lepton pairs in the Standard Model. No excess outside the Z-peak was observed. By trying to explain this signal with SUSY we find that only relatively light gluinos, m_g […]
International Nuclear Information System (INIS)
Mudry, Christopher; Wen Xiaogang
1999-01-01
Effective theories for random critical points are usually non-unitary, and thus may contain relevant operators with negative scaling dimensions. To study the consequences of the existence of negative-dimensional operators, we consider the random-bond XY model. It has been argued that the XY model on a square lattice, when weakly perturbed by random phases, has a quasi-long-range ordered phase (the random spin wave phase) at sufficiently low temperatures. We show that infinitely many relevant perturbations to the proposed critical action for the random spin wave phase were omitted in all previous treatments. The physical origin of these perturbations is intimately related to the existence of broadly distributed correlation functions. We find that those relevant perturbations do enter the Renormalization Group equations, and affect critical behavior. This raises the possibility that the random XY model has no quasi-long-range ordered phase and no Kosterlitz-Thouless (KT) phase transition.
Energy Technology Data Exchange (ETDEWEB)
Wollschlaeger, A.
1996-12-31
The presented particle-tracking model is for the numerical calculation of heavy metal transport in natural waters. The Navier-Stokes equations are solved with the finite element method. The advective movement of the particles is interpolated from the velocities on the discrete mesh. The influence of turbulence is simulated with a random-walk model in which particles are distributed according to a given probability function. Both parts are added and lead to the new particle position. The characteristics of the heavy metals are assigned to the particles as their attributes. Dissolved heavy metals are transported only by the flow; heavy metals which are bound to particulate matter have an additional settling velocity. The sorption and remobilization processes are approximated through a probability law which maintains the proportionality ratio between dissolved heavy metals and those bound to particulate matter. At the bed, heavy metals bound to particulate matter are subjected to deposition and erosion processes; the model treats these processes by considering the absorption intensity of the heavy metals to the bottom sediments. Calculations of the Weser estuary show that the particle-tracking model allows the simulation of heavy metal behaviour even under complex flow conditions. (orig.)
Krivitsky, Pavel N; Handcock, Mark S; Raftery, Adrian E; Hoff, Peter D
2009-07-01
Social network data often involve transitivity, homophily on observed attributes, clustering, and heterogeneity of actor degrees. We propose a latent cluster random effects model to represent all of these features, and we describe a Bayesian estimation method for it. The model is applicable to both binary and non-binary network data. We illustrate the model using two real datasets. We also apply it to two simulated network datasets with the same, highly skewed, degree distribution, but very different network behavior: one unstructured and the other with transitivity and clustering. Models based on degree distributions, such as scale-free, preferential attachment and power-law models, cannot distinguish between these very different situations, but our model does.
A new crack growth model for life prediction under random loading
International Nuclear Information System (INIS)
Lee, Ouk Sub; Chen, Zhi Wei
1999-01-01
The load-interaction effect in variable-amplitude fatigue tests is a very important issue for correctly predicting fatigue life. Some prediction methods for retardation are reviewed and their problems discussed. The so-called 'under-load' effect is also important for a prediction model to work properly under a random load spectrum. A new model, simple in form but combining overload plastic zone and residual stress considerations together with Elber's closure concept, is proposed to take full account of the load-interaction effects, including both over-load and under-load effects. Applying this new model to complex load sequences is explored here. Simulations of tests show the improvement of the new model over other models. The best prediction (most closely resembling the test curve) is given by the newly proposed Chen-Lee model.
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under the current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared-bias term, which can be estimated using hindcasts, and a model-variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
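A minimal sketch of the MSEP_uncertain(X) decomposition described above, under the simplifying assumption that the squared-bias term comes from paired observations and hindcasts, and the model-variance term from an ensemble of model runs (varying structure, parameters and inputs) at each prediction point:

```python
def msep_components(obs, hindcast, model_runs):
    """Estimate the two contributions to MSEP_uncertain(X):
    obs, hindcast: paired observed and hindcast values (squared-bias term);
    model_runs: per prediction point, a list of ensemble predictions
    (model-variance term). A simplified reading of the abstract's criterion."""
    n = len(obs)
    sq_bias = sum((o - h) ** 2 for o, h in zip(obs, hindcast)) / n
    var_terms = []
    for point in model_runs:
        m = sum(point) / len(point)
        var_terms.append(sum((p - m) ** 2 for p in point) / (len(point) - 1))
    model_var = sum(var_terms) / len(var_terms)
    return sq_bias, model_var, sq_bias + model_var

# Toy numbers: 2 hindcast pairs, 2 prediction points with 3-member ensembles
sb, mv, total = msep_components([1.0, 2.0], [1.5, 2.5],
                                [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
print(sb, mv, total)   # 0.25 1.0 1.25
```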
Directory of Open Access Journals (Sweden)
Keqin Yan
2017-01-01
Full Text Available This chapter presents a reliability study for an offshore jacket structure with emphasis on the features of nonconventional modeling. Firstly, a random set model is formulated for modeling the random waves at an ocean site. Then, a jacket structure is investigated in a pushover analysis to identify the critical wave direction and key structural elements, based on the ultimate base shear strength. The selected probabilistic models are adopted for the important structural members, and the wave direction is specified in the weakest direction of the structure for a conservative safety analysis. The wave height model is processed in a P-box format when it is used in the numerical analysis. The models are applied to find the bounds of the failure probabilities for the jacket structure. The propagation of this wave model to the uncertainty in the results is investigated in both an interval analysis and a Monte Carlo simulation. The results are compared in the context of information content and numerical accuracy. Further, the failure probability bounds are compared with the conventional probabilistic approach.
Collocation methods for uncertainty quantification in PDE models with random data
Nobile, Fabio
2014-01-06
In this talk we consider partial differential equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (polynomial chaos expansion) by collocation methods. We first address interpolatory approximations, where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained are interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches, where the PDE is evaluated on randomly chosen points in the probability space and a polynomial approximation is constructed by the least-squares method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions on the number of sampling points relative to the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that this condition is sharp in the univariate case but seems to be over-constraining in higher dimensions. The regression technique therefore seems attractive in higher dimensions.
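The least-squares regression approach can be sketched in one dimension. This is a toy sketch, not the talk's method: it assumes a uniform input on [-1, 1], a Legendre basis, a scalar stand-in for the PDE solution map, and the n ~ (dimension)² sampling rule mentioned in the abstract.

```python
import random

def legendre(k, x):
    # Legendre polynomials via the three-term recurrence
    # (orthogonal for a uniform random input on [-1, 1])
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def solve(a, b):
    # Gaussian elimination with partial pivoting (small dense systems only)
    n = len(b)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][c] * x[c] for c in range(i + 1, n))) / a[i][i]
    return x

def pc_regression(f, dim, n_samples, rng):
    # Evaluate the model f at random points, fit Legendre coefficients
    # by least squares via the normal equations
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]
    phi = [[legendre(k, x) for k in range(dim)] for x in xs]
    ata = [[sum(r[i] * r[j] for r in phi) for j in range(dim)] for i in range(dim)]
    atb = [sum(r[i] * f(x) for r, x in zip(phi, xs)) for i in range(dim)]
    return solve(ata, atb)

rng = random.Random(0)
# n ~ dim**2 samples, following the quadratic stability condition above
coeffs = pc_regression(lambda x: 1.0 + 2.0 * x, dim=4, n_samples=16, rng=rng)
print(coeffs)   # the linear model output is recovered exactly: [1, 2, 0, 0]
```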
Regularity of the Speed of Biased Random Walk in a One-Dimensional Percolation Model
Gantert, Nina; Meiners, Matthias; Müller, Sebastian
2018-03-01
We consider biased random walks on the infinite cluster of a conditional bond percolation model on the infinite ladder graph. Axelson-Fisk and Häggström established for this model a phase transition for the asymptotic linear speed v̄ of the walk. Namely, there exists some critical value λ_c > 0 such that v̄ > 0 if λ ∈ (0, λ_c) and v̄ = 0 if λ ≥ λ_c. We show that the speed v̄ is continuous in λ on (0, ∞) and differentiable on (0, λ_c/2). Moreover, we characterize the derivative as a covariance. For the proof of the differentiability of v̄ on (0, λ_c/2), we require and prove a central limit theorem for the biased random walk. Additionally, we prove that the central limit theorem fails to hold for λ ≥ λ_c/2.
Derrida's Generalized Random Energy Models 4: Continuous state branching and coalescents
Bovier, A
2003-01-01
In this paper we conclude our analysis of Derrida's Generalized Random Energy Models (GREM) by identifying the thermodynamic limit with a one-parameter family of probability measures related to a continuous state branching process introduced by Neveu. Using a construction introduced by Bertoin and Le Gall in terms of a coherent family of subordinators related to Neveu's branching process, we show how the Gibbs geometry of the limiting Gibbs measure is given in terms of the genealogy of this process via a deterministic time change. This construction is fully universal in that all the different models (characterized by the covariance of the underlying Gaussian process) differ only through that time change, which in turn is expressed in terms of Parisi's overlap distribution. The proof relies strongly on the Ghirlanda-Guerra identities, which impose the structure of Neveu's process as the only possible asymptotic random mechanism.
On Absence of Pure Singular Spectrum of Random Perturbations and in Anderson Model at Low Disorder
Grinshpun, V
2006-01-01
Absence of a singular component, with probability one, in the conductivity spectra of bounded random perturbations of multidimensional finite-difference Hamiltonians is for the first time rigorously established under certain conditions ensuring either absence of pure point, or absence of pure absolutely continuous, component in the corresponding regions of spectra. The main technical tool applied is the theory of rank-one perturbations of singular spectra. The respective new result (the non-mixing property) is applied to establish existence and bounds of the (non-empty) pure absolutely continuous component in the spectrum of the Anderson model with bounded random potential in dimension 2 at low disorder. The new (1999) result implies, via the trace-class perturbation analysis, that the Anderson model with unbounded potential has only pure point spectrum (a complete system of localized wave functions) with probability one in arbitrary dimension. The new techniques, based on the resolvent reduction formula, and ex…
International Nuclear Information System (INIS)
Fyodorov, Yan V; Bouchaud, Jean-Philippe
2008-01-01
We investigate some implications of the freezing scenario proposed by Carpentier and Le Doussal (CLD) for a random energy model (REM) with logarithmically correlated random potential. We introduce a particular (circular) variant of the model, and show that the integer moments of the partition function in the high-temperature phase are given by the well-known Dyson Coulomb gas integrals. The CLD freezing scenario allows one to use those moments for extracting the distribution of the free energy in both high- and low-temperature phases. In particular, it yields the full distribution of the minimal value in the potential sequence. This provides an explicit new class of extreme-value statistics for strongly correlated variables, manifestly different from the standard Gumbel class. (fast track communication)
A Bayesian Analysis of a Random Effects Small Business Loan Credit Scoring Model
Directory of Open Access Journals (Sweden)
Patrick J. Farrell
2011-09-01
Full Text Available One of the most important aspects of credit scoring is constructing a model that has low misclassification rates and is also flexible enough to allow for random variation. It is also well known that, when there are a large number of highly correlated variables, as is typical in studies involving questionnaire data, a method must be found to reduce the number of variables to those that have high predictive power. Here we propose a Bayesian multivariate logistic regression model with both fixed and random effects for small business loan credit scoring, and a variable reduction method using Bayes factors. The method is illustrated on an interesting data set based on questionnaires sent to loan officers in Canadian banks and venture capital companies.
Rényi Entropies from Random Quenches in Atomic Hubbard and Spin Models
Elben, A.; Vermersch, B.; Dalmonte, M.; Cirac, J. I.; Zoller, P.
2018-02-01
We present a scheme for measuring Rényi entropies in generic atomic Hubbard and spin models using single copies of a quantum state and for partitions in arbitrary spatial dimensions. Our approach is based on the generation of random unitaries from random quenches, implemented using engineered time-dependent disorder potentials, and standard projective measurements, as realized by quantum gas microscopes. By analyzing the properties of the generated unitaries and the role of statistical errors, with respect to the size of the partition, we show that the protocol can be realized in existing quantum simulators and used to measure, for instance, area law scaling of entanglement in two-dimensional spin models or the entanglement growth in many-body localized systems.
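The quantity the randomized-measurement protocol targets is the purity Tr(ρ²), from which the second Rényi entropy follows as S₂ = -log₂ Tr(ρ²). The random-quench estimation itself is beyond a short sketch, but the target quantity is easy to check directly for a known reduced state:

```python
import math

def renyi2(rho):
    # S_2 = -log2 Tr(rho^2); Tr(rho^2) is the purity that the randomized
    # measurements estimate without full state tomography
    d = len(rho)
    purity = sum(rho[i][j] * rho[j][i] for i in range(d) for j in range(d)).real
    return -math.log2(purity)

# Reduced one-qubit state of the Bell state (|00> + |11>)/sqrt(2):
rho_mixed = [[0.5, 0.0], [0.0, 0.5]]
print(renyi2(rho_mixed))   # 1.0: maximal Renyi-2 entanglement entropy of a qubit

rho_pure = [[1.0, 0.0], [0.0, 0.0]]
print(renyi2(rho_pure))    # 0.0: a product state carries no entanglement
```

In the protocol, the purity is reconstructed from the statistics of projective measurements after independently sampled random unitaries, which is what makes single copies of the state sufficient.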
Peaking for optimal performance: Research limitations and future directions.
Pyne, David B; Mujika, Iñigo; Reilly, Thomas
2009-02-01
A key element of the physical preparation of athletes is the taper period in the weeks immediately preceding competition. Existing research has defined the taper, identified various forms used in contemporary sport, and examined the prescription of training volume, load, intensity, duration, and type (progressive or step). Current limitations include: the lack of studies on team, combative, racquet, and precision (target) sports; the relatively small number of randomized controlled trials; the narrow focus on a single competition (single peak) compared with multiple peaking for weekly, multi-day or multiple events; and limited understanding of the physiological, neuromuscular, and biomechanical basis of the taper. Future research should address these limitations, together with the influence of prior training on optimal tapering strategies, and the interactions between the taper and long-haul travel, heat, and altitude. Practitioners seek information on how to prescribe tapers from season to season during an athlete's career, or a team's progression through a domestic league season, or multi-year Olympic or World Cup cycle. Practical guidelines for planning effective tapers for the Vancouver 2010 and London 2012 Olympics will evolve from both experimental investigations and modelling of successful tapers currently employed in a wide range of sports.
A Collective Study on Modeling and Simulation of Resistive Random Access Memory
Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen
2018-01-01
In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM), which, being a nascent technology, is heavily reliant on accurate models to develop efficient working designs and to standardize implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing RRAM device models. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from memristive systems are addressed, and the solutions the models provide to these problems are shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others are compared and analyzed extensively on various parameters. The working and implementation of window functions such as Joglekar, Biolek, and Prodromakis are presented and compared as well. New well-defined modeling concepts are discussed which increase the applicability and accuracy of the models; their use brings forth several improvements in the existing models, which are enumerated in this work. Following the template presented, highly accurate models can be developed, which will greatly help future model developers and the modeling community.
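As a minimal illustration of the window functions mentioned above, the sketch below implements the widely cited Joglekar window together with one Euler step of an HP-style memristor state equation dx/dt = k·i(t)·f(x). The constant k and the drive current are illustrative placeholders, not values from any specific device model in the review.

```python
def joglekar_window(x, p=2):
    """Joglekar window f(x) = 1 - (2x - 1)^(2p): equals 1 at mid-range
    and vanishes at the state boundaries x = 0 and x = 1, suppressing
    drift of the state variable out of its physical range."""
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

def memristor_step(x, current, dt, k=1e4, p=2):
    """One Euler step of dx/dt = k * i(t) * f(x); k lumps mobility and
    geometry (illustrative value, not from a datasheet)."""
    x_new = x + dt * k * current * joglekar_window(x, p)
    return min(max(x_new, 0.0), 1.0)  # clamp state to [0, 1]
```

The Biolek and Prodromakis windows differ only in the form of f(x) and would slot into the same state-update loop.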
Thermal behavior for a nanoscale two ferromagnetic phase system based on random anisotropy model
International Nuclear Information System (INIS)
Muraca, D.; Sanchez, F.H.; Pampillo, L.G.; Saccone, F.D.
2010-01-01
Advances in theory that explain the magnetic behavior as a function of temperature for two-phase nanocrystalline soft magnetic materials are presented. The theory developed is based on the well-known random anisotropy model, which includes the exchange stiffness and anisotropy energies in both the amorphous and crystalline phases. The phenomenological behavior of the coercivity was obtained in the temperature range between the Curie temperature of the amorphous phase and that of the crystalline phase.
Universality in invariant random-matrix models: Existence near the soft edge
International Nuclear Information System (INIS)
Kanzieper, E.; Freilikher, V.
1997-01-01
We consider two non-Gaussian ensembles of large Hermitian random matrices with strong level confinement and show that near the soft edge of the spectrum both the scaled density of states and the eigenvalue correlations follow the so-called Airy laws inherent in the Gaussian unitary ensemble. This suggests that invariant one-matrix models should display universal eigenvalue correlations in the soft-edge scaling limit. copyright 1997 The American Physical Society
Bastin, Catherine; Gillon, Alain; Massart, Xavier; Bertozzi, Carlo; Vanderick, Sylvie; Gengler, Nicolas
2010-01-01
Genetic correlations between body condition score (BCS) in lactation 1 to 3 and four economically important traits (days open, 305-days milk, fat, and protein yields recorded in the first 3 lactations) were estimated on about 12,500 Walloon Holstein cows using 4-trait random regression models. Results indicated moderate favorable genetic correlations between BCS and days open (from -0.46 to -0.62) and suggested the use of BCS for indirect selection on fertility. However, unfavorable genetic c...
Stable Graphical Model Estimation with Random Forests for Discrete, Continuous, and Mixed Variables
Fellinghauer, Bernd; Bühlmann, Peter; Ryffel, Martin; von Rhein, Michael; Reinhardt, Jan D.
2011-01-01
A conditional independence graph is a concise representation of pairwise conditional independence among many variables. Graphical Random Forests (GRaFo) are a novel method for estimating pairwise conditional independence relationships among mixed-type, i.e. continuous and discrete, variables. The number of edges is a tuning parameter in any graphical model estimator and there is no obvious number that constitutes a good choice. Stability Selection helps choosing this parameter with respect to...
Masters, Elizabeth T.; Emir, Birol; Mardekian, Jack; Clair, Andrew; Kuhn, Max; Silverman, Stuart
2015-01-01
Birol Emir,1 Elizabeth T Masters,1 Jack Mardekian,1 Andrew Clair,1 Max Kuhn,2 Stuart L Silverman,3 1Pfizer Inc., New York, NY, 2Pfizer Inc., Groton, CT, 3Cedars-Sinai Medical Center, Los Angeles, CA, USA Background: Diagnosis of fibromyalgia (FM), a chronic musculoskeletal condition characterized by widespread pain and a constellation of symptoms, remains challenging and is often delayed. Methods: Random forest modeling of electronic medical records was used to identify variables that may fa...
Robust Peak Recognition in Intracranial Pressure Signals
Directory of Open Access Journals (Sweden)
Bergsneider Marvin
2010-10-01
Background: The waveform morphology of intracranial pressure (ICP) pulses is an essential indicator for monitoring and forecasting critical intracranial and cerebrovascular pathophysiological variations. While current ICP pulse analysis frameworks offer satisfying results on most pulses, we observed that the performance of several of them deteriorates significantly on abnormal, or simply more challenging, pulses. Methods: This paper provides two contributions to this problem. First, it introduces MOCAIP++, a generic ICP pulse processing framework that generalizes MOCAIP (Morphological Clustering and Analysis of ICP Pulse). Its strength is to integrate several peak recognition methods to describe ICP morphology, and to exploit different ICP features to improve peak recognition. Second, it investigates the effect of incorporating automatically identified challenging pulses into the training set of peak recognition models. Results: Experiments on a large dataset of ICP signals, as well as on a representative collection of sampled challenging ICP pulses, demonstrate that the two contributions are complementary and significantly improve peak recognition performance in clinical conditions. Conclusion: The proposed framework allows more reliable statistics about ICP waveform morphology to be extracted from challenging pulses, in order to investigate the predictive power of these pulses on the condition of the patient.
ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES
Directory of Open Access Journals (Sweden)
Joel Domínguez Viveros
2015-04-01
The objectives were to estimate variance components and direct (h2) and maternal (m2) heritabilities for the growth of Tropicarne cattle, based on a random regression model using B-splines to model the random effects. Information from 12 890 monthly weighings of 1787 calves, from birth to 24 months of age, was analyzed. The pedigree included 2504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year-season of weighing), sex, and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances presented similar behavior: from 7 to 12 months of age they showed a negative trend, from birth to 6 months and from 13 to 18 months a positive trend, and after 19 months they remained constant. The m2 estimates were low and near zero, with an average of 0.06 in an interval of 0.04 to 0.11; the h2 estimates were also close to zero, with an average of 0.10 in an interval of 0.03 to 0.23.
Bridging Weighted Rules and Graph Random Walks for Statistical Relational Models
Directory of Open Access Journals (Sweden)
Seyed Mehran Kazemi
2018-02-01
The aim of statistical relational learning is to learn statistical models from relational or graph-structured data. Three main statistical relational learning paradigms include weighted rule learning, random walks on graphs, and tensor factorization. These paradigms have been mostly developed and studied in isolation for many years, with few works attempting to understand the relationship among them or to combine them. In this article, we study the relationship between the path ranking algorithm (PRA), one of the most well-known relational learning methods in the graph random walk paradigm, and relational logistic regression (RLR), one of the recent developments in weighted rule learning. We provide a simple way to normalize relations and prove that relational logistic regression using normalized relations generalizes the path ranking algorithm. This result provides a better understanding of relational learning, especially for the weighted rule learning and graph random walk paradigms. It opens up the possibility of using the more flexible RLR rules within PRA models and even generalizing both by including normalized and unnormalized relations in the same model.
Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling
Galelli, S.; Castelletti, A.
2013-07-01
Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows the relative importance of the input variables to be inferred, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variables provided by the method can be given a physically meaningful interpretation.
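The defining trait of Extra-Trees - cut-points drawn at random rather than optimized - can be sketched in a few lines. The toy regressor below (one scalar input, pure Python, illustrative parameter choices) is a simplified stand-in for the full multivariate algorithm used in such studies.

```python
import random

def fit_tree(data, min_size, rng):
    """Extra-Trees split: draw the cut-point uniformly at random over the
    input range instead of optimising it (data is a list of (x, y) pairs)."""
    xs = [x for x, _ in data]
    if len(data) <= min_size or min(xs) == max(xs):
        return sum(y for _, y in data) / len(data)   # leaf: mean response
    cut = rng.uniform(min(xs), max(xs))
    left = [(x, y) for x, y in data if x <= cut]
    right = [(x, y) for x, y in data if x > cut]
    if not left or not right:
        return sum(y for _, y in data) / len(data)
    return (cut, fit_tree(left, min_size, rng), fit_tree(right, min_size, rng))

def predict_tree(node, x):
    while isinstance(node, tuple):
        cut, left, right = node
        node = left if x <= cut else right
    return node

def extra_trees_predict(data, x, n_trees=50, min_size=5, seed=0):
    """Average the predictions of an ensemble of totally randomised trees."""
    rng = random.Random(seed)
    return sum(predict_tree(fit_tree(data, min_size, rng), x)
               for _ in range(n_trees)) / n_trees
```

Averaging over many such randomized trees smooths out the variance that a single random-cut tree would have.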
The Theory of Random Laser Systems
International Nuclear Information System (INIS)
Xunya Jiang
2002-01-01
Studies of random laser systems are a new direction with promising potential applications and theoretical interest. The research is based on the theories of localization and laser physics. So far, the research shows that there are random lasing modes inside the systems which are quite different from those of common laser systems. From the properties of the random lasing modes, one can understand the phenomena observed in experiments, such as multi-peak and anisotropic spectra, lasing-mode number saturation, mode competition and dynamic processes, etc. To summarize, this dissertation has contributed the following to the study of random laser systems: (1) by comparing the Lamb theory with the Letokhov theory, general formulas for the threshold length or gain of random laser systems were obtained; (2) the vital weakness of previous time-independent methods in random laser research was pointed out; (3) a new model was developed which combines the FDTD method and semi-classical laser theory; the solutions of this model provided an explanation of the experimental results of multi-peak and anisotropic emission spectra, and predicted the saturation of the lasing-mode number and the length of localized lasing modes; (4) theoretical (Lamb theory) and numerical (FDTD and transfer-matrix calculation) studies of the origin of localized lasing modes in random laser systems; and (5) a proposal to use random lasing modes as a new path to study wave localization in random systems, and a prediction of the lasing threshold discontinuity at the mobility edge.
Individual vision and peak distribution in collective actions
Lu, Peng
2017-06-01
People decide whether to take part as participants or to abstain as free riders in collective actions, under heterogeneous visions. Besides utility heterogeneity and cost heterogeneity, this work includes and investigates the effect of vision heterogeneity by constructing a decision model, i.e. a revised peak model of participants. In this model, potential participants make decisions under the joint influence of utility, cost, and vision heterogeneities. The outcomes of simulations indicate that vision heterogeneity reduces the values of peaks, while the relative variance of peaks remains stable. Under normal distributions of vision heterogeneity and the other factors, the peaks of participants are normally distributed as well. It is therefore possible to predict the distribution traits of peaks from the distribution traits of the related factors, such as vision heterogeneity. We predict the distribution of peaks with both mean and standard deviation parameters, which provides confidence intervals and robust predictions of peaks. Besides, we validate the peak model via the Yuyuan Incident, a real case in China (2014), and the model works well in explaining the dynamics and predicting the peak of the real case.
Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Mubin, Marizan; Saad, Ismail
2016-01-01
In existing research on peak classification of electroencephalogram (EEG) signals, the existing models, such as the Dumpala, Acir, Liu, and Dingle peak models, employ different sets of features. However, none of these models may be able to offer good performance across various applications, and performance is found to be problem dependent. Therefore, the objective of this study is to combine all the associated features from the existing models before selecting the best combination of features. A new optimization algorithm, namely the angle-modulated simulated Kalman filter (AMSKF), is employed as the feature selector. In addition, the neural network random weight method is utilized in the proposed AMSKF technique as a classifier. In the conducted experiment, 11,781 peak candidate samples are employed for validation purposes. The samples are collected from three different peak event-related EEG signals of 30 healthy subjects: (1) single eye blink, (2) double eye blink, and (3) eye movement signals. The experimental results show that the proposed AMSKF feature selector is able to find the best combination of features and performs on par with existing related studies of epileptic EEG event classification.
International Nuclear Information System (INIS)
Cocco, S; Monasson, R
2009-01-01
We consider the Sinai model, in which a random walker moves in a random quenched potential V, and ask the following questions: 1. how can the quenched potential V be inferred from the observations of one or more realizations of the random motion? 2. how many observations (walks) are required to make a reliable inference, that is, to be able to distinguish between two similar but distinct potentials, V1 and V2? We show how question 1 can be easily solved within the Bayesian framework. In addition, we show that the answer to question 2 is, in general, intimately connected to the calculation of the survival probability of a fictitious walker in a potential W defined from V1 and V2, with partial absorption at sites where V1 and V2 do not coincide. For the one-dimensional Sinai model, this survival probability can be analytically calculated, in excellent agreement with numerical simulations.
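Question 1 - inferring local hop probabilities from observed walks - can be illustrated with a toy Bayesian estimator. The sketch below assumes a heat-bath form for the right-hop probability and a per-site Beta(1,1) prior; both choices are illustrative simplifications, not taken from the paper.

```python
import math
import random

def simulate_sinai(V, steps=20000, beta=1.0, seed=0):
    """Walker on a ring of len(V) sites; the right-hop probability follows
    a heat-bath rule on the local potential difference (illustrative
    choice). Returns Bayesian point estimates of each site's right-hop
    probability from the observed trajectory, using a Beta(1,1) prior."""
    rng = random.Random(seed)
    L = len(V)
    pos, right_moves, visits = 0, [0] * L, [0] * L
    for _ in range(steps):
        dV = V[(pos + 1) % L] - V[pos - 1]     # negative index wraps the ring
        p_right = 1.0 / (1.0 + math.exp(beta * dV))
        visits[pos] += 1
        if rng.random() < p_right:
            right_moves[pos] += 1
            pos = (pos + 1) % L
        else:
            pos = (pos - 1) % L
    # posterior mean of Beta(1 + right, 1 + left) at each site
    return [(r + 1) / (v + 2) for r, v in zip(right_moves, visits)]
```

On a flat potential the estimates concentrate around 1/2, and distinguishing two candidate potentials amounts to comparing such posteriors site by site.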
Random intermittent search and the tug-of-war model of motor-driven transport
International Nuclear Information System (INIS)
Newby, Jay; Bressloff, Paul C
2010-01-01
We formulate the 'tug-of-war' model of microtubule cargo transport by multiple molecular motors as an intermittent random search for a hidden target. A motor complex consisting of multiple molecular motors with opposing directional preference is modeled using a discrete Markov process. The motors randomly pull each other off of the microtubule so that the state of the motor complex is determined by the number of bound motors. The tug-of-war model prescribes the state transition rates and corresponding cargo velocities in terms of experimentally measured physical parameters. We add space to the resulting Chapman–Kolmogorov (CK) equation so that we can consider delivery of the cargo to a hidden target at an unknown location along the microtubule track. The target represents some subcellular compartment such as a synapse in a neuron's dendrites, and target delivery is modeled as a simple absorption process. Using a quasi-steady-state (QSS) reduction technique we calculate analytical approximations of the mean first passage time (MFPT) to find the target. We show that there exists an optimal adenosine triphosphate (ATP) concentration that minimizes the MFPT for two different cases: (i) the motor complex is composed of equal numbers of kinesin motors bound to two different microtubules (symmetric tug-of-war model) and (ii) the motor complex is composed of different numbers of kinesin and dynein motors bound to a single microtubule (asymmetric tug-of-war model)
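The intermittent-search picture above can be mimicked with a crude Monte Carlo estimate of the MFPT: a searcher alternates at random between a ballistic, non-detecting phase and a diffusive, detecting phase. All rates and sizes below are illustrative, and the two-state caricature replaces the full tug-of-war state space of the paper.

```python
import random

def intermittent_mfpt(length=50, target=30, p_switch=0.1, trials=500, seed=0):
    """Monte Carlo mean first-passage time to a hidden target on a
    periodic track: ballistic steps are fast but cannot detect the
    target; diffusive steps are unbiased and absorb on hitting it."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, t, ballistic, direction = 0, 0, False, 1
        while True:
            t += 1
            if rng.random() < p_switch:        # stochastic mode switching
                ballistic = not ballistic
                direction = rng.choice((-1, 1))
            if ballistic:
                pos = (pos + direction) % length
            else:
                pos = (pos + rng.choice((-1, 1))) % length
                if pos == target:              # detection only while diffusive
                    break
        total += t
    return total / trials
```

Sweeping a parameter that biases the mode-switching rates (the analogue of ATP concentration in the paper) would trace out the optimum that the QSS analysis predicts.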
Liu, Hong; Zhu, Jingping; Wang, Kai
2015-08-24
The geometrical attenuation model given by Blinn has been widely used in geometrical-optics bidirectional reflectance distribution function (BRDF) models. Blinn's geometrical attenuation model, based on a symmetrical V-groove assumption and scalar ray theory, causes obvious inaccuracies in BRDF curves and neglects the effects of polarization. To address these issues, a modified polarized geometrical attenuation model based on random-surface microfacet theory is presented, combining masking and shadowing effects with polarization effects. The p-polarized, s-polarized and unpolarized geometrical attenuation functions are given as separate expressions and are validated against experimental data from two samples. The results show that the modified polarized geometrical attenuation function achieves better physical rationality, improves the precision of the BRDF model, and widens its applicability to different polarizations.
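For reference, the unpolarized V-groove attenuation term that the paper modifies is Blinn's standard min-of-three expression G = min(1, 2(N·H)(N·V)/(V·H), 2(N·H)(N·L)/(V·H)). A direct sketch (unit vectors assumed, no polarization terms) is:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def geometric_attenuation(n, l, v):
    """Blinn's V-groove masking/shadowing term for unit surface normal n,
    light direction l, and view direction v (all above the horizon)."""
    h = tuple(li + vi for li, vi in zip(l, v))       # half-vector
    norm = sum(x * x for x in h) ** 0.5
    h = tuple(x / norm for x in h)
    nh, nv, nl, vh = dot(n, h), dot(n, v), dot(n, l), dot(v, h)
    return min(1.0, 2 * nh * nv / vh, 2 * nh * nl / vh)
```

The paper's contribution is to replace this scalar term with separate p- and s-polarized attenuation functions derived from a random microfacet surface.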
SPANISH PEAKS PRIMITIVE AREA, MONTANA.
Calkins, James A.; Pattee, Eldon C.
1984-01-01
A mineral survey of the Spanish Peaks Primitive Area, Montana, disclosed a small low-grade deposit of demonstrated chromite and asbestos resources. The chances for discovery of additional chrome resources are uncertain and the area has little promise for the occurrence of other mineral or energy resources. A reevaluation, sampling at depth, and testing for possible extensions of the Table Mountain asbestos and chromium deposit should be undertaken in the light of recent interpretations regarding its geologic setting.
Neurofeedback training for peak performance
Marek Graczyk; Maria Pąchalska; Artur Ziółkowski; Grzegorz Mańko; Beata Łukaszewska; Kazimierz Kochanowicz; Andrzej Mirski; Iurii D. Kropotov
2014-01-01
Aim. One of the applications of the neurofeedback methodology is peak performance in sport. The protocols of neurofeedback are usually based on an assessment of the spectral parameters of spontaneous EEG in resting-state conditions. The aim of the paper was to study whether intensive neurofeedback training of a well-functioning Olympic athlete who had lost his performance confidence after an injury in sport could change the brain functioning reflected in changes in spontaneou...
Power peaking nuclear reliability factors
International Nuclear Information System (INIS)
Hassan, H.A.; Pegram, J.W.; Mays, C.W.; Romano, J.J.; Woods, J.J.; Warren, H.D.
1977-11-01
The Calculational Nuclear Reliability Factor (CNRF) assigned to the limiting power density calculated in reactor design has been determined. The CNRF is presented as a function of the relative power density of the fuel assembly and its radial location. In addition, the Measurement Nuclear Reliability Factor (MNRF) for the measured peak hot-pellet power in the core has been evaluated. This MNRF is also presented as a function of the relative power density and radial location within the fuel assembly.
Evaluation of concurrent peak responses
International Nuclear Information System (INIS)
Wang, P.C.; Curreri, J.; Reich, M.
1983-01-01
This report deals with the problem of combining two or more concurrent responses which are induced by dynamic loads acting on nuclear power plant structures. Specifically, the acceptability of using the square root of the sum of the squares (SRSS) of the peak values as the combined response is investigated. Emphasis is placed on establishing a simplified criterion that is convenient and relatively easy for design engineers to use.
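The SRSS combination under investigation is a one-line computation; a minimal sketch:

```python
import math

def srss(peaks):
    """Square root of the sum of squares of concurrent peak responses."""
    return math.sqrt(sum(p * p for p in peaks))
```

For two peaks of 3 and 4 (in any consistent unit), the combined SRSS response is 5, generally below the absolute-sum bound of 7 that assumes the peaks coincide in time.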
International Nuclear Information System (INIS)
Silagadze, Z.K.
2007-01-01
A two-dimensional generalization of the original peak-finding algorithm suggested earlier is given. The ideology of the algorithm emerged from the well-known quantum mechanical tunneling property which enables small bodies to penetrate through narrow potential barriers. We merge this 'quantum' ideology with the philosophy of Particle Swarm Optimization to get a global optimization algorithm which can be called Quantum Swarm Optimization. The functionality of the newborn algorithm is tested on some benchmark optimization problems.
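The quantum-tunneling variant itself is not reproduced here, but the underlying Particle Swarm Optimization update it builds on can be sketched with standard (illustrative) inertia and acceleration weights:

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=200, seed=0):
    """Plain PSO: each particle is pulled toward its personal best and the
    swarm's global best. Weights w, c1, c2 are common textbook values,
    not the quantum-swarm parameters of the paper."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The quantum-swarm modification replaces part of this update with tunneling-inspired jumps so that particles can escape narrow barriers between local optima.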
Role of Statistical Random-Effects Linear Models in Personalized Medicine.
Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose
2012-03-01
Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just appropriate for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings that show that random effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data, and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; computation of the minimum number of blood samples from a patient that are necessary for calculating an optimal individualized drug dosage in therapeutic drug monitoring; measure of the clinical importance of clinical, demographic, environmental or genetic covariates; study of drug-drug interactions in clinical settings; the implementation of computational tools for web-site-based evidence farming; design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
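The empirical-Bayes individualization idea can be illustrated with the classic shrinkage estimator for a random-intercept model; the formula below is a textbook special case (known variance components, one parameter per patient), not the full models discussed in the paper.

```python
def shrinkage_estimate(patient_obs, pop_mean, between_var, within_var):
    """Empirical-Bayes point estimate of a patient-specific parameter:
    shrink the patient's own mean toward the population mean, with a
    reliability weight set by the variance components."""
    n = len(patient_obs)
    patient_mean = sum(patient_obs) / n
    w = between_var / (between_var + within_var / n)  # in [0, 1]
    return w * patient_mean + (1 - w) * pop_mean
```

With few observations the estimate stays close to the population mean; as more drug-level measurements accumulate, the weight w grows and the estimate (and hence the individualized dosage derived from it) tracks the patient's own data.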
Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E
2017-12-01
1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from second until sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The most optimal model was identified by the Bayesian Information Criterion. A model with second order of LP for fixed effects, second order of LP for additive genetic effects and third order of LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability and such a breeding program would have no negative genetic impact on egg production.
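The Legendre polynomial covariates that such random regression models use can be generated with the standard three-term recurrence after mapping ages (here, weeks of production) to [-1, 1]. A minimal sketch; function and argument names are our own:

```python
def legendre_basis(age, min_age, max_age, order):
    """Evaluate Legendre polynomials P_0..P_order at an age mapped to
    [-1, 1], via the recurrence (k+1) P_{k+1} = (2k+1) t P_k - k P_{k-1}.
    These values form one row of the random-regression design matrix."""
    t = 2.0 * (age - min_age) / (max_age - min_age) - 1.0
    p = [1.0, t]
    for k in range(1, order):
        p.append(((2 * k + 1) * t * p[k] - k * p[k - 1]) / (k + 1))
    return p[:order + 1]
```

A second-order fixed-effect curve, as in the MTRR23 model above, would use the first three of these covariates per record.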
GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY
Directory of Open Access Journals (Sweden)
F. Biljecki
2016-09-01
Full Text Available The production and dissemination of semantic 3D city models is rapidly increasing, benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtaining 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence on GitHub at http://github.com/tudelft3d/Random3Dcity.
Random walk in degree space and the time-dependent Watts-Strogatz model
Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.
2017-01-01
In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and we then choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in their original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact in some regimes.
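As a toy illustration of the degree-space random-walk picture (our own construction, not the authors' model), consider a dynamical graph in which every node independently gains one edge stub per time step with probability p. Each degree then performs a biased walk k → k + 1 in degree space, and the time-dependent degree distribution is binomial:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_degrees(n_nodes, t_steps, p):
    """Each node's degree performs a biased random walk in degree space:
    at every step it increments with probability p, so after t steps the
    degree distribution is Binomial(t, p)."""
    deg = np.zeros(n_nodes, dtype=np.int64)
    for _ in range(t_steps):
        deg += rng.random(n_nodes) < p  # Bernoulli(p) increment per node
    return deg

deg = simulate_degrees(50_000, 100, 0.2)  # mean degree should approach t*p = 20
```

Richer dynamics (rewiring, edge removal) simply change the allowed steps and their rates in the degree-space walk.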
Tian, Yuzhen; Guo, Jin; Wang, Rui; Wang, Tingfeng
2011-09-12
To investigate the statistical properties of Gaussian beam propagation through a random phase screen of arbitrary thickness, for adaptive optics and laser communication applications in the laboratory, we establish mathematical models of the statistical quantities involved in the propagation process, based on the Rytov method and the thin phase screen model. Analytic results are then developed for a phase screen of arbitrary thickness based on the Kolmogorov power spectrum. The comparison between the arbitrary-thickness phase screen and the thin phase screen shows that our results are better suited to describing the generalized case, especially the scintillation index.
Directory of Open Access Journals (Sweden)
Pablo Gregori
2014-03-01
Full Text Available This paper presents a survey of recent advances in the modelling of space or space-time Gaussian Random Fields (GRFs), tools of geostatistics at hand for the understanding of special cases of noise in image analysis. They can be used when stationarity or isotropy are unrealistic assumptions, or even when negative covariance between some pairs of locations is evident. We show some strategies to escape from these restrictions, on the basis of rich classes of well-known stationary or isotropic non-negative covariance models, and through suitable operations, like linear combinations, generalized means, or particular Fourier transforms.
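One of the surveyed operations, a linear combination of valid covariance models, can be sketched in one dimension as follows. The ranges, weight, and grid are our own illustrative choices; the key fact is that a convex combination of valid covariance functions is again a valid covariance, from which a GRF realization can be drawn:

```python
import numpy as np

rng = np.random.default_rng(42)

def exp_cov(h, a):
    """Exponential covariance with range parameter a."""
    return np.exp(-h / a)

def gauss_cov(h, a):
    """Gaussian covariance with range parameter a."""
    return np.exp(-(h / a) ** 2)

def mixed_cov(h, w=0.6):
    """Convex linear combination of two valid covariances is valid."""
    return w * exp_cov(h, 1.0) + (1 - w) * gauss_cov(h, 2.0)

x = np.linspace(0.0, 10.0, 200)
H = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
C = mixed_cov(H)                      # covariance matrix of the GRF

# draw one realization via Cholesky (small jitter for numerical stability)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))
z = L @ rng.standard_normal(len(x))
```

The same construction extends to space-time grids and to the other operations mentioned (generalized means, Fourier-transform manipulations), subject to the validity conditions discussed in the survey.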
Chan, Jennifer S K
2016-05-01
Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, the dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes, as well as the dropout indicator, at each occasion are logit-linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant but weaker after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Drivers of peak sales for pharmaceutical brands
Fischer, Marc; Leeflang, Peter S. H.; Verhoef, Peter C.
2010-01-01
Peak sales are an important metric in the pharmaceutical industry. Specifically, managers focus on the height of peak sales and the time required to achieve peak sales. We analyze how order of entry and quality affect the level of peak sales and the time-to-peak-sales of pharmaceutical brands.
Range walk error correction and modeling on Pseudo-random photon counting system
Shen, Shanshan; Chen, Qian; He, Weiji
2017-08-01
Signal-to-noise ratio and depth accuracy are modeled for the pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation, and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), longer code length is proven to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to show that longer code length yields better range accuracy. Combining the SNR model and the CRLB model, it follows that range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the CRLB on range accuracy is shown to converge to previously published theories, and the Gauss range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It has been shown that the depth error caused by fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. Depth error due to different echo energies is calibrated so that the corrected depth accuracy improves to 1 cm.
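The basic pseudo-random ranging idea, a correlation peak whose position gives the delay and whose contrast grows with code length, can be sketched as follows. This is a toy model with parameters of our own choosing, not the paper's GM-APD photon-counting system:

```python
import numpy as np

def estimate_delay(code_len, true_delay, noise_sigma, seed=1):
    """Correlate a pseudo-random +/-1 code with a delayed, noisy echo;
    the circular cross-correlation peak estimates the range delay."""
    rng = np.random.default_rng(seed)
    code = rng.choice([-1.0, 1.0], size=code_len)
    echo = np.roll(code, true_delay) + noise_sigma * rng.standard_normal(code_len)
    # circular cross-correlation via FFT; peak lag = delay estimate
    corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
    return int(np.argmax(corr))

est = estimate_delay(code_len=4096, true_delay=100, noise_sigma=2.0)
```

The correlation peak scales with the code length while the noise floor scales only with its square root, which mirrors the SNR-versus-code-length behaviour described above.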
Estimation of the peak factor based on watershed characteristics
Energy Technology Data Exchange (ETDEWEB)
Gauthier, Jean; Nolin, Simon; Ruest, Benoit [BPR Inc., Quebec, (Canada)
2010-07-01
Hydraulic modeling and dam structure design require the river flood flow as a primary input. For a given flood event, the ratio of peak flow over mean daily flow defines the peak factor. The peak factor value depends on the watershed and on the location along the river. The main goal of this study was to find a relationship between watershed characteristics and this peak factor. Regression analyses were carried out on 53 natural watersheds located in the southern part of the province of Quebec using data from the Centre d'expertise hydrique du Quebec (CEHQ). The watershed characteristics included in the analyses were the watershed area, the maximum flow length, the mean slope, the lake proportion and the mean elevation. The results showed that watershed area and length are the major parameters influencing the peak factor. Nine natural watersheds were also used to test the use of a multivariable model to determine the peak factor for ungauged watersheds.
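A multivariable regression of the peak factor on watershed characteristics can be sketched as below. All data here are synthetic and the power-law form and coefficients are our own illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic watersheds: area (km^2), maximum flow length (km)
n = 53
area = rng.uniform(50.0, 5000.0, n)
length = rng.uniform(10.0, 300.0, n)

# hypothetical power-law peak factor with multiplicative noise
pf = 2.5 * area ** -0.1 * length ** -0.05 * np.exp(0.05 * rng.standard_normal(n))

# log-linear multivariable regression: log(PF) = b0 + b1*log(A) + b2*log(L)
X = np.column_stack([np.ones(n), np.log(area), np.log(length)])
coef, *_ = np.linalg.lstsq(X, np.log(pf), rcond=None)
```

For an ungauged watershed, the fitted coefficients give a peak factor estimate from area and flow length alone, which is the practical use case mentioned in the abstract.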
A random matrix model for elliptic curve L-functions of finite conductor
International Nuclear Information System (INIS)
Dueñez, E; Huynh, D K; Keating, J P; Snaith, N C; Miller, S J
2012-01-01
We propose a random-matrix model for families of elliptic curve L-functions of finite conductor. A repulsion of the critical zeros of these L-functions away from the centre of the critical strip was observed numerically by Miller (2006 Exp. Math. 15 257–79); such behaviour deviates qualitatively from the conjectural limiting distribution of the zeros (for large conductors this distribution is expected to approach the one-level density of eigenvalues of orthogonal matrices after appropriate rescaling). Our purpose here is to provide a random-matrix model for Miller’s surprising discovery. We consider the family of even quadratic twists of a given elliptic curve. The main ingredient in our model is a calculation of the eigenvalue distribution of random orthogonal matrices whose characteristic polynomials are larger than some given value at the symmetry point in the spectra. We call this sub-ensemble of SO(2N) the excised orthogonal ensemble. The sieving-off of matrices with small values of the characteristic polynomial is akin to the discretization of the central values of L-functions implied by the formulae of Waldspurger and Kohnen–Zagier. The cut-off scale appropriate to modelling elliptic curve L-functions is exponentially small relative to the matrix size N. The one-level density of the excised ensemble can be expressed in terms of that of the well-known Jacobi ensemble, enabling the former to be explicitly calculated. It exhibits an exponentially small (on the scale of the mean spacing) hard gap determined by the cut-off value, followed by soft repulsion on a much larger scale. Neither of these features is present in the one-level density of SO(2N). When N → ∞ we recover the limiting orthogonal behaviour. Our results agree qualitatively with Miller’s discrepancy. Choosing the cut-off appropriately gives a model in good quantitative agreement with the number-theoretical data. (paper)
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations, the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
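The unit-circle mixing step can be sketched as follows. This is a deliberately simplified toy: the objective compares field values directly at conditioning cells instead of running a groundwater forward model, and the indices and target values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

def mix_on_circle(z1, z2, phi):
    """Weights (cos phi, sin phi) lie on the unit circle, so mixing two
    independent fields with a common covariance preserves that covariance."""
    return np.cos(phi) * z1 + np.sin(phi) * z2

# toy stand-in for the forward-model misfit at conditioning locations
obs_idx = np.array([3, 17, 42])
obs_val = np.array([0.5, -1.2, 0.3])

def objective(field):
    return float(np.sum((field[obs_idx] - obs_val) ** 2))

z1, z2 = rng.standard_normal(100), rng.standard_normal(100)

# evaluate the objective at n equally spaced angles and keep the best mixture
phis = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
scores = [objective(mix_on_circle(z1, z2, p)) for p in phis]
best_phi = phis[int(np.argmin(scores))]
z_next = mix_on_circle(z1, z2, best_phi)  # input to the next iteration
```

In the actual algorithm each angle requires a forward-model run, which is why interpolating solutions around the circle, rather than running the solver at every angle, cuts the computational cost.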
Genetic analysis of partial egg production records in Japanese quail using random regression models.
Abou Khadiga, G; Mahmoud, B Y F; Farahat, G S; Emam, A M; El-Full, E A
2017-08-01
The main objectives of this study were to detect the most appropriate random regression model (RRM) to fit the data of monthly egg production in 2 lines (selected and control) of Japanese quail and to test the consistency of different criteria of model choice. Data from 1,200 female Japanese quails for the first 5 months of egg production from 4 consecutive generations of an egg line selected for egg production in the first month (EP1) were analyzed. Eight RRMs with different orders of Legendre polynomials were compared to determine the proper model for analysis. All criteria of model choice suggested that the adequate model included the second-order Legendre polynomials for fixed effects, and the third-order for additive genetic effects and permanent environmental effects. Predictive ability of the best model was the highest among all models (ρ = 0.987). According to the best model fitted to the data, estimates of heritability were relatively low to moderate (0.10 to 0.17) and showed a descending pattern from the first to the fifth month of production. A similar pattern was observed for permanent environmental effects, with greater estimates in the first (0.36) and second (0.23) months of production than the heritability estimates. Genetic correlations between separate production periods were higher (0.18 to 0.93) than their phenotypic counterparts (0.15 to 0.87). The superiority of the selected line over the control was observed through significantly higher egg production at earlier ages (first and second months) than later ones. A methodology based on random regression animal models can be recommended for genetic evaluation of egg production in Japanese quail. © 2017 Poultry Science Association Inc.