WorldWideScience

Sample records for random peak model

  1. A model to forecast peak spreading.

    Science.gov (United States)

    2012-04-01

    As traffic congestion increases, the K-factor, defined as the proportion of the 24-hour traffic volume that occurs during the peak hour, may decrease. This behavioral response is known as peak spreading: as congestion grows during the peak travel tim...
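
    The K-factor itself is a one-line computation once hourly counts are available. A minimal sketch in Python with numpy; the hourly volumes below are invented for illustration:

    ```python
    import numpy as np

    # Hypothetical 24 hourly traffic counts for one day (vehicles/hour).
    hourly_volume = np.array([120, 80, 60, 55, 90, 210, 480, 820, 760, 540,
                              500, 520, 560, 540, 560, 640, 780, 860, 700,
                              480, 360, 300, 220, 160])

    k_factor = hourly_volume.max() / hourly_volume.sum()  # peak-hour share of daily volume
    print(f"K-factor = {k_factor:.3f}")
    # Peak spreading would show up as this ratio declining over successive years.
    ```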

  2. OccuPeak: ChIP-Seq peak calling based on internal background modelling

    NARCIS (Netherlands)

    de Boer, Bouke A.; van Duijvenboden, Karel; van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.

    2014-01-01

    ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads to account for local sequencing and GC bias. However, the

  3. Group Elevator Peak Scheduling Based on Robust Optimization Model

    OpenAIRE

    Zhang, J.; Q. Zong

    2013-01-01

    Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and a difficult problem. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for multi-elevator systems is proposed. The method is immune to the uncertainty of peak traffic flows, optimal scheduling is re...

  4. Flood Peak Estimation Using Rainfall Run off Models | Matondo ...

    African Journals Online (AJOL)

    The design of hydraulic structures such as road culverts, road bridges and dam spillways requires the determination of the design flood peak. Two approaches are available for determining the design flood peak: flood frequency analysis and rainfall-runoff models. Flood frequency analysis requires a ...

  5. Amorphous chalcogenides as random octahedrally bonded solids: I. Implications for the first sharp diffraction peak, photodarkening, and Boson peak

    Science.gov (United States)

    Lukyanov, Alexey; Lubchenko, Vassiliy

    2017-09-01

    We develop a computationally efficient algorithm for generating high-quality structures for amorphous materials exhibiting distorted octahedral coordination. The computationally costly step of equilibrating the simulated melt is relegated to a much more efficient procedure, viz., generation of a random close-packed structure, which is subsequently used to generate parent structures for octahedrally bonded amorphous solids. The sites of the so-obtained lattice are populated by atoms and vacancies according to the desired stoichiometry while allowing one to control the number of homo-nuclear and hetero-nuclear bonds and, hence, effects of the mixing entropy. The resulting parent structure is geometrically optimized using quantum-chemical force fields; by varying the extent of geometric optimization of the parent structure, one can partially control the degree of octahedrality in local coordination and the strength of secondary bonding. The present methodology is applied to the archetypal chalcogenide alloys AsxSe1-x. We find that local coordination in these alloys interpolates between octahedral and tetrahedral bonding but in a non-obvious way; it exhibits bonding motifs that are not characteristic of either extreme. We consistently recover the first sharp diffraction peak (FSDP) in our structures and argue that the corresponding mid-range order stems from the charge density wave formed by regions housing covalent and weak, secondary interactions. The number of secondary interactions is determined by a delicate interplay between octahedrality and tetrahedrality in the covalent bonding; many of these interactions are homonuclear. The present results are consistent with the experimentally observed dependence of the FSDP on arsenic content, pressure, and temperature and its correlation with photodarkening and the Boson peak. They also suggest that the position of the FSDP can be used to infer the effective particle size relevant for the configurational equilibration in

  6. Group Elevator Peak Scheduling Based on Robust Optimization Model

    Directory of Open Access Journals (Sweden)

    ZHANG, J.

    2013-08-01

    Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem. Uncertain group scheduling under peak traffic flows has recently become a research focus and a difficult problem. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for multi-elevator systems is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is realized without knowing the exact number of waiting passengers on each calling floor. Specifically, an energy-saving-oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the RO uncertain model cannot be solved directly, it is transformed into an RO certain model via elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm for elevator scheduling is proposed to solve the RO certain model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly, and group elevators are scheduled according to the solutions. Simulation results show the method can improve scheduling performance effectively in the peak pattern. Efficient operation of group elevators is realized by the RO scheduling method.

  7. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time-domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that gives the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52%. Meanwhile, the Dingle model showed no significant difference from the Dumpala model.
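
    As a rough illustration of the evaluation platform described above, the sketch below trains a minimal extreme learning machine (random fixed hidden layer, least-squares readout) on synthetic stand-ins for the amplitude/width/slope features that time-domain peak models use. This is not the authors' implementation, and all data are fabricated:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for time-domain peak features (amplitude, width, slope);
    # a real pipeline would extract these from candidate EEG peaks per the
    # Dumpala/Acir/Liu/Dingle definitions.
    n = 400
    X = np.vstack([rng.normal([2.0, 0.5, 1.5], 0.4, (n, 3)),   # true peaks
                   rng.normal([0.5, 1.5, 0.3], 0.4, (n, 3))])  # non-peaks
    y = np.hstack([np.ones(n), np.zeros(n)])

    # Extreme learning machine: random fixed hidden layer, least-squares readout.
    n_hidden = 50
    W = rng.normal(size=(3, n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y             # closed-form output weights

    accuracy = np.mean((H @ beta > 0.5) == (y > 0.5))
    print(f"training accuracy: {accuracy:.2f}")
    ```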

  8. Remote Sensing and Modeling of Cyclone Monica near Peak Intensity

    Directory of Open Access Journals (Sweden)

    Stephen L. Durden

    2010-07-01

    Cyclone Monica was an intense Southern Hemisphere tropical cyclone of 2006. Although no in situ measurements of Monica’s inner core were made, microwave, infrared, and visible satellite instruments observed Monica before and during peak intensity through landfall on Australia’s northern coast. The author analyzes remote sensing measurements in detail to investigate Monica’s intensity. While Dvorak analysis of its imagery argues that it was of extreme intensity, infrared and microwave soundings indicate a somewhat lower intensity, especially as it neared landfall. The author also describes several numerical model runs that were made to investigate the maximum possible intensity for the observed environmental conditions; these simulations also suggest a lower intensity than estimates from Dvorak analysis alone. Based on the evidence from the various measurements and modeling, the estimated range for the minimum sea level pressure at peak intensity is 900 to 920 hPa. The estimated range for the one-minute averaged maximum wind speed at peak intensity is 72 to 82 m/s. These maxima were likely reached about 24 hours prior to landfall, with some weakening occurring afterward.

  9. Signal preserving and seismic random noise attenuation by Hurst exponent based time-frequency peak filtering

    Science.gov (United States)

    Zhang, Chao; Li, Yue; Lin, Hongbo; Yang, Baojun

    2015-11-01

    Attenuating random noise is of great significance in seismic data processing. In recent years, time-frequency peak filtering (TFPF) has been successfully applied to seismic random noise attenuation. However, a fixed window length (WL) is used in conventional TFPF. Since a short WL preserves signals while a long WL eliminates random noise effectively, signal preservation and noise attenuation cannot be balanced by a fixed WL, especially when the signal-to-noise ratio of the noisy seismic record is low. Thus, we need to divide a noisy signal into signal and noise segments before filtering; a short WL is then applied to the signal segments to preserve signals, and a long WL is chosen for the noise segments to eliminate random noise. In this paper, we assess the smoothness of signals and random noise in time using the Hurst exponent, a statistic that characterizes the smoothness of a time series. Signal segments are smoother and therefore yield larger Hurst exponent values, whereas random noise is a random series in time without fixed waveforms and thus has low smoothness, so signal and noise segments can be separated by their Hurst exponent values. After the segmentation, we can adopt different filtering WLs in the TFPF for the different segments to trade off signal preservation against random noise attenuation. Synthetic and real data experiments demonstrate that the proposed method can remove random noise from seismic records while preserving reflection events effectively.
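
    The segmentation idea rests on a per-segment Hurst exponent estimate. A minimal sketch, assuming a simple variogram-style estimator (standard deviation of lag-k increments scaling like k^H) and synthetic segments; the window lengths are arbitrary placeholders, not the paper's values:

    ```python
    import numpy as np

    def hurst(x, max_lag=20):
        """Variogram-style estimate: std of lag-k increments scales like k**H."""
        lags = np.arange(2, max_lag)
        tau = [np.std(x[lag:] - x[:-lag]) for lag in lags]
        H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
        return H

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 512)
    smooth_seg = np.cumsum(np.sin(2 * np.pi * 30 * t))  # smooth, signal-like
    noisy_seg = rng.normal(size=512)                    # white noise

    for name, seg in [("signal-like", smooth_seg), ("noise-like", noisy_seg)]:
        H = hurst(seg)
        wl = 5 if H > 0.5 else 31   # short WL preserves signal, long WL smooths noise
        print(f"{name}: H = {H:.2f} -> TFPF window length {wl}")
    ```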

  10. Seismic random noise removal by delay-compensation time-frequency peak filtering

    Science.gov (United States)

    Yu, Pengjun; Li, Yue; Lin, Hongbo; Wu, Ning

    2017-06-01

    Over the past decade, there has been increasing interest in time-frequency peak filtering (TFPF) due to its outstanding performance in suppressing non-stationary and strong seismic random noise. The traditional time-windowed approach achieves local linearity and satisfies the unbiased-estimation condition. However, traditional TFPF (including improved algorithms with alterable window lengths) can hardly resolve the trade-off between removing noise and recovering the seismic signal, and this is most apparent at wave crests and troughs, even for alterable window lengths (WL). To improve the efficiency of the algorithm, TFPF has been applied in time-space domains such as the Radon domain and the radial-trace domain. These time-space transforms provide a reduced-frequency input that lowers the TFPF error and stretches the desired signal along a certain direction, so the time-space development improves both the enhancement of reflection events and the attenuation of noise. It nonetheless remains limited in application because the event direction must be matched by a straight line or a quadratic curve; as a result, waveform distortion and false seismic events may appear when processing records of complex strata. The main emphasis in this article is placed on extending the applicability of time-space TFPF. The reconstructed signal in delay-compensation TFPF, which is generated according to the similarity among reflection events, overcomes the limitation of direction-curve fitting. Moreover, the reconstructed signal satisfies the TFPF linear unbiased-estimation condition and integrates signal preservation with noise attenuation. Experiments on both a synthetic model and field data indicate that delay-compensation TFPF outperforms conventional filtering algorithms.

  11. Random regression models

    African Journals Online (AJOL)

    zlukovi

    modelled as a quadratic regression, nested within parity. The previous lactation length was ... This proportion was mainly covered by linear and quadratic coefficients. Results suggest that RRM could ... The multiple trait models in scalar notation are presented by equations (1, 2), while equation (3) represents the random ...

  12. A practice-specificity-based model of arousal for achieving peak performance.

    Science.gov (United States)

    Movahedi, Ahmadreza; Sheikh, Mahmood; Bagherzadeh, Fazlolah; Hemayattalab, Rasool; Ashayeri, Hassan

    2007-11-01

    The authors propose a practice-specificity-based model of arousal for achieving peak performance. The study included 37 healthy male physical education students who were randomly assigned to a high-arousal (n = 19) or low-arousal (n = 18) group. To manipulate participants' level of arousal, the authors used motivational techniques, and they measured the level of arousal achieved with heart rate and the Sport Competition Anxiety Test (R. Martens, 1977). At the designated arousal state, the 2 groups performed the task (basketball free throws) for 18 sessions. Both groups performed a retention test at the 2 arousal levels immediately after the last exercise session, in the posttest, and after 10 days. Results showed that both groups learned the task similarly and achieved their peak performance at the arousal level they had experienced. When tested at an arousal level that differed from the one experienced throughout the practice sessions, participants' performance deteriorated significantly. Performance of the task appeared to have become integrated with participants' arousal level during task learning. The findings of this study suggest a practice-specificity-based explanation for achieving peak performance.

  13. Scaling of peak flows with constant flow velocity in random self-similar networks

    Directory of Open Access Journals (Sweden)

    R. Mantilla

    2011-07-01

    A methodology is presented to understand the role of the statistical self-similar topology of real river networks on scaling, or power laws, in peak flows for rainfall-runoff events. We created Monte Carlo-generated ensembles of 1000 random self-similar networks (RSNs) with geometrically distributed interior and exterior generators having parameters pi and pe, respectively. The parameter values were chosen to replicate the observed topology of real river networks. We calculated flow hydrographs in each of these networks by numerically solving the link-based mass and momentum conservation equation under the assumption of constant flow velocity. From these simulated RSNs and hydrographs, the scaling exponents β and φ characterizing power laws with respect to drainage area, and corresponding to the width functions and flow hydrographs respectively, were estimated. We found that, in general, φ > β, which supports a similar finding first reported for simulations in the river network of the Walnut Gulch basin, Arizona. Theoretical estimation of β and φ in RSNs is a complex open problem. Therefore, using results for a simpler problem associated with the expected width function and expected hydrograph for an ensemble of RSNs, we give heuristic arguments for theoretical derivations of the scaling exponents β(E) and φ(E) that depend on the Horton ratios for stream lengths and areas. These ratios in turn have a known dependence on the parameters of the geometric distributions of the RSN generators. Good agreement was found between the analytically conjectured values of β(E) and φ(E) and the values estimated from the simulated ensembles of RSNs and hydrographs. The independence of the scaling exponents φ(E) and φ with respect to the value of flow velocity and runoff intensity implies an interesting connection between unit ...
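
    The scaling exponents in question are slopes of log-log regressions of peak flow against drainage area. A toy sketch with a synthetic power-law ensemble; phi = 0.75 is an assumed value, not one estimated in the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical basin ensemble in which peak flows follow Q ~ A**phi.
    area = 10 ** rng.uniform(0, 4, 500)                      # drainage area, km^2
    q_peak = 0.3 * area ** 0.75 * rng.lognormal(0, 0.2, 500)

    # The scaling exponent is the slope of the log-log regression.
    phi_hat, _ = np.polyfit(np.log(area), np.log(q_peak), 1)
    print(f"estimated phi = {phi_hat:.3f}")
    ```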

  14. Home-based aerobic interval training improves peak oxygen uptake equal to residential cardiac rehabilitation: a randomized, controlled trial.

    Science.gov (United States)

    Moholdt, Trine; Bekken Vold, Mona; Grimsmo, Jostein; Slørdahl, Stig Arild; Wisløff, Ulrik

    2012-01-01

    Aerobic capacity, measured as the peak oxygen uptake, is a strong predictor of survival in cardiac patients. Aerobic interval training (AIT), walking/running four times four minutes at 85-95% of peak heart rate, has proven to be effective in increasing peak oxygen uptake in coronary heart disease patients. As some patients do not attend organized rehabilitation programs, home-based exercise should be an alternative. We investigated whether AIT could be performed effectively at home, and compared the effects on peak oxygen uptake with that observed after a standard care, four-week residential rehabilitation. Thirty patients undergoing coronary artery bypass surgery were randomized to residential rehabilitation or home-based AIT. At six months follow-up, peak oxygen uptake increased 4.6 (±2.7) and 3.9 (±3.6) mL·kg(-1) min(-1) (both p<0.005; non-significant between-group difference) after residential rehabilitation and AIT, respectively. Quality of life increased significantly in both groups, with no statistically significant difference between groups. We found no evidence for a different treatment effect between patients randomized to home-based AIT compared to patients attending organized rehabilitation (95% confidence interval -1.8, 3.5). AIT patients reported good adherence to exercise training. Even though these first data indicate positive effects of home-based AIT in patients undergoing coronary artery bypass surgery, more studies are needed to provide supporting evidence for the application of this rehabilitation strategy. ClinicalTrials.gov NCT00363922.

  15. Modeling peak oil and the geological constraints on oil production

    NARCIS (Netherlands)

    Okullo, S.J.; Reynès, F.; Hofkes, M.W.

    2015-01-01

    We propose a model to reconcile the theory of inter-temporal non-renewable resource depletion with well-known stylized facts concerning the exploitation of exhaustible resources such as oil. Our approach introduces geological constraints into a Hotelling type extraction-exploration model. We show

  16. Peak power prediction in junior basketballers: comparing linear and allometric models.

    Science.gov (United States)

    Duncan, Michael J; Hankey, Joanne; Lyons, Mark; James, Rob S; Nevill, Alan M

    2013-03-01

    Equations commonly used to predict peak power from jump height have relied on linear additive models that are biologically unsound beyond the range of observations because of high negative intercept values. This study explored the utility of allometric multiplicative modeling to better predict peak power in adolescent basketball players. Seventy-seven elite junior basketball players (62 adolescent boys, 15 adolescent girls, age = 16.8 ± 0.8 years) performed 3 counter movement jumps (CMJs) on a force platform. Both linear and multiplicative models were then used to determine their efficacy. Four previously published linear equations were significantly associated with actual peak power (all p < 0.05), as were the equations by Sayers (both p < 0.05). Allometric modeling was used to determine an alternative biologically sound equation which was more strongly associated with (r = 0.886, p < 0.001), and not significantly different from (p > 0.05), actual peak power, and predicted 77.9% of the variance in actual peak power (adjusted R² = 0.779). This equation was significantly associated with (r = 0.871, p < 0.001), and not significantly different from (p > 0.05), actual peak power, and offered a more accurate estimation of peak power than the previously validated linear additive models examined in this study. The allometric model determined in this study, or the multiplicative model (body mass × CMJ height), provides a biologically sound means to accurately estimate peak power in elite adolescent basketballers, more accurate than equations based on linear additive models.
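
    A minimal sketch of the allometric (multiplicative) approach: fit P = a·mass^b·jump^c by ordinary least squares in log space. All data below are synthetic, and the generating exponents are invented, not the study's estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic body mass (kg) and CMJ height (m).
    mass = rng.normal(75, 10, 77)
    jump = rng.normal(0.45, 0.06, 77)
    power = 60 * mass * jump ** 0.8 * rng.lognormal(0, 0.05, 77)  # "actual" W

    # Multiplicative (allometric) model P = a * mass^b * jump^c, fitted as a
    # linear regression in log space; predictions stay positive, avoiding the
    # negative-intercept problem of linear additive equations.
    X = np.column_stack([np.ones(77), np.log(mass), np.log(jump)])
    coef, *_ = np.linalg.lstsq(X, np.log(power), rcond=None)
    a, b, c = np.exp(coef[0]), coef[1], coef[2]
    print(f"P = {a:.1f} * mass^{b:.2f} * jump^{c:.2f}")
    ```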

  17. Home-based aerobic interval training improves peak oxygen uptake equal to residential cardiac rehabilitation: a randomized, controlled trial.

    Directory of Open Access Journals (Sweden)

    Trine Moholdt

    Aerobic capacity, measured as the peak oxygen uptake, is a strong predictor of survival in cardiac patients. Aerobic interval training (AIT), walking/running four times four minutes at 85-95% of peak heart rate, has proven to be effective in increasing peak oxygen uptake in coronary heart disease patients. As some patients do not attend organized rehabilitation programs, home-based exercise should be an alternative. We investigated whether AIT could be performed effectively at home, and compared the effects on peak oxygen uptake with that observed after a standard care, four-week residential rehabilitation. Thirty patients undergoing coronary artery bypass surgery were randomized to residential rehabilitation or home-based AIT. At six months follow-up, peak oxygen uptake increased 4.6 (±2.7) and 3.9 (±3.6) mL·kg(-1) min(-1) (both p<0.005; non-significant between-group difference) after residential rehabilitation and AIT, respectively. Quality of life increased significantly in both groups, with no statistically significant difference between groups. We found no evidence for a different treatment effect between patients randomized to home-based AIT compared to patients attending organized rehabilitation (95% confidence interval -1.8, 3.5). AIT patients reported good adherence to exercise training. Even though these first data indicate positive effects of home-based AIT in patients undergoing coronary artery bypass surgery, more studies are needed to provide supporting evidence for the application of this rehabilitation strategy. ClinicalTrials.gov NCT00363922.

  18. New approaches based on modified Gaussian models for the prediction of chromatographic peaks.

    Science.gov (United States)

    Baeza-Baeza, J J; Ortiz-Bolsico, C; García-Álvarez-Coque, M C

    2013-01-03

    The description of skewed chromatographic peaks has been discussed extensively and many functions have been proposed. Among these, the Polynomially Modified Gaussian (PMG) models interpret the deviations from ideality as a change in the standard deviation with time. This approach has shown a high accuracy in the fitting to tailing and fronting peaks. However, it has the drawback of the uncontrolled growth of the predicted signal outside the elution region, which departs from the experimental baseline. To solve this problem, the Parabolic-Lorentzian Modified Gaussian (PLMG) model was developed. This combines a parabola that describes the variance change in the peak region, and a Lorentzian function that decreases the variance growth out of the peak region. The PLMG model has, however, the drawback of its high flexibility that makes the optimisation process difficult when the initial values of the model parameters are far from the optimal ones. Based on the fitting of experimental peaks of diverse origin and asymmetry degree, several semiempirical approaches that make use of the halfwidths at 60.65% and 10% peak height are here reported, which allow the use of the PLMG model for prediction purposes with small errors (below 2-3%). The incorporation of several restrictions in the algorithm avoids the indeterminations that arise frequently with this model, when applied to highly skewed peaks. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Localized auxin peaks in concentration-based transport models of the shoot apical meristem.

    Science.gov (United States)

    Draelants, Delphine; Avitabile, Daniele; Vanroose, Wim

    2015-05-06

    We study the formation of auxin peaks in a generic class of concentration-based auxin transport models, posed on static plant tissues. Using standard asymptotic analysis, we prove that, on bounded domains, auxin peaks are not formed via a Turing instability in the active transport parameter, but via simple corrections to the homogeneous steady state. When the active transport is small, the geometry of the tissue encodes the peaks' amplitude and location: peaks arise where cells have fewer neighbours, that is, at the boundary of the domain. We test our theory and perform numerical bifurcation analysis on two models that are known to generate auxin patterns for biologically plausible parameter values. In the same parameter regimes, we find that realistic tissues are capable of generating a multitude of stationary patterns, with a variable number of auxin peaks, that can be selected by different initial conditions or by quasi-static changes in the active transport parameter. The competition between active transport and production rate determines whether peaks remain localized or cover the entire domain. In particular, changes in the auxin production that are fast with respect to the cellular life cycle affect the auxin peak distribution, switching from localized spots to fully patterned states. We relate the occurrence of localized patterns to a snaking bifurcation structure, which is known to arise in a wide variety of nonlinear media, but has not yet been reported in plant models. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  20. Joint modelling of flood peaks and volumes: A copula application for the Danube River

    Directory of Open Access Journals (Sweden)

    Papaioannou George

    2016-12-01

    Flood frequency analysis is usually performed as a univariate analysis of flood peaks using a suitable theoretical probability distribution of the annual maximum flood peaks or peak-over-threshold values. However, other flood attributes, such as flood volume and duration, are necessary for the design of hydrotechnical projects, too. In this study, the suitability of various copula families for a bivariate analysis of peak discharges and flood volumes has been tested. Streamflow data from selected gauging stations along the whole Danube River have been used. Kendall’s rank correlation coefficient (tau) quantifies the dependence between flood peak discharge and flood volume settings. The methodology is applied to two different data samples: (1) annual maximum flood (AMF) peaks combined with annual maximum flow volumes of fixed durations at 5, 10, 15, 20, 25, 30 and 60 days, respectively (which can be regarded as a regime analysis of the dependence between the extremes of both variables in a given year), and (2) annual maximum flood (AMF) peaks with corresponding flood volumes (which is a typical choice for engineering studies). The bivariate modelling of the extracted peak discharge - flood volume couples is achieved with the use of the Ali-Mikhail-Haq (AMH), Clayton, Frank, Joe, Gumbel, Hüsler-Reiss, Galambos, Tawn, Normal, Plackett and FGM copula families. Scatterplots of the observed and simulated peak discharge - flood volume pairs and goodness-of-fit tests have been used to assess the overall applicability of the copulas as well as to observe any changes in suitable models along the Danube River. The results indicate that for the second data sampling method, almost all of the considered Archimedean-class copula families perform better than the other copula families selected for this study, and that for the first method, only the upper-tail-flat copulas excel (except for the AMH copula, due to its inability to model stronger relationships).
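
    A sketch of the tau-based step of such an analysis: estimate Kendall's tau from peak-volume pairs, then invert the known tau-theta relations for two Archimedean families. The data are synthetic stand-ins, not Danube records; the study itself goes further, with fitted copulas and goodness-of-fit tests:

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(4)

    # Synthetic AMF peak discharges and flood volumes with positive dependence.
    z = rng.gamma(3, 1, 60)
    peaks = 500 + 150 * z + rng.normal(0, 50, 60)    # m^3/s
    volumes = 80 + 30 * z + rng.normal(0, 20, 60)    # 10^6 m^3

    tau, _ = kendalltau(peaks, volumes)

    # Standard tau-inversion (moment-style) estimators:
    theta_gumbel = 1 / (1 - tau)                     # Gumbel:  tau = 1 - 1/theta
    theta_clayton = 2 * tau / (1 - tau)              # Clayton: tau = theta/(theta+2)
    print(f"tau = {tau:.2f}, Gumbel theta = {theta_gumbel:.2f}, "
          f"Clayton theta = {theta_clayton:.2f}")
    ```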

  1. Randomized Item Response Theory Models

    NARCIS (Netherlands)

    Fox, Gerardus J.A.

    2005-01-01

    The randomized response (RR) technique is often used to obtain answers to sensitive questions. A new method is developed to measure latent variables using the RR technique, because direct questioning leads to biased results. Within the RR technique, the probability of the true response is modeled by ...
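
    For context, a sketch of the classic Warner randomized response estimator on which such models build; all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    p = 0.7          # probability of answering the sensitive statement itself
    pi_true = 0.2    # true prevalence of the sensitive trait (unobservable)
    n = 10_000

    trait = rng.random(n) < pi_true
    direct = rng.random(n) < p
    answers = np.where(direct, trait, ~trait)       # observed "yes"/"no" only

    lam = answers.mean()                            # proportion answering "yes"
    pi_hat = (lam - (1 - p)) / (2 * p - 1)          # Warner's moment estimator
    print(f"estimated prevalence {pi_hat:.3f} (true {pi_true})")
    ```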

  2. Prediction on the Peak of the CO2 Emissions in China Using the STIRPAT Model

    Directory of Open Access Journals (Sweden)

    Li Li

    2016-01-01

    Climate change has seriously threatened our economic, environmental, and social sustainability. The world has taken active measures to deal with climate change and mitigate carbon emissions. Predicting the carbon emissions peak has become a global focus, as well as a leading target for China’s low-carbon development. China has promised its carbon emissions will peak by around 2030, with the intention of peaking earlier. Scholars have generally studied the influencing factors of carbon emissions; however, research on carbon emissions peaks is not extensive. Therefore, by setting a low scenario, a middle scenario, and a high scenario, this paper predicts China’s carbon emissions peak from 2015 to 2035 based on data from 1998 to 2014 using the Stochastic Impacts by Regression on Population, Affluence, and Technology (STIRPAT) model. The results show that in the low, middle, and high scenarios China will reach its carbon emissions peak in 2024, 2027, and 2030, respectively. Thus, this paper puts forward the large-scale application of technological innovation to improve energy efficiency and optimize the energy structure and supply and demand. China should use industrial policy and human capital investment to stimulate the rapid development of low-carbon industries and modern agriculture and service industries to help China reach its carbon emissions peak by around 2030 or earlier.
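
    A toy sketch of the STIRPAT workflow: estimate elasticities by regressing log emissions on log drivers, then project a scenario and read off the peak year. Every series and coefficient below is synthetic, not the paper's data or scenarios:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # STIRPAT: I = a * P^b * A^c * T^d, estimated as a regression in logs.
    # P, A, T are synthetic stand-ins for population, affluence and technology
    # (energy intensity); small noise keeps the log-linear drivers identifiable.
    k = np.arange(17)                                  # 1998..2014
    P = 12.4 + 0.07 * k
    A = 1.09 ** k * rng.lognormal(0, 0.05, k.size)
    T = np.exp(-0.02 * k - 0.001 * k**2) * rng.lognormal(0, 0.05, k.size)
    I = 30 * P**0.9 * A**0.6 * T**0.8 * rng.lognormal(0, 0.01, k.size)

    X = np.column_stack([np.ones(k.size), np.log(P), np.log(A), np.log(T)])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)

    # Scenario: affluence growth slows, efficiency gains accelerate after 2014.
    m = np.arange(1, 22)                               # 2015..2035
    P_s = 12.4 + 0.07 * (16 + m)
    A_s = A[-1] * np.cumprod(1 + 0.09 - 0.004 * m)
    T_s = T[-1] * 0.95 ** m
    Xs = np.column_stack([np.ones(m.size), np.log(P_s), np.log(A_s), np.log(T_s)])
    I_s = np.exp(Xs @ beta)
    print("projected peak year:", 2014 + int(m[np.argmax(I_s)]))
    ```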

  3. Systems Modelling of the Socio-Technical Aspects of Residential Electricity Use and Network Peak Demand.

    Directory of Open Access Journals (Sweden)

    Jim Lewis

    Provision of network infrastructure to meet rising network peak demand is increasing the cost of electricity. Addressing this demand is a major imperative for Australian electricity agencies. The network peak demand model reported in this paper provides a quantified decision support tool and a means of understanding the key influences and impacts on network peak demand. An investigation of the system factors impacting residential consumers' peak demand for electricity was undertaken in Queensland, Australia. Technical factors, such as the customers' location, housing construction and appliances, were combined with social factors, such as household demographics, culture, trust and knowledge, and Change Management Options (CMOs) such as tariffs, price, managed supply, etc., in a conceptual 'map' of the system. A Bayesian network was used to quantify the model and provide insights into the major influential factors and their interactions. The model was also used to examine the reduction in network peak demand under different market-based and government interventions in various customer locations of interest, and to investigate the relative importance of instituting programs that build trust and knowledge through well-designed customer-industry engagement activities. The Bayesian network was implemented via a spreadsheet with a tickbox interface. The model combined available data from industry-specific and public sources with relevant expert opinion. The results revealed that the most effective intervention strategies involve combining particular CMOs with associated education and engagement activities. The model demonstrated the importance of designing interventions that take into account the interactions of the various elements of the socio-technical system. The options that provided the greatest impact on peak demand were Off-Peak Tariffs and Managed Supply and increases in the price of electricity. The impact in peak demand reduction differed for each of the locations ...

  4. Systems Modelling of the Socio-Technical Aspects of Residential Electricity Use and Network Peak Demand.

    Science.gov (United States)

    Lewis, Jim; Mengersen, Kerrie; Buys, Laurie; Vine, Desley; Bell, John; Morris, Peter; Ledwich, Gerard

    2015-01-01

    Provision of network infrastructure to meet rising network peak demand is increasing the cost of electricity. Addressing this demand is a major imperative for Australian electricity agencies. The network peak demand model reported in this paper provides a quantified decision support tool and a means of understanding the key influences and impacts on network peak demand. An investigation of the system factors impacting residential consumers' peak demand for electricity was undertaken in Queensland, Australia. Technical factors, such as the customers' location, housing construction and appliances, were combined with social factors, such as household demographics, culture, trust and knowledge, and Change Management Options (CMOs) such as tariffs, price, managed supply, etc., in a conceptual 'map' of the system. A Bayesian network was used to quantify the model and provide insights into the major influential factors and their interactions. The model was also used to examine the reduction in network peak demand with different market-based and government interventions in various customer locations of interest and investigate the relative importance of instituting programs that build trust and knowledge through well designed customer-industry engagement activities. The Bayesian network was implemented via a spreadsheet with a tickbox interface. The model combined available data from industry-specific and public sources with relevant expert opinion. The results revealed that the most effective intervention strategies involve combining particular CMOs with associated education and engagement activities. The model demonstrated the importance of designing interventions that take into account the interactions of the various elements of the socio-technical system. The options that provided the greatest impact on peak demand were Off-Peak Tariffs and Managed Supply and increases in the price of electricity. The impact in peak demand reduction differed for each of the locations

  5. Peak oxygen uptake after cardiac rehabilitation: a randomized controlled trial of a 12-month maintenance program versus usual care.

    Directory of Open Access Journals (Sweden)

    Erik Madssen

    BACKGROUND: Exercise capacity is a strong predictor of survival in patients with coronary artery disease (CAD). Exercise capacity improves after cardiac rehabilitation exercise training, but previous studies have demonstrated a decline in peak oxygen uptake after ending a formal rehabilitation program. There is a lack of knowledge on how long-term exercise adherence can be achieved in CAD patients. We therefore assessed whether a 12-month maintenance program following cardiac rehabilitation would lead to increased adherence to exercise and increased exercise capacity compared to usual care. MATERIALS AND METHODS: Two-centre, open, parallel randomized controlled trial with 12 months follow-up comparing usual care to a maintenance program. The maintenance program consisted of one monthly supervised high intensity interval training session, a written exercise program and exercise diary, and a maximum exercise test every third month during follow-up. Forty-nine patients (15 women) on optimal medical treatment were included following discharge from cardiac rehabilitation. The primary endpoint was change in peak oxygen uptake at follow-up; secondary endpoints were physical activity level, quality of life and blood markers of cardiovascular risk. RESULTS: There was no change in peak oxygen uptake from baseline to follow-up in either group (intervention group 27.9 (±4.7) to 28.8 (±5.6) mL·kg(-1) min(-1), control group 32.0 (±6.2) to 32.8 (±5.8) mL·kg(-1) min(-1)), with no between-group difference, p = 0.22. Quality of life and blood biomarkers remained essentially unchanged, and both self-reported and measured physical activity levels were similar between groups after 12 months. CONCLUSIONS: A maintenance exercise program for 12 months did not improve adherence to exercise or peak oxygen uptake in CAD patients after discharge from cardiac rehabilitation compared to usual care. This suggests that infrequent supervised high intensity interval training ...

  6. Hydroclimatology of Dual-Peak Annual Cholera Incidence: Insights from a Spatially Explicit Model

    Science.gov (United States)

    Bertuzzo, E.; Mari, L.; Righetto, L.; Gatto, M.; Casagrandi, R.; Rodriguez-Iturbe, I.; Rinaldo, A.

    2012-12-01

    Cholera incidence in some regions of the Indian subcontinent may exhibit two annual peaks, although the main environmental drivers that have been linked to the disease (e.g. sea surface temperature, zooplankton abundance, river discharge) peak once per year, during the summer. An empirical hydroclimatological explanation relating cholera transmission to river flows and to the spatial spreading of the disease has recently been proposed. We specifically support and substantiate this hypothesis mechanistically by means of a spatially explicit model of cholera transmission. Our framework directly accounts for the role of the river network in transporting and redistributing cholera bacteria among human communities, as well as for spatial and temporal annual fluctuations of precipitation and river flows. To single out the hydroclimatologic controls on the prevalence patterns in a non-specific geographical context, we first apply the model to Optimal Channel Networks as a general model of hydrological networks, and we impose a uniform distribution of population. The model is forced by seasonal environmental drivers, namely precipitation, temperature and chlorophyll concentration in the coastal environment, a proxy for Vibrio cholerae concentration. Our results show that these drivers may suffice to generate dual-peak cholera prevalence patterns for proper combinations of the timescales involved in pathogen transport, hydrologic variability and disease unfolding. The model explains the possible occurrence of spatial patterns of cholera incidence characterized by a spring peak confined to coastal areas and a fall peak involving inland regions. We then proceed to apply the model to the specific settings of the Bay of Bengal, accounting for the actual river networks (derived from digital terrain map manipulations), the proper distribution of population (estimated by downscaling census data based on remotely sensed features) and precipitation patterns. Overall our ...

  7. Seismic random noise attenuation by time-frequency peak filtering based on joint time-frequency distribution

    Science.gov (United States)

    Zhang, Chao; Lin, Hong-bo; Li, Yue; Yang, Bao-jun

    2013-09-01

    Time-Frequency Peak Filtering (TFPF) is an effective method for eliminating pervasive random noise in seismic signal analysis. In conventional TFPF, the pseudo Wigner-Ville distribution (PWVD) is used for estimating instantaneous frequency (IF), but it is sensitive to noise interferences that mask the borderline between signal and noise and degrade the energy concentration along the IF curve. This causes the peaks of the pseudo Wigner-Ville distribution to deviate from the instantaneous frequency, which produces undesirable lateral oscillations as well as amplitude attenuation of the highly varying seismic signal, and ultimately a biased seismic signal. To overcome these drawbacks and increase the signal-to-noise ratio, we propose in this paper a TFPF refinement based upon the joint time-frequency distribution (JTFD). The JTFD is obtained by combining the PWVD and the smoothed PWVD (SPWVD). First we use the SPWVD to generate a broad time-frequency area of the signal. This area is then filtered with a step function to remove divergent time-frequency points. Finally, the JTFD is obtained from the PWVD weighted by this filtered distribution. The objective of these operations is to reduce the effects of the interferences and enhance the energy concentration around the IF of the signal in the time-frequency domain. Experiments with synthetic and real seismic data demonstrate that TFPF based on the joint time-frequency distribution can effectively suppress strong random noise and preserve events of interest.

  8. A quantitative trait locus mixture model that avoids spurious LOD score peaks.

    Science.gov (United States)

    Feenstra, Bjarke; Skovgaard, Ib M

    2004-06-01

    In standard interval mapping of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. At any given location in the genome, the evidence of a putative QTL is measured by the likelihood ratio of the mixture model compared to a single normal distribution (the LOD score). This approach can occasionally produce spurious LOD score peaks in regions of low genotype information (e.g., widely spaced markers), especially if the phenotype distribution deviates markedly from a normal distribution. Such peaks are not indicative of a QTL effect; rather, they are caused by the fact that a mixture of normals always produces a better fit than a single normal distribution. In this study, a mixture model for QTL mapping that avoids the problems of such spurious LOD score peaks is presented.
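
    The spurious-peak phenomenon is easy to reproduce: even for phenotypes drawn from a single normal distribution, a two-component normal mixture fitted by EM attains a higher likelihood, so the LOD score is positive. A minimal sketch with fixed 1:1 mixing proportions (as in a backcross with no genotype information); this illustrates the problem, not the authors' corrected model:

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    y = rng.normal(0, 1, 200)            # phenotypes simulated with NO QTL effect

    def loglik_single(y):
        return norm.logpdf(y, y.mean(), y.std()).sum()

    def loglik_mixture(y, p=0.5, iters=200):
        """EM for a two-component normal mixture with fixed mixing proportion p."""
        mu1, mu2, sd = y.mean() - 0.5, y.mean() + 0.5, y.std()
        for _ in range(iters):
            d1 = p * norm.pdf(y, mu1, sd)             # E-step: responsibilities
            d2 = (1 - p) * norm.pdf(y, mu2, sd)
            r = d1 / (d1 + d2)
            mu1 = (r * y).sum() / r.sum()             # M-step: weighted means,
            mu2 = ((1 - r) * y).sum() / (1 - r).sum() # pooled standard deviation
            sd = np.sqrt((r * (y - mu1) ** 2 + (1 - r) * (y - mu2) ** 2).sum() / y.size)
        return np.log(p * norm.pdf(y, mu1, sd) + (1 - p) * norm.pdf(y, mu2, sd)).sum()

    lod = (loglik_mixture(y) - loglik_single(y)) / np.log(10)
    print(f"LOD = {lod:.2f}   # positive despite no QTL: mixtures always fit better")
    ```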

  9. A new global model for the ionospheric F2 peak height for radio wave propagation

    Directory of Open Access Journals (Sweden)

    M. M. Hoque

    2012-05-01

    The F2-layer peak density height hmF2 is one of the most important ionospheric parameters characterizing HF propagation conditions. Therefore, the ability to model and predict the spatial and temporal variations of the peak electron density height is of great use for both ionospheric research and radio frequency planning and operation. For global hmF2 modelling we present a nonlinear model approach with 13 model coefficients and a few empirically fixed parameters. The model approach describes the temporal and spatial dependencies of hmF2 on a global scale. For determining the 13 model coefficients, we apply this model approach to a large quantity of global hmF2 observational data obtained from GNSS radio occultation measurements onboard the CHAMP, GRACE and COSMIC satellites, and to data from 69 worldwide ionosonde stations. We have found that the model fits these input data with the same root mean squared (RMS) and standard deviations of 10%. In comparison with the electron density NeQuick model, the proposed Neustrelitz global hmF2 model (Neustrelitz Peak Height Model – NPHM) shows percentage RMS deviations of about 13% and 12% from the observational data during high and low solar activity conditions, respectively, whereas the corresponding deviations for the NeQuick model are found to be 18% and 16%, respectively.

  11. The weighted random graph model

    Science.gov (United States)

    Garlaschelli, Diego

    2009-07-01

    We introduce the weighted random graph (WRG) model, which represents the weighted counterpart of the Erdős-Rényi random graph and provides fundamental insights into more complicated weighted networks. We find analytically that the WRG is characterized by a geometric weight distribution, a binomial degree distribution and a negative binomial strength distribution. We also characterize exactly the percolation phase transitions associated with edge removal and with the appearance of weighted subgraphs of any order and intensity. We find that even this completely null model displays a percolation behaviour similar to what is observed in real weighted networks, implying that edge removal cannot be used to detect community structure empirically. By contrast, the analysis of clustering successfully reveals different patterns between the WRG and real networks.
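
    A sketch of sampling a WRG and checking two of the stated analytical results (geometric weights with P(w) = p^w(1-p), where w = 0 means "no edge"; binomial degrees). The parameters are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    n, p = 200, 0.1
    iu = np.triu_indices(n, k=1)
    w = rng.geometric(1 - p, size=iu[0].size) - 1   # shift support to {0, 1, 2, ...}
    W = np.zeros((n, n), dtype=int)
    W[iu] = w
    W += W.T                                        # symmetric weight matrix

    degree = (W > 0).sum(axis=1)                    # binomial in the WRG
    strength = W.sum(axis=1)                        # negative binomial in the WRG
    print(f"mean degree   {degree.mean():.2f} (theory {(n - 1) * p:.2f})")
    print(f"mean strength {strength.mean():.2f} (theory {(n - 1) * p / (1 - p):.2f})")
    ```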

  12. Greater deciduous shrub abundance extends tundra peak season and increases modeled net CO2 uptake.

    Science.gov (United States)

    Sweet, Shannan K; Griffin, Kevin L; Steltzer, Heidi; Gough, Laura; Boelman, Natalie T

    2015-06-01

    Satellite studies of the terrestrial Arctic report increased summer greening and longer overall growing and peak seasons since the 1980s, which increases productivity and the period of carbon uptake. These trends are attributed to increasing air temperatures and reduced snow cover duration in spring and fall. Concurrently, deciduous shrubs are becoming increasingly abundant in tundra landscapes, which may also impact canopy phenology and productivity. Our aim was to determine the influence of greater deciduous shrub abundance on tundra canopy phenology and subsequent impacts on net ecosystem carbon exchange (NEE) during the growing and peak seasons in the arctic foothills region of Alaska. We compared deciduous shrub-dominated and evergreen/graminoid-dominated community-level canopy phenology throughout the growing season using the normalized difference vegetation index (NDVI). We used a tundra plant-community-specific leaf area index (LAI) model to estimate LAI throughout the green season and a tundra-specific NEE model to estimate the impact of greater deciduous shrub abundance and associated shifts in both leaf area and canopy phenology on tundra carbon flux. We found that deciduous shrub canopies reached the onset of peak greenness 13 days earlier and the onset of senescence 3 days earlier compared to evergreen/graminoid canopies, resulting in a 10-day extension of the peak season. The combined effect of the longer peak season and greater leaf area of deciduous shrub canopies almost tripled the modeled net carbon uptake of deciduous shrub communities compared to evergreen/graminoid communities, while the longer peak season alone resulted in 84% greater carbon uptake in deciduous shrub communities. These results suggest that greater deciduous shrub abundance increases carbon uptake not only due to greater leaf area, but also due to an extension of the period of peak greenness, which extends the period of maximum carbon uptake. © 2015 John Wiley & Sons Ltd.

  13. Dynamic randomization and a randomization model for clinical trials data.

    Science.gov (United States)

    Kaiser, Lee D

    2012-12-20

    Randomization models are useful in supporting the validity of linear model analyses applied to data from a clinical trial that employed randomization via permuted blocks. Here, a randomization model for clinical trials data with arbitrary randomization methodology is developed, with treatment effect estimators and standard error estimators valid from a randomization perspective. A central limit theorem for the treatment effect estimator is also derived. As with permuted-blocks randomization, a typical linear model analysis provides results similar to the randomization model results when, roughly, unit effects display no pattern over time. A key requirement for the randomization inference is that the unconditional probability that any patient receives active treatment is constant across patients; when this probability condition is violated, the treatment effect estimator is biased from a randomization perspective. Most randomization methods for balanced, 1 to 1, treatment allocation satisfy this condition. However, many dynamic randomization methods for planned unbalanced treatment allocation, like 2 to 1, do not satisfy this constant probability condition, and these methods should be avoided. Copyright © 2012 John Wiley & Sons, Ltd.
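
    A sketch of the constant-probability condition for permuted-block 1:1 randomization: averaging over many simulated allocation sequences, each patient position receives active treatment with probability about one half (block size is an arbitrary choice here):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def permuted_blocks(n_patients, block_size=4):
        """1:1 allocation: every block contains equal numbers of A and B."""
        seq = []
        while len(seq) < n_patients:
            block = np.array(["A"] * (block_size // 2) + ["B"] * (block_size // 2))
            rng.shuffle(block)
            seq.extend(block)
        return np.array(seq[:n_patients])

    # Unconditional P(active) per patient position, averaged over sequences:
    sims = np.array([permuted_blocks(10) == "A" for _ in range(20_000)])
    print(np.round(sims.mean(axis=0), 3))   # ~0.5 everywhere, as required
    ```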

  14. Finite population size effects in quasispecies models with single-peak fitness landscape

    OpenAIRE

    Saakian, David B.; Deem, Michael W.; Hu, Chin Kun

    2012-01-01

    We consider finite population size effects for the Crow-Kimura and Eigen quasispecies models with a single-peak fitness landscape. We accurately formulate the iteration procedure for the finite population models, then derive a Hamilton-Jacobi equation (HJE) to describe the dynamics of the probability distribution. The steady-state solution of the HJE gives the variance of the mean fitness. Our results are useful for understanding population sizes of viruses in which the infinite population models can give r...

  15. PREDICTION OF FLU EPIDEMIC PEAKS IN ST. PETERSBURG THROUGH POPULATION-BASED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    Vasiliy N. Leonenko

    2016-11-01

    The paper presents two methods of predicting the peaks of influenza epidemics using population-based mathematical models: the Baroyan-Rvachev model and a modified Kermack-McKendrick model proposed by the authors. We compare the accuracy of predicting the timing and height of epidemic peaks on long-term ARI incidence data for the city of St. Petersburg. The comparison methodology is based on three accuracy criteria, conventionally named "square", "vertical stripe" and "horizontal stripe", and two variants of model parameter estimation. In the first variant we calibrate the model on data from the first city impacted by the epidemic and then use these parameters for the other cities, which allows the spatial characteristics of the epidemic in the country to be taken into account. In the second variant, we use only the historical data available at the time of the prediction for a given city. The advantage of this approach is that it needs no additional, not always available, external data to predict the epidemic. Test calculations demonstrated that the first method gives good results when there are significant delays between the peaks of epidemics in different cities. If the outbreak in St. Petersburg started soon after the registration of the first outbreaks in the other cities of the Russian Federation, the second method shows comparable results, with an accuracy of up to 90% in predicting the peak of the epidemic. In most cases this is sufficient for the results of the calculations to be used in planning antiviral activities. The lead time of the peak prediction is still relatively short, which seems to be associated with the variety of patterns of virus spread and permanent changes in transport communications within the country.
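
    For orientation, a sketch of peak prediction with the classic (unmodified) Kermack-McKendrick SIR model; the parameters are illustrative and not calibrated to the St. Petersburg ARI data used in the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Classic SIR model with illustrative per-day rates (R0 = 1.8).
    N, transmission, recovery = 5_000_000, 0.45, 0.25
    I0 = 100.0

    def sir(t, y):
        s, i, r = y
        new_infections = transmission * s * i / N
        return [-new_infections, new_infections - recovery * i, recovery * i]

    sol = solve_ivp(sir, (0, 300), [N - I0, I0, 0.0], max_step=1.0)
    peak = np.argmax(sol.y[1])
    print(f"predicted peak: day {sol.t[peak]:.0f}, "
          f"{sol.y[1][peak]:,.0f} simultaneously infectious")
    ```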

  16. Peaks, plateaus, canyons, and craters: The complex geometry of simple mid-domain effect models

    DEFF Research Database (Denmark)

    Colwell, Robert K.; Gotelli, Nicholas J.; Rahbek, Carsten

    2009-01-01

    We used a spreading dye algorithm to place assemblages of species of uniform range size in one-dimensional or two-dimensional bounded domains. In some models, we allowed dispersal to introduce range discontinuity. Results: As uniform range size increases from small to medium, a flat pattern of species richness is replaced by a pair of peripheral peaks, separated by a valley (one-dimensional models), or by a cratered ring (two-dimensional models) of species richness ... of a uniform size generate more complex patterns, including peaks, plateaus, canyons, and craters of species richness.
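
    A sketch of the one-dimensional spreading dye algorithm named above: each range grows cell by cell from a random seed and is deflected back at the domain edges. Dispersal-induced range discontinuity, which the paper also models, is omitted, and all sizes are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def spreading_dye_richness(domain=100, range_size=40, n_species=5000):
        """1-D spreading dye: grow each range one adjacent cell at a time."""
        richness = np.zeros(domain, dtype=int)
        for _ in range(n_species):
            lo = hi = rng.integers(domain)            # seed cell
            while hi - lo + 1 < range_size:
                if (rng.random() < 0.5 and lo > 0) or hi == domain - 1:
                    lo -= 1                           # spread left
                else:
                    hi += 1                           # spread right
            richness[lo:hi + 1] += 1
        return richness

    rich = spreading_dye_richness()
    print("edge / quartile / centre richness:", rich[0], rich[25], rich[50])
    ```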

  17. A comparison with theory of peak to peak sound level for a model helicopter rotor generating blade slap at low tip speeds

    Science.gov (United States)

    Fontana, R. R.; Hubbard, J. E., Jr.

    1983-01-01

    Mini-tuft and smoke flow visualization techniques have been developed for the investigation of model helicopter rotor blade-vortex interaction noise at low tip speeds. These techniques allow the parameters required for calculating the blade-vortex interaction noise with the Widnall/Wolf model to be determined. The measured acoustics are compared with the predicted acoustics for each test condition. Under the conditions tested it is determined that the dominant acoustic pulse results from the interaction of the blade with a vortex 1-1/4 revolutions old at an interaction angle of less than 8 deg. The Widnall/Wolf model predicts the peak sound pressure level within 3 dB for blade-vortex separation distances greater than 1 semichord, but it generally overpredicts the peak SPL by over 10 dB for blade-vortex separation distances of less than 1/4 semichord.

  18. A Mixed Effects Randomized Item Response Model

    Science.gov (United States)

    Fox, J.-P.; Wyrick, Cheryl

    2008-01-01

    The randomized response technique ensures that individual item responses, denoted as true item responses, are randomized before observing them and so-called randomized item responses are observed. A relationship is specified between randomized item response data and true item response data. True item response data are modeled with a (non)linear…

  19. The Critical Role of the Routing Scheme in Simulating Peak River Discharge in Global Hydrological Models

    Science.gov (United States)

    Zhao, Fang; Veldkamp, Ted I. E.; Frieler, Katja; Schewe, Jacob; Ostberg, Sebastian; Willner, Sven; Schauberger, Bernhard; Gosling, Simon N.; Schmied, Hannes Muller; Portmann, Felix T.

    2017-01-01

    Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge, which is crucial in flood simulations, has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a (Inter-Sectoral Impact Model Intercomparison Project phase 2a) project. The runoff simulations were used as input for the global river routing model CaMa-Flood (Catchment-based Macro-scale Floodplain). The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC (Global Runoff Data Centre) stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about two-thirds of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.
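
    A sketch of the kind of peak-bias benchmark involved: compare the mean annual-maximum daily discharge of two simulated series against observations. All series are synthetic, and the moving-average stand-in for floodplain storage is purely illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    def peak_bias(obs_daily, sim_daily, years=40):
        """Relative bias of the mean annual-maximum daily discharge."""
        obs_amax = obs_daily.reshape(years, -1).max(axis=1)
        sim_amax = sim_daily.reshape(years, -1).max(axis=1)
        return (sim_amax.mean() - obs_amax.mean()) / obs_amax.mean()

    # Toy 40-year daily series: one routing overshoots flood peaks, a
    # "floodplain-buffered" routing damps them.
    days = 40 * 365
    obs = rng.gamma(2.0, 50.0, days)
    native = obs * rng.lognormal(0.15, 0.3, days)          # biased-high peaks
    buffered = np.convolve(obs, np.ones(15) / 15, "same")  # storage smooths peaks

    for name, sim in [("native routing", native), ("buffered routing", buffered)]:
        print(f"{name}: peak bias = {peak_bias(obs, sim):+.1%}")
    ```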

  20. How would peak rainfall intensity affect runoff predictions using conceptual water balance models?

    Directory of Open Access Journals (Sweden)

    B. Yu

    2015-06-01

    Most hydrological models use continuous daily precipitation and potential evapotranspiration for streamflow estimation. With the projected increase in mean surface temperature, hydrological processes are set to intensify irrespective of the underlying changes to mean precipitation. The effect of an increase in rainfall intensity on the long-term water balance is, however, not adequately accounted for in commonly used hydrological models. This study follows from a previous comparative analysis of a non-stationary daily series of streamflow of a forested watershed (River Rimbaud, in the French Alps; area = 1.478 km2; 1966–2006). Non-stationarity in the recorded streamflow occurred as a result of a severe wildfire in 1990. Two daily models (AWBM and SimHyd) were initially calibrated for each of three distinct phases in relation to the well-documented land disturbance. At the daily and monthly time scales, both models performed satisfactorily, with the Nash-Sutcliffe coefficient of efficiency (NSE) varying from 0.77 to 0.92. When aggregated to the annual time scale, both models underestimated the flow by about 22%, with a reduced NSE of about 0.71. Exploratory data analysis was undertaken to relate daily peak hourly rainfall intensity to the discrepancy between the observed and modelled daily runoff amounts. Preliminary results show that the effect of peak hourly rainfall intensity on runoff prediction is insignificant, and model performance is unlikely to improve when peak daily precipitation is included. Trend analysis indicated that the large decrease of precipitation when the daily precipitation amount exceeded 10-20 mm may have contributed greatly to the decrease in streamflow of this forested watershed.
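
    The NSE used for calibration above takes one line to compute. A sketch with synthetic flows; the aggregation step mirrors the record's comparison across time scales:

    ```python
    import numpy as np

    def nse(obs, sim):
        """Nash-Sutcliffe efficiency: 1 = perfect, 0 = no better than the mean."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1 - ((obs - sim) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

    rng = np.random.default_rng(11)
    obs = rng.gamma(2, 1.5, 360)                  # toy daily flows (mm)
    sim = 0.9 * obs + rng.normal(0, 0.5, 360)     # imperfect simulation

    print(f"daily NSE   = {nse(obs, sim):.2f}")
    monthly = lambda x: x.reshape(12, 30).sum(axis=1)
    print(f"monthly NSE = {nse(monthly(obs), monthly(sim)):.2f}")
    ```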

  1. The critical role of the routing scheme in simulating peak river discharge in global hydrological models

    Science.gov (United States)

    Zhao, Fang; Veldkamp, Ted I. E.; Frieler, Katja; Schewe, Jacob; Ostberg, Sebastian; Willner, Sven; Schauberger, Bernhard; Gosling, Simon N.; Müller Schmied, Hannes; Portmann, Felix T.; Leng, Gobias; Huang, Maoyi; Liu, Xingcai; Tang, Qiuhong; Hanasaki, Naota; Biemans, Hester; Gerten, Dieter; Satoh, Yusuke; Pokhrel, Yadu; Stacke, Tobias; Ciais, Philippe; Chang, Jinfeng; Ducharne, Agnes; Guimberteau, Matthieu; Wada, Yoshihide; Kim, Hyungjun; Yamazaki, Dai

    2017-07-01

    Global hydrological models (GHMs) have been applied to assess global flood hazards, but their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharge was compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, likely induced by the buffering capacity of floodplain reservoirs. For a majority of river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over about 2/3 of the analysed basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme choice in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not represented in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.

  2. Choice of routing scheme considerably influences peak river discharge simulation in global hydrological models

    Science.gov (United States)

    Zhao, Fang; Veldkamp, Ted; Schauberger, Bernhard; Willner, Sven; Yamazaki, Dai

    2017-04-01

    Global hydrological models (GHMs) have been applied to assess global flood hazards. However, their capacity to capture the timing and amplitude of peak river discharge—which is crucial in flood simulations—has traditionally not been the focus of examination. Here we evaluate to what degree the choice of river routing scheme affects simulations of peak discharge and may help to provide better agreement with observations. To this end we use runoff and discharge simulations of nine GHMs forced by observational climate data (1971-2010) within the ISIMIP2a project. The runoff simulations were used as input for the global river routing model CaMa-Flood. The simulated daily discharges were compared to the discharge generated by each GHM using its native river routing scheme. For each GHM both versions of simulated discharge were compared to monthly and daily discharge observations from 1701 GRDC stations as a benchmark. CaMa-Flood routing shows a general reduction of peak river discharge and a delay of about two to three weeks in its occurrence, probably induced by the buffering capacity of floodplain reservoirs. For most river basins, discharge produced by CaMa-Flood resulted in a better agreement with observations. In particular, maximum daily discharge was adjusted, with a multi-model averaged reduction in bias over more than 60% of the basin area. The increase in agreement was obtained in both managed and near-natural basins. Overall, this study demonstrates the importance of routing scheme in peak discharge simulation, where CaMa-Flood routing accounts for floodplain storage and backwater effects that are not present in most GHMs. Our study provides important hints that an explicit parameterisation of these processes may be essential in future impact studies.

  3. A Mathematical Model for Singly-Peaked Population Processes from Breeding to Extinction

    CERN Document Server

    Huzimura, R; Huzimura, Ryoitiro; Matsuyama, Toyoki

    1999-01-01

    When a small number of individuals of a single species is confined in a closed space with a limited amount of indispensable resources, their breeding may start initially under suitable conditions; after peaking, the population should go extinct as the resources are exhausted. Assuming that the carrying capacity of the environment is a function of the resource amount, a mathematical model describing such a pattern of population change is obtained. An application of this model to typical population records, those of deer herds by Scheffer (1951) and O'Roke and Hamerstrome (1948), yields estimates of the initial amount of indispensable food and its availability or nutritional efficiency, which were previously unspecified.
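    The rise-and-fall dynamics described above can be illustrated with a minimal simulation: logistic growth whose carrying capacity is proportional to a depleting resource. This is only a sketch consistent with the abstract's assumptions, not the authors' exact equations, and all parameter values are invented.

```python
import numpy as np

# Logistic growth with carrying capacity K = k*R tied to a consumable
# resource R: the population peaks and then declines toward extinction as R
# runs out. Parameters are illustrative assumptions, not fitted to deer data.
r, k, c = 0.5, 2.0, 0.01    # growth rate, capacity per unit resource, consumption rate
N, R = 10.0, 100.0          # initial population and initial resource amount
dt, T = 0.01, 80.0

ts, Ns = [], []
for step in range(int(T / dt)):
    K = max(k * R, 1e-9)            # capacity shrinks as the resource depletes
    dN = r * N * (1.0 - N / K)      # logistic growth toward the current capacity
    dR = -c * N                     # consumption proportional to population size
    N = max(N + dN * dt, 0.0)
    R = max(R + dR * dt, 0.0)
    ts.append(step * dt)
    Ns.append(N)

i = int(np.argmax(Ns))
print(f"peak population {Ns[i]:.1f} at t = {ts[i]:.1f}; final population {Ns[-1]:.2f}")
```

    Fitting the growth, capacity and consumption parameters to observed counts is what would yield estimates of the initial food amount and its nutritional efficiency, as described in the abstract.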

  4. Characterization methods and modelling of ultracapacitors for use as peak power sources

    Science.gov (United States)

    Lajnef, W.; Vinassa, J.-M.; Briat, O.; Azzopardi, S.; Woirgard, E.

    This paper presents a methodology both to characterize ultracapacitors and to model their electrical behaviour. Current levels, frequency intervals, and voltage ranges are adapted to ultracapacitor testing. The experimental data yield the ultracapacitors' performance in terms of energy and power densities, quantify the dependence of capacitance on voltage, and support the modelling of the dynamic behaviour of the device. An electric model is then proposed that takes into account the ultracapacitors' characteristics and their future use as peak power sources for hybrid and electric vehicles. Next, the parameter identification procedure is explained. Finally, model validation in both the frequency and time domains confirms the validity of this methodology and the performance of the proposed model.

  5. Characterization methods and modelling of ultracapacitors for use as peak power sources

    Energy Technology Data Exchange (ETDEWEB)

    Lajnef, W.; Vinassa, J.-M.; Briat, O.; Azzopardi, S.; Woirgard, E. [Laboratoire IXL CNRS UMR 5818 - ENSEIRB, Universite Bordeaux 1, 351 Cours de la Liberation, 33405 Talence Cedex (France)

    2007-06-01

    This paper presents a methodology both to characterize ultracapacitors and to model their electrical behaviour. Current levels, frequency intervals, and voltage ranges are adapted to ultracapacitor testing. The experimental data yield the ultracapacitors' performance in terms of energy and power densities, quantify the dependence of capacitance on voltage, and support the modelling of the dynamic behaviour of the device. An electric model is then proposed that takes into account the ultracapacitors' characteristics and their future use as peak power sources for hybrid and electric vehicles. Next, the parameter identification procedure is explained. Finally, model validation in both the frequency and time domains confirms the validity of this methodology and the performance of the proposed model.

  6. A new approach for modeling the peak utility impacts from a proposed CUAC standard

    Energy Technology Data Exchange (ETDEWEB)

    LaCommare, Kristina Hamachi; Gumerman, Etan; Marnay, Chris; Chan, Peter; Coughlin, Katie

    2004-08-01

    This report describes a new Berkeley Lab approach for modeling the likely peak electricity load reductions from proposed energy efficiency programs in the National Energy Modeling System (NEMS). This method is presented in the context of the commercial unitary air conditioning (CUAC) energy efficiency standards. A previous report investigating the residential central air conditioning (RCAC) load shapes in NEMS revealed that the peak reduction results were lower than expected. This effect was believed to be due in part to the presence of the squelch, a program algorithm designed to ensure changes in the system load over time are consistent with the input historic trend. The squelch applies a system load-scaling factor that scales any differences between the end-use bottom-up and system loads to maintain consistency with historic trends. To obtain more accurate peak reduction estimates, a new approach for modeling the impact of peaky end uses in NEMS-BT has been developed. The new approach decrements the system load directly, reducing the impact of the squelch on the final results. This report also discusses a number of additional factors, in particular non-coincidence between end-use loads and system loads as represented within NEMS, and their impacts on the peak reductions calculated by NEMS. Using Berkeley Lab's new double-decrement approach reduces the conservation load factor (CLF) on an input load decrement from 25% down to 19% for a SEER 13 CUAC trial standard level, as seen in NEMS-BT output. About 4 GW more in peak capacity reduction results from this new approach as compared to Berkeley Lab's traditional end-use decrement approach, which relied solely on lowering end-use energy consumption. The new method has been fully implemented and tested in the Annual Energy Outlook 2003 (AEO2003) version of NEMS and will routinely be applied to future versions. This capability is now available for use in future end-use efficiency or other policy analyses.

  7. Distribution of peak expiratory flow variability by age, gender and smoking habits in a random population sample aged 20-70 yrs

    NARCIS (Netherlands)

    Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B

    1994-01-01

    Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs).

  8. Independent effects of endurance training and weight loss on peak fat oxidation in moderately overweight men: a randomized controlled trial.

    Science.gov (United States)

    Nordby, Pernille; Rosenkilde, Mads; Ploug, Thorkil; Westh, Karina; Feigh, Michael; Nielsen, Ninna B; Helge, Jørn W; Stallknecht, Bente

    2015-04-01

    Endurance training increases peak fat oxidation (PFO) during exercise, but whether this is independent of changes in body weight is not known. The aim of the present study was to investigate the effects of endurance training with or without weight loss, or of a diet-induced weight loss, on PFO and on key skeletal muscle mitochondrial proteins involved in fat oxidation. Sixty moderately overweight, sedentary but otherwise healthy men were randomized to 12 wk of training (T), diet (D), training with increased caloric intake (T-iD), or continuous sedentary control (C). Isoenergetic deficits corresponding to 600 kcal/day were comprised of endurance exercise for T and caloric restriction for D. T-iD completed similar training but was not in a 600 kcal deficit because of dietary replacement. PFO and the exercise intensity at which it occurred (FatMax) were measured by a submaximal exercise test and calculated by polynomial regression. As intended by the study design, a similar weight loss was observed in T (-5.9 ± 0.7 kg) and D (-5.2 ± 0.8 kg), whereas T-iD (-1.0 ± 0.5 kg) and C (0.1 ± 0.6 kg) remained weight stable. PFO increased to a similar extent, by 42% in T [0.16 g/min; 95% confidence interval (CI): 0.02; 0.30, P = 0.02] and by 41% in T-iD (0.16 g/min; 95% CI: 0.01; 0.30, P = 0.04), compared with C, but did not increase in D (P = 0.96). In addition, analysis of covariance showed that changes in both PFO (0.10 g/min; 95% CI: 0.03; 0.17, P = 0.03) and FatMax (6.3% V̇o2max; 95% CI: 1.4; 11.3, P < 0.01) were independently explained by endurance training. In conclusion, endurance training per se increases PFO in moderately overweight men. Copyright © 2015 the American Physiological Society.

  9. The Impact of the Twin Peaks Model on the Insurance Industry

    Directory of Open Access Journals (Sweden)

    Daleen Millard

    2017-02-01

    Financial regulation in South Africa changes constantly. In the quest to find the ideal regulatory framework for optimal consumer protection, rules change all the time and international trends have an important influence on lawmakers nationally. The Financial Sector Regulation Bill, also known as the "Twin Peaks" Bill, is the latest invention from the table of the legislature, and some expect this Bill to have far-reaching consequences for the financial services industry. The question is, of course, whether the current dispensation will change so quickly and so dramatically that it will literally be the end of the world as we know it, or whether there will be a gradual shift in emphasis away from the so-called silo regulatory approach to an approach that distinguishes between prudential regulation on the one hand and market conduct regulation on the other. A further question is whether insurance as a financial service will change dramatically in the light of the expected Twin Peaks dispensation. The purpose of this paper is to discuss the implications of the FSR Bill for the insurance industry. Instead of analysing the Bill feature by feature, the method used in this enquiry is to identify trends and issues from 2014 and to discuss whether the Twin Peaks model, once implemented, can successfully eradicate similar problems in future. The impact of Twin Peaks will of course have to be tested, but at this point in time it may be very useful to take an educated guess by using recent cases as examples. Recent cases before the courts, the Enforcement Committee and the FAIS Ombud will be discussed not only as examples of the most prevalent issues of the past year or so, but also as examples of how consumer issues and systemic risks are currently being dealt with and how this may change with the implementation of the FSR Bill.

  10. Numerical Model and Analysis of Peak Temperature Reduction in LiFePO4 Battery Packs Using Phase Change Materials

    DEFF Research Database (Denmark)

    Coman, Paul Tiberiu; Veje, Christian

    2013-01-01

    Numerical model and analysis of peak temperature reduction in LiFePO4 battery packs using phase change materials.

  11. AN ITEM RESPONSE MODEL WITH SINGLE PEAKED ITEM CHARACTERISTIC CURVES - THE PARELLA MODEL

    NARCIS (Netherlands)

    HOIJTINK, H; MOLENAAR, IW

    In this paper an item response model (the PARELLA model) designed specifically for the measurement of attitudes and preferences will be introduced. In contrast with the item response models currently used (e.g., the Rasch model and the two- and three-parameter logistic models), the item characteristic curves of the PARELLA model are single-peaked.

  12. Estimating Peak Outflow of Earth Fill Dam Failures by Multivariable Statistical Models

    Directory of Open Access Journals (Sweden)

    mahsa noori

    2016-02-01

    Introduction: Dam failure and the resulting flooding are among the most destructive phenomena today. Estimating the peak outflow (Qp) with reasonable accuracy and determining the related flood zone can therefore reduce risks. The Qp of a dam failure depends on important factors such as the depth of water above the breach (Hw), the volume of water above the breach bottom at failure (Vw), the reservoir surface area (A), the storage (S) and the dam height (Hd). Various researchers have proposed equations to estimate Qp, typically obtained by regression, a mathematical technique that requires initial tests and diagnostics. This study presents a new regression model for a better estimation of Qp. Materials and Methods: The data used in this study relate to 140 failed dams worldwide, for 34 of which sufficient data are available for analysis. Dam failure is a rapidly varied unsteady flow described by the shallow-water equations; in one-dimensional form these are known as the Saint-Venant equations and are based on the assumptions of hydrostatic pressure distribution and uniform flow in a rectangular channel. Although hydraulic methods to predict the dam-failure flood have been implemented in various software packages, the complex nature of the problem and the impossibility of considering all parameters in a hydraulic analysis have motivated statistical methods in this field. Statistical methods determine equations that approximate the required quantities from the observed parameters. Multiple regression is a useful technique to model the parameters affecting Qp and allows the statistical aspects of the model to be examined through various tests, such as tests of the necessity of the model coefficients, analysis-of-variance tables and confidence intervals. Data analysis in this paper is done with SPSS 16, which provides the fitted model, its various characteristics and the related tests in tables. Results and Discussion
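    The multivariable regression step can be sketched as an ordinary least-squares fit of log Qp on log-transformed predictors. The data below are synthetic placeholders rather than the 34 historical failures, and the power-law form is an assumption of the sketch, not the paper's published equation.

```python
import numpy as np

# Hedged sketch of a log-linear multivariable regression for peak outflow:
# log(Qp) regressed on log(Vw) and log(Hw). Synthetic data only.
rng = np.random.default_rng(0)
n = 34
Hw = rng.uniform(5, 60, n)               # depth of water above breach (m)
Vw = rng.uniform(1e5, 5e8, n)            # volume above breach bottom (m^3)
Qp = 0.6 * Vw**0.3 * Hw**1.2 * rng.lognormal(0.0, 0.3, n)  # assumed relation

# OLS on the log-linear form: log Qp = b0 + b1*log Vw + b2*log Hw
X = np.column_stack([np.ones(n), np.log(Vw), np.log(Hw)])
beta, *_ = np.linalg.lstsq(X, np.log(Qp), rcond=None)
print("fitted coefficients [b0, b1, b2]:", beta.round(3))  # ~[log 0.6, 0.3, 1.2]
```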

  13. The parabolic Anderson model random walk in random potential

    CERN Document Server

    König, Wolfgang

    2016-01-01

    This is a comprehensive survey of the research on the parabolic Anderson model – the heat equation with random potential, or the random walk in random potential – of the years 1990–2015. The investigation of this model requires a combination of tools from probability (e.g., large deviations, extreme-value theory) and analysis (e.g., spectral theory for the Laplace operator with potential, variational analysis). We explain the background, the applications, the questions and the connections with other models, and formulate the most relevant results on the long-time behavior of the solution, like quenched and annealed asymptotics for the total mass, intermittency, confinement and concentration properties and mass flow. Furthermore, we explain the most successful proof methods and give a list of open research problems. Proofs are not detailed, but concisely outlined and commented; the formulations of some theorems are slightly simplified for better comprehension.
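    For reference, the equation surveyed is the Cauchy problem for the lattice heat equation with random potential (a standard formulation consistent with the description above):

$$
\partial_t u(t,x) \;=\; \Delta u(t,x) + \xi(x)\,u(t,x), \qquad u(0,x) = 1, \quad x \in \mathbb{Z}^d,
$$

    where $\Delta$ is the discrete Laplacian and $\xi = (\xi(x))_{x\in\mathbb{Z}^d}$ is an i.i.d. random potential; the total mass $U(t) = \sum_x u(t,x)$ is the quantity whose quenched and annealed asymptotics the survey describes.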

  14. A New-Trend Model-Based to Solve the Peak Power Problems in OFDM Systems

    Directory of Open Access Journals (Sweden)

    Ashraf A. Eltholth

    2008-01-01

    The high peak-to-average power ratio (PAR) levels of orthogonal frequency division multiplexing (OFDM) signals have attracted the attention of many researchers during the past decade. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date; they sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this paper, we propose a new trend in mitigating the peak power problem in OFDM systems based on modeling the effects of clipping and amplifier nonlinearities in an OFDM system. We show that the distortion due to these effects is strongly related to the dynamic range itself, rather than to the clipping level or the saturation level of the nonlinear amplifier, and we therefore propose two criteria to reduce the dynamic range of the OFDM signal, namely the use of MSK modulation and the use of the Hadamard transform. Computer simulations of the OFDM system using Matlab match the deduced model in terms of OFDM signal quality metrics such as BER, ACPR, and EVM. Simulation results also show that even though the reduction of PAR using the two proposed criteria is not significant, the reduction in the amount of distortion due to the HPA is considerable.
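    The PAR metric at issue, and the Hadamard-spreading idea, can be sketched as follows. The subcarrier count, modulation and normalisations are assumptions for illustration; the reduction varies from symbol to symbol, consistent with the paper's observation that the PAR gain alone is modest.

```python
import numpy as np
from scipy.linalg import hadamard

# PAR of one OFDM symbol, with and without Hadamard precoding of the
# subcarrier symbols before the IFFT. N = 64 QPSK subcarriers (assumed).
def par_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N = 64
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

plain = np.fft.ifft(qpsk) * np.sqrt(N)        # conventional OFDM symbol
H = hadamard(N) / np.sqrt(N)                  # orthonormal Hadamard matrix
spread = np.fft.ifft(H @ qpsk) * np.sqrt(N)   # Hadamard-precoded symbol

print(f"PAR plain:  {par_db(plain):.2f} dB")
print(f"PAR spread: {par_db(spread):.2f} dB")  # per-symbol gain varies
```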

  15. Numerically Modeling the First Peak of the Type IIb SN 2016gkg

    Science.gov (United States)

    Piro, Anthony L.; Muhleisen, Marc; Arcavi, Iair; Sand, David J.; Tartaglia, Leonardo; Valenti, Stefano

    2017-09-01

    Many Type IIb supernovae (SNe) show a prominent additional early peak in their light curves, which is generally thought to be due to the shock cooling of extended hydrogen-rich material surrounding the helium core of the exploding star. The recent SN 2016gkg was a nearby Type IIb SN discovered shortly after explosion, which makes it an excellent candidate for studying this first peak. We numerically explode a large grid of extended envelope models and compare these to SN 2016gkg to investigate what constraints can be derived from its light curve. This includes exploring density profiles for both a convective envelope and an optically thick steady-state wind, the latter of which has not typically been considered for Type IIb SNe models. We find that roughly ~0.02 M⊙ of extended material with a radius of ≈180–260 R⊙ reproduces the photometric light curve data, consistent with pre-explosion imaging. These values are independent of the assumed density profile of this material, although a convective profile provides a somewhat better fit. We infer from our modeling that the explosion must have occurred within ≈2-3 hr of the first observed data point, demonstrating that this event was caught very close to the moment of explosion. Nevertheless, our best-fitting 1D models overpredict the earliest velocity measurements, which suggests that the hydrogen-rich material is not distributed in a spherically symmetric manner. We compare this to the asymmetries that have also been seen in the SN IIb remnant Cas A, and we discuss the implications of this for Type IIb SN progenitors and explosion models.

  16. A theoretical model for predicting the Peak Cutting Force of conical picks

    Directory of Open Access Journals (Sweden)

    Gao Kuidong

    2014-01-01

    In order to predict the PCF (Peak Cutting Force) of a conical pick in the rock cutting process, a theoretical model is established based on elastic fracture mechanics theory. The vertical fracture model of the rock cutting fragment is also established based on the maximum tensile criterion. The relation between the vertical fracture angle and the associated parameters (the cutting parameter and the ratio B of rock compressive strength to tensile strength) is obtained by a numerical analysis method and polynomial regression, and the correctness of the rock vertical fracture model is verified through experiments. The linear regression coefficient between the predicted and experimental PCF is 0.81, and a significance level of less than 0.05 shows that the model for predicting the PCF is correct and reliable. A comparative analysis between the PCF obtained from this model and from the Evans model reveals that the result of this prediction model is more reliable and accurate. The results of this work could provide some guidance for studying the rock cutting theory of conical picks and designing the cutting mechanism.

  17. Satellite peaks in the scattering of light from the two-dimensional randomly rough surface of a dielectric film on a planar metal surface.

    Science.gov (United States)

    Nordam, T; Letnes, P A; Simonsen, I; Maradudin, A A

    2012-05-07

    A nonperturbative, purely numerical solution of the reduced Rayleigh equation for the scattering of p- and s-polarized light from a dielectric film with a two-dimensional randomly rough surface deposited on a planar metallic substrate has been carried out. It is found that satellite peaks are present in the angular dependence of the elements of the mean differential reflection coefficient, in addition to an enhanced backscattering peak. This result resolves a conflict between the results of earlier approximate theoretical studies of scattering from this system.

  18. Observation, modeling, and temperature dependence of doubly peaked electric fields in irradiated silicon pixel sensors

    CERN Document Server

    Swartz, M.; Allkofer, Y.; Bortoletto, D.; Cremaldi, L.; Cucciarelli, S.; Dorokhov, A.; Hoermann, C.; Kim, D.; Konecki, M.; Kotlinski, D.; Prokofiev, Kirill; Regenfus, Christian; Rohe, T.; Sanders, D.A.; Son, S.; Speer, T.

    2006-01-01

    We show that doubly peaked electric fields are necessary to describe grazing-angle charge collection measurements of irradiated silicon pixel sensors. A model of irradiated silicon based upon two defect levels with opposite charge states and the trapping of charge carriers can be tuned to produce a good description of the measured charge collection profiles in the fluence range from 0.5×10^14 Neq/cm^2 to 5.9×10^14 Neq/cm^2. The model correctly predicts the variation in the profiles as the temperature is changed from -10 °C to -25 °C. The measured charge collection profiles are inconsistent with the linearly varying electric fields predicted by the usual description based upon a uniform effective doping density. This observation calls into question the practice of using effective doping densities to characterize irradiated silicon.

  19. Overlap Synchronisation in Multipartite Random Energy Models

    Science.gov (United States)

    Genovese, Giuseppe; Tantari, Daniele

    2017-12-01

    In a multipartite random energy model, made of a number of coupled generalised random energy models (GREMs), we determine the joint law of the overlaps in terms of the ones of the single GREMs. This provides the simplest example of the so-called overlap synchronisation.

  20. TESTING FOR DIF IN A MODEL WITH SINGLE PEAKED ITEM CHARACTERISTIC CURVES - THE PARELLA MODEL

    NARCIS (Netherlands)

    HOIJTINK, H; MOLENAAR, IW

    The PARELLA model is a probabilistic parallelogram model that can be used for the measurement of latent attitudes or latent preferences. The data analyzed are the dichotomous responses of persons to items, with a one (zero) indicating agreement (disagreement) with the content of the item. The model assumes that the probability of agreement is a single-peaked function of the distance between the locations of the person and the item.

  1. THE EXTREMELY HIGH PEAK ENERGY OF GRB 110721A IN THE CONTEXT OF A DISSIPATIVE PHOTOSPHERE SYNCHROTRON EMISSION MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Veres, Peter; Zhang Binbin; Meszaros, Peter, E-mail: veresp@psu.edu [Department of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States)

    2012-12-20

    The Fermi observations of GRB 110721A have revealed an unusually high peak energy of ≈15 MeV in the first time bin of the prompt emission. We find that an interpretation in terms of internal shock models is unlikely, and confirm that a standard blackbody photospheric model also falls short. On the other hand, we show that dissipative photospheric synchrotron models, ranging from extreme magnetically dominated to baryon-dominated dynamics, are able to accommodate such high peak values.

  2. A global model of the ionospheric F2 peak height based on EOF analysis

    Directory of Open Access Journals (Sweden)

    M.-L. Zhang

    2009-08-01

    The ionospheric F2 peak height hmF2 is an important parameter that is much needed in ionospheric research and practical applications. In this paper, an attempt is made to develop a global model of hmF2. The hmF2 data used to construct the global model are converted from the monthly median hourly values of the ionospheric propagation factor M(3000)F2 observed by ionosondes/digisondes distributed globally, based on the strong anti-correlation that exists between hmF2 and M(3000)F2. The empirical orthogonal function (EOF) analysis method, combined with harmonic functions and regression analysis, is used to construct the model. The technique involves two layers of EOF analysis of the dataset. The first layer of EOF analysis is applied to the hmF2 dataset, decomposing it into a series of orthogonal base functions Ek and their associated EOF coefficients Pk. The base functions Ek represent the intrinsic characteristic variations of the dataset with modified dip latitude and local time; the coefficients Pk represent the variations of the dataset with universal time, season and solar activity level. The second layer of EOF analysis is applied to the EOF coefficients Pk obtained in the first layer. The coefficients Ak, obtained in the second layer, are then modelled with harmonic functions representing the seasonal (annual and semi-annual) and solar cycle variations, with their amplitudes changing with the F10.7 index, a proxy of the solar activity level. Thus, the constructed global model incorporates the geographical, diurnal, seasonal and solar cycle variations of hmF2 through the combination of EOF analysis and harmonic function expressions of the associated EOF coefficients. Comparisons between the model results and observational data were consistent, indicating that the modelling technique is very promising for constructing global models of ionospheric parameters.
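    The first-layer EOF decomposition described above is essentially a principal-component factorization of the data matrix, which can be sketched with an SVD. The matrix below is a random placeholder for the gridded hmF2 values; in the paper, the coefficients Pk would then be modelled further with harmonics of season and F10.7 (the second layer).

```python
import numpy as np

# First-layer EOF step as an SVD: rows = spatial/local-time grid points,
# columns = epochs. The data here are random placeholders, not M(3000)F2.
rng = np.random.default_rng(0)
A = rng.normal(300.0, 30.0, size=(500, 120))    # hypothetical hmF2 matrix (km)

mean = A.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(A - mean, full_matrices=False)

k = 10                           # truncation order
E = U[:, :k]                     # EOF base functions E_k (spatial/diurnal patterns)
P = np.diag(s[:k]) @ Vt[:k]      # associated coefficients P_k (epoch dependence)

A_hat = mean + E @ P             # rank-k reconstruction of the dataset
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by {k} EOFs: {explained:.3f}")
```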

  3. Experimental discrimination of ion stopping models near the Bragg peak in highly ionized matter.

    Science.gov (United States)

    Cayzac, W; Frank, A; Ortner, A; Bagnoud, V; Basko, M M; Bedacht, S; Bläser, C; Blažević, A; Busold, S; Deppert, O; Ding, J; Ehret, M; Fiala, P; Frydrych, S; Gericke, D O; Hallo, L; Helfrich, J; Jahn, D; Kjartansson, E; Knetsch, A; Kraus, D; Malka, G; Neumann, N W; Pépitone, K; Pepler, D; Sander, S; Schaumann, G; Schlegel, T; Schroeter, N; Schumacher, D; Seibert, M; Tauschwitz, An; Vorberger, J; Wagner, F; Weih, S; Zobus, Y; Roth, M

    2017-06-01

    The energy deposition of ions in dense plasmas is a key process in inertial confinement fusion that determines the α-particle heating expected to trigger a burn wave in the hydrogen pellet and result in high thermonuclear gain. However, measurements of ion stopping in plasmas are scarce and mostly restricted to high ion velocities where theory agrees with the data. Here, we report experimental data at low projectile velocities near the Bragg peak, where the stopping force reaches its maximum. This parameter range features the largest theoretical uncertainties, and conclusive data have been missing until now. The precision of our measurements, combined with reliable knowledge of the plasma parameters, allows us to disprove several standard models of the stopping power for beam velocities typically encountered in inertial fusion. On the other hand, our data support theories that include a detailed treatment of strong ion-electron collisions.

  4. Modeling of GE Appliances in GridLAB-D: Peak Demand Reduction

    Energy Technology Data Exchange (ETDEWEB)

    Fuller, Jason C.; Vyakaranam, Bharat GNVSR; Prakash Kumar, Nirupama; Leistritz, Sean M.; Parker, Graham B.

    2012-04-29

    The widespread adoption of demand response enabled appliances and thermostats can result in significant reduction to peak electrical demand and provide potential grid stabilization benefits. GE has developed a line of appliances that will have the capability of offering several levels of demand reduction actions based on information from the utility grid, often in the form of price. However due to a number of factors, including the number of demand response enabled appliances available at any given time, the reduction of diversity factor due to the synchronizing control signal, and the percentage of consumers who may override the utility signal, it can be difficult to predict the aggregate response of a large number of residences. The effects of these behaviors can be modeled and simulated in open-source software, GridLAB-D, including evaluation of appliance controls, improvement to current algorithms, and development of aggregate control methodologies. This report is the first in a series of three reports describing the potential of GE's demand response enabled appliances to provide benefits to the utility grid. The first report will describe the modeling methodology used to represent the GE appliances in the GridLAB-D simulation environment and the estimated potential for peak demand reduction at various deployment levels. The second and third reports will explore the potential of aggregated group actions to positively impact grid stability, including frequency and voltage regulation and spinning reserves, and the impacts on distribution feeder voltage regulation, including mitigation of fluctuations caused by high penetration of photovoltaic distributed generation and the effects on volt-var control schemes.

  5. Peak oxygen uptake after cardiac rehabilitation: A randomized controlled trial of a 12-month maintenance program versus usual care

    OpenAIRE

    Erik Madssen; Ingerid Arbo; Ingrid Granøien; Liv Walderhaug; Trine Moholdt

    2014-01-01

    BACKGROUND: Exercise capacity is a strong predictor of survival in patients with coronary artery disease (CAD). Exercise capacity improves after cardiac rehabilitation exercise training, but previous studies have demonstrated a decline in peak oxygen uptake after ending a formal rehabilitation program. There is a lack of knowledge on how long-term exercise adherence can be achieved in CAD patients. We therefore assessed if a 12-month maintenance program following cardiac rehabilitation would ...

  6. Peak oxygen uptake after cardiac rehabilitation: A randomized controlled trial of a 12-month maintenance program versus usual care

    OpenAIRE

    Madssen, Erik; Arbo, Ingerid Brænne; Granøien, Ingrid; Walderhaug, Liv; Moholdt, Trine Tegdan

    2014-01-01

    Background: Exercise capacity is a strong predictor of survival in patients with coronary artery disease (CAD). Exercise capacity improves after cardiac rehabilitation exercise training, but previous studies have demonstrated a decline in peak oxygen uptake after ending a formal rehabilitation program. There is a lack of knowledge on how long-term exercise adherence can be achieved in CAD patients. We therefore assessed if a 12-month maintenance program following cardiac rehabilitation wo...

  7. Peak Vertical Ground Reaction Force during Two-Leg Landing: A Systematic Review and Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Wenxin Niu

    2014-01-01

    Objectives. (1) To systematically review peak vertical ground reaction force (PvGRF) during two-leg drop landing from specific drop heights (DH), (2) to construct a mathematical model describing the correlation between PvGRF and DH, and (3) to analyze the effects of several factors on the pooled PvGRF regardless of DH. Methods. A computerized bibliographical search was conducted to extract PvGRF data on a single foot when participants landed with both feet from various DHs. An innovative mathematical model was constructed to analyze the effects of gender, landing type, shoes, ankle stabilizers, surface stiffness and sampling frequency on PvGRF based on the pooled data. Results. Pooled PvGRF and DH data from 26 articles showed that a square-root function fits their relationship well. An experimental validation was also done on the regression equation for the medium frequency. The PvGRF was not significantly affected by surface stiffness, but was significantly higher in men than in women, in platform than in suspended landing, in the barefoot than in the shod condition, and with ankle stabilizers than in the control condition, and higher at higher sampling frequencies. Conclusions. PvGRF and the square root of DH showed a linear relationship. The mathematical modeling method combined with systematic review is helpful for analyzing the influencing factors during landing movement without considering DH.
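    The square-root relationship reported above corresponds to a simple linear fit of PvGRF against the square root of DH. A minimal sketch, with invented data points standing in for the values pooled from the 26 articles:

```python
import numpy as np

# Fit PvGRF (in body weights) as a linear function of sqrt(drop height).
# Data points are invented for illustration, not the pooled literature values.
DH = np.array([0.15, 0.30, 0.45, 0.60, 0.75, 0.90])    # drop height (m)
PvGRF = np.array([1.9, 2.6, 3.1, 3.5, 3.9, 4.2])       # assumed peak vGRF (BW)

b, a = np.polyfit(np.sqrt(DH), PvGRF, 1)               # PvGRF = a + b*sqrt(DH)
pred = a + b * np.sqrt(DH)
r = np.corrcoef(pred, PvGRF)[0, 1]
print(f"PvGRF ~ {a:.2f} + {b:.2f}*sqrt(DH), r = {r:.3f}")
```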

  8. New specifications for exponential random graph models

    NARCIS (Netherlands)

    Snijders, Tom A. B.; Pattison, Philippa E.; Robins, Garry L.; Handcock, Mark S.; Stolzenberg, RM

    2006-01-01

    The most promising class of statistical models for expressing structural properties of social networks observed at one moment in time is the class of exponential random graph models (ERGMs), also known as p* models. The strong point of these models is that they can represent a variety of structural tendencies, such as transitivity, that define complicated dependence patterns not easily modeled by more basic probability models.

  9. Comments on the random Thirring model

    Science.gov (United States)

    Berkooz, Micha; Narayan, Prithvi; Rozali, Moshe; Simón, Joan

    2017-09-01

    The Thirring model with random couplings is a translationally invariant generalisation of the SYK model to 1+1 dimensions, which is tractable in the large N limit. We compute its two-point function, at large distances, for any strength of the random coupling. For a given realisation, the couplings contain both irrelevant and relevant marginal operators, but statistically, in the large N limit, the random couplings are overall always marginally irrelevant, in sharp distinction to the usual Thirring model. We show that the leading term of the β function in conformal perturbation theory, which is quadratic in the couplings, vanishes, while the usually subleading cubic term matches our RG flow.

  10. Neopuff T-piece resuscitator mask ventilation: Does mask leak vary with different peak inspiratory pressures in a manikin model?

    Science.gov (United States)

    Maheshwari, Rajesh; Tracy, Mark; Hinder, Murray; Wright, Audrey

    2017-08-01

    The aim of this study was to compare mask leak at three different peak inspiratory pressure (PIP) settings during T-piece resuscitator (TPR; Neopuff) mask ventilation on a neonatal manikin model. Participants were neonatal unit staff members. They were instructed to provide mask ventilation with a TPR at three PIP settings (20, 30, 40 cmH2O) chosen in random order. Each episode lasted 2 min, with a 2-min rest period. Flow rate and positive end-expiratory pressure (PEEP) were kept constant. Airway pressure, inspiratory and expiratory tidal volumes, mask leak, respiratory rate and inspiratory time were recorded. Repeated-measures analysis of variance was used for statistical analysis. A total of 12,749 inflations delivered by 40 participants were analysed. There were no statistically significant differences (P > 0.05) in mask leak among the three PIP settings. No statistically significant differences were seen in respiratory rate or inspiratory time with the three PIP settings. There was a significant rise in PEEP as the PIP increased. Failure to achieve the desired PIP was observed, especially at the higher settings. In a neonatal manikin model, mask leak does not vary as a function of PIP when the flow rate is constant. With a fixed rate and inspiratory time, there seems to be a rise in PEEP with increasing PIP. © 2017 Paediatrics and Child Health Division (The Royal Australasian College of Physicians).

  11. Random matrix model for disordered conductors

    Indian Academy of Sciences (India)

    Keywords. Disordered conductors; random matrix theory; Dyson's Coulomb gas model. ... An interesting random walk problem associated with the joint probability distribution of the ensuing ensemble is discussed and its connection with level dynamics is brought out. It is further proved that Dyson's Coulomb gas analogy ...

  12. Influence of peak power in ablation rate of dental hard tissues: mathematical model

    Science.gov (United States)

    Colojoara, Carmen; Gabay, Shimon; van der Meulen, Freerk W.; van Gemert, Martin J. C.

    1996-12-01

    Pulsed Er:YAG and CO2 lasers should be suitable instruments for dentin and enamel ablation, because both tissues have absorption peaks at the 2.9 and 9.6 micrometer wavelengths. In this context, our research examines how the diameter and depth of the craters made in enamel and dentin by the Er:YAG and CO2 lasers are influenced quantitatively and qualitatively. Freshly extracted human third molars were used for this experiment. The laser sources were an Er:YAG Kavo Key dental model 1240 and a CO2 Laser Sonics LS 860. The dimensions of the resulting craters were measured by optical microscopy. The results were modelled with the programs GRAPHER and STATGRAPHICS. The mathematical processing reveals the influence of the key parameters on ablation efficiency for each type of laser. Overall, our results show that both lasers ablate dentin efficiently when the laser energy is between 200 and 300 mJ.

  13. Supersymmetric SYK model and random matrix theory

    Science.gov (United States)

    Li, Tianlin; Liu, Junyu; Xin, Yuan; Zhou, Yehao

    2017-06-01

    In this paper, we investigate the effect of supersymmetry on the symmetry classification of random matrix theory ensembles. We mainly consider the random matrix behaviors in the N=1 supersymmetric generalization of the Sachdev-Ye-Kitaev (SYK) model, a toy model for a two-dimensional quantum black hole with a supersymmetric constraint. Some analytical arguments and numerical results are given to show that the statistics of the supersymmetric SYK model can be interpreted as random matrix theory ensembles, with an eight-fold classification different from that of the original SYK model and some new features. The time-dependent evolution of the spectral form factor is also investigated, where predictions from random matrix theory govern the late-time behavior of the chaotic Hamiltonian with supersymmetry.

  14. Entropy Characterization of Random Network Models

    Directory of Open Access Journals (Sweden)

    Pedro J. Zufiria

    2017-06-01

    This paper elaborates on the Random Network Model (RNM) as a mathematical framework for modelling and analyzing the generation of complex networks. The framework allows the analysis of the relationship between several network-characterizing features (link density, clustering coefficient, degree distribution, connectivity, etc.) and entropy-based complexity measures, providing new insight into the generation and characterization of random networks. Some theoretical and computational results illustrate the utility of the proposed framework.

  15. Early diet and peak bone mass: 20 year follow-up of a randomized trial of early diet in infants born preterm.

    Science.gov (United States)

    Fewtrell, Mary S; Williams, Jane E; Singhal, Atul; Murgatroyd, Peter R; Fuller, Nigel; Lucas, Alan

    2009-07-01

    Preterm infants are at risk of metabolic bone disease due to inadequate mineral intake, with unknown consequences for later bone health. To test the hypotheses that (1) early diet programs peak bone mass and bone turnover; (2) human milk has a beneficial effect on these outcomes; and (3) preterm subjects have reduced peak bone mass compared to population reference data, we performed a 20-year follow-up of 202 subjects (43% male; 24% of survivors) who were born preterm and randomized to (i) preterm formula versus banked breast milk or (ii) preterm versus term formula, as sole diet or as supplement to maternal milk. Outcome measures were (i) anthropometry; (ii) hip, lumbar spine (LS) and whole body (WB) bone mineral content (BMC) and bone area (BA) measured using DXA; and (iii) bone turnover markers. Infant dietary randomization group did not influence peak bone mass or turnover. The proportion of human milk in the diet was significantly positively associated with WB BA and BMC: subjects receiving >90% human milk had significantly higher WB BA (by 3.5%, p=0.01) and BMC (by 4.8%, p=0.03) than those receiving less human milk. This beneficial association with human milk intake, despite its low nutrient content, may reflect non-nutritive factors in breast milk. These findings may have implications for later osteoporosis risk and require further investigation.

  16. A new model of Random Regret Minimization

    NARCIS (Netherlands)

    Chorus, C.G.

    2010-01-01

    A new choice model is derived, rooted in the framework of Random Regret Minimization (RRM). The proposed model postulates that when choosing, people anticipate and aim to minimize regret. Whereas previous regret-based discrete choice models assume that regret is experienced with respect to only the best of the foregone alternatives, the model proposed here allows regret to be experienced with respect to each foregone alternative.

  17. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM model reduces to the original RRM model or to a linear-additive utility-maximization model as special cases, or generates intermediate forms of regret-based choice behaviour.
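    A sketch of the regret function family these two records work with may help. In the 2010 RRM model, the regret of an alternative sums a logarithmic attribute-comparison term over all foregone alternatives; the G-RRM variant replaces the fixed constant inside the logarithm with a regret weight. The attribute values and taste parameters below are invented for illustration.

```python
import numpy as np

# RRM regret: R_i = sum_{j != i} sum_m ln(w + exp(beta_m * (x_jm - x_im))),
# with w = 1 giving the classic RRM model and variable w the G-RRM variant.
def rrm_regret(X, beta, regret_weight=1.0):
    n_alt = X.shape[0]
    R = np.zeros(n_alt)
    for i in range(n_alt):
        for j in range(n_alt):
            if j != i:
                R[i] += np.log(regret_weight + np.exp(beta * (X[j] - X[i]))).sum()
    return R

X = np.array([[3.0, 1.0],      # rows: alternatives, columns: attributes
              [2.0, 2.0],
              [1.0, 3.0]])
beta = np.array([0.8, -0.5])   # attribute taste parameters (assumed)

R = rrm_regret(X, beta)                   # classic RRM (regret weight = 1)
probs = np.exp(-R) / np.exp(-R).sum()     # logit choice probabilities on -R
print("regrets:", R.round(3), "choice probs:", probs.round(3))
```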

  18. Wind interference between two high-rise building models: On the influence of shielding, channeling and buffeting on peak pressures

    NARCIS (Netherlands)

    Bronkhorst, A.J.; Geurts, C.P.W.; Bentum, C.A. van; Blocken, B.

    2014-01-01

    The influence of interference between two high-rise building models on the minimum peak pressures was investigated in an atmospheric boundary layer wind tunnel. Pressure measurements were performed on a square model with an aspect ratio of 1 to 4. The influence of an interfering model with the same dimensions, placed at a range of relative positions, was examined.

  19. A 5-week whole body vibration training improves peak torque performance but has no effect on stretch reflex in healthy adults: a randomized controlled trial.

    Science.gov (United States)

    Yeung, S S; Yeung, E W

    2015-05-01

    This study aimed to investigate the neuromuscular adaptation following 5 weeks of high-frequency, low-amplitude whole body vibration (WBV) exercise training. The study is a prospective, double-blind, randomized controlled intervention design, with a total of 19 subjects volunteering to participate. They were randomly assigned either to WBV exercise training or to a control group. Both groups participated in a 5-week training program. The intervention group received WBV in a semi-squat position on a device with an amplitude of 0.76 mm, a frequency of 40 Hz, and a peak acceleration of 23.9 m/s2. Each vibration training session consisted of 6 series of 60 s on with a 30 s rest period in between. The control group underwent the same static mini-squatting position without exposure to WBV. The effectiveness of the vibration program was evaluated by a vertical jump test and the isokinetic knee extensor peak torque. The possible neural factors contributing to the improved muscular performance were evaluated by the stretch-induced knee jerk reflex. WBV training significantly enhanced isokinetic knee extensor peak torque performance. Two-way mixed repeated-measures analysis of variance revealed a significant time effect for the change in peak torque (P=0.043), and the effect differed significantly between the intervention and control groups (P=0.042). WBV did not affect vertical jump height, reflex latency of VL, EMGVL, or knee jerk angle. The results of this study do not support the hypothesis that the improvement in muscular performance after WBV training is attributable to neuromuscular efficiency via modulation of muscle spindle sensitivity.

  20. Peak Pc Prediction in Conjunction Analysis: Conjunction Assessment Risk Analysis. Pc Behavior Prediction Models

    Science.gov (United States)

    Vallejo, J.J.; Hejduk, M.D.; Stamey, J. D.

    2015-01-01

    Satellite conjunction risk is typically evaluated through the probability of collision (Pc), which considers both the conjunction geometry and the uncertainties in the state estimates. Conjunction events are initially discovered through Joint Space Operations Center (JSpOC) screenings, usually seven days before the Time of Closest Approach (TCA); however, JSpOC continues to track the objects and issue conjunction updates. Changes in the state estimates and the reduced propagation time cause the Pc to change as the event develops. These changes are a combination of potentially predictable development and unpredictable changes in the state estimate covariance. An operationally useful datum is the peak Pc: if it can reasonably be inferred that the peak Pc value has passed, then risk assessment can be conducted against this peak value, and if this value is below the remediation level, the event intensity can be relaxed. Can the peak Pc location be reasonably predicted?
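    For context, the Pc referred to above is, in the common short-encounter formulation, a bivariate Gaussian of the relative miss vector integrated over the combined hard-body circle in the encounter plane. A minimal numerical sketch, with illustrative miss distance, covariance and radius (not values from the study):

```python
import numpy as np

# 2D probability of collision: integrate a bivariate Gaussian centred at the
# miss vector over the disk |x| <= hbr. Brute-force grid quadrature.
def pc_2d(miss, cov, hbr, n=801):
    xs = np.linspace(-hbr, hbr, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= hbr**2        # points within the hard-body circle
    dx, dy = X - miss[0], Y - miss[1]     # deviation from the mean miss vector
    cinv = np.linalg.inv(cov)
    quad = cinv[0, 0]*dx**2 + 2*cinv[0, 1]*dx*dy + cinv[1, 1]*dy**2
    dens = np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    step = xs[1] - xs[0]
    return float((dens * inside).sum() * step * step)

cov = np.array([[200.0**2, 0.0], [0.0, 80.0**2]])  # encounter-plane covariance (m^2)
print(f"Pc = {pc_2d((350.0, 120.0), cov, hbr=20.0):.3e}")
```

    The peak-Pc question in the abstract then concerns how this quantity evolves as the covariance shrinks with successive tracking updates.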

  1. RMBNToolbox: random models for biochemical networks

    Directory of Open Access Journals (Sweden)

    Niemi Jari

    2007-05-01

    Background: There is an increasing interest in modelling biochemical and cell biological networks, as well as in the computational analysis of these models. The development of analysis methodologies and related software is rapid in the field. However, the number of available models is still relatively small and the model sizes remain limited. The lack of kinetic information is usually the limiting factor for the construction of detailed simulation models. Results: We present a computational toolbox for generating random biochemical network models which mimic real biochemical networks. The toolbox is called Random Models for Biochemical Networks. It works in the Matlab environment and makes it possible to generate various network structures, stoichiometries, kinetic laws for reactions, and parameters therein. The generation can be based on statistical rules and distributions, and more detailed information about real biochemical networks can be used where it is known. The toolbox can be easily extended, and the resulting network models can be exported in the Systems Biology Markup Language format. Conclusion: While more information is accumulating on biochemical networks, random networks can be used as an intermediate step towards their better understanding. Random networks make it possible to study the effects of various network characteristics on the overall behavior of the network. Moreover, the construction of artificial network models provides the ground-truth data needed in the validation of various computational methods in the fields of parameter estimation and data analysis.

  2. Nonparametric estimation in random sum models

    Directory of Open Access Journals (Sweden)

    Hassan S. Bakouch

    2013-05-01

    Let X1, X2, …, XN be independent, identically distributed, non-negative, integer-valued random variables, and let N be a non-negative, integer-valued random variable independent of X1, X2, …, XN. In this paper, we consider two nonparametric estimation problems for the random sum variable. The first is the estimation of the means of Xi and N based on second-moment assumptions on the distributions of Xi and N. The second is the nonparametric estimation of the distribution of Xi given a parametric model for the distribution of N. Some asymptotic properties of the proposed estimators are discussed.
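    For context, the classical moment identities for a random sum $S = X_1 + \cdots + X_N$ underlie estimators of this kind (these are standard results, not the paper's specific constructions):

$$
\mathbb{E}[S] = \mathbb{E}[N]\,\mathbb{E}[X], \qquad
\operatorname{Var}(S) = \mathbb{E}[N]\,\operatorname{Var}(X) + \operatorname{Var}(N)\,\mathbb{E}[X]^2 .
$$

    Matching the first two empirical moments of observed sums to these expressions is one route to estimating the means of X and N under second-moment assumptions.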

  3. Modelling complex networks by random hierarchical graphs

    Directory of Open Access Journals (Sweden)

    M.Wróbel

    2008-06-01

    Numerous complex networks contain special patterns, called network motifs. These are specific subgraphs which occur more often than in randomized networks of the Erdős-Rényi type. We choose one of them, the triangle, and build a family of random hierarchical graphs: Sierpiński-gasket-based graphs with random "decorations". We calculate the important characteristics of these graphs — average degree, average shortest path length, and small-world characteristics — which depend on the probability of decorations. We analyze the Ising model on our graphs and describe its critical properties using a renormalization-group technique.

  4. Infinite Random Graphs as Statistical Mechanical Models

    DEFF Research Database (Denmark)

    Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

    2011-01-01

    We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe a relation to the so-called uniform infinite tree, and results on the Hausdorff and spectral dimension of two-dimensional space-time obtained in B. Durhuus, T. Jonsson, J.F. Wheater, J. Stat. Phys. 139, 859 (2010) are briefly outlined. For the latter we discuss results on the absence of spontaneous magnetization.

  5. A Dexterous Optional Randomized Response Model

    Science.gov (United States)

    Tarray, Tanveer A.; Singh, Housila P.; Yan, Zaizai

    2017-01-01

    This article addresses the problem of estimating the proportion π_S of the population belonging to a sensitive group using an optional randomized response technique in stratified sampling based on the Mangat model, with proportional and Neyman allocation and a larger gain in efficiency. Numerically, it is found that the suggested model is…
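    As background, optional randomized-response estimators build on the classic Warner device, in which each respondent truthfully answers either the sensitive question or its negation depending on a private randomization. A simulation sketch of the Warner estimator follows; the Mangat-based optional model in the abstract refines this scheme, and the parameters here are assumptions.

```python
import numpy as np

# Warner (1965) randomized response: with probability p the respondent
# answers the sensitive question, otherwise its negation; the true
# proportion pi_S is recovered from the observed "yes" rate.
rng = np.random.default_rng(42)
pi_s, p, n = 0.20, 0.7, 5000

in_group = rng.random(n) < pi_s               # latent sensitive-group membership
direct = rng.random(n) < p                    # which question the device selects
yes = np.where(direct, in_group, ~in_group)   # truthful answer to that question

lam_hat = yes.mean()                          # observed proportion of "yes"
pi_hat = (lam_hat - (1 - p)) / (2 * p - 1)    # Warner estimator
print(f"estimated pi_S = {pi_hat:.3f} (true {pi_s})")
```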

  6. Detailed analysis of an Eigen quasispecies model in a periodically moving sharp-peak landscape

    Science.gov (United States)

    Neves, Armando G. M.

    2010-09-01

    The Eigen quasispecies model in a periodically moving sharp-peak landscape considered in previous seminal works [M. Nilsson and N. Snoad, Phys. Rev. Lett. 84, 191 (2000); doi:10.1103/PhysRevLett.84.191] and [C. Ronnewinkel et al., in Theoretical Aspects of Evolutionary Computing, edited by L. Kallel, B. Naudts, and A. Rogers (Springer-Verlag, Heidelberg, 2001)] is analyzed in greater detail. We show here, through a more rigorous analysis, that the results in those papers are qualitatively correct. In particular, we obtain a phase diagram for the existence of a quasispecies with the same shape as in the above-cited paper by C. Ronnewinkel et al., with upper and lower thresholds for the mutation rate between which a quasispecies may survive. A difference is that the upper value is larger and the lower value is smaller than the previously reported ones, so that the range for quasispecies existence is always larger than thought before. The quantitative information provided might also be important in understanding genetic variability in virus populations and has possible applications in antiviral therapies. The results in the quoted papers were obtained by studying the populations only at a few genomes; as we will show, this amounts to diagonalizing a 3×3 matrix. Our work is based instead on a different division of the population, allowing a finer control of the populations at various relevant genetic sequences. The existence of a quasispecies will be related to Perron-Frobenius eigenvalues. Although huge matrices of size 2^ℓ, where ℓ is the genome length, may seem necessary at first look, we show that such large sizes are not necessary and easily obtain numerical and analytical results for their eigenvalues.
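    As background for the upper and lower thresholds discussed above, the static sharp-peak landscape has the classical error-threshold condition (textbook material, not this paper's refined result):

$$
\sigma\,q^{\ell} > 1, \qquad q^{\ell} \approx e^{-\ell(1-q)}
\;\;\Longrightarrow\;\; \ell\,(1-q) < \ln \sigma ,
$$

    where σ is the selective advantage of the master sequence, q the per-site copying fidelity, and ℓ the genome length. In the moving-peak case the population must also generate enough mutants at the peak's next location, which is what produces the additional lower threshold on the mutation rate.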

  7. Peak-power estimation equations in 12- to 16-year old children: comparing linear with allometric models.

    Science.gov (United States)

    Duncan, Michael J; Hankey, Joanne; Nevill, Alan M

    2013-08-01

    This study examined the efficacy of peak-power estimation equations in children using force platform data and determined whether allometric modeling offers a sounder alternative to estimating peak power in pediatric samples. Ninety-one boys and girls aged 12-16 years performed 3 countermovement jumps (CMJ) on a force platform. Estimated peak power (PP(est)) was determined using the Harman et al., Sayers SJ, Sayers CMJ, and Canavan and Vescovi equations. All 4 equations were associated with actual peak power (PP(actual)) (r = 0.893-0.909, all statistically significant), but PP(est) differed significantly from PP(actual) for all equations. An allometric model using CMJ height, body mass, and age was then developed with this sample, which predicted 88.8% of the variance in PP(actual). The equations were cross-validated using the one-third split sample (n = 31), evidencing a significant positive relationship (r = .910, p = .001) and no significant difference (p = .151) between PP(actual) and PP(est) using this equation. The allometric and linear models determined from this study provide accurate models to estimate peak power in children.
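    The allometric approach described above can be sketched as a log-linear regression: peak power modelled as a product of power-law terms in the predictors. The sample below is synthetic and the exponents are invented; it only illustrates the fitting procedure, not the paper's published equation.

```python
import numpy as np

# Allometric model PP = a * mass^b1 * CMJ^b2 * age^b3, fitted by ordinary
# least squares after a log transform. Synthetic stand-in for the 91 children.
rng = np.random.default_rng(7)
n = 91
mass = rng.uniform(35, 75, n)          # body mass (kg)
jump = rng.uniform(0.15, 0.45, n)      # CMJ height (m)
age = rng.uniform(12, 16, n)           # age (years)
pp = 60 * mass**0.9 * jump**0.8 * age**0.3 * rng.lognormal(0, 0.05, n)

X = np.column_stack([np.ones(n), np.log(mass), np.log(jump), np.log(age)])
beta, *_ = np.linalg.lstsq(X, np.log(pp), rcond=None)
print("ln(a), mass, CMJ-height and age exponents:", beta.round(3))
```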

  8. Using observed postconstruction peak discharges to evaluate a hydrologic and hydraulic design model, Boneyard Creek, Champaign and Urbana, Illinois

    Science.gov (United States)

    Over, Thomas M.; Soong, David T.; Holmes, Robert R.

    2011-01-01

    Boneyard Creek—which drains an urbanized watershed in the cities of Champaign and Urbana, Illinois, including part of the University of Illinois at Urbana-Champaign (UIUC) campus—has historically been prone to flooding. Using the Stormwater Management Model (SWMM), a hydrologic and hydraulic model of Boneyard Creek was developed for the design of the projects making up the first phase of a long-term plan for flood control on Boneyard Creek, and the construction of the projects was completed in May 2003. The U.S. Geological Survey, in cooperation with the Cities of Champaign and Urbana and UIUC, installed and operated stream and rain gages in order to obtain data for evaluation of the design-model simulations. In this study, design-model simulations were evaluated by using observed postconstruction precipitation and peak-discharge data. Between May 2003 and September 2008, five high-flow events on Boneyard Creek satisfied the study criterion. The five events were simulated with the design model by using observed precipitation. The simulations were run with two different values of the parameter controlling the soil moisture at the beginning of the storms and two different ways of spatially distributing the precipitation, making a total of four simulation scenarios. The simulated and observed peak discharges and stages were compared at gaged locations along the Creek. The discharge at one of these locations was deemed to be critical for evaluating the design model. The uncertainty of the measured peak discharge was also estimated at the critical location with a method based on linear regression of the stage and discharge relation, an estimate of the uncertainty of the acoustic Doppler velocity meter measurements, and the uncertainty of the stage measurements. For four of the five events, the simulated peak discharges lie within the 95-percent confidence interval of the observed peak discharges at the critical location; the fifth was just outside the upper end of that interval.

  9. An inventory model with random demand

    Science.gov (United States)

    Mitsel, A. A.; Kritski, O. L.; Stavchuk, L. G.

    2017-01-01

    The article describes a three-product inventory model with random demand at equal frequencies of delivery. A feature of this model is that the additional purchase of required resources is carried out only to the extent of their deficit, which allows reducing storage costs. A simulation based on data on the arrival of raw materials at an enterprise in Kazakhstan has been prepared. The proposed model is shown to enable savings of up to 40.8% of working capital.

  10. Progressive exercise for anabolism in kidney disease (PEAK): a randomized, controlled trial of resistance training during hemodialysis.

    Science.gov (United States)

    Cheema, Bobby; Abas, Haifa; Smith, Benjamin; O'Sullivan, Anthony; Chan, Maria; Patwardhan, Aditi; Kelly, John; Gillin, Adrian; Pang, Glen; Lloyd, Brad; Singh, Maria Fiatarone

    2007-05-01

    Skeletal muscle wasting is common and insidious in patients who receive maintenance hemodialysis treatment for the management of ESRD. The objective of this study was to determine whether 12 wk of high-intensity, progressive resistance training (PRT) administered during routine hemodialysis treatment could improve skeletal muscle quantity and quality versus usual care. Forty-nine patients (62.6 +/- 14.2 yr; 0.3 to 16.7 yr on dialysis) were recruited from the outpatient hemodialysis unit of the St. George Public Hospital (Sydney, Australia). Patients were randomized to PRT + usual care (n = 24) or usual care control only (n = 25). The PRT group performed two sets of 10 exercises at a high intensity (15 to 17/20 on the Borg Scale) using free weights, three times per week for 12 wk during routine hemodialysis treatment. Primary outcomes included thigh muscle quantity (cross-sectional area [CSA]) and quality (intramuscular lipid content via attenuation) evaluated by computed tomography scan. Secondary outcomes included muscle strength, exercise capacity, body circumference measures, proinflammatory cytokine C-reactive protein, and quality of life. There was no statistically significant difference in muscle CSA change between groups. However, there were statistically significant improvements in muscle attenuation, muscle strength, mid-thigh and mid-arm circumference, body weight, and C-reactive protein in the PRT group relative to the nonexercising control group. These findings suggest that patients with ESRD can improve skeletal muscle quality and derive other health-related adaptations solely by engaging in a 12-wk high-intensity PRT regimen during routine hemodialysis treatment sessions. Longer training durations or more sensitive analysis techniques may be required to document alterations in muscle CSA.

  11. Classifying the variability in impact and active peak vertical ground reaction forces during running using DFA and ARFIMA models.

    Science.gov (United States)

    Winter, Samantha L; Challis, John H

    2017-01-01

    The vertical ground reaction force (VGRF) during rear-foot striking running typically exhibits peaks referred to as the impact peak and the active peak; their timings and magnitudes have been implicated in injury. Identifying the structure of time-series can provide insight into associated control processes. The purpose here was to detect long-range correlations associated with the time from first contact to impact peak (TIP) and active peak (TAP), and the magnitudes of the impact (IPM) and active peaks (APM), using Detrended Fluctuation Analysis and Auto-Regressive Fractionally Integrated Moving Average models. Twelve subjects performed an 8-min trial at their preferred running speed on an instrumented treadmill. TIP, TAP, IPM, and APM were identified from the VGRF profile for each footfall. The TIP and TAP time-series did not demonstrate long-range correlations; conversely, the IPM and APM time-series did. For IPM, short-range correlations appeared alongside or instead of long-range correlations. Conversely, pure power-law behaviour was demonstrated in 11 of the 24 time-series for APM, and long-range dependencies along with short-range correlations were present in a further 9 time-series. It has been hypothesised that the control mechanisms for IPM and APM are different; these results support this hypothesis.
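
    For readers unfamiliar with DFA, a minimal textbook implementation is sketched below: integrate the mean-centred series, detrend it in non-overlapping windows of increasing size n, and estimate the scaling exponent alpha from the slope of log F(n) versus log n (alpha ≈ 0.5 for uncorrelated noise, alpha > 0.5 for long-range correlations). This is a generic version, not the authors' exact pipeline.

    ```python
    import numpy as np

    def dfa(x, scales):
        """Return the fluctuation function F(n) for first-order DFA."""
        y = np.cumsum(x - np.mean(x))           # integrated profile
        F = []
        for n in scales:
            n_seg = len(y) // n
            segs = y[: n_seg * n].reshape(n_seg, n)
            t = np.arange(n)
            res = [np.mean((s - np.polyval(np.polyfit(t, s, 1), t)) ** 2)
                   for s in segs]               # linear detrend per window
            F.append(np.sqrt(np.mean(res)))
        return np.array(F)

    x = np.random.default_rng(0).standard_normal(4096)  # white noise
    scales = np.array([8, 16, 32, 64, 128, 256])
    alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
    print(f"DFA alpha ~ {alpha:.2f}")           # expect ~0.5 here
    ```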

  12. Simulating low-probability peak discharges for the Rhine basin using resampled climate modeling data

    NARCIS (Netherlands)

    te Linde, A.H.; Aerts, J.C.J.M.; Bakker, A.; Kwadijk, J.

    2010-01-01

    Climate change will increase winter precipitation, and in combination with earlier snowmelt it will cause a shift in peak discharge in the Rhine basin from spring to winter. This will probably lead to an increase in the frequency and magnitude of extreme floods. In this paper we aim to enhance the

  13. The Research of Indoor Positioning Based on Double-peak Gaussian Model

    Directory of Open Access Journals (Sweden)

    Lina Chen

    2014-04-01

    Full Text Available Location fingerprinting using Wi-Fi signals has been very popular and is a well-accepted indoor positioning method. The key issue of the fingerprinting approach is generating the fingerprint radio map. Limited by the practical workload, only a few samples of the received signal strength are collected at each reference point. Unfortunately, few samples cannot accurately represent the actual distribution of the signal strength from each access point. This study finds that most Wi-Fi signals have two peaks. According to this new finding, a double-peak Gaussian algorithm is proposed to generate the fingerprint radio map. This approach requires little time to receive Wi-Fi signals, and it is easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian function and the histogram method for generating a fingerprint radio map, this method better approximates the actual signal distribution. This paper also compares the positioning accuracy using K-Nearest Neighbour theory for the three radio maps; the test results show that the positioning distance error utilizing the double-peak Gaussian function is smaller than with the other two methods.
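
    One plausible way to realize such a double-peak Gaussian fit is as a two-component Gaussian mixture estimated by expectation-maximization; the sketch below uses scikit-learn on synthetic received-signal-strength samples. The sample values and component parameters are hypothetical.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical RSS samples (dBm) at one reference point from one AP
    rng = np.random.default_rng(1)
    rss = np.concatenate([rng.normal(-62, 2, 120), rng.normal(-70, 3, 80)])

    gmm = GaussianMixture(n_components=2, random_state=0).fit(rss.reshape(-1, 1))
    for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
        print(f"weight={w:.2f}  mean={mu:.1f} dBm  sd={np.sqrt(var):.1f} dB")
    ```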

  14. The impact of the twin peaks model on the insurance industry ...

    African Journals Online (AJOL)

    The Financial Sector Regulation Bill, also known as the "Twin Peaks" Bill, is the latest invention from the table of the legislature, and some expect this Bill to have far-reaching consequences for the financial services industry. The question is, of course, whether the current dispensation will change so quickly and so ...

  15. Modelling the impact of retention–detention units on sewer surcharge and peak and annual runoff reduction

    DEFF Research Database (Denmark)

    Locatelli, Luca; Gabriel, S.; Mark, O.

    2015-01-01

    Stormwater management using water sensitive urban design is expected to be part of future drainage systems. This paper aims to model the combination of local retention units, such as soakaways, with subsurface detention units. Soakaways are employed to reduce (by storage and infiltration) the peak and volume of stormwater runoff; however, large retention volumes are required for a significant peak reduction. Peak runoff can therefore be handled by combining detention units with soakaways. This paper models the impact of retrofitting retention-detention units for an existing urbanized catchment in Denmark. The impact of retrofitting a retention-detention unit of 3.3 m(3)/100 m(2) (volume/impervious area) was simulated for a small catchment in Copenhagen using MIKE URBAN. The retention-detention unit was shown to prevent flooding from the sewer for a 10-year rainfall event. Statistical analysis of continuous

  16. Optimal Allocation in Stratified Randomized Response Model

    Directory of Open Access Journals (Sweden)

    Javid Shabbir

    2005-07-01

    Full Text Available A Warner (1965) randomized response model based on stratification is used to determine the allocation of samples. Both linear and log-linear cost functions are discussed under uni- and double stratification. It is observed that by using a log-linear cost function, one can obtain better allocations.

  17. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random-effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest ...

  18. Improving randomness characterization through Bayesian model selection.

    Science.gov (United States)

    Díaz Hernández Rojas, Rafael; Solís, Aldo; Angulo Martínez, Alí M; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Pérez Castillo, Isaac

    2017-06-08

    Random number generation plays an essential role in technology, with important applications in areas ranging from cryptography to Monte Carlo methods and other probabilistic algorithms. All such applications require high-quality sources of random numbers, yet effective methods for assessing whether a source produces truly random sequences are still missing. Current methods either do not rely on a formal description of randomness (the NIST test suite), or are inapplicable in principle (the characterization derived from Algorithmic Information Theory), since the latter requires testing all possible computer programs that could produce the sequence to be analysed. Here we present a rigorous method that overcomes these problems, based on Bayesian model selection. We derive analytic expressions for a model's likelihood, which is then used to compute its posterior distribution. Our method proves to be more rigorous than the NIST suite and the Borel-normality criterion, and its implementation is straightforward. We applied our method to an experimental device based on the process of spontaneous parametric downconversion to confirm that it behaves as a genuine quantum random number generator. As our approach relies on Bayesian inference, our scheme transcends individual sequence analysis, leading to a characterization of the source itself.
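
    The flavour of the approach can be conveyed with a toy Bayes-factor computation for a single, very simple model class: an ideal random source (p = 1/2) against a Bernoulli source of unknown bias with a Beta prior, whose marginal likelihood has a closed form. This is a one-model illustration of Bayesian model selection, not the authors' full scheme.

    ```python
    from math import lgamma, log

    def log_beta(a, b):
        return lgamma(a) + lgamma(b) - lgamma(a + b)

    def log_marginal_bernoulli(k, n, a=1.0, b=1.0):
        """log-evidence of a specific sequence with k ones out of n bits,
        under Bernoulli(p) with p ~ Beta(a, b)."""
        return log_beta(k + a, n - k + b) - log_beta(a, b)

    n, k = 1000, 530                        # hypothetical bit-stream summary
    log_m_random = n * log(0.5)             # ideal random source, p fixed at 1/2
    log_m_biased = log_marginal_bernoulli(k, n)
    print(f"log Bayes factor (random vs biased): {log_m_random - log_m_biased:.2f}")
    ```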

  19. Model-based peak alignment of metabolomic profiling from comprehensive two-dimensional gas chromatography mass spectrometry.

    Science.gov (United States)

    Jeong, Jaesik; Shi, Xue; Zhang, Xiang; Kim, Seongho; Shen, Changyu

    2012-02-08

    Comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS) has been used for metabolite profiling in metabolomics. However, there is still much experimental variation to be controlled, including both within-experiment and between-experiment variation. For efficient analysis, an ideal peak alignment method to deal with such variations is in great need. Using experimental data of a mixture of metabolite standards, we demonstrated that our method has better performance than an existing method that is not model-based. We then applied our method to data generated from the plasma of a rat, which also demonstrates the good performance of our model. We developed a model-based peak alignment method to process both homogeneous and heterogeneous experimental data. The unique feature of our method is that it is the only model-based peak alignment method coupled with metabolite identification in a unified framework. Through the comparison with the existing method, we demonstrated that our method has better performance. Data are available at http://stage.louisville.edu/faculty/x0zhan17/software/software-development/mspa. The R source codes are available at http://www.biostat.iupui.edu/~ChangyuShen/CodesPeakAlignment.zip. Trial registration: 2136949528613691.

  20. Model-based peak alignment of metabolomic profiling from comprehensive two-dimensional gas chromatography mass spectrometry

    Directory of Open Access Journals (Sweden)

    Jeong Jaesik

    2012-02-01

    Full Text Available Abstract Background Comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS) has been used for metabolite profiling in metabolomics. However, there is still much experimental variation to be controlled, including both within-experiment and between-experiment variation. For efficient analysis, an ideal peak alignment method to deal with such variations is in great need. Results Using experimental data of a mixture of metabolite standards, we demonstrated that our method has better performance than an existing method that is not model-based. We then applied our method to data generated from the plasma of a rat, which also demonstrates the good performance of our model. Conclusions We developed a model-based peak alignment method to process both homogeneous and heterogeneous experimental data. The unique feature of our method is that it is the only model-based peak alignment method coupled with metabolite identification in a unified framework. Through the comparison with the existing method, we demonstrated that our method has better performance. Data are available at http://stage.louisville.edu/faculty/x0zhan17/software/software-development/mspa. The R source codes are available at http://www.biostat.iupui.edu/~ChangyuShen/CodesPeakAlignment.zip. Trial Registration 2136949528613691

  1. Modelling of radon concentration peaks in thermal spas: application to Polichnitos and Eftalou spas (Lesvos Island--Greece).

    Science.gov (United States)

    Vogiannis, Efstratios; Nikolopoulos, Dimitrios

    2008-11-01

    A mathematical model was developed for the description of radon concentration peaks observed in thermal spas. Modelling was based on a pragmatic mix of estimation and measurement of the physical parameters involved. The model utilised non-linear first-order mass-balance differential equations, which were solved numerically using specially developed computer codes. To apply and check the model, measurements were performed in two thermal spas in Greece (Polichnitos and Eftalou, Lesvos Island). Forty different measurement sets were collected to estimate the concentration variations of indoor and outdoor radon, radon in the entering thermal water, the ventilation rate, the bathtub surface and the bath volume. Turbulence and diffusive phenomena involved in radon concentration variations were attributed to a time-varying contact interfacial area (equivalent area), which was approximated by a mathematical function. Other model parameters were estimated from the literature. Through numerical solution and non-linear statistics, the time variations of the equivalent area were estimated for every measurement set. Non-linear uncertainty analysis showed that the coefficients of the equivalent area were less sensitive than the other model parameters. Modelled and measured radon concentration peaks were compared using three statistical goodness-of-fit criteria. All the investigated peaks exhibited low error probability (***p<0.001) for all criteria. It is concluded that the present model succeeded in predicting the measured radon concentration peaks; through adequate selection of model parameters, it may be applied to other thermal spas.

  2. The Impact of the Twin Peaks Model on the Insurance Industry

    OpenAIRE

    Daleen Millard

    2017-01-01

    Financial regulation in South Africa changes constantly. In the quest to find the ideal regulatory framework for optimal consumer protection, rules change all the time and international trends have an important influence on lawmakers nationally. The Financial Sector Regulation Bill, also known as the "Twin Peaks" Bill, is the latest invention from the table of the legislature, and some expect this Bill to have far-reaching consequences for the financial services industry. The question is, of ...

  3. A random walk model to evaluate autism

    Science.gov (United States)

    Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.

    2018-02-01

    A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
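
    The classic single-walker ERW that the two-walker model builds on is straightforward to simulate: at each step the walker repeats a uniformly chosen past step with probability p and reverses it otherwise, and the Hurst exponent can be read off from how the ensemble variance grows with time. The sketch below is that single-walker baseline with illustrative parameters, not the authors' coupled two-walker model.

    ```python
    import numpy as np

    def erw(n_steps, p, rng):
        """Elephant random walk: repeat a random past step with probability p."""
        steps = np.empty(n_steps, dtype=int)
        steps[0] = rng.choice((-1, 1))
        for t in range(1, n_steps):
            past = steps[rng.integers(t)]
            steps[t] = past if rng.random() < p else -past
        return np.cumsum(steps)

    rng = np.random.default_rng(42)
    n, walkers = 2000, 500
    paths = np.array([erw(n, 0.9, rng) for _ in range(walkers)])

    # Var[x(t)] ~ t^(2H): regress log-variance on log-time for a Hurst estimate
    t = np.arange(100, n, 100)
    H = np.polyfit(np.log(t), np.log(paths[:, t].var(axis=0)), 1)[0] / 2
    print(f"estimated Hurst exponent H ~ {H:.2f}")  # superdiffusive for p > 3/4
    ```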

  4. A model of market power in electricity industries subject to peak load pricing

    Energy Technology Data Exchange (ETDEWEB)

    Arellano, Maria-Soledad [Center for Applied Economics (CEA), Department of Industrial Engineering, University of Chile, Republica 701, Santiago (Chile); Serra, Pablo [Department of Economics, University of Chile, Diagonal Paraguay 257, Santiago (Chile)

    2007-10-15

    This paper studies the exercise of market power in price-regulated electricity industries under peak-load pricing and merit order dispatching, but where investment decisions are taken by independent generating companies. Within this context, we show that producers can exercise market power by under-investing in base-load capacity, compared to the welfare-maximizing configuration. We also show that when there is free entry with an exogenous fixed entry cost that is later sunk, more intense competition results in higher welfare but fewer firms. (author)

  5. Modelling the Peak Elongation of Nylon6 and Fe Powder Based Composite Wire for FDM Feedstock Filament

    Science.gov (United States)

    Garg, Harish Kumar; Singh, Rupinder

    2017-10-01

    In the present work, to increase the application domain of the fused deposition modelling (FDM) process, a Nylon6-Fe powder based composite wire has been prepared as feedstock filament. For smooth functioning of the feedstock filament without any change in the hardware and software of the commercial FDM setup, the mechanical properties of the newly prepared composite wire must be comparable to those of the existing material, i.e. ABS P-430. Keeping this in consideration, an effort has been made to model the peak elongation of the in-house developed feedstock filament comprising Nylon6 and Fe powder (prepared by a single screw extrusion process) for a commercial FDM setup. The input parameters of the single screw extruder (namely: barrel temperature, die temperature, screw speed, and winding machine speed) and a rheological property of the material (melt flow index) have been modelled with peak elongation as the output by using response surface methodology. For validation, the peak elongation obtained from the model equation was compared with the results of actual experimentation, which showed a variation of only ±1%.

  6. Benchmarking hydrological model predictive capability for UK River flows and flood peaks.

    Science.gov (United States)

    Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten

    2017-04-01

    Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models that run at a daily timestep, but they differ in structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20-year period (1988-2008), within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics, calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time-series and, additionally, the annual maximum prediction bounds for each catchment. The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored what structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment through looking at the ability of the models to produce discharge prediction bounds which successfully bound the observed discharge. These results

  7. Estimation of Instantaneous Peak Flow Using Machine-Learning Models and Empirical Formula in Peninsular Spain

    Directory of Open Access Journals (Sweden)

    Patricia Jimeno-Sáez

    2017-05-01

    Full Text Available The design of hydraulic structures and flood risk management is often based on the instantaneous peak flow (IPF). However, available flow time series with high temporal resolution are scarce and of limited length. A correct estimation of the IPF is crucial to reducing the consequences derived from flash floods, especially in Mediterranean countries. In this study, empirical methods to estimate the IPF based on the maximum mean daily flow (MMDF), artificial neural networks (ANN), and an adaptive neuro-fuzzy inference system (ANFIS) have been compared. These methods have been applied in 14 different streamflow gauge stations covering the diversity of flashiness conditions found in Peninsular Spain. Root-mean-square error (RMSE) and coefficient of determination (R2) have been used as evaluation criteria. The results show that: (1) the Fuller equation and its regionalization is more accurate and has lower error compared with other empirical methods; and (2) ANFIS has demonstrated a superior ability to estimate the IPF compared to any empirical formula.
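
    For reference, a commonly cited form of the Fuller (1914) relation between the instantaneous peak flow and the maximum mean daily flow is IPF = MMDF · (1 + c · A^(-b)), with c ≈ 2.66 and b ≈ 0.3 when the drainage area A is in km²; regionalized studies refit c and b. A hedged sketch, with placeholder numbers rather than values from this study:

    ```python
    def fuller_ipf(mmdf_m3s, area_km2, c=2.66, b=0.3):
        """Commonly cited form of Fuller's formula; c and b are regionalizable."""
        return mmdf_m3s * (1.0 + c * area_km2 ** (-b))

    # Hypothetical catchment: MMDF of 120 m3/s over 850 km2
    print(f"IPF ~ {fuller_ipf(120.0, 850.0):.0f} m3/s")
    ```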

  8. Particle filters for random set models

    CERN Document Server

    Ristic, Branko

    2013-01-01

    “Particle Filters for Random Set Models” presents coverage of state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based  on the Monte Carlo statistical method. The resulting  algorithms, known as particle filters, in the last decade have become one of the essential tools for stochastic filtering, with applications ranging from  navigation and autonomous vehicles to bio-informatics and finance. While particle filters have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. These recent developments have dramatically widened the scope of applications, from single to multiple appearing/disappearing objects, from precise to imprecise measurements and measurement models. This book...
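
    As a minimal taste of the filters the book covers, the sketch below runs a bootstrap particle filter (propagate, weight by the measurement likelihood, resample) on a toy one-dimensional random-walk state observed in Gaussian noise; all parameters are illustrative and unrelated to the book's examples.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 50, 1000
    x_true = np.cumsum(rng.normal(0, 1, T))    # latent random-walk state
    y = x_true + rng.normal(0, 2, T)           # noisy measurements

    particles = rng.normal(0, 5, N)
    estimates = []
    for t in range(T):
        particles = particles + rng.normal(0, 1, N)        # propagate dynamics
        w = np.exp(-0.5 * ((y[t] - particles) / 2.0) ** 2)
        w /= w.sum()                                       # likelihood weights
        particles = particles[rng.choice(N, size=N, p=w)]  # resample
        estimates.append(particles.mean())
    print(f"final error: {abs(estimates[-1] - x_true[-1]):.2f}")
    ```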

  9. An Analytical Model for Spectral Peak Frequency Prediction of Substrate Noise in CMOS Substrates

    DEFF Research Database (Denmark)

    Shen, Ming; Mikkelsen, Jan H.

    2013-01-01

    This paper proposes an analytical model describing the generation of switching current noise in CMOS substrates. The model eliminates the need for SPICE simulations in existing methods by conducting a transient analysis on a generic CMOS inverter and approximating the switching current waveform us...

  10. Wetland Restoration as a Tool for Peak Flow Mitigation: Combining Watershed Scale Modeling with a Genetic Algorithm Approach.

    Science.gov (United States)

    Dalzell, B. J.; Gassman, P. W.; Kling, C.

    2015-12-01

    In the Minnesota River Basin, sediments originating from failing stream banks and bluffs account for the majority of the riverine load and contribute to water quality impairments in the Minnesota River as well as portions of the Mississippi River upstream of Lake Pepin. One approach for mitigating this problem may be targeted wetland restoration in Minnesota River Basin tributaries in order to reduce the magnitude and duration of peak flow events which contribute to bluff and stream bank failures. In order to determine effective arrangements and properties of wetlands to achieve peak flow reduction, we are employing a genetic algorithm approach coupled with a SWAT model of the Cottonwood River, a tributary of the Minnesota River. The genetic algorithm approach will evaluate combinations of basic wetland features as represented by SWAT: surface area, volume, contributing area, and hydraulic conductivity of the wetland bottom. These wetland parameters will be weighed against economic considerations associated with land use trade-offs in this agriculturally productive landscape. Preliminary results show that the SWAT model is capable of simulating daily hydrology very well and genetic algorithm evaluation of wetland scenarios is ongoing. Anticipated results will include (1) combinations of wetland parameters that are most effective for reducing peak flows, and (2) evaluation of economic trade-offs between wetland restoration, water quality, and agricultural productivity in the Cottonwood River watershed.

  11. Binomial probability distribution model-based protein identification algorithm for tandem mass spectrometry utilizing peak intensity information.

    Science.gov (United States)

    Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu

    2013-01-04

    Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
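
    A generic binomial scoring function of the kind described can be sketched as follows: if a spectrum matches k of the n theoretical fragment peaks, and each peak would match by chance with probability p, the score is the negative log of the binomial tail P(X >= k). This is a simplified stand-in for the idea, not ProVerB's actual scoring function, and the numbers are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import binom

    def binomial_match_score(n_theoretical, k_matched, p_random):
        """-log10 P(X >= k) for X ~ Binomial(n, p); rarer chance matches
        yield higher scores."""
        return -binom.logsf(k_matched - 1, n_theoretical, p_random) / np.log(10)

    print(binomial_match_score(40, 18, 0.05))   # hypothetical spectrum
    ```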

  12. Modeling the Quality of Videos Displayed With Local Dimming Backlight at Different Peak White and Ambient Light Levels

    DEFF Research Database (Denmark)

    Mantel, Claire; Søgaard, Jacob; Bech, Søren

    2016-01-01

    This paper investigates the impact of ambient light and peak white (maximum brightness of a display) on the perceived quality of videos displayed using local backlight dimming. Two subjective tests providing quality evaluations are presented and analyzed. The analyses of variance show significant...... is computed using a model of the display. Widely used objective quality metrics are applied based on the rendering models of the videos to predict the subjective evaluations. As these predictions are not satisfying, three machine learning methods are applied: partial least square regression, elastic net...

  13. Can the minimal supersymmetric standard model with light bottom squark and light gluino survive Z-peak constraints?

    Science.gov (United States)

    Cao, Junjie; Xiong, Zhaohua; Yang, Jin Min

    2002-03-18

    In the framework of the minimal supersymmetric model we examine the Z-peak constraints on the scenario of one light bottom squark (sbottom) (approximately 2-5.5 GeV) and a light gluino (approximately 12-16 GeV), which has been successfully used to explain the excess of bottom quark production in hadron collisions. Such a scenario is found to be severely constrained by the CERN LEP Z-peak observables, especially by R(b), due to the large effect of gluino-sbottom loops. To account for the R(b) data in this scenario, the other mass eigenstate of the sbottom, i.e., the heavier one, must be lighter than 125 (195) GeV at the 2 sigma (3 sigma) level, which, however, is disfavored by CERN LEP II experiments.

  14. Spartan random processes in time series modeling

    Science.gov (United States)

    Žukovič, M.; Hristopulos, D. T.

    2008-06-01

    A Spartan random process (SRP) is used to estimate the correlation structure of time series and to predict (interpolate and extrapolate) the data values. SRPs are motivated from statistical physics, and they can be viewed as Ginzburg-Landau models. The temporal correlations of the SRP are modeled in terms of ‘interactions’ between the field values. Model parameter inference employs the computationally fast modified method of moments, which is based on matching sample energy moments with the respective stochastic constraints. The parameters thus inferred are then compared with those obtained by means of the maximum likelihood method. The performance of the Spartan predictor (SP) is investigated using real time series of the quarterly S&P 500 index. SP prediction errors are compared with those of the Kolmogorov-Wiener predictor. Two predictors, one of which is explicit, are derived and used for extrapolation. The performance of the predictors is similarly evaluated.

  15. Forecasting peak asthma admissions in London: an application of quantile regression models

    Science.gov (United States)

    Soyiri, Ireneous N.; Reidpath, Daniel D.; Sarran, Christophe

    2013-07-01

    Asthma is a chronic condition of great public health concern globally. The associated morbidity, mortality and healthcare utilisation place an enormous burden on healthcare infrastructure and services. This study demonstrates a multistage quantile regression approach to predicting excess demand for health care services in the form of daily asthma admissions in London, using retrospective data from the Hospital Episode Statistics, weather and air quality. Trivariate quantile regression models (QRM) of daily asthma admissions were fitted to a 14-day range of lags of environmental factors, accounting for seasonality in a hold-in sample of the data. Representative lags were pooled to form multivariate predictive models, selected through a systematic backward stepwise reduction approach. Models were cross-validated using a hold-out sample of the data, and their respective root mean square error measures, sensitivity, specificity and predictive values compared. Two of the predictive models were able to detect extreme numbers of daily asthma admissions at sensitivity levels of 76% and 62%, as well as specificities of 66% and 76%. Their positive predictive values were slightly higher for the hold-out sample (29% and 28%) than for the hold-in model development sample (16% and 18%). QRMs can be used in multiple stages to select suitable variables to forecast extreme asthma events. The associations between asthma and environmental factors, including temperature, ozone and carbon monoxide, can be exploited in predicting future events using QRMs.
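
    Quantile regression of this kind is directly available in statsmodels; the sketch below fits the 90th conditional percentile of daily admissions on two hypothetical lagged predictors (synthetic data and invented variable names, not the study's dataset).

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 500
    temp_lag3 = rng.normal(12, 6, n)            # lagged temperature, deg C
    ozone_lag7 = rng.normal(40, 12, n)          # lagged ozone, ug/m3
    lam = np.exp(2.0 + 0.02 * temp_lag3 + 0.01 * ozone_lag7)
    df = pd.DataFrame({"admissions": rng.poisson(lam),
                       "temp_lag3": temp_lag3, "ozone_lag7": ozone_lag7})

    # Target the upper tail (peak admission days) with the 0.9 quantile
    fit = smf.quantreg("admissions ~ temp_lag3 + ozone_lag7", df).fit(q=0.9)
    print(fit.params)
    ```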

  16. Validation of hamstrings musculoskeletal modeling by calculating peak hamstrings length at different hip angles

    NARCIS (Netherlands)

    van der Krogt, M.M.; Doorenbosch, C.A.M.; Harlaar, J.

    2008-01-01

    Accurate estimates of hamstrings lengths are useful, for example, to facilitate planning for surgical lengthening of the hamstrings in patients with cerebral palsy. In this study, three models used to estimate hamstrings length (M1: Delp, M2: Klein Horsman, M3: Hawkins and Hull) were evaluated. This

  17. Rat injury model under controlled field-relevant primary blast conditions: acute response to a wide range of peak overpressures.

    Science.gov (United States)

    Skotak, Maciej; Wang, Fang; Alai, Aaron; Holmberg, Aaron; Harris, Seth; Switzer, Robert C; Chandra, Namas

    2013-07-01

    We evaluated the acute (up to 24 h) pathophysiological response to primary blast using a rat model and a helium-driven shock tube. The shock tube generates animal loadings with controlled pure primary blast parameters over a wide range of field-relevant conditions. We studied the biomechanical loading with a set of pressure gauges mounted on the surface of the nose, in the cranial space, and in the thoracic cavity of cadaver rats. Anesthetized rats were exposed to a single blast at five precisely controlled peak overpressures spanning a wide range (130, 190, 230, 250, and 290 kPa). We observed 0% mortality in the 130 and 230 kPa groups, and 30%, 24%, and 100% mortality in the 190, 250, and 290 kPa groups, respectively. The body weight loss was statistically significant in the 190 and 250 kPa groups 24 h after exposure. The data analysis showed that the magnitude of the peak-to-peak amplitude of the intracranial pressure (ICP) fluctuations correlates well with mortality. The ICP oscillations recorded for 190, 250, and 290 kPa are characterized by a higher frequency (10-20 kHz) than in the other two groups (7-8 kHz). We noted acute bradycardia and lung hemorrhage in all groups of rats subjected to the blast, and established that the onset of both corresponds to a peak overpressure of 110 kPa. Immunostaining against immunoglobulin G (IgG) of brain sections of rats sacrificed 24 h post-exposure indicated diffuse blood-brain barrier breakdown in the brain parenchyma. At high blast intensities (peak overpressure of 190 kPa or more), IgG uptake by neurons was evident, but there was no evidence of neurodegeneration 24 h post-exposure, as indicated by cupric silver staining. We observed that both the acute response and mortality are non-linear functions of peak overpressure and impulse over the ranges explored in this work.

  18. Kinetic Models with Randomly Perturbed Binary Collisions

    Science.gov (United States)

    Bassetti, Federico; Ladelli, Lucia; Toscani, Giuseppe

    2011-02-01

    We introduce a class of Kac-like kinetic equations on the real line, with general random collisional rules which, in some special cases, identify models for granular gases with a background heat bath (Carrillo et al. in Discrete Contin. Dyn. Syst. 24(1):59-81, 2009), and models for wealth redistribution in an agent-based market (Bisi et al. in Commun. Math. Sci. 7:901-916, 2009). Conditions on these collisional rules which guarantee both the existence and uniqueness of equilibrium profiles and their main properties are found. The characterization of these stationary states is of independent interest, since we show that they are stationary solutions of different evolution problems, both in the kinetic theory of rarefied gases (Cercignani et al. in J. Stat. Phys. 105:337-352, 2001; Villani in J. Stat. Phys. 124:781-822, 2006) and in the econophysical context (Bisi et al. in Commun. Math. Sci. 7:901-916, 2009).

  19. Selecting optimal monitoring site locations for peak ambient particulate material concentrations using the MM5-CAMx4 numerical modelling system.

    Science.gov (United States)

    Sturman, Andrew; Titov, Mikhail; Zawar-Reza, Peyman

    2011-01-15

    Installation of temporary or long term monitoring sites is expensive, so it is important to rationally identify potential locations that will achieve the requirements of regional air quality management strategies. A simple, but effective, numerical approach to selecting ambient particulate matter (PM) monitoring site locations has therefore been developed using the MM5-CAMx4 air pollution dispersion modelling system. A new method, 'site efficiency,' was developed to assess the ability of any monitoring site to provide peak ambient air pollution concentrations that are representative of the urban area. 'Site efficiency' varies from 0 to 100%, with the latter representing the most representative site location for monitoring peak PM concentrations. Four heavy pollution episodes in Christchurch (New Zealand) during winter 2005, representing 4 different aerosol dispersion patterns, were used to develop and test this site assessment technique. Evaluation of the efficiency of monitoring sites was undertaken for night and morning aerosol peaks for 4 different particulate material (PM) spatial patterns. The results demonstrate that the existing long term monitoring site at Coles Place is quite well located, with a site efficiency value of 57.8%. A temporary ambient PM monitoring site (operating during winter 2006) showed a lower ability to capture night and morning peak aerosol concentrations. Evaluation of multiple site locations used during an extensive field campaign in Christchurch (New Zealand) in 2000 indicated that the maximum efficiency achieved by any site in the city would be 60-65%, while the efficiency of a virtual background site is calculated to be about 7%. This method of assessing the appropriateness of any potential monitoring site can be used to optimize monitoring site locations for any air pollution measurement programme.

  20. Bridges in the random-cluster model

    Directory of Open Access Journals (Sweden)

    Eren Metin Elçi

    2016-02-01

    Full Text Available The random-cluster model, a correlated bond percolation model, unifies a range of important models of statistical mechanics in one description, including independent bond percolation, the Potts model and uniform spanning trees. By introducing a classification of edges based on their relevance to the connectivity we study the stability of clusters in this model. We prove several exact relations for general graphs that allow us to derive unambiguously the finite-size scaling behavior of the density of bridges and non-bridges. For percolation, we are also able to characterize the point for which clusters become maximally fragile and show that it is connected to the concept of the bridge load. Combining our exact treatment with further results from conformal field theory, we uncover a surprising behavior of the (normalized) variance of the number of (non-)bridges, showing that it diverges in two dimensions below the value 4cos²(π/√3) = 0.2315891… of the cluster coupling q. Finally, we show that a partial or complete pruning of bridges from clusters enables estimates of the backbone fractal dimension that are much less encumbered by finite-size corrections than more conventional approaches.
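
    The bridge classification itself is easy to reproduce on a single bond configuration. The sketch below draws independent bond percolation (the q = 1 point of the random-cluster model) on a small square lattice and counts bridges, i.e. edges whose removal would split their cluster; the lattice size and bond probability are arbitrary.

    ```python
    import random
    import networkx as nx

    random.seed(0)
    G = nx.grid_2d_graph(32, 32)                       # 2D lattice
    G.remove_edges_from([e for e in list(G.edges) if random.random() > 0.5])

    # Classify the edges of the largest cluster as bridges vs non-bridges
    giant = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    bridges = list(nx.bridges(giant))
    print(f"bridge fraction in largest cluster: "
          f"{len(bridges) / giant.number_of_edges():.3f}")
    ```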

  1. Random graph models for dynamic networks

    Science.gov (United States)

    Zhang, Xiao; Moore, Cristopher; Newman, Mark E. J.

    2017-10-01

    Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
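
    For a single node pair, the edge dynamics described reduce to a two-state continuous-time Markov chain with appearance rate lam and disappearance rate mu, whose equilibrium edge probability is lam/(lam + mu). A minimal simulation sketch with arbitrary rates:

    ```python
    import numpy as np

    def edge_occupancy(T, lam, mu, rng):
        """Fraction of time an edge is present under a two-state CTMC."""
        t, present, occupied = 0.0, False, 0.0
        while t < T:
            rate = mu if present else lam
            dt = min(rng.exponential(1.0 / rate), T - t)
            if present:
                occupied += dt
            t += dt
            present = not present
        return occupied / T

    rng = np.random.default_rng(3)
    lam, mu = 0.2, 0.8
    est = np.mean([edge_occupancy(1000.0, lam, mu, rng) for _ in range(200)])
    print(f"simulated {est:.3f} vs equilibrium {lam / (lam + mu):.3f}")
    ```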

  2. Nowcasting of peak lightning density using geostationary satellite data and model simulations

    Science.gov (United States)

    Saari, Matthew Dale William

    As lightning poses dangers to both humans and industry, nowcasting lightning initiation (LI) and extent is a formidable endeavor. The GOES-R Convective Initiation (CI) Algorithm provides 0-1 hr. nowcasts of both CI and LI, but gives no indication of lightning density. Through the use of geostationary satellite data, IR interest fields were analyzed leading up to convective storms that produced varying amounts of lightning, as observed by a Lightning Mapping Array. The goal was to determine whether these interest fields could provide information on how much lightning future convection may produce. Lightning threat forecasts from the Weather Research and Forecasting (WRF) model were included to determine whether these data provide any nowcast value. While data from higher flash density storms were limited, the results indicated that trends unique to higher flash rate cases may exist. Furthermore, WRF output may provide a possible range of future flash density.

  3. Enhanced Isotopic Ratio Outlier Analysis (IROA) Peak Detection and Identification with Ultra-High Resolution GC-Orbitrap/MS: Potential Application for Investigation of Model Organism Metabolomes

    Directory of Open Access Journals (Sweden)

    Yunping Qiu

    2018-01-01

    Full Text Available Identifying non-annotated peaks may have a significant impact on the understanding of biological systems. In silico methodologies have focused on ESI LC/MS/MS for identifying non-annotated MS peaks. In this study, we employed in silico methodology to develop an Isotopic Ratio Outlier Analysis (IROA) workflow using enhanced mass spectrometric data acquired with the ultra-high resolution GC-Orbitrap/MS to determine the identity of non-annotated metabolites. The higher resolution of the GC-Orbitrap/MS, together with its wide dynamic range, resulted in more IROA peak pairs detected, and increased reliability of chemical formulae generation (CFG). IROA uses two different 13C-enriched carbon sources (randomized 95% 12C and 95% 13C) to produce mirror-image isotopologue pairs, whose mass difference reveals the carbon chain length (n), which aids in the identification of endogenous metabolites. Accurate m/z, n, and derivatization information are obtained from our GC/MS workflow for unknown metabolite identification, and aid in silico methodologies for identifying isomeric and non-annotated metabolites. We were able to mine more mass spectral information using the same Saccharomyces cerevisiae growth protocol (Qiu et al., Anal. Chem. 2016) with the ultra-high resolution GC-Orbitrap/MS, using 10% ammonia in methane as the CI reagent gas. We identified 244 IROA peak pairs, which significantly increased IROA detection capability compared with our previous report (126 IROA peak pairs using a GC-TOF/MS machine). For 55 selected metabolites identified from matched IROA CI and EI spectra, using the GC-Orbitrap/MS vs. GC-TOF/MS, the average mass deviation for the GC-Orbitrap/MS was 1.48 ppm, whereas the average mass deviation was 32.2 ppm for the GC-TOF/MS machine. In summary, the higher resolution and wider dynamic range of the GC-Orbitrap/MS enabled more accurate CFG, and the coupling of accurate-mass GC/MS IROA methodology with in silico fragmentation has great

  4. An analytical reconstruction model of the spread-out Bragg peak using laser-accelerated proton beams.

    Science.gov (United States)

    Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing

    2017-07-07

    With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.

  5. Understand the impacts of wetland restoration on peak flow and baseflow by coupling hydrologic and hydrodynamic models

    Science.gov (United States)

    Gao, H.; Sabo, J. L.

    2016-12-01

    Wetlands, as the earth's kidneys, provide various ecosystem services, such as absorbing pollutants, purifying freshwater, providing habitats for diverse ecosystems, and sustaining species richness and biodiversity. From a hydrologic perspective, wetlands can store storm-flood water in flooding seasons and release it afterwards, which reduces flood peaks and reshapes the hydrograph. Therefore, as green infrastructure and natural capital, wetlands provide a competent alternative for managing water resources in a green way, with the potential to replace the widely criticized traditional gray infrastructure (i.e. dams and dikes) in certain cases. However, there are few systematic scientific tools to support our decision-making on site selection and to allow us to quantitatively investigate the impacts of restored wetlands on hydrological processes, not only at the local scale but also from the view of the entire catchment. In this study, we employed a topographic index, HAND (the Height Above the Nearest Drainage), to support our decisions on potential site selection. Subsequently, a hydrological model (VIC, Variable Infiltration Capacity) was coupled with a macro-scale hydrodynamic model (CaMa-Flood, Catchment-Based Macro-scale Floodplain) to simulate the impact of wetland restoration on flood peaks and baseflow. The results demonstrated that topographic information is an essential factor in selecting wetland restoration locations. Different reaches, wetland areas and the change of the roughness coefficient should be taken into account while evaluating the impacts of wetland restoration. The simulated results also clearly illustrated that wetland restoration will increase local storage and decrease the downstream peak flow, which is beneficial for flood prevention. However, its impact on baseflow is ambiguous. Theoretically, restored wetlands will increase the baseflow due to the slower release of the stored flood water, but the increase of wetland area may also increase the actual evaporation

  6. Achieving Peak Flow and Sediment Loading Reductions through Increased Water Storage in the Le Sueur Watershed, Minnesota: A Modeling Approach

    Science.gov (United States)

    Mitchell, N. A.; Gran, K. B.; Cho, S. J.; Dalzell, B. J.; Kumarasamy, K.

    2015-12-01

    A combination of factors including climate change, land clearing, and artificial drainage have increased many agricultural regions' stream flows and the rates at which channel banks and bluffs are eroded. Increasing erosion rates within the Minnesota River Basin have contributed to higher sediment-loading rates, excess turbidity levels, and increases in sedimentation rates in Lake Pepin further downstream. Water storage sites (e.g., wetlands) have been discussed as a means to address these issues. This study uses the Soil and Water Assessment Tool (SWAT) to assess a range of water retention site (WRS) implementation scenarios in the Le Sueur watershed in south-central Minnesota, a subwatershed of the Minnesota River Basin. Sediment loading from bluffs was assessed through an empirical relationship developed from gauging data. Sites were delineated as topographic depressions with specific land uses, minimum areas (3000 m2), and high compound topographic index values. Contributing areas for the WRS were manually measured and used with different site characteristics to create 210 initial WRS scenarios. A generalized relationship between WRS area and contributing area was identified from measurements, and this relationship was used with different site characteristics (e.g., depth, hydraulic conductivity (K), and placement) to create 225 generalized WRS scenarios. Reductions in peak flow volumes and sediment-loading rates are generally maximized by placing sites with high K values in the upper half of the watershed. High K values allow sites to lose more water through seepage, emptying their storages between precipitation events and preventing frequent overflowing. Reductions in peak flow volumes and sediment-loading rates also level off at high WRS extents due to the decreasing frequencies of high-magnitude events. The generalized WRS scenarios were also used to create a simplified empirical model capable of generating peak flows and sediment-loading rates from near

  7. Modeling an emissions peak in China around 2030: Synergies or trade-offs between economy, energy and climate security

    Directory of Open Access Journals (Sweden)

    Qi-Min Chai

    2014-12-01

    Full Text Available China has achieved a political consensus around the need to transform the path of economic growth toward one that lowers carbon intensity and ultimately leads to reductions in carbon emissions, but there remain different views on pathways that could achieve such a transformation. The essential question is whether radical or incremental reforms are required in the coming decades. This study explores relevant pathways in China beyond 2020, particularly modeling the major target choices for carbon emissions peaking in China around 2030, as in the China-US Joint Announcement, with an integrated assessment model for climate change (IAMC) based on carbon factor theory. Scenarios DGS-2020, LGS-2025, LBS-2030 and DBS-2040, derived from the historical pathways of developed countries, are developed to assess the comprehensive impacts on the economy, energy and climate security of greener development in China. The findings suggest that the period 2025-2030 is the window of opportunity to achieve a peak in carbon emissions at a level below 12 Gt CO2 and 8.5 t per capita, with reasonable trade-offs against economic growth: -0.2% annually on average, and a cumulative -3% deviation from BAU in 2030. The oil and natural gas import dependence will exceed 70% and 45% respectively, while the non-fossil energy and electricity shares will rise to above 20% and 45%. Meanwhile, the electrification level in end-use sectors will increase substantially, with the electricity-to-energy ratio approaching 50%; labor and capital productivity should double; the carbon intensity should drop by 65% by 2030 compared to the 2005 level; and the cumulative emission reductions are estimated to be more than 20 Gt CO2 over 2015-2030.

  8. The Effect of Salt Space on Clinical Findings and Peak Expiratory Flow in Children with Mild to Moderate Asthma: A Randomized Crossover Trial

    National Research Council Canada - National Science Library

    Saeideh Mazloomzadeh; Niousha Bakhshi; Akefeh Ahmadiafshar; Mehdi Gholami

    2017-01-01

    ..., thus exploring other therapeutic plans could be desirable. The aim of this study was to investigate the effect of salt space on clinical findings and peak expiratory flow rate among children with asthma...

  9. Applying Physically Representative Watershed Modelling to Assess Peak and Low Flow Response to Timber Harvest: Application for Watershed Assessments

    Science.gov (United States)

    MacDonald, R. J.; Anderson, A.; Silins, U.; Craig, J. R.

    2014-12-01

    Forest harvesting, insects, disease, wildfire, and other disturbances can combine with climate change to cause unknown changes to the amount and timing of streamflow from critical forested watersheds. Southern Alberta forest and alpine areas provide downstream water supply for agriculture and water utilities that supply approximately two thirds of the Alberta population. This project uses datasets from intensely monitored study watersheds and hydrological model platforms to extend our understanding of how disturbances and climate change may impact various aspects of the streamflow regime that are of importance to downstream users. The objectives are 1) to use the model output of watershed response to disturbances to inform assessments of forested watersheds in the region, and 2) to investigate the use of a new flexible modelling platform as a tool for detailed watershed assessments and hypothesis testing. Here we applied the RAVEN hydrological modelling framework to quantify changes in key hydrological processes driving peak and low flows in a headwater catchment along the eastern slopes of the Canadian Rocky Mountains. The model was applied to simulate the period from 2006 to 2011 using data from the Star Creek watershed in southwestern Alberta. The representation of relevant hydrological processes was verified using snow survey, meteorological, and vegetation data collected through the Southern Rockies Watershed Project. Timber harvest scenarios were developed to estimate the effects of cut levels ranging from 20 to 100% over a range of elevations, slopes, and aspects. We quantified changes in the timing and magnitude of low flow and high flow events during the 2006 to 2011 period. Future work will assess changes in the probability of low and high flow events using a long-term meteorological record. This modelling framework enables relevant processes at the watershed scale to be accounted in a physically robust and computational efficient manner. Hydrologic

  10. Numerical modeling of injection, stress and permeability enhancement during shear stimulation at the Desert Peak Enhanced Geothermal System

    Science.gov (United States)

    Dempsey, David; Kelkar, Sharad; Davatzes, Nick; Hickman, Stephen H.; Moos, Daniel

    2015-01-01

    Creation of an Enhanced Geothermal System relies on stimulation of fracture permeability through self-propping shear failure that creates a complex fracture network with high surface area for efficient heat transfer. In 2010, shear stimulation was carried out in well 27-15 at Desert Peak geothermal field, Nevada, by injecting cold water at pressure less than the minimum principal stress. An order-of-magnitude improvement in well injectivity was recorded. Here, we describe a numerical model that accounts for injection-induced stress changes and permeability enhancement during this stimulation. In a two-part study, we use the coupled thermo-hydrological-mechanical simulator FEHM to: (i) construct a wellbore model for non-steady bottom-hole temperature and pressure conditions during the injection, and (ii) apply these pressures and temperatures as a source term in a numerical model of the stimulation. In this model, a Mohr-Coulomb failure criterion and empirical fracture permeability is developed to describe permeability evolution of the fractured rock. The numerical model is calibrated using laboratory measurements of material properties on representative core samples and wellhead records of injection pressure and mass flow during the shear stimulation. The model captures both the absence of stimulation at low wellhead pressure (WHP ≤1.7 and ≤2.4 MPa) as well as the timing and magnitude of injectivity rise at medium WHP (3.1 MPa). Results indicate that thermoelastic effects near the wellbore and the associated non-local stresses further from the well combine to propagate a failure front away from the injection well. Elevated WHP promotes failure, increases the injection rate, and cools the wellbore; however, as the overpressure drops off with distance, thermal and non-local stresses play an ongoing role in promoting shear failure at increasing distance from the well.

  11. Force Limited Random Vibration Test of TESS Camera Mass Model

    Science.gov (United States)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

    2015-01-01

    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force limit vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Center (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process of how the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the Root Mean Square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
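
    A minimal sketch of a semi-empirical force limit of the kind described (the C2 value, load mass, break frequency, acceleration spec, and rolloff exponent are placeholders, not the TESS values):

```python
import numpy as np

def force_limit_spectrum(freq, s_aa, m0, c2, f0, rolloff=2.0):
    """Semi-empirical force limit: S_FF = C2 * M0**2 * S_AA up to the break
    frequency f0, rolling off as (f0/f)**rolloff above it. Units follow
    S_AA (e.g. N^2/Hz when S_AA is in (m/s^2)^2/Hz and m0 is in kg)."""
    s_ff = c2 * m0**2 * np.asarray(s_aa, dtype=float)
    above = freq > f0
    s_ff[above] *= (f0 / freq[above]) ** rolloff
    return s_ff

freq = np.array([20.0, 50.0, 100.0, 200.0, 400.0])
s_aa = np.full_like(freq, 0.04)   # flat acceleration spec, placeholder
print(force_limit_spectrum(freq, s_aa, m0=25.0, c2=4.0, f0=100.0))
```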

  12. Error of the modelled peak flow of the hydraulically reconstructed 1907 flood of the Ebro River in Xerta (NE Iberian Peninsula)

    Science.gov (United States)

    Ruiz-Bellet, Josep Lluís; Castelltort, Xavier; Balasch, J. Carles; Tuset, Jordi

    2016-04-01

    The estimation of the uncertainty of the results of hydraulic modelling has been deeply analysed, but no clear methodological procedures for its determination have been formulated when applied to historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to the peak flow total error. The uncertainty of the 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) in the town of Xerta (83,000 km2) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. In addition, in order to see to what degree the result depended on the chosen model, the HEC-RAS resulting peak flow was compared to the ones obtained with the 2D model Iber and with Manning's equation. The peak flow of the 1907 flood in the Ebro River in Xerta, reconstructed with HEC-RAS, was 11500 m3·s-1 and its total error was ±31%. The most influential input variable over the HEC-RAS peak flow results was water height; however, the one that contributed the most to peak flow error was Manning's n, because its uncertainty was far greater than water height's. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12000 m3·s-1 when calculated with the 2D model Iber and 11500 m3·s-1 when calculated with the Manning equation.
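
    For reference, a minimal sketch of the Manning calculation used as a cross-check above; the channel geometry and roughness below are illustrative, not the Xerta survey values:

```python
def manning_peak_flow(n, area, wetted_perimeter, slope):
    """Manning's equation, Q = (1/n) * A * R**(2/3) * S**(1/2),
    with A in m^2, hydraulic radius R = A/P in m, slope S dimensionless."""
    r = area / wetted_perimeter
    return (1.0 / n) * area * r ** (2.0 / 3.0) * slope ** 0.5

# Illustrative cross-section only (not the surveyed reach):
q = manning_peak_flow(n=0.035, area=2900.0, wetted_perimeter=320.0, slope=0.0004)
print(f"peak flow ~ {q:.0f} m3/s")
```

    Since Q scales as 1/n, a relative error in Manning's n propagates one-for-one into the peak flow estimate, consistent with n being the dominant contributor to the total error reported above.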

  13. A Note on the Correlated Random Coefficient Model

    DEFF Research Database (Denmark)

    Kolodziejczyk, Christophe

    In this note we derive the bias of the OLS estimator for a correlated random coefficient model with one random coefficient, which is correlated with a binary variable. We provide set-identification of the parameters of interest of the model. We also show how to reduce the bias of the estimator...

  14. The Analysis of Random Effects in Modeling Studies.

    Science.gov (United States)

    Scheirer, C. James; Geller, Sanford E.

    1979-01-01

    Argues that in research on the effects of modeling, models must be analyzed as a random factor in order to avoid a positive bias in the results. The concept of a random factor is discussed, worked examples are provided, and a practical solution to the problem is proposed. (JMB)

  15. Compensatory and non-compensatory multidimensional randomized item response models

    NARCIS (Netherlands)

    Fox, J.P.; Entink, R.K.; Avetisyan, M.

    2014-01-01

    Randomized response (RR) models are often used for analysing univariate randomized response data and measuring population prevalence of sensitive behaviours. There is much empirical support for the belief that RR methods improve the cooperation of the respondents. Recently, RR models have been

  16. A random energy model for size dependence : recurrence vs. transience

    NARCIS (Netherlands)

    Külske, Christof

    1998-01-01

    We investigate the size dependence of disordered spin models having an infinite number of Gibbs measures in the framework of a simplified 'random energy model for size dependence'. We introduce two versions (involving either independent random walks or branching processes), that can be seen as

  17. X-Ray Emitting GHz-Peaked Spectrum Galaxies: Testing a Dynamical-Radiative Model with Broad-Band Spectra

    Energy Technology Data Exchange (ETDEWEB)

    Ostorero, L.; /Turin U. /INFN, Turin; Moderski, R.; /Warsaw, Copernicus Astron. Ctr. /KIPAC, Menlo Park; Stawarz, L.; /KIPAC, Menlo Park /Jagiellonian U., Astron. Observ.; Diaferio, A.; /Turin U. /INFN, Turin; Kowalska, I.; /Warsaw U. Observ.; Cheung, C.C.; /NASA, Goddard /Naval Research Lab, Wash., D.C.; Kataoka, J.; /Waseda U., RISE; Begelman, M.C.; /JILA, Boulder; Wagner, S.J.; /Heidelberg Observ.

    2010-06-07

    In a dynamical-radiative model we recently developed to describe the physics of compact, GHz-Peaked-Spectrum (GPS) sources, the relativistic jets propagate across the inner, kpc-sized region of the host galaxy, while the electron population of the expanding lobes evolves and emits synchrotron and inverse-Compton (IC) radiation. Interstellar-medium gas clouds engulfed by the expanding lobes, and photoionized by the active nucleus, are responsible for the radio spectral turnover through free-free absorption (FFA) of the synchrotron photons. The model provides a description of the evolution of the GPS spectral energy distribution (SED) with the source expansion, predicting significant and complex high-energy emission, from the X-ray to the γ-ray frequency domain. Here, we test this model with the broad-band SEDs of a sample of eleven X-ray emitting GPS galaxies with Compact-Symmetric-Object (CSO) morphology, and show that: (i) the shape of the radio continuum at frequencies lower than the spectral turnover is indeed well accounted for by the FFA mechanism; (ii) the observed X-ray spectra can be interpreted as non-thermal radiation produced via IC scattering of the local radiation fields off the lobe particles, providing a viable alternative to the thermal, accretion-disk dominated scenario. We also show that the relation between the hydrogen column densities derived from the X-ray (N_H) and radio (N_HI) data of the sources is suggestive of a positive correlation, which, if confirmed by future observations, would provide further support to our scenario of high-energy emitting lobes.

  18. The Ising model on random lattices in arbitrary dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Bonzom, Valentin, E-mail: vbonzom@perimeterinstitute.ca [Perimeter Institute for Theoretical Physics, 31 Caroline St. N, ON N2L 2Y5, Waterloo (Canada); Gurau, Razvan, E-mail: rgurau@perimeterinstitute.ca [Perimeter Institute for Theoretical Physics, 31 Caroline St. N, ON N2L 2Y5, Waterloo (Canada); Rivasseau, Vincent, E-mail: vincent.rivasseau@gmail.com [Laboratoire de Physique Theorique, CNRS UMR 8627, Universite Paris XI, F-91405 Orsay Cedex (France)

    2012-05-01

    We study analytically the Ising model coupled to random lattices in dimension three and higher. The family of random lattices we use is generated by the large N limit of a colored tensor model generalizing the two-matrix model for Ising spins on random surfaces. We show that, in the continuum limit, the spin system does not exhibit a phase transition at finite temperature, in agreement with numerical investigations. Furthermore we outline a general method to study critical behavior in colored tensor models.

  19. Robot-assisted gait training improves brachial-ankle pulse wave velocity and peak aerobic capacity in subacute stroke patients with totally dependent ambulation: Randomized controlled trial.

    Science.gov (United States)

    Han, Eun Young; Im, Sang Hee; Kim, Bo Ryun; Seo, Min Ji; Kim, Myeong Ok

    2016-10-01

    Brachial-ankle pulse wave velocity (baPWV) evaluates arterial stiffness and also predicts early outcome in stroke patients. The objectives of this study were to investigate arterial stiffness of subacute nonfunctional ambulatory stroke patients and to compare the effects of robot-assisted gait therapy (RAGT) combined with rehabilitation therapy (RT) on arterial stiffness and functional recovery with those of RT alone. The RAGT group (N = 30) received 30 minutes of robot-assisted gait therapy and 30 minutes of conventional RT, and the control group (N = 26) received 60 minutes of RT, 5 times a week for 4 weeks. baPWV was measured and calculated using an automated device. The patients also performed a symptom-limited graded exercise stress test using a bicycle ergometer, and parameters of cardiopulmonary fitness were recorded. Clinical outcome measures were categorized into 4 categories: activities of daily living, balance, ambulatory function, and paretic leg motor function and were evaluated before and after the 4-week intervention. Both groups exhibited significant functional recovery in all clinical outcome measures after the 4-week intervention. However, peak aerobic capacity, peak heart rate, exercise tolerance test duration, and baPWV improved only in the RAGT group, and the improvements in baPWV and peak aerobic capacity were more noticeable in the RAGT group than in the control group. Robot-assisted gait therapy combined with conventional rehabilitation therapy represents an effective method for reversing arterial stiffness and improving peak aerobic capacity in subacute stroke patients with totally dependent ambulation. However, further large-scale studies with longer term follow-up periods are warranted to measure the effects of RAGT on secondary prevention after stroke.

  20. Modeling Gene Regulation in Liver Hepatocellular Carcinoma with Random Forests

    National Research Council Canada - National Science Library

    Hilal Kazan

    2016-01-01

    .... We developed a random forest model that incorporates copy-number variation, DNA methylation, transcription factor, and microRNA binding information as features to predict gene expression in HCC...

  1. Superstatistical analysis and modelling of heterogeneous random walks

    Science.gov (United States)

    Metzner, Claus; Mark, Christoph; Steinwachs, Julian; Lautscham, Lena; Stadler, Franz; Fabry, Ben

    2015-06-01

    Stochastic time series are ubiquitous in nature. In particular, random walks with time-varying statistical properties are found in many scientific disciplines. Here we present a superstatistical approach to analyse and model such heterogeneous random walks. The time-dependent statistical parameters can be extracted from measured random walk trajectories with a Bayesian method of sequential inference. The distributions and correlations of these parameters reveal subtle features of the random process that are not captured by conventional measures, such as the mean-squared displacement or the step width distribution. We apply our new approach to migration trajectories of tumour cells in two and three dimensions, and demonstrate the superior ability of the superstatistical method to discriminate cell migration strategies in different environments. Finally, we show how the resulting insights can be used to design simple and meaningful models of the underlying random processes.
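
    A minimal sketch of a heterogeneous random walk with a switching step width, with a sliding-window estimate standing in for the record's sequential Bayesian inference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Heterogeneous walk: the step width sigma(t) switches between two "states".
sigma = np.where(np.arange(10_000) < 5_000, 0.5, 2.0)
trajectory = np.cumsum(rng.normal(0.0, sigma))

# Crude stand-in for sequential Bayesian inference: windowed MLE of sigma.
window = 200
increments = np.diff(trajectory)
local_sigma = np.array([increments[i:i + window].std()
                        for i in range(0, increments.size - window, window)])
print(local_sigma[:5], local_sigma[-5:])  # ~0.5 early, ~2.0 late
```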

  2. PEAK SHAVING CONSIDERING STREAMFLOW UNCERTAINTIES

    African Journals Online (AJOL)

    user

    The main thrust of this paper is peak shaving with a stochastic hydro model. In peak shaving, the amount of hydro energy scheduled may be a minimum, but it serves to replace less efficient thermal units. The sample system is the Kainji ....

  3. Random-Field Model of a Cooper Pair Insulator

    Science.gov (United States)

    Proctor, Thomas; Chudnovsky, Eugene; Garanin, Dmitry

    2013-03-01

    The model of a disordered superconducting film with quantum phase fluctuations is mapped on a random-field XY spin model in 2+1 dimensions. Analytical studies within continuum field theory, supported by our recent numerical calculations on discrete lattices, show the onset of the low-temperature Cooper pair insulator phase. The constant external field in the random-field spin model maps on the Josephson coupling between the disordered film and a bulk superconductor. Such a coupling, if sufficiently strong, restores superconductivity in the film. This provides an experimental test for the quantum fluctuation model of a superinsulator.

  4. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce the collaborative filtering and interdisciplinary physics approaches, and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of random walk in bipartite networks, and propose a possible strategy of using the hybrid degree information for objects of different popularity to achieve promising recommendation precision. - Highlights: • We propose a generalized recommendation model employing random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves the precision of recommendation.
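
    A minimal sketch of the classical two-step random walk (probabilistic spreading) that such a generalized model reduces to when the degree exponents are fixed; the toy user-item matrix is invented:

```python
import numpy as np

def mass_diffusion_scores(A, user):
    """Two-step random walk (probabilistic spreading) on a user-item
    bipartite network. A is a binary users x items matrix. Returns item
    scores for `user`; already-collected items can be masked afterwards."""
    k_items = A.sum(axis=0)          # item degrees
    k_users = A.sum(axis=1)          # user degrees
    f0 = A[user].astype(float)       # unit resource on collected items
    to_users = A @ (f0 / np.maximum(k_items, 1))        # items -> users
    return A.T @ (to_users / np.maximum(k_users, 1))    # users -> items

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]])
print(mass_diffusion_scores(A, user=0))
```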

  5. Model C critical dynamics of random anisotropy magnets

    Energy Technology Data Exchange (ETDEWEB)

    Dudka, M [Institute for Condensed Matter Physics, National Acad. Sci. of Ukraine, UA-79011 Lviv (Ukraine); Folk, R [Institut fuer Theoretische Physik, Johannes Kepler Universitaet Linz, A-4040 Linz (Austria); Holovatch, Yu [Institute for Condensed Matter Physics, National Acad. Sci. of Ukraine, UA-79011 Lviv (Ukraine); Moser, G [Institut fuer Physik und Biophysik, Universitaet Salzburg, A-5020 Salzburg (Austria)

    2007-07-20

    We study the relaxational critical dynamics of the three-dimensional random anisotropy magnets with the non-conserved n-component order parameter coupled to a conserved scalar density. In the random anisotropy magnets, the structural disorder is present in the form of local quenched anisotropy axes of random orientation. When the anisotropy axes are randomly distributed along the edges of the n-dimensional hypercube, asymptotical dynamical critical properties coincide with those of the random-site Ising model. However the structural disorder gives rise to considerable effects for non-asymptotic critical dynamics. We investigate this phenomenon by a field-theoretical renormalization group analysis in the two-loop order. We study critical slowing down and obtain quantitative estimates for the effective and asymptotic critical exponents of the order parameter and scalar density. The results predict complex scenarios for the effective critical exponent approaching the asymptotic regime.

  6. Random matrix model for disordered conductors

    Indian Academy of Sciences (India)

    1. Introduction. Matrix models are being successfully employed in a variety of domains of physics including studies on heavy nuclei [1], mesoscopic disordered conductors [2,3], two-dimensional quantum gravity [4], and chaotic quantum systems [5]. Universal conductance fluctuations in metals [6] and spectral fluctuations in ...

  8. Single-cluster dynamics for the random-cluster model

    NARCIS (Netherlands)

    Deng, Y.; Qian, X.; Blöte, H.W.J.

    2009-01-01

    We formulate a single-cluster Monte Carlo algorithm for the simulation of the random-cluster model. This algorithm is a generalization of the Wolff single-cluster method for the q-state Potts model to noninteger values q>1. Its results for static quantities are in a satisfactory agreement with those

  9. Simulating intrafraction prostate motion with a random walk model

    Directory of Open Access Journals (Sweden)

    Tobias Pommer, PhD

    2017-07-01

    Conclusions: Random walk modeling is feasible and recreated the characteristics of the observed prostate motion. Introducing artificial transient motion did not improve the overall agreement, although the first 30 seconds of the traces were better reproduced. The model provides a simple estimate of prostate motion during delivery of radiation therapy.

  10. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result ...

  11. Application of Poisson random effect models for highway network screening.

    Science.gov (United States)

    Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer

    2014-02-01

    In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson Log-Normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of the crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayesian (EB) method and the conventional Bayesian Poisson Log-Normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and the modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson Log-Normal model (PLN) and the EB model in the fitting of crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. A novel algorithm for calling mRNA m6A peaks by modeling biological variances in MeRIP-seq data.

    Science.gov (United States)

    Cui, Xiaodong; Meng, Jia; Zhang, Shaowu; Chen, Yidong; Huang, Yufei

    2016-06-15

    N(6)-methyl-adenosine (m(6)A) is the most prevalent mRNA methylation, and precise prediction of its mRNA location is important for understanding its function. A recent sequencing technology, known as Methylated RNA Immunoprecipitation Sequencing (MeRIP-seq), has been developed for transcriptome-wide profiling of m(6)A. We previously developed a peak calling algorithm called exomePeak. However, exomePeak over-simplifies data characteristics and ignores the reads' variances among replicates and the reads' dependency across a site region. To further improve the performance, a new model is needed to address these important issues of MeRIP-seq data. We propose a novel, graphical model-based peak calling method, MeTPeak, for transcriptome-wide detection of m(6)A sites from MeRIP-seq data. MeTPeak explicitly models the read count of an m(6)A site and introduces a hierarchical layer of Beta variables to capture the variances and a Hidden Markov model to characterize the reads' dependency across a site. In addition, we developed a constrained Newton's method and designed a log-barrier function to compute analytically intractable, positively constrained Beta parameters. We applied our algorithm to simulated and real biological datasets and demonstrated significant improvement in detection performance and robustness over exomePeak. Prediction results on publicly available MeRIP-seq datasets are also validated and shown to be able to recapitulate the known patterns of m(6)A, further validating the improved performance of MeTPeak. The package 'MeTPeak' is implemented in R and C++, and additional details are available at https://github.com/compgenomics/MeTPeak. Contact: yufei.huang@utsa.edu or xdchoi@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. Random Multi-Hopper Model. Super-Fast Random Walks on Graphs

    OpenAIRE

    Estrada, Ernesto; Delvenne, Jean-Charles; Hatano, Naomichi; Mateos, José L.; Metzler, Ralf; Riascos, Alejandro P. (Universidad Mariana, Pasto, Colombia); Schaub, Michael T.

    2016-01-01

    We develop a model for a random walker with long-range hops on general graphs. This random multi-hopper jumps from a node to any other node in the graph with a probability that decays as a function of the shortest-path distance between the two nodes. We consider here two decaying functions in the form of the Laplace and Mellin transforms of the shortest-path distances. Remarkably, when the parameters of these transforms approach zero asymptotically, the multi-hopper's hitting times between an...
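
    A minimal sketch of the Laplace-transform variant, where hop probabilities decay as exp(-lam * d_ij) in the shortest-path distance; the Mellin variant would use d_ij**(-s) instead. The four-node path graph and the value of lam are illustrative:

```python
import numpy as np

def shortest_path_distances(adj):
    """All-pairs shortest-path distances via Floyd-Warshall."""
    n = adj.shape[0]
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def multi_hopper_matrix(adj, lam=1.0):
    """Transition matrix with P(i -> j) proportional to exp(-lam * d_ij)."""
    w = np.exp(-lam * shortest_path_distances(adj))
    np.fill_diagonal(w, 0.0)   # no self-hops
    return w / w.sum(axis=1, keepdims=True)

# Path graph on 4 nodes: for small lam, long-range hops become likely.
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(multi_hopper_matrix(adj, lam=0.1).round(3))
```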

  14. Semiparametric Bayesian Estimation of Random Coefficients Discrete Choice Models

    OpenAIRE

    Tchumtchoua, Sylvie; Dey, Dipak

    2007-01-01

    Heterogeneity in choice models is typically assumed to have a normal distribution in both Bayesian and classical setups. In this paper, we propose a semiparametric Bayesian framework for the analysis of random coefficients discrete choice models that can be applied to both individual as well as aggregate data. Heterogeneity is modeled using a Dirichlet process prior which varies with consumers' characteristics through covariates. We develop a Markov chain Monte Carlo algorithm for fitting such...

  15. RANDOM CLOSED SET MODELS: ESTIMATING AND SIMULATING BINARY IMAGES

    Directory of Open Access Journals (Sweden)

    Ángeles M Gallego

    2011-05-01

    In this paper we show the use of the Boolean model, and of a class of RACS models that generalizes it, to obtain simulations of random binary images able to imitate natural textures such as marble or wood. The different tasks required (parameter estimation, goodness-of-fit testing, and simulation) are reviewed. In addition to a brief review of the theory, simulation studies of each model are included.
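
    A minimal sketch of simulating a Boolean model with Poisson-distributed germs and fixed-radius disk grains (the intensity and radius are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def boolean_model(size=256, intensity=3e-4, radius=12):
    """Boolean model: Poisson(intensity * area) germ points, each dilated
    by a disk of fixed radius; the union is rasterized to a binary image."""
    n_germs = rng.poisson(intensity * size * size)
    centers = rng.uniform(0, size, size=(n_germs, 2))
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.zeros((size, size), dtype=bool)
    for cy, cx in centers:
        image |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return image

img = boolean_model()
print("area fraction:", img.mean())  # ~ 1 - exp(-intensity * pi * radius**2)
```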

  16. Effects of random noise in a dynamical model of love

    Energy Technology Data Exchange (ETDEWEB)

    Xu Yong, E-mail: hsux3@nwpu.edu.cn [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China); Gu Rencai; Zhang Huiqing [Department of Applied Mathematics, Northwestern Polytechnical University, Xi' an 710072 (China)

    2011-07-15

    Highlights: > We model the complexity and unpredictability of psychology as Gaussian white noise. > The stochastic system of love is considered, including bifurcation and chaos. > We show that noise can both suppress and induce chaos in dynamical models of love. - Abstract: This paper aims to investigate the stochastic model of love and the effects of random noise. We first revisit the deterministic model of love, and some basic properties are presented, such as symmetry, dissipation, fixed points (equilibria), chaotic behaviors and chaotic attractors. Then we construct a stochastic love-triangle model with parametric random excitation due to the complexity and unpredictability of the psychological system, where the randomness is modeled as standard Gaussian noise. Stochastic dynamics under three different cases of 'Romeo's romantic style' are examined, and two kinds of bifurcations versus the noise intensity parameter are observed by the criteria of changes of the top Lyapunov exponent and of the shape of the stationary probability density function (PDF), respectively. Phase portraits and time histories are carried out to verify the proposed results, and good agreement is found. The dual roles of the random noise, namely suppressing and inducing chaos, are thereby revealed.
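
    A minimal sketch in the same spirit, assuming a simplified two-person linear love model with additive Gaussian white noise integrated by Euler-Maruyama (the record's model is a love triangle with parametric excitation, so this is a reduced illustration; all coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# R: Romeo's feelings, J: Juliet's; dR = (a R + b J) dt + noise dW.
a, b, c, d = -0.1, 0.8, -0.7, -0.1   # "romantic style" parameters (assumed)
noise, dt, n = 0.3, 0.01, 20_000

R = np.zeros(n); J = np.zeros(n)
R[0] = 1.0
for t in range(n - 1):
    dW = rng.normal(0.0, np.sqrt(dt))            # Brownian increment
    R[t + 1] = R[t] + (a * R[t] + b * J[t]) * dt + noise * dW
    J[t + 1] = J[t] + (c * R[t] + d * J[t]) * dt
print(R[-3:].round(3), J[-3:].round(3))          # noise-sustained oscillations
```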

  17. Peak flow meter (image)

    Science.gov (United States)

    A peak flow meter is commonly used by a person with asthma to measure the amount of air that can be ... become narrow or blocked due to asthma, peak flow values will drop because the person cannot blow ...

  18. Positive random fields for modeling material stiffness and compliance

    DEFF Research Database (Denmark)

    Hasofer, Abraham Michael; Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob

    1998-01-01

    Positive random fields with known marginal properties and known correlation function are not numerous in the literature. The most prominent example is the lognormal field, for which the complete distribution is known and for which the reciprocal field is also lognormal. It is of interest to supplement the catalogue of positive fields beyond the class of those obtained by simple marginal transformation of a Gaussian field, this class containing the lognormal field. As a minimum for a random field to be included in the catalogue it is required that an algorithm for simulation of realizations can be given, with material properties modeled in terms of the considered random fields. The paper adds the gamma field, the Fisher field, the beta field, and their reciprocal fields to the catalogue. These fields are all defined on the basis of sums of squares of independent standard Gaussian random variables. All the existing ...
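
    A minimal sketch of the construction named in the record: summing the squares of nu independent, unit-variance Gaussian fields yields a positive field with chi-square (gamma-type) marginals. The FFT smoothing used here to correlate the Gaussian fields is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_field(size=128, corr_len=10.0):
    """Stationary Gaussian field: white noise smoothed in Fourier space,
    normalized to unit variance."""
    white = rng.normal(size=(size, size))
    k = np.fft.fftfreq(size)
    kx, ky = np.meshgrid(k, k)
    kernel = np.exp(-(kx**2 + ky**2) * corr_len**2)
    field = np.fft.ifft2(np.fft.fft2(white) * kernel).real
    return field / field.std()

# Sum of squares of nu independent standard Gaussian fields:
# pointwise chi-square(nu) marginals, i.e. a gamma-distributed positive field.
nu = 4
field = sum(gaussian_field() ** 2 for _ in range(nu))
print(field.min() > 0, field.mean())  # positive everywhere, mean ~ nu
```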

  19. Using Random Forest Models to Predict Organizational Violence

    Science.gov (United States)

    Levine, Burton; Bobashev, Georgly

    2012-01-01

    We present a methodology to assess the proclivity of an organization to commit violence against nongovernment personnel. We fitted a Random Forest model using the Minorities at Risk Organizational Behavior (MAROB) dataset. The MAROB data is longitudinal, so individual observations are not independent. We propose a modification to the standard Random Forest methodology to account for the violation of the independence assumption. We present the results of the model fit, an example of predicting violence for an organization, and finally a summary of the forest in a "meta-tree."

  20. Are Discrepancies in RANS Modeled Reynolds Stresses Random?

    CERN Document Server

    Xiao, Heng; Wang, Jian-xun; Paterson, Eric G

    2016-01-01

    In the turbulence modeling community, significant efforts have been made to quantify the uncertainties in the Reynolds-Averaged Navier-Stokes (RANS) models and to improve their predictive capabilities. Of crucial importance in these efforts is the understanding of the discrepancies in the RANS modeled Reynolds stresses. However, to what extent these discrepancies can be predicted or whether they are completely random remains a fundamental open question. In this work we used a machine learning algorithm based on random forest regression to predict the discrepancies. The success of the regression-prediction procedure indicates that, to a large extent, the discrepancies in the modeled Reynolds stresses can be explained by the mean flow feature, and thus they are universal quantities that can be extrapolated from one flow to another, at least among different flows sharing the same characteristics such as separation. This finding has profound implications to the future development of RANS models, opening up new ...
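
    A minimal sketch of the regression-prediction setup with scikit-learn's random forest; the features and response here are synthetic stand-ins for mean-flow features and Reynolds-stress discrepancies, not data from the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

# Train on one "flow", predict on another: mean-flow features (e.g. strain
# rate, pressure gradient) -> modeled-stress discrepancy (assumed response).
X_train = rng.normal(size=(2000, 5))
y_train = np.tanh(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2
X_new = rng.normal(size=(200, 5))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
discrepancy = model.predict(X_new)   # extrapolated discrepancy estimates
print(discrepancy[:5].round(3))
```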

  1. Buffalos milk yield analysis using random regression models

    Directory of Open Access Journals (Sweden)

    A.S. Schierholt

    2010-02-01

    Data comprising 1,719 milk yield records from 357 females (predominantly Murrah breed), daughters of 110 sires, with births from 1974 to 2004, obtained from the Programa de Melhoramento Genético de Bubalinos (PROMEBUL) and from records of the EMBRAPA Amazônia Oriental - EAO herd, located in Belém, Pará, Brazil, were used to compare random regression models for estimating variance components and predicting breeding values of the sires. The data were analyzed by different models using Legendre polynomial functions from second to fourth orders. The random regression models included the effects of herd-year, month of parity, and date of the control; regression coefficients for age of females (in order to describe the fixed part of the lactation curve); and random regression coefficients related to the direct genetic and permanent environment effects. The comparisons among the models were based on the Akaike Information Criterion. The random regression model using third-order Legendre polynomials with four classes of the environmental effect was the one that best described the additive genetic variation in milk yield. The heritability estimates varied from 0.08 to 0.40. The genetic correlation between milk yields at younger ages was close to unity, but at older ages it was low.

  2. Application of Random-Effects Probit Regression Models.

    Science.gov (United States)

    Gibbons, Robert D.; Hedeker, Donald

    1994-01-01

    Develops random-effects probit model for case in which outcome of interest is series of correlated binary responses, obtained as product of longitudinal response process where individual is repeatedly classified on binary outcome variable or in multilevel or clustered problems in which individuals within groups are considered to share…

  3. Asthma Self-Management Model: Randomized Controlled Trial

    Science.gov (United States)

    Olivera, Carolina M. X.; Vianna, Elcio Oliveira; Bonizio, Roni C.; de Menezes, Marcelo B.; Ferraz, Erica; Cetlin, Andrea A.; Valdevite, Laura M.; Almeida, Gustavo A.; Araujo, Ana S.; Simoneti, Christian S.; de Freitas, Amanda; Lizzi, Elisangela A.; Borges, Marcos C.; de Freitas, Osvaldo

    2016-01-01

    Information for patients provided by the pharmacist is reflected in adhesion to treatment, clinical results and patient quality of life. The objective of this study was to assess an asthma self-management model for rational medicine use. This was a randomized controlled trial with 60 asthmatic patients assigned to attend five modules presented by…

  4. First principles modeling of magnetic random access memory devices (invited)

    Energy Technology Data Exchange (ETDEWEB)

    Butler, W.H.; Zhang, X.; Schulthess, T.C.; Nicholson, D.M.; Oparin, A.B. [Metals and Ceramics Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); MacLaren, J.M. [Department of Physics, Tulane University, New Orleans, Louisiana 70018 (United States)

    1999-04-01

    Giant magnetoresistance (GMR) and spin-dependent tunneling may be used to make magnetic random access memory devices. We have applied first-principles based electronic structure techniques to understand these effects and in the case of GMR to model the transport properties of the devices. © 1999 American Institute of Physics.

  5. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  6. Scale-free random graphs and Potts model

    Indian Academy of Sciences (India)

    real-world networks such as the world-wide web, the Internet, coauthorship networks, protein interaction networks and so on display power-law behaviors in the degree ... in this paper, we study the evolution of SF random graphs from the perspective of equilibrium statistical physics. The formulation in terms of the spin model ...

  7. Modeling fiber type grouping by a binary Markov random field

    NARCIS (Netherlands)

    Venema, H. W.

    1992-01-01

    A new approach to the quantification of fiber type grouping is presented, in which the distribution of histochemical type in a muscle cross section is regarded as a realization of a binary Markov random field (BMRF). Methods for the estimation of the parameters of this model are discussed. The first
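
    A minimal sketch of sampling an Ising-type binary Markov random field on a 4-neighbour lattice by Gibbs sweeps; the coupling beta (grouping strength) and external field (overall prevalence of one fiber type) are illustrative knobs, not estimates from muscle data:

```python
import numpy as np

rng = np.random.default_rng(8)

def gibbs_bmrf(size=64, beta=0.8, field=0.0, sweeps=50):
    """Gibbs sampler for a binary MRF with states -1/+1 on a torus.
    The conditional is P(x=+1 | neighbours) = 1/(1 + exp(-2(beta*nb + field)))."""
    x = rng.choice([-1, 1], size=(size, size))
    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                nb = (x[(i - 1) % size, j] + x[(i + 1) % size, j]
                      + x[i, (j - 1) % size] + x[i, (j + 1) % size])
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * (beta * nb + field)))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x

x = gibbs_bmrf()
print("fraction of +1 sites:", (x == 1).mean())  # spatially clustered types
```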

  8. Quantum random oracle model for quantum digital signature

    Science.gov (United States)

    Shang, Tao; Lei, Qi; Liu, Jianwei

    2016-10-01

    The goal of this work is to provide a general security analysis tool, namely, the quantum random oracle (QRO), for facilitating the security analysis of quantum cryptographic protocols, especially protocols based on quantum one-way function. QRO is used to model quantum one-way function and different queries to QRO are used to model quantum attacks. A typical application of quantum one-way function is the quantum digital signature, whose progress has been hampered by the slow pace of the experimental realization. Alternatively, we use the QRO model to analyze the provable security of a quantum digital signature scheme and elaborate the analysis procedure. The QRO model differs from the prior quantum-accessible random oracle in that it can output quantum states as public keys and give responses to different queries. This tool can be a test bed for the cryptanalysis of more quantum cryptographic protocols based on the quantum one-way function.

  9. Investigating flow properties of partially cemented fractures in Travis Peak Formation using image-based pore-scale modeling

    Science.gov (United States)

    Tokan-Lawal, Adenike; Prodanović, Maša.; Eichhubl, Peter

    2015-08-01

    Natural fractures can provide preferred flow pathways in otherwise low-permeability reservoirs. In deep subsurface reservoirs including tight oil and gas reservoirs, as well as in hydrothermal systems, fractures are frequently lined or completely filled with mineral cement that reduces or occludes fracture porosity and permeability. Fracture cement linings potentially reduce flow connectivity between the fracture and host rock and increase fracture wall roughness, which constricts flow. We combined image-based fracture space characterization, mercury injection capillary pressure and permeability experiments, and numerical simulations to evaluate the influence of fracture-lining cement on single-phase and multiphase flows along a natural fracture from the Travis Peak Formation, a tight gas reservoir sandstone in East Texas. Using X-ray computed microtomographic image analysis, we characterized fracture geometry and the connectivity and geometric tortuosity of the fracture pore space. Combining level set method-based progressive quasistatic and lattice Boltzmann simulations, we assessed the capillary-dominated displacement properties and the (relative) permeability of a cement-lined fracture. Published empirical correlations between aperture and permeability for barren fractures provide permeability estimates that vary among each other, and differ from our results, by several orders of magnitude. Compared to barren fractures, cement increases the geometric tortuosity, aperture variation of the pore space, and capillary pressure while reducing the single-phase permeability by up to 2 orders of magnitude. For multiphase displacement, relative permeability and fluid entrapment geometry resemble those of porous media and differ from those characteristic of barren fractures.

  10. Evolution of the concentration PDF in random environments modeled by global random walk

    Science.gov (United States)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and
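
    A minimal one-dimensional sketch of the global random walk idea: all particles occupying a lattice site are scattered in a single multinomial draw consistent with a weak Euler scheme for drift v and diffusion D; all parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

L, steps = 200, 400
dx, dt, v, D = 1.0, 0.1, 0.5, 0.5
p_right = D * dt / dx**2 + 0.5 * v * dt / dx   # jump probabilities of the
p_left = D * dt / dx**2 - 0.5 * v * dt / dx    # unbiased weak Euler scheme
p_stay = 1.0 - p_right - p_left

n = np.zeros(L, dtype=np.int64)
n[L // 4] = 1_000_000                # all particles start at one site
for _ in range(steps):
    new = np.zeros_like(n)
    for i in np.nonzero(n)[0]:       # one draw per occupied site,
        stay, right, left = rng.multinomial(n[i], [p_stay, p_right, p_left])
        new[i] += stay               # not one draw per particle
        new[min(i + 1, L - 1)] += right
        new[max(i - 1, 0)] += left
    n = new
print("PDF estimate peaks near site", n.argmax())  # drifted from L/4 at speed v
```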

  11. A monoecious and diploid Moran model of random mating.

    Science.gov (United States)

    Hössjer, Ola; Tyvand, Peder A

    2016-04-07

    An exact Markov chain is developed for a Moran model of random mating for monoecious diploid individuals with a given probability of self-fertilization. The model captures the dynamics of genetic variation at a biallelic locus. We compare the model with the corresponding diploid Wright-Fisher (WF) model. We also develop a novel diffusion approximation of both models, where the genotype frequency distribution dynamics is described by two partial differential equations, on different time scales. The first equation captures the more slowly varying allele frequencies, and it is the same for the Moran and WF models. The other equation captures departures of the fraction of heterozygous genotypes from a large population equilibrium curve that equals Hardy-Weinberg proportions in the absence of selfing. It is the distribution of a continuous time Ornstein-Uhlenbeck process for the Moran model and a discrete time autoregressive process for the WF model. One application of our results is to capture dynamics of the degree of non-random mating of both models, in terms of the fixation index fIS. Although fIS has a stable fixed point that only depends on the degree of selfing, the normally distributed oscillations around this fixed point are stochastically larger for the Moran than for the WF model. Copyright © 2016 Elsevier Ltd. All rights reserved.
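
    A minimal sketch, assuming a biallelic diploid Moran model in which the replacement offspring is produced by selfing with probability s; the population size, selfing probability, and number of events are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)

def moran_selfing(N=200, s=0.2, events=20_000):
    """Biallelic diploid Moran model with selfing probability s.
    Genotypes are coded by the count of 'A' alleles (0, 1 or 2).
    Returns the fixation index f_IS = 1 - H_obs / H_exp at the end."""
    geno = rng.integers(0, 3, size=N)       # random initial genotypes
    def gamete(g):                          # one allele drawn from a parent
        return rng.random() < g / 2.0
    for _ in range(events):
        dead = rng.integers(N)
        p1 = rng.integers(N)
        p2 = p1 if rng.random() < s else rng.integers(N)
        geno[dead] = int(gamete(geno[p1])) + int(gamete(geno[p2]))
    p = geno.mean() / 2.0
    h_obs = np.mean(geno == 1)
    h_exp = 2.0 * p * (1.0 - p)
    return 1.0 - h_obs / h_exp if h_exp > 0 else np.nan

print(moran_selfing())  # fluctuates around ~ s / (2 - s) for large N
```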

  12. Why is the diffraction peak a peak?

    CERN Document Server

    Cornille, H

    1975-01-01

    It is proved that the high-energy differential cross section for an elastic process has a maximum exactly in the forward direction and that the slope of the diffraction peak is at most (log s)^2. The widths of the diffraction peaks defined by the absorptive part and by the differential cross section are compared. The assumptions are that the amplitude is dominated by the even-signature amplitude and that the total cross section, if it decreases, decreases less fast than s^(-1/2). Strictly speaking, the results hold only for a sequence of energies approaching infinity. The proofs are given for the spin-0-spin-0 case, but it is not unreasonable to hope that they can be generalized to arbitrary spins. (13 refs).

  13. Global model of the F2 layer peak height for low solar activity based on GPS radio-occultation data

    Science.gov (United States)

    Shubin, V. N.; Karpachev, A. T.; Tsybulya, K. G.

    2013-11-01

    We propose a global median model SMF2 (Satellite Model of the F2 layer) of the ionospheric F2-layer height maximum (hmF2), based on GPS radio-occultation data for low solar activity periods (F10.7A ...). Ground-based ionospheric sounding data were used for comparison and validation. The spatial dependence of hmF2 is modeled by a Legendre-function expansion. The temporal dependence, as a function of Universal Time (UT), is described by a Fourier expansion. Inputs of the model are: geographical coordinates, month, and the F10.7A solar activity index. The model is designed for quiet geomagnetic conditions (Kp = 1-2), typical for low solar activity. SMF2 agrees well with the International Reference Ionosphere model (IRI) in those regions where the ground-based ionosonde network is dense. The maximal difference between the models is found in the equatorial belt, over the oceans, and at the polar caps. Standard deviations of the radio-occultation and Digisonde data from the predicted SMF2 median are 10-16 km for all seasons, against 13-29 km for IRI-2012. Average relative deviations are 3-4 times less than for IRI, 3-4% against 9-12%. Therefore, the proposed hmF2 model is more accurate than IRI-2012.

  14. Random matrices as models for the statistics of quantum mechanics

    Science.gov (United States)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit.
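
    A minimal sketch of this setting: draw a GUE matrix, diagonalize it, and evaluate the time autocorrelation of a dynamical variable in the eigenbasis (the observable, matrix size, and time grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(9)

n = 200
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (a + a.conj().T) / 2.0                 # GUE matrix
evals, vecs = np.linalg.eigh(H)

A = np.diag(rng.normal(size=n))            # arbitrary observable (assumed)
A_eig = vecs.conj().T @ A @ vecs           # observable in the eigenbasis
gaps = evals[:, None] - evals[None, :]     # eigenvalue differences E_m - E_n

def autocorr(t):
    """C(t) = Tr(A(t) A) / n for the unitary evolution U(t) = exp(-iHt)."""
    return (np.abs(A_eig) ** 2 * np.cos(gaps * t)).sum() / n

print([round(autocorr(t), 3) for t in (0.0, 0.05, 0.1, 0.2, 0.5)])  # decay
```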

  15. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  16. Two-dimensional hydrodynamic modeling to quantify effects of peak-flow management on channel morphology and salmon-spawning habitat in the Cedar River, Washington

    Science.gov (United States)

    Czuba, Christiana; Czuba, Jonathan A.; Gendaszek, Andrew S.; Magirl, Christopher S.

    2010-01-01

    The Cedar River in Washington State originates on the western slope of the Cascade Range and provides the City of Seattle with most of its drinking water, while also supporting a productive salmon habitat. Water-resource managers require detailed information on how best to manage high-flow releases from Chester Morse Lake, a large reservoir on the Cedar River, during periods of heavy precipitation to minimize flooding, while mitigating negative effects on fish populations. Instream flow-management practices include provisions for adaptive management to promote and maintain healthy aquatic habitat in the river system. The current study is designed to understand the linkages between peak flow characteristics, geomorphic processes, riverine habitat, and biological responses. Specifically, two-dimensional hydrodynamic modeling is used to simulate and quantify the effects of the peak-flow magnitude, duration, and frequency on the channel morphology and salmon-spawning habitat. Two study reaches, representative of the typical geomorphic and ecologic characteristics of the Cedar River, were selected for the modeling. Detailed bathymetric data, collected with a real-time kinematic global positioning system and an acoustic Doppler current profiler, were combined with a LiDAR-derived digital elevation model in the overbank area to develop a computational mesh. The model is used to simulate water velocity, benthic shear stress, flood inundation, and morphologic changes in the gravel-bedded river under the current and alternative flood-release strategies. Simulations of morphologic change and salmon-redd scour by floods of differing magnitude and duration enable water-resource managers to incorporate model simulation results into adaptive management of peak flows in the Cedar River.

  17. An initial abstraction and constant loss model, and methods for estimating unit hydrographs, peak streamflows, and flood volumes for urban basins in Missouri

    Science.gov (United States)

    Huizinga, Richard J.

    2014-01-01

    Streamflow data, basin characteristics, and rainfall data from 39 streamflow-gaging stations for urban areas in and adjacent to Missouri were used by the U.S. Geological Survey in cooperation with the Metropolitan Sewer District of St. Louis to develop an initial abstraction and constant loss model (a time-distributed basin-loss model) and a gamma unit hydrograph (GUH) for urban areas in Missouri. Study-specific methods to determine peak streamflow and flood volume for a given rainfall event also were developed.
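
    A minimal sketch of an initial abstraction and constant loss computation followed by convolution with unit-hydrograph ordinates; the depths, loss rate, and ordinates are invented for illustration and are not the study's calibrated GUH values:

```python
import numpy as np

def rainfall_excess(rain, ia=10.0, cl=2.0):
    """Initial abstraction / constant loss: no excess until cumulative
    rainfall fills the initial abstraction ia (mm); thereafter each
    interval loses up to the constant rate cl (mm per interval)."""
    remaining_ia = ia
    excess = np.zeros(len(rain))
    for t, r in enumerate(rain):
        absorbed = min(r, remaining_ia)
        remaining_ia -= absorbed
        excess[t] = max(r - absorbed - cl, 0.0)
    return excess

rain = np.array([5.0, 8.0, 12.0, 6.0, 3.0])   # hyetograph (mm per interval)
uh = np.array([0.1, 0.4, 0.3, 0.15, 0.05])    # toy unit hydrograph ordinates
runoff = np.convolve(rainfall_excess(rain), uh)
print(runoff.round(2))
```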

  18. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. Taking into account the degree information, the proposed generalized model can deduce the collaborative filtering and interdisciplinary physics approaches, and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of random walk in bipartite networks, and propose a possible strategy of using the hybrid degree information for objects of different popularity to achieve promising recommendation precision.

  19. Connectivity properties of the random-cluster model

    Science.gov (United States)

    Weigel, Martin; Metin Elci, Eren; Fytas, Nikolaos G.

    2016-02-01

    We investigate the connectivity properties of the random-cluster model mediated by bridge bonds that, if removed, lead to the generation of new connected components. We study numerically the density of bridges and the fragmentation kernel, i.e., the relative sizes of the generated fragments, and find that these quantities follow a scaling description. The corresponding scaling exponents are related to well known equilibrium critical exponents of the model. Using the Russo-Margulis formalism, we derive an exact relation between the expected density of bridges and the number of active edges. The same approach allows us to study the fluctuations in the numbers of bridges, thereby uncovering a new singularity in the random-cluster model, where clusters connected by bridges and candidate-bridges play a pivotal role. We discuss several different implementations of the necessary connectivity algorithms and assess their relative performance.

  20. Generalized random sign and alert delay models for imperfect maintenance.

    Science.gov (United States)

    Dijoux, Yann; Gaudoin, Olivier

    2014-04-01

    This paper considers the modelling of the process of Corrective and condition-based Preventive Maintenance for complex repairable systems. In order to take into account the dependency between both types of maintenance and the possibility of imperfect maintenance, Generalized Competing Risks models were introduced in Doyen and Gaudoin (J Appl Probab 43:825-839, 2006). In this paper, we study two classes of these models, the Generalized Random Sign and Generalized Alert Delay models. A Generalized Competing Risks model can be built as a generalization of a particular Usual Competing Risks model, either by using a virtual age framework or not. The models' properties are studied and their parameterizations are discussed. Finally, simulation results and an application to real data are presented.

  1. Two-dimensional hydrodynamic modeling to quantify effects of peak-flow management on channel morphology and salmon-spawning habitat in the Cedar River, Washington

    Science.gov (United States)

    Barnas, C. R.; Czuba, J. A.; Gendaszek, A. S.; Magirl, C. S.

    2010-12-01

    The Cedar River in Washington State originates on the western slope of the Cascade Range and provides the City of Seattle with most of its drinking water, while also supporting a productive salmon habitat. Water-resource managers require detailed information on how best to manage high-flow releases from Chester Morse Lake, a large reservoir on the Cedar River, during periods of heavy precipitation to minimize flooding, while mitigating negative effects on fish populations. Instream flow-management practices include provisions for adaptive management to promote and maintain healthy aquatic habitat in the river system. The current study is designed to understand the linkages between peak flow characteristics, geomorphic processes, riverine habitat, and biological responses. Specifically, two-dimensional hydrodynamic modeling is used to simulate and quantify the effects of the peak-flow magnitude, duration, and frequency on the channel morphology and salmon-spawning habitat. Two study reaches, representative of the typical geomorphic and ecologic characteristics of the Cedar River, were selected for the modeling. Detailed bathymetric data, collected with a real-time kinematic global positioning system and an acoustic Doppler current profiler, were combined with a LiDAR-derived digital elevation model in the overbank area to develop a computational mesh. The model is used to simulate water velocity, benthic shear stress, flood inundation, and morphologic changes in the gravel-bedded river under the current and alternative flood-release strategies. Simulations of morphologic change and salmon-redd scour by floods of differing magnitude and duration enable water-resource managers to incorporate model simulation results into adaptive management of peak flows in the Cedar River.

  2. Improving calibration of two key parameters in Hydrologic Engineering Center hydrologic modelling system, and analysing the influence of initial loss on flood peak flows.

    Science.gov (United States)

    Lin, Musheng; Chen, Xingwei; Chen, Ying; Yao, Huaxia

    2013-01-01

    Parameter calibration is a key and difficult issue for a hydrological model. Taking the Jinjiang Xixi watershed of south-east China as the study area, we proposed methods to improve the calibration of two very sensitive parameters, Muskingum K and initial loss, in the Hydrologic Engineering Center hydrologic modelling system (HEC-HMS) model. Twenty-three rainstorm flood events occurring from 1972 to 1977 were used to calibrate the model using a trial-and-error approach, and a relationship between initial loss and initial discharge for these flood events was established; seven rainstorm events occurring from 1978 to 1979 were used to validate the two parameters. The influence of initial loss change on different return-period floods was evaluated. A fixed Muskingum K value, which was calibrated by assuming a flow wave velocity of 3 m/s, could be used to simulate a flood hydrograph, and the empirical power-function relationship between initial loss and initial discharge made the model more applicable for flood forecasting. The influence of initial loss on peak floods was significant but not identical for different flood levels, and the change rate of peak floods caused by the same initial loss change was more remarkable when the return period increased.
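
    A minimal sketch of Muskingum routing in the standard coefficient form; here K would follow from reach length divided by an assumed flood-wave velocity (for example the 3 m/s above), while the inflow series, K, and X values below are illustrative:

```python
import numpy as np

def muskingum_route(inflow, K, X=0.2, dt=1.0):
    """Muskingum channel routing: O2 = C0*I2 + C1*I1 + C2*O1, with
    coefficients derived from storage constant K (h), weight X, and step dt (h)."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom     # note c0 + c1 + c2 = 1
    out = np.empty(len(inflow))
    out[0] = inflow[0]
    for t in range(1, len(inflow)):
        out[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * out[t - 1]
    return out

inflow = np.array([10, 80, 200, 130, 60, 25, 12, 10.0])  # m3/s, illustrative
print(muskingum_route(inflow, K=2.0, dt=1.0).round(1))   # attenuated, lagged peak
```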

  3. Shape modelling using Markov random field restoration of point correspondences.

    Science.gov (United States)

    Paulsen, Rasmus R; Hilger, Klaus B

    2003-07-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes and improves the capability of reconstruction of the training data. Furthermore, the method leads to an overall reduction in the total variance of the point distribution model. Thus, it finds correspondence between semi-landmarks that are highly correlated in the shape tangent space. The method is demonstrated on a set of human ear canals extracted from 3D-laser scans.

  4. Many-body localization in the quantum random energy model

    Science.gov (United States)

    Laumann, Chris; Pal, Arijeet

    2014-03-01

    The quantum random energy model is a canonical toy model for a quantum spin glass with a well-known phase diagram. We show that the model exhibits a many-body localization-delocalization transition at finite energy density which significantly alters the interpretation of the statistical "frozen" phase at lower temperature in isolated quantum systems. The transition manifests in many-body level statistics as well as in the long-time dynamics of on-site observables.

  5. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

    2003-01-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized...... shapes and improves the capability of reconstruction of the training data. Furthermore, the method leads to an overall reduction in the total variance of the point distribution model. Thus, it finds correspondence between semilandmarks that are highly correlated in the shape tangent space. The method...

  6. A Random Dot Product Model for Weighted Networks

    CERN Document Server

    DeFord, Daryl R

    2016-01-01

    This paper presents a generalization of the random dot product model for networks whose edge weights are drawn from a parametrized probability distribution. We focus on the case of integer weight edges and show that many previously studied models can be recovered as special cases of this generalization. Our model also determines a dimension-reducing embedding process that gives geometric interpretations of community structure and centrality. The dimension of the embedding has consequences for the derived community structure and we exhibit a stress function for determining appropriate dimensions. We use this approach to analyze a coauthorship network and voting data from the U.S. Senate.
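
    A minimal generative sketch of this idea, under the assumption of Poisson-distributed integer weights with rates given by latent dot products (one parametrization the generalization covers; all specifics here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Each node i gets a latent nonnegative vector x_i; integer edge weights
    # are drawn as W_ij ~ Poisson(x_i . x_j), a weighted analogue of the
    # random dot product model.
    n, d = 30, 2                       # nodes, embedding dimension
    X = rng.uniform(0.1, 1.0, (n, d))  # latent positions

    rates = X @ X.T                    # pairwise dot products
    W = rng.poisson(rates)             # integer edge weights
    W = np.triu(W, 1); W += W.T        # symmetric, no self-loops

    # Embedding step: recover latent geometry from the leading eigenpairs.
    vals, vecs = np.linalg.eigh(W.astype(float))
    X_hat = vecs[:, -d:] * np.sqrt(np.abs(vals[-d:]))
    ```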

  7. Modeling random combustion of lycopodium particles and gas

    Directory of Open Access Journals (Sweden)

    M Bidabadi

    2016-06-01

    The random combustion of lycopodium particles has been modeled by many authors. In this paper, we extend this model and develop an alternative method by analyzing the effect of randomly distributed sources of combustible mixture. The flame structure is assumed to consist of a preheat-vaporization zone, a reaction zone and finally a post-flame zone. We divide the preheat zone into different sections and assume that the particle distribution across these sections is random. Meanwhile, it is presumed that the fuel particles vaporize first to yield gaseous fuel; in other words, most of the fuel particles are vaporized by the end of the preheat zone. The Zel'dovich number is assumed to be large; therefore, the reaction term in the preheat zone is negligible. In this work, the effect of the random distribution of particles in the preheat zone on combustion characteristics, such as burning velocity and flame temperature for different particle radii, is obtained.

  8. Peak Experience Project

    Science.gov (United States)

    Scott, Daniel G.; Evans, Jessica

    2010-01-01

    This paper emerges from the continued analysis of data collected in a series of international studies concerning Childhood Peak Experiences (CPEs) based on developments in understanding peak experiences in Maslow's hierarchy of needs initiated by Dr Edward Hoffman. Bridging from the series of studies, Canadian researchers explore collected…

  9. Spatially random models, estimation theory, and robot arm dynamics

    Science.gov (United States)

    Rodriguez, G.

    1987-01-01

    Spatially random models provide an alternative to the more traditional deterministic models used to describe robot arm dynamics. These alternative models can be used to establish a relationship between the methodologies of estimation theory and robot dynamics. A new class of algorithms for many of the fundamental robotics problems of inverse and forward dynamics, inverse kinematics, etc. can be developed that use computations typical in estimation theory. The algorithms make extensive use of the difference equations of Kalman filtering and Bryson-Frazier smoothing to conduct spatial recursions. The spatially random models are very easy to describe and are based on the assumption that all of the inertial (D'Alembert) forces in the system are represented by a spatially distributed white-noise model. The models can also be used to generate numerically the composite multibody system inertia matrix. This is done without resorting to the more common methods of deterministic modeling involving Lagrangian dynamics, Newton-Euler equations, etc. These methods make substantial use of human knowledge in derivation and manipulation of equations of motion for complex mechanical systems.
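
    Since the approach leans on computations typical in estimation theory, a minimal scalar Kalman filter recursion may help fix ideas (all values here are illustrative assumptions; the paper runs such difference equations as spatial, link-to-link recursions along the arm rather than in time):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    F, H = 1.0, 1.0        # state transition and observation maps
    Q, R = 0.05, 0.5       # process and measurement noise variances
    x_hat, P = 0.0, 1.0    # initial state estimate and covariance

    truth = 0.0
    for _ in range(50):
        truth = F * truth + rng.normal(0.0, np.sqrt(Q))   # latent dynamics
        z = H * truth + rng.normal(0.0, np.sqrt(R))       # noisy measurement

        # Predict
        x_hat, P = F * x_hat, F * P * F + Q
        # Update
        K = P * H / (H * P * H + R)                       # Kalman gain
        x_hat, P = x_hat + K * (z - H * x_hat), (1 - K * H) * P
    ```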

  10. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    Science.gov (United States)

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest in predictor effects is at the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that random effect parameter estimates and standard logistic regression parameter estimates differ. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models with either standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects if the performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard model, while calibration measures adapted to the clustered data structure showed good calibration for the random intercept model. The models with random intercept discriminate better than the standard model only
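
    A minimal simulation sketch of the comparison (the cluster count matches the 19 anesthesiologists; all other values are assumptions, and fixed cluster intercepts stand in for a true random-intercept fit):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)

    # Simulate clustered binary outcomes: 19 clusters, each with its own
    # intercept, and one patient-level predictor.
    n_clusters, n_per = 19, 80
    u = rng.normal(0.0, 1.0, n_clusters)            # cluster intercepts
    cluster = np.repeat(np.arange(n_clusters), n_per)
    x = rng.normal(size=cluster.size)
    p = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x + u[cluster])))
    y = rng.binomial(1, p)

    # Standard (pooled) logistic regression ignores the clustering.
    pooled = LogisticRegression().fit(x[:, None], y)

    # Stand-in for the random-intercept fit: cluster indicator variables
    # (a fixed-effects approximation, effectively unpenalized via large C).
    X_fe = np.column_stack([x, np.eye(n_clusters)[cluster]])
    fe = LogisticRegression(C=1e6).fit(X_fe, y)

    print("pooled c-index :", roc_auc_score(y, pooled.predict_proba(x[:, None])[:, 1]))
    print("cluster c-index:", roc_auc_score(y, fe.predict_proba(X_fe)[:, 1]))
    ```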

  11. Least squares estimation in a simple random coefficient autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren; Lange, Theis

    2013-01-01

    we prove the curious result that [formula]. The proof applies the notion of a tail index of sums of positive random variables with infinite variance to find the order of magnitude of [formula] and [formula] and hence the limit of [formula]......The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by y_t = s_t ρ y_{t−1} + ε_t, t = 1, …, n, where s_t is an i.i.d. binary variable with p...
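
    A minimal simulation of the model with an infinite-variance error term, using illustrative parameter values, shows the kind of long swings at issue and computes the least squares estimate of ρ:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulate y_t = s_t * rho * y_{t-1} + eps_t with s_t i.i.d. binary and
    # heavy-tailed (infinite-variance) errors. Parameters are assumptions.
    n, rho, p = 2000, 0.99, 0.95
    s = rng.binomial(1, p, n)               # i.i.d. binary coefficient
    eps = rng.standard_t(df=1.5, size=n)    # t(1.5): infinite variance

    y = np.zeros(n)
    for t in range(1, n):
        y[t] = s[t] * rho * y[t - 1] + eps[t]

    # Least squares estimate of rho from the simulated path.
    rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
    print("LS estimate of rho:", rho_hat)
    ```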

  12. Random unitary evolution model of quantum Darwinism with pure decoherence

    Science.gov (United States)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of input states and on the type of S-E-interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow information about an open system S of interest to be stored with maximal efficiency.

  13. Statistical Modeling of Robotic Random Walks on Different Terrain

    Science.gov (United States)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.

  14. Social aggregation in pea aphids: experiment and random walk modeling.

    Directory of Open Access Journals (Sweden)

    Christa Nilsen

    From bird flocks to fish schools and ungulate herds to insect swarms, social biological aggregations are found across the natural world. An ongoing challenge in the mathematical modeling of aggregations is to strengthen the connection between models and biological data by quantifying the rules that individuals follow. We model aggregation of the pea aphid, Acyrthosiphon pisum. Specifically, we conduct experiments to track the motion of aphids walking in a featureless circular arena in order to deduce individual-level rules. We observe that each aphid transitions stochastically between a moving and a stationary state. Moving aphids follow a correlated random walk. The probabilities of motion state transitions, as well as the random walk parameters, depend strongly on distance to an aphid's nearest neighbor. For large nearest neighbor distances, when an aphid is essentially isolated, its motion is ballistic with aphids moving faster, turning less, and being less likely to stop. In contrast, for short nearest neighbor distances, aphids move more slowly, turn more, and are more likely to become stationary; this behavior constitutes an aggregation mechanism. From the experimental data, we estimate the state transition probabilities and correlated random walk parameters as a function of nearest neighbor distance. With the individual-level model established, we assess whether it reproduces the macroscopic patterns of movement at the group level. To do so, we consider three distributions, namely distance to nearest neighbor, angle to nearest neighbor, and percentage of population moving at any given time. For each of these three distributions, we compare our experimental data to the output of numerical simulations of our nearest neighbor model, and of a control model in which aphids do not interact socially. Our stochastic, social nearest neighbor model reproduces salient features of the experimental data that are not captured by the control.
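
    A minimal sketch of the individual-level rules described above: a correlated random walk with stochastic transitions between moving and stationary states (transition probabilities and step parameters are illustrative assumptions, not the estimated aphid values, and the nearest-neighbor dependence is omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_steps = 500
    p_stop, p_go = 0.1, 0.2     # P(move->stop), P(stop->move) per step
    turn_sd = 0.5               # turning-angle spread (radians)
    speed = 1.0

    pos = np.zeros((n_steps, 2))
    heading, moving = 0.0, True
    for t in range(1, n_steps):
        if moving:
            heading += rng.normal(0.0, turn_sd)   # correlated turning
            step = speed * np.array([np.cos(heading), np.sin(heading)])
            pos[t] = pos[t - 1] + step
            moving = rng.random() >= p_stop
        else:
            pos[t] = pos[t - 1]
            moving = rng.random() < p_go

    # Crude group-level check: mean square displacement from the origin.
    msd = np.mean(np.sum((pos - pos[0]) ** 2, axis=1))
    ```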

  15. Information inefficiency in a random linear economy model

    CERN Document Server

    Jerico, Joao Pedro

    2016-01-01

    We study the effects of introducing information inefficiency in a model for a random linear economy with a representative consumer. This is done by considering statistical, instead of classical, economic general equilibria. Employing two different approaches we show that inefficiency increases the consumption set of a consumer but decreases her expected utility. In this scenario economic activity grows while welfare shrinks, that is the opposite of the behavior obtained by considering a rational consumer.

  16. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    Science.gov (United States)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm and three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that significant knowledge gaps remain in terms of the data available for model parametrisation, and our study can help indicate the data collection necessary to improve our predictions of crop yields and crop responses to environmental changes.

  17. On the Efficacy of PCM to Shave Peak Temperature of Crystalline Photovoltaic Panels: An FDM Model and Field Validation

    Directory of Open Access Journals (Sweden)

    Valerio Lo Brano

    2013-11-01

    The exploitation of renewable energy sources, and specifically photovoltaic (PV) devices, has shown significant growth; however, for a more effective development of this technology it is essential to achieve higher energy conversion performance. PV producers often declare a higher efficiency than is obtained under real conditions, a deviation mainly due to the difference between nominal and real operating temperatures of the PV. To improve solar cell energy conversion efficiency, many authors have proposed a way to keep the temperature of a PV system lower: a modified crystalline PV system built from a standard PV panel coupled with a Phase Change Material (PCM) heat storage device. In this paper a thermal model of the crystalline PV-PCM system, based on a theoretical study using a finite difference approach, is described. The authors developed an algorithm based on an explicit finite difference formulation of the energy balance of the crystalline PV-PCM system. Two sets of recursive equations were developed for two types of spatial domain: a boundary domain and an internal domain. The reliability of the developed model is tested by comparison with data from a test facility. The results of the numerical simulations are in good agreement with the experimental data.
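
    A minimal explicit finite-difference sketch in the spirit of the model: 1-D conduction into a PCM layer, with latent heat handled by an effective heat capacity method (all material values are illustrative assumptions, not the validated model's parameters):

    ```python
    import numpy as np

    L, nx = 0.03, 60                  # PCM thickness (m), nodes
    dx = L / (nx - 1)
    k, rho = 0.2, 800.0               # conductivity (W/m K), density (kg/m3)
    cp, latent = 2000.0, 180e3        # sensible cp (J/kg K), latent heat (J/kg)
    T_melt, dT = 35.0, 1.0            # melting point and mushy range (degC)

    def c_eff(T):
        # Effective heat capacity: latent heat smeared over the mushy range.
        return cp + np.where(np.abs(T - T_melt) < dT / 2, latent / dT, 0.0)

    T = np.full(nx, 25.0)
    dt = 0.2 * rho * cp * dx**2 / k   # comfortably below the stability limit
    for _ in range(20000):
        T[0] = 60.0                   # hot side: PV back-sheet temperature
        T[-1] = T[-2]                 # adiabatic far side
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt * k * lap / (rho * c_eff(T[1:-1]))
    ```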

  18. Calibration of stormwater quality regression models: a random process?

    Science.gov (United States)

    Dembélé, A; Bertrand-Krajewski, J-L; Barillon, B

    2010-01-01

    Regression models are among the most frequently used models to estimate pollutant event mean concentrations (EMC) in wet weather discharges in urban catchments. Two main questions concerning the calibration of EMC regression models are investigated: (i) the sensitivity of models to the size and content of the data sets used for their calibration, and (ii) the change in modelling results when models are re-calibrated as data sets grow and change over time as new experimental data are collected. Based on an experimental data set of 64 rain events monitored in a densely urbanised catchment, four TSS EMC regression models (two log-linear and two linear) with two or three explanatory variables were derived and analysed. Model calibration with the iteratively re-weighted least squares method is less sensitive and leads to more robust results than the ordinary least squares method. Three calibration options were investigated: two accounting for the chronological order of the observations, and one using random samples of events from the whole available data set. Results obtained with the best-performing nonlinear model clearly indicate that the model is highly sensitive to the size and content of the data set used for its calibration.
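
    A minimal sketch of the iteratively re-weighted least squares idea, with Huber weights on synthetic outlier-prone EMC data (the data and tuning constant are assumptions; the paper's exact weighting scheme may differ):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n = 64
    X = np.column_stack([np.ones(n), rng.uniform(1, 50, n)])  # e.g. rain depth
    beta_true = np.array([20.0, 3.0])
    y = X @ beta_true + rng.standard_t(3, n) * 15.0           # heavy-tailed EMCs

    beta = np.linalg.lstsq(X, y, rcond=None)[0]               # OLS start
    for _ in range(50):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))      # robust scale (MAD)
        c = 1.345 * s                                         # Huber tuning
        w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))      # Huber weights
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)        # weighted normal eqs
        if np.allclose(beta_new, beta, rtol=1e-8):
            break
        beta = beta_new
    ```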

  19. Richly parameterized linear models additive, time series, and spatial models using random effects

    CERN Document Server

    Hodges, James S

    2013-01-01

    A First Step toward a Unified Theory of Richly Parameterized Linear Models. Using mixed linear models to analyze data often leads to results that are mysterious, inconvenient, or wrong. Further compounding the problem, statisticians lack a cohesive resource to acquire a systematic, theory-based understanding of models with random effects. Richly Parameterized Linear Models: Additive, Time Series, and Spatial Models Using Random Effects takes a first step in developing a full theory of richly parameterized models, which would allow statisticians to better understand their analysis results. The aut

  20. High-temperature series expansions for random Potts models

    Directory of Open Access Journals (Sweden)

    M.Hellmund

    2005-01-01

    We discuss recently generated high-temperature series expansions for the free energy and the susceptibility of random-bond q-state Potts models on hypercubic lattices. Using the star-graph expansion technique, quenched disorder averages can be calculated exactly for arbitrary uncorrelated coupling distributions while keeping the disorder strength p as well as the dimension d as symbolic parameters. We present analyses of the new series for the susceptibility of the Ising (q=2) and 4-state Potts models in three dimensions up to order 19 and 18, respectively, and compare our findings with results from field-theoretical renormalization group studies and Monte Carlo simulations.

  1. Marginal and Random Intercepts Models for Longitudinal Binary Data with Examples from Criminology

    Science.gov (United States)

    Long, Jeffrey D.; Loeber, Rolf; Farrington, David P.

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides…

  2. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. In contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation will fall within the predicted ranges, can be bounded tightly and rigorously.

  3. Random matrices and the six-vertex model

    CERN Document Server

    Bleher, Pavel

    2013-01-01

    This book provides a detailed description of the Riemann-Hilbert approach (RH approach) to the asymptotic analysis of both continuous and discrete orthogonal polynomials, and applications to random matrix models as well as to the six-vertex model. The RH approach was an important ingredient in the proofs of universality in unitary matrix models. This book gives an introduction to the unitary matrix models and discusses bulk and edge universality. The six-vertex model is an exactly solvable two-dimensional model in statistical physics, and thanks to the Izergin-Korepin formula for the model with domain wall boundary conditions, its partition function matches that of a unitary matrix model with nonpolynomial interaction. The authors introduce in this book the six-vertex model and include a proof of the Izergin-Korepin formula. Using the RH approach, they explicitly calculate the leading and subleading terms in the thermodynamic asymptotic behavior of the partition function of the six-vertex model with domain wa...

  4. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online.

  5. Statistical Downscaling of Temperature with the Random Forest Model

    Directory of Open Access Journals (Sweden)

    Bo Pang

    2017-01-01

    The issue of downscaling the outputs of a global climate model (GCM) to a regional scale appropriate for hydrological impact studies is investigated using the random forest (RF) model, which has been shown to be superior for large dataset analysis and variable importance evaluation. The RF is proposed for downscaling daily mean temperature in the Pearl River basin in southern China. Four downscaling models were developed and validated using observed temperature series from 61 national stations and large-scale predictor variables derived from the National Center for Environmental Prediction-National Center for Atmospheric Research reanalysis dataset. The proposed RF downscaling model was compared to multiple linear regression, artificial neural network, and support vector machine models. Principal component analysis (PCA) and partial correlation analysis (PAR) were used in the predictor selection for the other models for a comprehensive comparison. The model efficiency of the RF model was higher than that of the other models according to five selected criteria. By evaluating predictor importance, the RF could choose the best predictor combination without using PCA and PAR. The results indicate that the RF is a feasible tool for the statistical downscaling of temperature.
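
    A minimal downscaling sketch with scikit-learn's RandomForestRegressor, on synthetic predictors standing in for the reanalysis fields (all data and sizes are assumptions):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(6)

    # Predict station daily mean temperature from large-scale predictors.
    n_days, n_pred = 3000, 8
    X = rng.normal(size=(n_days, n_pred))             # large-scale predictors
    y = 15 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 1, n_days)

    train, test = slice(0, 2400), slice(2400, None)
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X[train], y[train])

    # Variable importance lets RF rank predictors without PCA/PAR pre-selection.
    print("importances:", np.round(rf.feature_importances_, 3))
    print("test RMSE  :", np.sqrt(np.mean((rf.predict(X[test]) - y[test]) ** 2)))
    ```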

  6. Relations between Lagrangian models and synthetic random velocity fields.

    Science.gov (United States)

    Olla, Piero; Paradisi, Paolo

    2004-10-01

    The authors propose an alternative interpretation of Markovian transport models based on the well-mixed condition, in terms of the properties of a random velocity field with second-order structure functions scaling linearly in the space-time increments. This interpretation allows direct association of the drift and noise terms entering the model with the geometry of the turbulent fluctuations. In particular, the well-known nonuniqueness problem in the well-mixed approach is solved in terms of the antisymmetric part of the velocity correlations; its relation with the presence of nonzero mean helicity and other geometrical properties of the flow is elucidated. The well-mixed condition appears to be a special case of the relation between conditional velocity increments of the random field and the one-point Eulerian velocity distribution, allowing generalization of the approach to the transport of nontracer quantities. Application to solid particle transport leads to a model satisfying, in the homogeneous isotropic turbulence case, all the conditions on the behavior of the correlation times for the fluid velocity sampled by the particles: correlation times that are, respectively, longer and shorter than in the passive-tracer case when gravity and inertia dominate, and, in the gravity-dominated case, correlation times that are longer for velocity components along gravity than for the perpendicular ones. The model produces, in channel flow geometry, particle deposition rates in agreement with experiments.

  7. Genetic evaluation of European quails by random regression models

    Directory of Open Access Journals (Sweden)

    Flaviana Miranda Gonçalves

    2012-09-01

    The objective of this study was to compare different random regression models, defined from different classes of heterogeneity of variance combined with different orders of Legendre polynomials, for the estimation of (co)variance components in quails. The data came from 28,076 observations of 4,507 female meat quails of the LF1 lineage. Quail body weights were determined at birth and at 1, 14, 21, 28, 35 and 42 days of age. Six different classes of residual variance were fitted with Legendre polynomial functions (orders ranging from 2 to 6) to determine which model best described the (co)variance structures as a function of time. According to the evaluated criteria (AIC, BIC and LRT), the model with six classes of residual variance and a sixth-order Legendre polynomial was the best fit. The estimated additive genetic variance increased from birth to 28 days of age, and dropped slightly from 35 to 42 days. The heritability estimates decreased along the growth curve, from 0.51 (1 day) to 0.16 (42 days). Animal genetic and permanent environmental correlation estimates between weights at different ages were always high and positive, except for birth weight. The sixth-order Legendre polynomial, with the residual variance divided into six classes, was the best fit for the growth curve of meat quails; it should therefore be considered in breeding evaluation by random regression models.
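
    To make the Legendre-polynomial ingredient concrete, here is a minimal sketch of how a Legendre basis on standardized ages maps a coefficient covariance matrix into (co)variances among weights at different ages (the coefficient covariance below is an arbitrary illustrative example, not an estimate from this study):

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    # Ages from the abstract, standardized to [-1, 1] for the Legendre basis.
    ages = np.array([1, 7, 14, 21, 28, 35, 42], dtype=float)
    t = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1

    order = 6                                  # sixth-order polynomial
    Phi = np.column_stack([
        legendre.legval(t, np.eye(order + 1)[j]) for j in range(order + 1)
    ])

    # Given a coefficient covariance matrix K (illustrative), the implied
    # covariance among ages is Phi K Phi'.
    K = np.diag(np.linspace(1.0, 0.1, order + 1))
    G = Phi @ K @ Phi.T
    print(np.round(G, 2))
    ```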

  8. Using Historical Data and Quasi-Likelihood Logistic Regression Modeling to Test Spatial Patterns of Channel Response to Peak Flows in a Mountain Watershed

    Science.gov (United States)

    Faustini, J. M.; Jones, J. A.

    2001-12-01

    This study used an empirical modeling approach to explore landscape controls on spatial variations in reach-scale channel response to peak flows in a mountain watershed. We used historical cross-section surveys spanning 20 years at five sites on 2nd to 5th-order channels and stream gaging records spanning up to 50 years. We related the observed proportion of cross-sections at a site exhibiting detectable change between consecutive surveys to the recurrence interval of the largest peak flow during the corresponding period using a quasi-likelihood logistic regression model. Stream channel response was linearly related to flood size or return period through the logit function, but the shape of the response function varied according to basin size, bed material, and the presence or absence of large wood. At the watershed scale, we hypothesized that the spatial scale and frequency of channel adjustment should increase in the downstream direction as sediment supply increases relative to transport capacity, resulting in more transportable sediment in the channel and hence increased bed mobility. Consistent with this hypothesis, cross sections from the 4th and 5th-order main stem channels exhibit more frequent detectable changes than those at two steep third-order tributary sites. Peak flows able to mobilize bed material sufficiently to cause detectable changes in 50% of cross-section profiles had an estimated recurrence interval of 3 years for the 4th and 5th-order channels and 4 to 6 years for the 3rd-order sites. This difference increased for larger magnitude channel changes; peak flows with recurrence intervals of about 7 years produced changes in 90% of cross sections at the main stem sites, but flows able to produce the same level of response at tributary sites were three times less frequent. At finer scales, this trend of increasing bed mobility in the downstream direction is modified by variations in the degree of channel confinement by bedrock and landforms, the
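
    A minimal sketch of the quasi-likelihood logistic regression step, on synthetic stand-ins for the survey data (recurrence intervals, cross-section counts, and effect sizes below are all assumptions):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)

    # Proportion of cross-sections showing detectable change as a function
    # of the log recurrence interval of the largest intervening peak flow.
    n = 40
    ri = rng.uniform(1, 10, n)                   # recurrence interval (years)
    n_xs = rng.integers(8, 20, n)                # cross-sections per survey pair
    p_true = 1 / (1 + np.exp(-(-2.0 + 1.8 * np.log(ri))))
    changed = rng.binomial(n_xs, p_true)

    X = sm.add_constant(np.log(ri))
    model = sm.GLM(np.column_stack([changed, n_xs - changed]), X,
                   family=sm.families.Binomial())
    # scale='X2' estimates a dispersion parameter (quasi-likelihood flavour),
    # inflating standard errors when the proportions are overdispersed.
    res = model.fit(scale='X2')

    # Flow size at which ~50% of cross-sections change: logit = 0.
    ri_50 = np.exp(-res.params[0] / res.params[1])
    print(res.params, ri_50)
    ```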

  9. Estimating Random Regret Minimization models in the route choice context

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo

    The discrete choice paradigm of random regret minimization (RRM) has been recently proposed in several choice contexts. In the route choice context, the paradigm has been used to model the choice among three routes, to define regret-based equilibrium in risky conditions, and to formulate regret......-based stochastic user equilibrium. However, in the same context the RRM literature has not confronted three major challenges: (i) accounting for similarity across alternative routes, (ii) analyzing choice set composition effects on choice probabilities, and (iii) comparing the RRM model with advanced RUM...... counterparts. This paper looks into RRM-based route choice models from these three perspectives by (i) proposing utility-based and regret-based correction terms to account for similarity across alternatives, (ii) analyzing the variation of choice set probabilities with the choice set composition, and (iii...
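
    For readers unfamiliar with RRM, a minimal sketch of one common regret specification from the RRM literature, in which the regret of alternative i sums log(1 + exp(beta_m (x_jm - x_im))) over competitors j and attributes m (the route attributes and taste parameters below are illustrative assumptions, not estimates from this work):

    ```python
    import numpy as np

    X = np.array([[30.0, 2.0],      # route 1: time (min), cost
                  [25.0, 3.5],      # route 2
                  [35.0, 1.0]])     # route 3
    beta = np.array([-0.1, -0.5])   # attribute tastes

    n = X.shape[0]
    R = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j != i:
                # Regret from each attribute where j beats i.
                R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))

    # Choice probabilities: logit over negative regrets.
    P = np.exp(-R) / np.exp(-R).sum()
    print(np.round(P, 3))
    ```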

  10. Super Yang-Mills theory as a random matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, W. [Institute for Theoretical Physics, State University of New York, Stony Brook, New York 11794-3840 (United States)

    1995-07-15

    We generalize the Gervais-Neveu gauge to four-dimensional N=1 superspace. The model describes an N=2 super Yang-Mills theory. All chiral superfields (N=2 matter and ghost multiplets) exactly cancel to all loops. The remaining Hermitian scalar superfield (matrix) has a renormalizable massive propagator and simplified vertices. These properties are associated with N=1 supergraphs describing a superstring theory on a random lattice world sheet. We also consider all possible finite matrix models, and find they have a universal large-color limit. These could describe gravitational strings if the matrix-model coupling is fixed to unity, for exact electric-magnetic self-duality.

  11. Exponential random graph models for networks with community structure.

    Science.gov (United States)

    Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian

    2013-09-01

    Although the community structure organization is an important characteristic of real-world networks, most of the traditional network models fail to reproduce this feature. The models are therefore useless as benchmark graphs for testing community detection algorithms, and they are also inadequate for predicting various properties of real networks. With this paper we intend to fill the gap. We develop an exponential random graph approach to networks with community structure. To this end we build mainly upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.

  12. The origins of the random walk model in financial theory

    OpenAIRE

    Walter, Christian

    2013-01-01

    This text is chapter 2 of the book Le modèle de marche au hasard en finance by Christian Walter, to be published by Economica in the "Audit, assurance, actuariat" collection in June 2013. It is published here with the agreement of the publisher. Three main concerns pave the way for the birth of the random walk model in financial theory: an ethical issue with Jules Regnault (1834-1894), a scientific issue with Louis Bachelier (1870-1946) and a practical issue with Alfred Cowles (1891-1984). Three to...

  13. Geometric Models for Isotropic Random Porous Media: A Review

    Directory of Open Access Journals (Sweden)

    Helmut Hermann

    2014-01-01

    Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.
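
    A minimal Monte Carlo sketch of the Poisson grain (fully penetrable sphere) model named above, checking the closed-form void fraction exp(-λv); the intensity and radius are illustrative assumptions, and boundary effects on the unit box are ignored:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    lam, r = 300.0, 0.05                    # sphere intensity, radius
    n_spheres = rng.poisson(lam)
    centers = rng.uniform(0, 1, (n_spheres, 3))

    # Probe random points and test coverage by any sphere.
    probes = rng.uniform(0, 1, (5000, 3))
    d2 = ((probes[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    covered = (d2 < r * r).any(axis=1)

    phi_mc = 1.0 - covered.mean()                     # Monte Carlo void fraction
    phi_theory = np.exp(-lam * 4 / 3 * np.pi * r**3)  # Poisson grain formula
    print(phi_mc, phi_theory)
    ```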

  14. Rigorously testing multialternative decision field theory against random utility models.

    Science.gov (United States)

    Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg

    2014-06-01

    Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against two established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in two studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions.

  15. A note on modeling vehicle accident frequencies with random-parameters count models.

    Science.gov (United States)

    Anastasopoulos, Panagiotis Ch; Mannering, Fred L

    2009-01-01

    In recent years there have been numerous studies that have sought to understand the factors that determine the frequency of accidents on roadway segments over some period of time, using count data models and their variants (negative binomial and zero-inflated models). This study seeks to explore the use of random-parameters count models as another methodological alternative in analyzing accident frequencies. The empirical results show that random-parameters count models have the potential to provide a fuller understanding of the factors determining accident frequencies.

  16. Interpreting parameters in the logistic regression model with random effects

    DEFF Research Database (Denmark)

    Larsen, Klaus; Petersen, Jørgen Holm; Budtz-Jørgensen, Esben

    2000-01-01

    interpretation, interval odds ratio, logistic regression, median odds ratio, normally distributed random effects

  17. A random parameters probit model of urban and rural intersection crashes.

    Science.gov (United States)

    Tay, Richard

    2015-11-01

    Intersections are hazardous locations and many studies have been conducted to identify the factors contributing to the frequency and severity of intersection crashes. However, little attention has been devoted to investigating the differences between crashes at urban and rural intersections, which have different road, traffic and environmental characteristics. By applying a random parameters probit model to data from the Canadian province of Alberta between 2008 and 2012, we find that urban intersection crashes are more likely to be associated with hit-and-run behaviour, roads with higher traffic volume, wet surfaces, four lanes and skewed intersections, and crashes on weekdays and during off-peak hours, whereas rural crashes are more likely to be associated with increases in fatalities and injuries, roads with higher speed limits, special road features, exit and entrance terminals, gravel, curvature and two lanes, crashes during weekends, peak hours and night-time, run-off-road crashes, and police visits to the crash scene. Hence, road safety professionals in urban and rural areas should consider these differences when designing and implementing countermeasures to improve intersection safety, especially their safety audits and reviews, enforcement activities and education campaigns, to target the more vulnerable times and locations in the different areas.

  18. A Poisson random field model of pathogen transport in surface water

    Science.gov (United States)

    Yeghiazarian, L.; Samorodnitsky, G.; Montemagno, C. D.

    2009-11-01

    To address the uncertainty associated with microbial transport and surface water contamination events, we developed a new comprehensive stochastic framework that combines processes on the microscopic (single microorganism) and macroscopic (ensembles of microorganisms) scales. The spatial and temporal population behavior is modeled as a nonhomogeneous Poisson random field with Markovian field dynamics. The model parameters are based on the actual physical and biological characteristics of the Cryptosporidium parvum transport process and can be extended to cover a variety of other pathogens. Since soil particles have been shown to be a major vehicle in microbial transport, a U.S. Department of Agriculture approved erosion model (Water Erosion Prediction Project) is incorporated into the framework. Risk assessment is an integral part of the stochastic model and is conducted using a set of simple calculations. Poisson intensity functions and correlations are computed. The results consistently indicate that surface water contamination events are transient, with traveling high peaks of microorganism concentrations. Correlations between microorganism populations at different points in time and space reach relatively significant levels even at large distances. This information is intended to assist water resources management teams in the decision-making process to identify the likely timing and locations of high-risk areas and thus to avoid collection of contaminated water.
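
    As a minimal illustration of the nonhomogeneous Poisson ingredient, the standard Lewis-Shedler thinning algorithm can simulate event times under a transient, peaked intensity (the intensity function below is an assumption, not the calibrated C. parvum model):

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Event times on [0, T] with a peaked intensity, echoing the transient
    # "traveling high peaks" of concentration.
    T = 10.0
    lam = lambda t: 5.0 + 40.0 * np.exp(-0.5 * ((t - 4.0) / 0.8) ** 2)
    lam_max = 45.0                            # upper bound on lam over [0, T]

    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from homogeneous PP
        if t > T:
            break
        if rng.random() < lam(t) / lam_max:   # keep with prob lam(t)/lam_max
            events.append(t)

    print(len(events), "events; most near t = 4")
    ```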

  19. Genetic parameters for various random regression models to describe the weight data of pigs

    NARCIS (Netherlands)

    Huisman, A.E.; Veerkamp, R.F.; Arendonk, van J.A.M.

    2002-01-01

    Various random regression models have been advocated for the fitting of covariance structures. It was suggested that a spline model would fit better to weight data than a random regression model that utilizes orthogonal polynomials. The objective of this study was to investigate which kind of random

  20. Genetic parameters for different random regression models to describe weight data of pigs

    NARCIS (Netherlands)

    Huisman, A.E.; Veerkamp, R.F.; Arendonk, van J.A.M.

    2001-01-01

    Various random regression models have been advocated for the fitting of covariance structures. It was suggested that a spline model would fit better to weight data than a random regression model that utilizes orthogonal polynomials. The objective of this study was to investigate which kind of random

  1. RIM: A Random Item Mixture Model to Detect Differential Item Functioning

    Science.gov (United States)

    Frederickx, Sofie; Tuerlinckx, Francis; De Boeck, Paul; Magis, David

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is assumed for the item difficulties such that the…

  2. RIM: A random item mixture model to detect Differential Item Functioning

    NARCIS (Netherlands)

    Frederickx, S.; Tuerlinckx, T.; de Boeck, P.; Magis, D.

    2010-01-01

    In this paper we present a new methodology for detecting differential item functioning (DIF). We introduce a DIF model, called the random item mixture (RIM), that is based on a Rasch model with random item difficulties (besides the common random person abilities). In addition, a mixture model is

  3. Gaussian random bridges and a geometric model for information equilibrium

    Science.gov (United States)

    Mengütürk, Levent Ali

    2018-03-01

    The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
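
    A minimal numerical sketch of the bridge construction underlying GRBs: a Brownian (T, 0)-bridge obtained from a Brownian path, plus an independent random variable standing in for the signal (the additive form below is an illustrative simplification, not the paper's general anticipative representation):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    T, n = 1.0, 1000
    dt = T / n
    t = np.linspace(0.0, T, n + 1)

    # Brownian motion, then condition it to end at 0 at time T.
    B = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])
    bridge = B - (t / T) * B[-1]          # Brownian (T, 0)-bridge

    Z = rng.normal()                      # the "signal" random variable
    info_process = Z * (t / T) + bridge   # bridge-noisy observation of Z
    ```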

  4. Peak-interviewet

    DEFF Research Database (Denmark)

    Raalskov, Jesper; Warming-Rasmussen, Bent

    The peak interview is a particularly effective method for making unconscious human resources conscious. The focus person (the interviewee) is interviewed about a self-chosen personal success experience. The therapist/coach (the interviewer) asks about the process that led to this success, thereby uncovering…

  5. Automated asteroseismic peak detections

    Science.gov (United States)

    de Montellano, A. García Saravia Ortiz; Hekker, S.; Themeßl, N.

    2018-01-01

    Space observatories such as Kepler have provided data that can potentially revolutionise our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible in a power density spectrum. Identification of oscillation modes is usually done by visual inspection which is time-consuming and has a degree of subjectivity. Here, we present a peak detection algorithm specially suited for the detection of solar-like oscillations. It reliably characterises the solar-like oscillations in a power density spectrum and estimates their parameters without human intervention. Furthermore, we provide a metric to characterise the false positive and false negative rates to provide further information about the reliability of a detected oscillation mode or the significance of a lack of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler.
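
    The detection step can be illustrated with standard tools; a minimal sketch on a synthetic power density spectrum (Lorentzian modes on multiplicative chi-squared noise; all frequencies and thresholds are assumptions, and this is not the paper's algorithm):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(11)

    # Synthetic power density spectrum: Lorentzian oscillation modes on a
    # flat background, with chi^2 (2 d.o.f.) multiplicative noise.
    nu = np.linspace(0, 300, 6000)                     # frequency (muHz)
    spectrum = np.ones_like(nu)
    for nu0 in (120, 135, 150, 165, 180):              # assumed mode frequencies
        spectrum += 30.0 / (1.0 + ((nu - nu0) / 0.5) ** 2)
    spectrum *= rng.exponential(1.0, nu.size)

    # Smooth, then detect peaks above a threshold with a minimum separation.
    smooth = np.convolve(spectrum, np.ones(30) / 30, mode="same")
    peaks, props = find_peaks(smooth, height=3.0, distance=100)
    print("detected mode frequencies:", np.round(nu[peaks], 1))
    ```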

  6. Auxiliary Parameter MCMC for Exponential Random Graph Models

    Science.gov (United States)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  7. A model for predicting daily peak visitation and implications for recreation management and water quality: evidence from two rivers in Puerto Rico.

    Science.gov (United States)

    Santiago, Luis E; Gonzalez-Caban, Armando; Loomis, John

    2008-06-01

    Visitor use surveys and water quality data indicate that high visitor use levels of two rivers in Puerto Rico do not appear to adversely affect several water quality parameters. Optimum visitor use to maximize visitor-defined satisfaction is a more constraining limit on visitor use than water quality. Our multiple regression analysis suggests that visitor use of about 150 visitors per day yields the highest level of visitor-reported satisfaction, a level that does not appear to affect the turbidity of the river. This high level of visitor use may be related to the gregarious nature of Puerto Ricans and their tolerance for crowding on this densely populated island. The daily peak visitation model indicates that regulating the number of parking spaces may be the most effective way to keep visitor use within the social carrying capacity.

  8. Can the Critical Power Model Explain the Increased Peak Velocity/Power During Incremental Test After Concurrent Strength and Endurance Training?

    Science.gov (United States)

    Denadai, Benedito S; Greco, Camila C

    2017-08-01

    Denadai, BS and Greco, CC. Can the critical power model explain the increased peak velocity/power during incremental test after concurrent strength and endurance training? J Strength Cond Res 31(8): 2319-2323, 2017. The highest exercise intensity that can be maintained at the end of a ramp or step incremental test (i.e., the velocity or work rate at V̇O2max, Vpeak/Wpeak) can be used for endurance performance prediction and individualization of aerobic training. The interindividual variability in Vpeak/Wpeak has been attributed to exercise economy, anaerobic capacity, and neuromuscular capability, alongside the major determinant of aerobic capacity. Interestingly, findings after concurrent strength and endurance training performed by endurance athletes have challenged the actual contribution of these variables. The critical power model, usually derived from performance in constant work rate exercise, can also explain tolerance to ramp incremental exercise, so that Vpeak/Wpeak can be predicted accurately. However, there is as yet no discussion of possible concomitant improvements in the parameters of the critical power model and Vpeak/Wpeak after concurrent training, or of whether they can be associated with, and therefore depend on, different neuromuscular adaptations. Therefore, this brief review presents some evidence that the critical power model could explain the improvement of Vpeak/Wpeak and should be used to monitor aerobic performance enhancement after different concurrent strength- and endurance-training designs.
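
    The claim can be made concrete with the two-parameter critical power model. Assuming P = CP + W'/t for constant work rate and integrating the work done above CP along a linear ramp of slope S until it exhausts W' gives a predicted peak power of CP + sqrt(2·S·W'). A minimal sketch with illustrative parameter values (not values from this review):

    ```python
    import math

    CP = 250.0        # critical power (W)
    W_prime = 20e3    # W' (J)
    S = 25.0 / 60.0   # ramp slope (W/s), i.e. 25 W/min

    # On a ramp P(t) = S*t, work above CP accumulates as S*(t - t_CP)^2 / 2;
    # setting this equal to W' at exhaustion yields the peak power below.
    W_peak = CP + math.sqrt(2 * S * W_prime)
    print(f"predicted Wpeak = {W_peak:.0f} W")
    ```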

  9. Joint modeling of ChIP-seq data via a Markov random field model

    NARCIS (Netherlands)

    Bao, Yanchun; Vinciotti, Veronica; Wit, Ernst; 't Hoen, Peter A C

    Chromatin ImmunoPrecipitation-sequencing (ChIP-seq) experiments have now become routine in biology for the detection of protein-binding sites. In this paper, we present a Markov random field model for the joint analysis of multiple ChIP-seq experiments. The proposed model naturally accounts for

  10. New morphometric measurements of craters and basins on Mercury and the Moon from MESSENGER and LRO altimetry and image data: An observational framework for evaluating models of peak-ring basin formation

    Science.gov (United States)

    Baker, David M. H.; Head, James W.

    2013-09-01

    Peak-ring basins are important in understanding the formation of large impact basins on planetary bodies; however, debate still exists as to how peak rings form. Using altimetry and image data from the MESSENGER and LRO spacecraft in orbit around Mercury and the Moon, respectively, we measured the morphometric properties of impact structures in the transition from complex craters with central peaks to peak-ring basins. This work provides a comprehensive morphometric framework for craters and basins in this morphological transition that may be used to further develop and refine various models for peak-ring formation. First, we updated catalogs of craters and basins ≥50 km in diameter possessing interior peaks on Mercury and the Moon. Crater degradation states were assessed and morphometric measurements were made on the freshest examples, including depths to the crater floor, areas contained within the outlines of the rim crest and floor, crater volumes, and rim-crest and floor circularity. There is an abrupt decrease in crater depth in the crater to basin transition on both Mercury and the Moon. Peak-ring basins have larger floor area/interior area ratios than complex craters; this ratio is larger in craters on Mercury than on the Moon. The dimensions of central peaks (heights, areas, and volumes exposed above the surface) increase continuously up to the transition to basins. Compared with central peaks, peak rings have reduced heights; however, all interior peaks are typically >1 km below the rim-crest elevations. Topographic profiles of peak-ring basins on Mercury and the Moon are distinct from complex craters and exhibit interior cavities or depressions that are bounded by the peak ring with outer annuli that are at higher elevations. We interpret the trends in floor and interior area to be largely due to differences in impact melt production and retention, although variations in types and thicknesses of impactites, including proximal ejecta, could also

  11. An analytical model for the determination of crystallite size and crystal lattice microstrain distributions in nanocrystalline materials from the variance of the X-ray diffraction peaks

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Departamento de Fisica Aplicada, Badajoz (Spain); Ortiz, A.L. [Universidad de Extremadura, Departamento de Ingenieria Mecanica, Energetica y de los Materiales, Badajoz (Spain); Cumbrera, F.L. [Universidad de Extremadura, Departamento de Fisica, Badajoz (Spain)

    2009-01-15

    An analytical model for the determination of crystallite size and crystal lattice microstrain distributions in nanocrystalline (nc) materials by X-ray diffractometry (XRD) is presented. It entails generalizing the variance method to establish analytically the connection between the variance coefficients of the physically broadened XRD peaks and the characteristic parameters of explicit distributions of crystallite sizes and crystal lattice microstrains, which results in a more detailed characterization of nc-materials. The proposed model is generic in nature and can be used under the assumption of different mathematical functions for the two distributions, suggesting that it may have an important role to play in the characterization of nc-materials. Nevertheless, the specialization to the case of nc-materials with a log-normal crystallite size distribution and three typical types of lattice microstrain is used as an illustration and to formulate explicit analytical expressions of interest. Finally, the usefulness of the proposed model is demonstrated on standard XRD profiles.
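
    A minimal numeric sketch of the variance-method ingredient: for a Lorentzian-like peak, the restricted variance grows linearly with the truncation range, and the slope and intercept are the variance coefficients the model connects to size/strain distributions (the profile below is synthetic, not a measured XRD peak):

    ```python
    import numpy as np

    x = np.linspace(-5.0, 5.0, 4001)      # 2-theta offset from peak (deg)
    I = 1.0 / (1.0 + (x / 0.15) ** 2)     # Lorentzian profile, HWHM 0.15

    ranges = np.linspace(1.0, 5.0, 30)
    W = []
    for s in ranges:
        m = np.abs(x) <= s
        c = (x[m] * I[m]).sum() / I[m].sum()                   # centroid
        W.append(((x[m] - c) ** 2 * I[m]).sum() / I[m].sum())  # variance

    # Variance-range coefficients: slope k and intercept W0 of W(sigma).
    k, W0 = np.polyfit(ranges, W, 1)
    print(f"slope k = {k:.4f}, intercept W0 = {W0:.4f}")
    ```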

  12. Milk yield persistency in Brazilian Gyr cattle based on a random regression model.

    Science.gov (United States)

    Pereira, R J; Verneque, R S; Lopes, P S; Santana, M L; Lagrotta, M R; Torres, R A; Vercesi Filho, A E; Machado, M A

    2012-06-15

    With the objective of evaluating measures of milk yield persistency, 27,000 test-day milk yield records from 3362 first lactations of Brazilian Gyr cows that calved between 1990 and 2007 were analyzed with a random regression model. Random additive genetic and permanent environmental effects were modeled using Legendre polynomials of order 4 and 5, respectively. Residual variance was modeled using five classes. The average lactation curve was modeled using a fourth-order Legendre polynomial. Heritability estimates for measures of persistency ranged from 0.10 to 0.25. Genetic correlations between measures of persistency and 305-day milk yield (Y305) ranged from -0.52 to 0.03. At high selection intensities for persistency measures and Y305, few animals were selected in common. As the selection intensity for the two traits decreased, a higher percentage of animals were selected in common. The average predicted breeding values for Y305 according to year of birth of the cows showed a substantial annual genetic gain. In contrast, no improvement in the average persistency breeding value was observed. We conclude that selection for total milk yield during lactation does not identify bulls or cows that are genetically superior in terms of milk yield persistency. A measure of persistency represented by the sum of deviations of the estimated breeding value for days 31 to 280 relative to the estimated breeding value for day 30 should be preferred in genetic evaluations of this trait in the Gyr breed, since this measure showed a medium heritability and a genetic correlation with 305-day milk yield close to zero. In addition, this measure is more adequate at the time of peak lactation, which occurs between days 25 and 30 after calving in this breed.

  13. Hedonic travel cost and random utility models of recreation

    Energy Technology Data Exchange (ETDEWEB)

    Pendleton, L. [Univ. of Southern California, Los Angeles, CA (United States); Mendelsohn, R.; Davis, E.W. [Yale Univ., New Haven, CT (United States). School of Forestry and Environmental Studies

    1998-07-09

    Micro-economic theory began as an attempt to describe, predict and value the demand and supply of consumption goods. Quality was largely ignored at first, but economists have started to address quality within the theory of demand and specifically the question of site quality, which is an important component of land management. This paper demonstrates that hedonic and random utility models emanate from the same utility theoretical foundation, although they make different estimation assumptions. Using a theoretically consistent comparison, both approaches are applied to examine the quality of wilderness areas in the Southeastern US. Data were collected on 4778 visits to 46 trails in 20 different forest areas near the Smoky Mountains. Visitor data came from permits and an independent survey. The authors limited the data set to visitors from within 300 miles of the North Carolina and Tennessee border in order to focus the analysis on single purpose trips. When consistently applied, both models lead to results with similar signs but different magnitudes. Because the two models are equally valid, recreation studies should continue to use both models to value site quality. Further, practitioners should be careful not to make simplifying a priori assumptions which limit the effectiveness of both techniques.

  14. Vibrations in glasses and Euclidean random matrix theory

    Energy Technology Data Exchange (ETDEWEB)

    Grigera, T.S.; Martin-Mayor, V.; Parisi, G. [Dipartimento di Fisica, Universita di Roma ' La Sapienza' , Rome (Italy); INFN Sezione di Roma - INFM Unita di Roma, Rome (Italy); Verrocchio, P. [Dipartimento di Fisica, Universita di Trento, Povo, Trento (Italy); INFM Unita di Trento, Trento (Italy)

    2002-03-11

    We study numerically and analytically a simple off-lattice model of scalar harmonic vibrations by means of Euclidean random matrix theory. Since the spectrum of this model shares the most puzzling spectral features with the high-frequency domain of glasses (non-Rayleigh broadening of the Brillouin peak, boson peak and secondary peak), Euclidean random matrix theory provides a single and fairly simple theoretical framework for their explanation. (author)

  15. Droplet localization in the random XXZ model and its manifestations

    Science.gov (United States)

    Elgart, A.; Klein, A.; Stolz, G.

    2018-01-01

    We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al (2018 J. Funct. Anal. (in press)). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb–Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.

  16. [Critique of the additive model of the randomized controlled trial].

    Science.gov (United States)

    Boussageon, Rémy; Gueyffier, François; Bejan-Angoulvant, Theodora; Felden-Dominiak, Géraldine

    2008-01-01

    Randomized, double-blind, placebo-controlled clinical trials are currently the best way to demonstrate the clinical effectiveness of drugs. Their methodology relies on the method of difference (John Stuart Mill), through which the observed difference between two groups (drug vs placebo) can be attributed to the pharmacological effect of the drug being tested. However, this additive model can be questioned in the event of statistical interactions between the pharmacological and the placebo effects. Evidence in different domains has shown that the placebo effect can influence the effect of the active principle. This article evaluates the methodological, clinical and epistemological consequences of this phenomenon. Topics treated include extrapolating results, accounting for heterogeneous results, demonstrating the existence of several factors in the placebo effect, the necessity of taking these factors into account for given symptoms or pathologies, and the problem of the "specific" effect.

  17. Random field Ising model and community structure in complex networks

    Science.gov (United States)

    Son, S.-W.; Jeong, H.; Noh, J. D.

    2006-04-01

    We propose a method to determine the community structure of a complex network. In this method the ground state problem of a ferromagnetic random field Ising model is considered on the network with the magnetic field Bs = +∞, Bt = -∞, and Bi = 0 for i ≠ s, t, for a node pair s and t. The ground state problem is equivalent to the so-called maximum flow problem, which can be solved exactly with a combinatorial optimization algorithm. The community structure is then identified from the ground state Ising spin domains for all pairs of s and t. Our method provides a criterion for the existence of the community structure, and is applicable equally well to unweighted and weighted networks. We demonstrate the performance of the method by applying it to the Barabási-Albert network, the Zachary karate club network, the scientific collaboration network, and the stock price correlation network.
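
    The construction above reduces community detection to a minimum s-t cut, which max-flow algorithms solve exactly. A minimal sketch with networkx on the Zachary karate club network mentioned in the record; pinning node 0 (the instructor) and node 33 (the president) plays the role of the infinite fields Bs and Bt:

        import networkx as nx

        def two_community_split(G, s, t):
            """Ground state of the ferromagnetic RFIM with Bs=+inf, Bt=-inf,
            obtained as the minimum s-t cut of the weighted network."""
            cut_value, (up_spins, down_spins) = nx.minimum_cut(G, s, t,
                                                               capacity="weight")
            return up_spins, down_spins

        G = nx.karate_club_graph()
        nx.set_edge_attributes(G, 1.0, "weight")   # unweighted case
        A, B = two_community_split(G, 0, 33)
        print(sorted(A))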

  18. Random Field Ising Models: Fractal Interfaces and their Implications

    Science.gov (United States)

    Bupathy, A.; Kumar, M.; Banerjee, V.; Puri, S.

    2017-10-01

    We use a computationally efficient graph-cut (GC) method to obtain exact ground states of the d = 3 random field Ising model (RFIM) on simple cubic (SC), body-centered cubic (BCC) and face-centered cubic (FCC) lattices with Gaussian, uniform and bimodal distributions for the disorder Δ. At small r, the correlation function C(r; Δ) shows a cusp singularity characterised by a non-integer roughness exponent α, signifying rough fractal interfaces with dimension d_f = d − α. In the paramagnetic phase (Δ > Δ_c), α ≃ 0.5 for all lattice and disorder types. In the ferromagnetic phase (Δ < Δ_c), … Fractal interfaces have important implications for growth and relaxation.

  19. Method of model reduction and multifidelity models for solute transport in random layered porous media

    Science.gov (United States)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    2017-09-01

    This work presents a method of model reduction that leads to models with three solutions of increasing fidelity (multifidelity models) for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the reduced model, we represent the (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. In contrast to the linear scaling with the correlation length and the mean velocity from macrodispersion theory, our model predicts a nonlinear and a quadratic dependence of the effective dispersion on the correlation length and the mean velocity, respectively. We observe that velocity fluctuations enhance dispersion in a nonmonotonic fashion (a stochastic spike phenomenon): the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero as the correlation length goes to infinity. Maximum enhancement in dispersion is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow. This information can be useful for engineering such random layered porous media. Numerical simulations are implemented to compare solutions of varying fidelity.

  20. Electronic Properties of Random Polymers: Modelling Optical Spectra of Melanins

    Science.gov (United States)

    Bochenek, Kinga; Gudowska-Nowak, Ewa

    2003-05-01

    Melanins are a group of complex pigments of biological origin, widely spread in all species from fungi to man. Among the diverse types of melanins, the human melanins, eumelanins, are brown or black nitrogen-containing pigments, mostly known for their photoprotective properties in human skin. We have undertaken theoretical studies aimed at understanding the absorption spectra of eumelanins and their chemical precursors. The structure of the biopigment is poorly defined, although it is believed to be composed of cross-linked heteropolymers based on indolequinones. As a basic model of the eumelanin structure, we have chosen pentamers containing hydroquinones (HQ) and/or 5,6-indolequinones (IQ) and/or semiquinones (SQ), often listed as structural melanin monomers. The eumelanin oligomers have been constructed as random compositions of basic monomers and optimized for the energy of bonding. Absorption spectra of model assemblies have been calculated within the semiempirical intermediate neglect of differential overlap (INDO) approximation. The model spectrum of eumelanin has then been obtained by summing the independent spectra of the individual polymers. Comparison with experimental data shows that the INDO/CI method reproduces well the characteristic properties of the experimental spectrum of synthetic eumelanins.

  1. Soil erosion rates in two karst peak-cluster depression basins of northwest Guangxi, China: Comparison of the RUSLE model with 137Cs measurements

    Science.gov (United States)

    Feng, Teng; Chen, Hongsong; Polyakov, Viktor O.; Wang, Kelin; Zhang, Xinbao; Zhang, Wei

    2016-01-01

    Reliable estimation of erosion in karst areas is difficult because of the heterogeneous nature of infiltration and sub-surface drainage. Understanding the processes involved is a key requirement for managing against karst rock desertification. This study used the Revised Universal Soil Loss Equation (RUSLE) to estimate annual soil erosion rates on hillslopes and compared them with the 137Cs budget in the depressions at two typical karst peak-cluster depression basins in northwest Guangxi, southwestern China. Runoff plot data were used to calibrate the slope length factor (L) of the RUSLE model by adjusting the accumulated-area threshold. The RUSLE model was sensitive to the value of the threshold and required DEMs with 1 m resolution, due to the discontinuous nature of the overland flow. The average annual soil erosion rates on hillslopes simulated by the RUSLE were 0.22 and 0.10 Mg ha−1 y−1 from 2006 through 2011 in the partially cultivated GZ1 and the undisturbed GZ2 basins, respectively. The corresponding deposition rates in the depressions agreed well with the 137Cs records when recent changes in precipitation and land use were taken into consideration. The study suggests that attention should be given to the RUSLE L factor when applying the RUSLE on karst hillslopes because of the discontinuous nature of runoff and the significant underground seepage during storm events that effectively reduces the effects of slope length.
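
    RUSLE itself is the product of six factors, A = R·K·L·S·C·P. A trivial sketch with hypothetical karst-hillslope factor values; the study's actual contribution, calibrating L through the accumulated-area threshold on a 1 m DEM, is not reproduced here:

        def rusle(R, K, L, S, C, P):
            """Annual soil loss A (Mg ha^-1 y^-1) as the product of RUSLE factors."""
            return R * K * L * S * C * P

        # Illustrative factor values only (not from the study)
        print(rusle(R=2500.0, K=0.02, L=1.1, S=2.4, C=0.002, P=1.0))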

  2. Kitt Peak speckle camera.

    Science.gov (United States)

    Breckinridge, J B; McAlister, H A; Robinson, W G

    1979-04-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and the isoplanatic patch of the atmosphere.

  3. Peak-Finding Algorithms.

    Science.gov (United States)

    Hung, Jui-Hung; Weng, Zhiping

    2017-03-01

    Microarray and next-generation sequencing technologies have greatly expedited the discovery of genomic DNA that can be enriched using various biochemical methods. Chromatin immunoprecipitation (ChIP) is a general method for enriching chromatin fragments that are specifically recognized by an antibody. The resulting DNA fragments can be assayed by microarray (ChIP-chip) or sequencing (ChIP-seq). This introduction focuses on ChIP-seq data analysis. The first step of analyzing ChIP-seq data is identifying regions in the genome that are enriched in a ChIP sample; these regions are called peaks. © 2017 Cold Spring Harbor Laboratory Press.
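
    At its core, a peak caller tests windowed read counts against a background model of read occurrence. A toy sketch under the simplifying assumption of a single genome-wide Poisson background (production callers model local background and input controls):

        import numpy as np
        from scipy.stats import poisson

        def call_peaks(counts, window=200, alpha=1e-3):
            """Flag windows whose read count exceeds a Poisson background."""
            bins = counts[:counts.size // window * window]
            bins = bins.reshape(-1, window).sum(axis=1)   # reads per window
            lam = bins.mean()                             # global background rate
            pvals = poisson.sf(bins - 1, lam)             # P(X >= observed)
            return np.flatnonzero(pvals < alpha)          # indices of peak windows

        rng = np.random.default_rng(0)
        cov = rng.poisson(1.0, 100_000)                   # background coverage
        cov[40_000:40_400] += rng.poisson(8.0, 400)       # an enriched region
        print(call_peaks(cov))                            # -> windows 200, 201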

  4. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    Science.gov (United States)

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  5. A random effect multiplicative heteroscedastic model for bacterial growth

    Directory of Open Access Journals (Sweden)

    Quinto Emiliano J

    2010-02-01

    Background: Predictive microbiology develops mathematical models that can predict the growth rate of a microorganism population under a set of environmental conditions. Many primary growth models have been proposed. However, when primary models are applied to bacterial growth curves, the biological variability is reduced to a single curve defined by some kinetic parameters (lag time and growth rate), and sometimes the models give poor fits in some regions of the curve. The development of a prediction band (from a set of bacterial growth curves) using non-parametric and bootstrap methods makes it possible to overcome that problem and include the biological variability of the microorganism in the modelling process. Results: Absorbance data from Listeria monocytogenes cultured at 22, 26, 38, and 42°C were selected under different environmental conditions of pH (4.5, 5.5, 6.5, and 7.4) and percentage of NaCl (2.5, 3.5, 4.5, and 5.5). Transformation of absorbance data to viable count data was carried out. A random effect multiplicative heteroscedastic model was considered to explain the dynamics of bacterial growth. The concept of a prediction band for microbial growth is proposed. The bootstrap method was used to obtain resamples from this model. An iterative procedure is proposed to overcome the computationally intensive task of calculating simultaneous prediction intervals, along time, for bacterial growth. The bands were narrower below the inflection point (0-8 h at 22°C, and 0-5.5 h at 42°C) and wider to the right of it (from 9 h onwards at 22°C, and from 7 h onwards at 42°C). A wider band was observed at 42°C than at 22°C when the curves reach their upper asymptote. Similar bands have been obtained for 26 and 38°C. Conclusions: The combination of nonparametric models and bootstrap techniques results in a good procedure to obtain reliable prediction bands in this context. Moreover, the new iterative algorithm proposed in this paper allows one to

  6. Stimulated luminescence emission from localized recombination in randomly distributed defects

    DEFF Research Database (Denmark)

    Jain, Mayank; Guralnik, Benny; Andersen, Martin Thalbitzer

    2012-01-01

    results in a highly asymmetric TL peak; this peak can be understood to derive from a continuum of several first-order TL peaks. Our model also shows an extended power law behaviour for OSL (or prompt luminescence), which is expected from localized recombination mechanisms in materials with random...

  7. Conditional random field modelling of interactions between findings in mammography

    Science.gov (United States)

    Kooi, Thijs; Mordang, Jan-Jurre; Karssemeijer, Nico

    2017-03-01

    Recent breakthroughs in training deep neural network architectures, in particular deep Convolutional Neural Networks (CNNs), made a big impact on vision research and are increasingly responsible for advances in Computer Aided Diagnosis (CAD). Since many natural scenes and medical images vary in size and are too large to feed to the networks as a whole, two-stage systems are typically employed, where in the first stage, small regions of interest in the image are located and presented to the network as training and test data. These systems allow us to harness accurate region-based annotations, making the problem easier to learn. However, information is processed purely locally and context is not taken into account. In this paper, we present preliminary work on the employment of a Conditional Random Field (CRF) that is trained on top of the CNN to model contextual interactions such as the presence of other suspicious regions, for mammography CAD. The model can easily be extended to incorporate other sources of information, such as symmetry, temporal change and various patient covariates, and is general in the sense that it can have application in other CAD problems.

  8. Critical Behavior of the Annealed Ising Model on Random Regular Graphs

    Science.gov (United States)

    Can, Van Hao

    2017-11-01

    In Giardinà et al. (ALEA Lat Am J Probab Math Stat 13(1):121-161, 2016), the authors defined an annealed Ising model on random graphs and proved limit theorems for the magnetization of this model on some random graphs including random 2-regular graphs. Then in Can (Annealed limit theorems for the Ising model on random regular graphs, arXiv:1701.08639, 2017), we generalized their results to the class of all random regular graphs. In this paper, we study the critical behavior of this model. In particular, we determine the critical exponents and prove a nonstandard limit theorem stating that the magnetization scaled by n^{3/4} converges to a specific random variable, with n the number of vertices of the random regular graphs.

  9. Estimating a DIF decomposition model using a random-weights linear logistic test model approach.

    Science.gov (United States)

    Paek, Insu; Fukuhara, Hirotaka

    2015-09-01

    A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.

  10. Solvable random-walk model with memory and its relations with Markovian models of anomalous diffusion

    Science.gov (United States)

    Boyer, D.; Romo-Cruz, J. C. R.

    2014-10-01

    Motivated by studies on the recurrent properties of animal and human mobility, we introduce a path-dependent random-walk model with long-range memory for which not only the mean-square displacement (MSD) but also the propagator can be obtained exactly in the asymptotic limit. The model consists of a random walker on a lattice, which, at a constant rate, stochastically relocates to a site occupied at some earlier time. This time in the past is chosen randomly according to a memory kernel, whose temporal decay can be varied via an exponent parameter. In the weakly non-Markovian regime, memory reduces the diffusion coefficient from the bare value. When the mean backward jump in time diverges, the diffusion coefficient vanishes and a transition to an anomalous subdiffusive regime occurs. Paradoxically, at the transition, the process is an anticorrelated Lévy flight. Although in the subdiffusive regime the model exhibits some features of the continuous time random walk with infinite mean waiting time, it belongs to another universality class. If memory is very long-ranged, a second transition takes place to a regime characterized by a logarithmic growth of the MSD with time. In this case the process is asymptotically Gaussian and effectively described as a scaled Brownian motion with a diffusion coefficient decaying as 1/t.
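
    A minimal simulation sketch of the walk described above; the relocation probability q, the kernel exponent beta, and the step count are illustrative choices, not parameters from the paper:

        import numpy as np

        def memory_walk(steps, q=0.1, beta=0.5, seed=1):
            """Walker that, with probability q per step, relocates to the site it
            occupied at a past time tau drawn with weight ~ (t - tau)^(-beta);
            otherwise it hops to a random nearest neighbor."""
            rng = np.random.default_rng(seed)
            path = np.zeros(steps, dtype=int)
            for t in range(1, steps):
                if rng.random() < q:
                    w = (t - np.arange(t)).astype(float) ** -beta  # memory kernel
                    tau = rng.choice(t, p=w / w.sum())
                    path[t] = path[tau]                            # revisit old site
                else:
                    path[t] = path[t - 1] + rng.choice((-1, 1))    # ordinary hop
            return path

        x = memory_walk(5_000)
        print(np.mean(x[-500:] ** 2))   # crude probe of the late-time MSD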

  11. Spatial peak-load pricing

    Energy Technology Data Exchange (ETDEWEB)

    Arellano, M. Soledad; Serra, Pablo [Universidad de Chile, Dept. of Industrial Engineering, Santiago (Chile)

    2007-03-15

    This article extends the traditional electricity peak-load pricing model to include transmission costs. In the context of a two-node, two-technology electric power system, where suppliers face inelastic demand, we show that when the marginal plant is located at the energy-importing center, generators located away from that center should pay the marginal capacity transmission cost; otherwise, consumers should bear this cost through capacity payments. Since electric power transmission is a natural monopoly, marginal-cost pricing does not fully cover costs. We propose distributing the revenue deficit among users in proportion to the surplus they derive from the service priced at marginal cost. (Author)

  12. Mendelian Randomization versus Path Models: Making Causal Inferences in Genetic Epidemiology.

    Science.gov (United States)

    Ziegler, Andreas; Mwambi, Henry; König, Inke R

    2015-01-01

    The term Mendelian randomization is popular in the current literature. The first aim of this work is to describe the idea of Mendelian randomization studies and the assumptions required for drawing valid conclusions. The second aim is to contrast Mendelian randomization and path modeling when different 'omics' levels are considered jointly. We define Mendelian randomization as introduced by Katan in 1986, and review its crucial assumptions. We introduce path models as the relevant additional component to the current use of Mendelian randomization studies in 'omics'. Real data examples for the association between lipid levels and coronary artery disease illustrate the use of path models. Numerous assumptions underlie Mendelian randomization, and they are difficult to fulfill in applications. Path models are suitable for investigating causality, and they should not be conflated with the term Mendelian randomization. In many applications, path modeling would be the appropriate analysis in addition to a simple Mendelian randomization analysis. Mendelian randomization and path models use different concepts for causal inference. Path modeling, but not simple Mendelian randomization analysis, is well suited to study causality with different levels of 'omics' data. © 2015 S. Karger AG, Basel.

  13. Critical behavior of the three-dimensional Ising model with anisotropic bond randomness at the ferromagnetic-paramagnetic transition line.

    Science.gov (United States)

    Papakonstantinou, T; Malakis, A

    2013-01-01

    We study the ±J three-dimensional (3D) Ising model with spatially uniaxial anisotropic bond randomness on the simple cubic lattice. The ±J random exchange is applied on the xy planes, whereas, in the z direction, only a ferromagnetic exchange is used. After sketching the phase diagram and comparing it with the corresponding isotropic case, the system is studied at the ferromagnetic-paramagnetic transition line using parallel tempering and a convenient concentration of antiferromagnetic bonds (p(z)=0; p(xy)=0.176). The numerical data clearly point to a second-order ferromagnetic-paramagnetic phase transition belonging to the same universality class as the 3D random Ising model. The smooth finite-size behavior of the effective exponents, describing the peaks of the logarithmic derivatives of the order parameter, provides an accurate estimate of the critical exponent 1/ν=1.463(3), and a collapse analysis of magnetization data gives an estimate of β/ν=0.516(7). These results are in agreement with previous work and, in particular, with that on the isotropic ±J three-dimensional Ising model at the ferromagnetic-paramagnetic transition line, indicating the irrelevance of the introduced anisotropy.

  14. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

    DEFF Research Database (Denmark)

    Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

    Data on maximum milking speed were obtained from dairy cows stationed at the ETH Zurich research farm. The main aims of this paper are (a) to evaluate whether the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models... of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions compared with the random walk model. However, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman... filter applications, the random walk model could give online prediction of breeding values. Hence, without waiting for whole-lactation records, genetic evaluation could be made when daily or monthly data become available.

  15. MODELING URBAN DYNAMICS USING RANDOM FOREST: IMPLEMENTING ROC AND TOC FOR MODEL EVALUATION

    Directory of Open Access Journals (Sweden)

    M. Ahmadlou

    2016-06-01

    The importance of the spatial accuracy of land use/cover change maps necessitates the use of high-performance models. To reach this goal, calibrating machine learning (ML) approaches to model land use/cover conversions has received increasing interest among scholars. This originates from the strength of these techniques, which powerfully account for the complex relationships underlying urban dynamics. Compared to other ML techniques, random forest has rarely been used for modeling urban growth. This paper, drawing on information from multi-temporal Landsat satellite images of 1985, 2000 and 2015, calibrates a random forest regression (RFR) model to quantify variable importance and simulate urban change spatial patterns. The results and performance of the RFR model were evaluated using two complementary tools, relative operating characteristics (ROC) and total operating characteristics (TOC), by overlaying the map of observed change and the modeled suitability map for land use change (error map). The suitability map produced by the RFR model showed an area under the curve of 82.48% for the ROC, which indicates very good performance and highlights its appropriateness for simulating urban growth.
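
    A minimal sketch of the RFR-plus-ROC workflow on synthetic data with scikit-learn; the driver variables, sample sizes and change rule are placeholders, and the TOC evaluation is omitted:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.random((3000, 6))          # e.g. slope, distance to roads, ...
        y = (X[:, 0] + 0.5 * X[:, 1]       # synthetic "urbanized" label
             + 0.2 * rng.standard_normal(3000) > 0.9).astype(int)

        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        rf.fit(X[:2000], y[:2000])
        suitability = rf.predict(X[2000:])             # change-suitability scores
        print("ROC AUC:", roc_auc_score(y[2000:], suitability))
        print("importances:", rf.feature_importances_.round(2))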

  16. Amplification of postwildfire peak flow by debris

    Science.gov (United States)

    Kean, Jason W.; McGuire, Luke; Rengers, Francis; Smith, Joel B.; Staley, Dennis M.

    2016-01-01

    In burned steeplands, the peak depth and discharge of postwildfire runoff can substantially increase from the addition of debris. Yet methods to estimate the increase over water flow are lacking. We quantified the potential amplification of peak stage and discharge using video observations of postwildfire runoff, compiled data on postwildfire peak flow (Qp), and a physically based model. Comparison of flood and debris flow data with similar distributions in drainage area (A) and rainfall intensity (I) showed that the median runoff coefficient (C = Qp/AI) of debris flows is 50 times greater than that of floods. The striking increase in Qp can be explained using a fully predictive model that describes the additional flow resistance caused by the emergence of coarse-grained surge fronts. The model provides estimates of the amplification of peak depth, discharge, and shear stress needed for assessing postwildfire hazards and constraining models of bedrock incision.
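
    A worked example of the runoff coefficient defined above, with hypothetical values for a small burned catchment:

        A = 0.5e6             # drainage area, m^2 (0.5 km^2, assumed)
        I = 0.020 / 3600.0    # rainfall intensity, m/s (20 mm/h, assumed)
        Qp = 0.7              # observed peak discharge, m^3/s (assumed)
        C = Qp / (A * I)      # dimensionless runoff coefficient
        print(C)              # ~0.25; a debris flow at the median 50x ratio
                              # would imply an effective C far above 1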

  17. Compensated Row-Column Ultrasound Imaging System Using Fisher-Tippett Multilayered Conditional Random Field Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Ben Daya

    3-D ultrasound imaging offers unique opportunities in the field of nondestructive testing that cannot be easily found in A-mode and B-mode images. To acquire a 3-D ultrasound image without a mechanically moving transducer, a 2-D array can be used. The row-column technique is preferred over a fully addressed 2-D array as it requires a significantly lower number of interconnections. Recent advances in 3-D row-column ultrasound imaging systems have largely focused on sensor design. However, these imaging systems face three intrinsic challenges that cannot be addressed by improving sensor design alone: speckle noise, sparsity of data in the imaged volume, and the spatially dependent point spread function of the imaging system. In this paper, we propose a compensated row-column ultrasound image reconstruction system using a Fisher-Tippett multilayered conditional random field model. Tests carried out on both simulated and real row-column ultrasound images show the effectiveness of our proposed system compared with other published systems. Visual assessment of the results shows our proposed system's potential for preserving detail and reducing speckle. Quantitative analysis shows that our proposed system outperforms previously published systems when evaluated with metrics such as Peak Signal to Noise Ratio, Coefficient of Correlation, and Effective Number of Looks. These results show the potential of our proposed system as an effective tool for enhancing 3-D row-column imaging.

  18. Late Noachian Icy Highlands climate model: Exploring the possibility of transient melting and fluvial/lacustrine activity through peak annual and seasonal temperatures

    Science.gov (United States)

    Palumbo, Ashley M.; Head, James W.; Wordsworth, Robin D.

    2018-01-01

    The nature of the Late Noachian climate of Mars remains one of the outstanding questions in the study of the evolution of martian geology and climate. Despite abundant evidence for flowing water (valley networks and open/closed basin lakes), climate models have had difficulties reproducing mean annual surface temperatures (MAT) > 273 K in order to generate the 'warm and wet' climate conditions presumed to be necessary to explain the observed fluvial and lacustrine features. Here, we consider a 'cold and icy' climate scenario, characterized by MAT ∼225 K and snow and ice distributed in the southern highlands, and ask: does the formation of the fluvial and lacustrine features require continuous 'warm and wet' conditions, or could seasonal temperature variation in a 'cold and icy' climate produce sufficient summertime ice melting and surface runoff to account for the observed features? To address this question, we employ the 3D Laboratoire de Météorologie Dynamique global climate model (LMD GCM) for early Mars and (1) analyze peak annual temperature (PAT) maps to determine where on Mars temperatures exceed freezing in the summer season, (2) produce temperature time series at three valley network systems and compare the duration of the time during which temperatures exceed freezing with seasonal temperature variations in the Antarctic McMurdo Dry Valleys (MDV), where similar fluvial and lacustrine features are observed, and (3) perform a positive-degree-day analysis to determine the annual volume of meltwater produced through this mechanism, estimate the necessary duration that this process must repeat to produce sufficient meltwater for valley network formation, and estimate whether runoff rates predicted by this mechanism are comparable to those required to form the observed geomorphology of the valley networks. When considering an ambient CO2 atmosphere, characterized by MAT ∼225 K, we find that: (1) PAT can exceed the melting point of water (>273 K) in
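
    The positive-degree-day step reduces to summing daily temperature excesses over freezing and scaling by a degree-day factor. A minimal sketch with a hypothetical sinusoidal annual cycle around MAT ∼225 K; the amplitude and the degree-day factor are illustrative stand-ins for GCM output and calibrated melt factors:

        import numpy as np

        def pdd_melt(daily_temps_K, ddf=3.0):
            """Meltwater (mm w.e.) = degree-day factor (mm/K/day) times the sum
            of daily mean temperatures above 273.15 K."""
            t = np.asarray(daily_temps_K) - 273.15
            return ddf * np.clip(t, 0.0, None).sum()

        sols = np.linspace(0.0, 2.0 * np.pi, 669)       # one Mars year of sols
        temps = 225.0 + 55.0 * np.sin(sols)             # peaks briefly exceed 273 K
        print(pdd_melt(temps), "mm w.e. per Mars year")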

  19. Andes Hantavirus-Infection of a 3D Human Lung Tissue Model Reveals a Late Peak in Progeny Virus Production Followed by Increased Levels of Proinflammatory Cytokines and VEGF-A.

    Science.gov (United States)

    Sundström, Karin B; Nguyen Hoang, Anh Thu; Gupta, Shawon; Ahlm, Clas; Svensson, Mattias; Klingström, Jonas

    2016-01-01

    Andes virus (ANDV) causes hantavirus pulmonary syndrome (HPS), a severe acute disease with a 40% case fatality rate. Humans are infected via inhalation, and the lungs are severely affected during HPS, but little is known regarding the effects of ANDV-infection of the lung. Using a 3-dimensional air-exposed organotypic human lung tissue model, we analyzed progeny virus production and cytokine-responses after ANDV-infection. After a 7-10 day period of low progeny virus production, a sudden peak in progeny virus levels was observed during approximately one week. This peak in ANDV-production coincided in time with activation of innate immune responses, as shown by induction of type I and III interferons and ISG56. After the peak in ANDV production a low, but stable, level of ANDV progeny was observed until 39 days after infection. Compared to uninfected models, ANDV caused long-term elevated levels of eotaxin-1, IL-6, IL-8, IP-10, and VEGF-A that peaked 20-25 days after infection, i.e., after the observed peak in progeny virus production. Notably, eotaxin-1 was only detected in supernatants from infected models. In conclusion, these findings suggest that ANDV replication in lung tissue elicits a late proinflammatory immune response with possible long-term effects on the local lung cytokine milieu. The change from an innate to a proinflammatory response might be important for the transition from initial asymptomatic infection to severe clinical disease, HPS.

  20. A Random Matrix Approach for Quantifying Model-Form Uncertainties in Turbulence Modeling

    CERN Document Server

    Xiao, Heng; Ghanem, Roger G

    2016-01-01

    With the ever-increasing use of Reynolds-Averaged Navier-Stokes (RANS) simulations in mission-critical applications, the quantification of model-form uncertainty in RANS models has attracted attention in the turbulence modeling community. Recently, a physics-based, nonparametric approach for quantifying model-form uncertainty in RANS simulations has been proposed, where Reynolds stresses are projected to physically meaningful dimensions and perturbations are introduced only in the physically realizable limits. However, a challenge associated with this approach is to assess the amount of information introduced in the prior distribution and to avoid imposing unwarranted constraints. In this work we propose a random matrix approach for quantifying model-form uncertainties in RANS simulations with the realizability of the Reynolds stress guaranteed. Furthermore, the maximum entropy principle is used to identify the probability distribution that satisfies the constraints from available information but without int...

  1. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope.

    Science.gov (United States)

    Quan, Wei; Lv, Lin; Liu, Baiqi

    2014-11-01

    In order to improve the atomic spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weakly stable characteristics of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on the autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of the random ASG drift is taken as the study object. The time series of the random ASG drift is acquired by analyzing and preprocessing the measured data of the ASG. The linear section of the model is established based on the AR technique. After that, the nonlinear section of the model is built based on the GP technique, and GA is used to optimize the coefficients of the mathematical expression acquired by GP in order to obtain a more accurate model. The simulation results indicate that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.

  2. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

    Energy Technology Data Exchange (ETDEWEB)

    Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)

    2014-11-15

    In order to improve the atomic spin gyroscope's operational accuracy and compensate the random error caused by the nonlinear and weakly stable characteristics of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on the autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of the random ASG drift is taken as the study object. The time series of the random ASG drift is acquired by analyzing and preprocessing the measured data of the ASG. The linear section of the model is established based on the AR technique. After that, the nonlinear section of the model is built based on the GP technique, and GA is used to optimize the coefficients of the mathematical expression acquired by GP in order to obtain a more accurate model. The simulation results indicate that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate the ASG's random drift and improve the stability of the system.
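
    The 'linear section' of the hybrid model is an ordinary AR fit to the drift series; the residual it leaves is what the GP + GA stage then approximates. A minimal least-squares AR(2) sketch on a synthetic drift series (the GP + GA optimization itself is not reproduced):

        import numpy as np

        def fit_ar(series, p=2):
            """Least-squares AR(p) fit; returns coefficients and residual."""
            y = series[p:]
            X = np.column_stack([series[p - k - 1:series.size - k - 1]
                                 for k in range(p)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coef, y - X @ coef

        rng = np.random.default_rng(2)
        t = np.arange(2000.0)
        drift = 0.8 * np.sin(t / 150.0) + 0.01 * rng.normal(0, 1, t.size).cumsum()
        coef, resid = fit_ar(drift)
        print(coef, resid.var() / drift.var())   # share left for the nonlinear stage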

  3. Random regression models in the evaluation of the growth curve of Simbrasil beef cattle

    NARCIS (Netherlands)

    Mota, M.; Marques, F.A.; Lopes, P.S.; Hidalgo, A.M.

    2013-01-01

    Random regression models were used to estimate the types and orders of random effects of (co)variance functions in the description of the growth trajectory of the Simbrasil cattle breed. Records for 7049 animals totaling 18,677 individual weighings were submitted to 15 models from the third to the

  4. Technology diffusion in hospitals : A log odds random effects regression model

    NARCIS (Netherlands)

    Blank, J.L.T.; Valdmanis, V.G.

    2013-01-01

    This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to describe the

  5. Technology diffusion in hospitals: A log odds random effects regression model

    NARCIS (Netherlands)

    J.L.T. Blank (Jos); V.G. Valdmanis (Vivian G.)

    2015-01-01

    This study identifies the factors that affect the diffusion of hospital innovations. We apply a log odds random effects regression model on hospital micro data. We introduce the concept of clustering innovations and the application of a log odds random effects regression model to

  6. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  7. Peak flow meter use - slideshow

    Science.gov (United States)

    medlineplus.gov/ency/presentations/100202.htm — Peak flow meter use (slide series, part one, 7 slides). Overview: A peak flow meter helps you check how well your asthma ...

  8. Regressor and random-effects dependencies in multilevel models

    NARCIS (Netherlands)

    Ebbes, P.; Bockenholt, U; Wedel, M.

    The objectives of this paper are (1) to review methods that can be used to test for different types of random effects and regressor dependencies, (2) to present results from Monte Carlo studies designed to investigate the performance of these methods, and (3) to discuss estimation methods that can

  9. Scale-free random graphs and Potts model

    Indian Academy of Sciences (India)

    We introduce a simple algorithm that constructs scale-free random graphs efficiently: each vertex i has a prescribed weight P_i ∝ i^(−μ) (0 < μ < 1) and an edge can connect vertices i and j with rate P_i P_j. The corresponding equilibrium ensemble is identified and the problem is solved by the q → 1 limit of the q-state Potts ...

  10. Random walk models of large-scale structure

    Indian Academy of Sciences (India)

    This paper describes the insights gained from the excursion set approach, in which various questions about the phenomenology of large-scale structure formation can be mapped to problems associated with the first crossing distribution of appropriately defined barriers by random walks. Much of this is ...

  11. Peak distortion effects in analytical ion chromatography.

    Science.gov (United States)

    Wahab, M Farooq; Anderson, Jordan K; Abdelrady, Mohamed; Lucy, Charles A

    2014-01-07

    The elution profile of chromatographic peaks provides fundamental understanding of the processes that occur in the mobile phase and the stationary phase. Major advances have been made in the column chemistry and suppressor technology in ion chromatography (IC) to handle a variety of sample matrices and ions. However, if the samples contain high concentrations of matrix ions, the overloaded peak elution profile is distorted. Consequently, the trace peaks shift their positions in the chromatogram in a manner that depends on the peak shape of the overloading analyte. In this work, the peak shapes in IC are examined from a fundamental perspective. Three commercial IC columns AS16, AS18, and AS23 were studied with borate, hydroxide and carbonate as suppressible eluents. Monovalent ions (chloride, bromide, and nitrate) are used as model analytes under analytical (0.1 mM) to overload conditions (10-500 mM). Both peak fronting and tailing are observed. On the basis of competitive Langmuir isotherms, if the eluent anion is more strongly retained than the analyte ion on an ion exchanger, the analyte peak is fronting. If the eluent is more weakly retained on the stationary phase, the analyte peak always tails under overload conditions regardless of the stationary phase capacity. If the charge of the analyte and eluent anions are different (e.g., Br(-) vs CO3(2-)), the analyte peak shapes depend on the eluent concentration in a more complex pattern. It was shown that there are interesting similarities with peak distortions due to strongly retained mobile phase components in other modes of liquid chromatography.

  12. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
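
    The AR-to-MA transformation mentioned above is just the model's impulse response: feeding a unit pulse through the AR recursion yields the MA pulse shape. A minimal sketch:

        def ar_to_ma(phi, n_terms=10):
            """Impulse response of an AR model = coefficients of its MA form."""
            psi = [1.0] + [0.0] * (n_terms - 1)
            for j in range(1, n_terms):
                psi[j] = sum(phi[k] * psi[j - k - 1]
                             for k in range(min(len(phi), j)))
            return psi

        print(ar_to_ma([0.9]))        # AR(1): geometric pulse 1, 0.9, 0.81, ...
        print(ar_to_ma([0.5, 0.3]))   # AR(2) pulse shape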

  13. Fuzzy Field Theory as a Random Matrix Model

    Science.gov (United States)

    Tekel, Juraj

    This dissertation considers the theory of scalar fields on fuzzy spaces from the point of view of random matrices. First we define random matrix ensembles, which are a natural description of such a theory. These ensembles are new, and the novel feature is the presence of a kinetic term in the probability measure, which couples the random matrix to a set of external matrices and thus breaks the original symmetry. Considering the case of a free-field ensemble, which is a generalization of a Gaussian matrix ensemble, we develop a technique to compute expectation values of the observables of the theory based on explicit Wick contractions, and we write down recursion rules for these. We show that the eigenvalue distribution of the random matrix follows the Wigner semicircle distribution with a rescaled radius. We also compute distributions of the matrix Laplacian of the random matrix given by the new term and demonstrate that the eigenvalues of these two matrices are correlated. We demonstrate the robustness of the method by computing expectation values and distributions for more complicated observables. We then consider the ensemble corresponding to an interacting field theory, with a quartic interaction. We use the same method to compute the distribution of the eigenvalues and show that the presence of the kinetic term rescales the distribution given by the original theory, which is a polynomially deformed Wigner semicircle. We compute the eigenvalue distribution of the matrix Laplacian and the joint distribution up to second order in the correlation, and we show that the correlation between the two changes from the free-field case. Finally, as an application of these results, we compute the phase diagram of the fuzzy scalar field theory, find multiscaling which stabilizes this diagram in the limit of large matrices, and compare it with the results obtained numerically and by considering the kinetic part as a perturbation.

  14. Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.

    Science.gov (United States)

    Long, Jeffrey D; Loeber, Rolf; Farrington, David P

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
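
    The numerical averaging described above is easy to reproduce: average the subject-specific (random-intercept) probability curve over the intercept distribution to recover the marginal, group-level curve. A sketch with hypothetical conditional coefficients and Monte Carlo integration:

        import numpy as np

        def marginal_prob(x, beta, tau, n_draws=100_000, seed=3):
            """Average a random-intercept logistic curve over u ~ N(0, tau^2)."""
            rng = np.random.default_rng(seed)
            u = rng.normal(0.0, tau, n_draws)             # random intercepts
            eta = beta[0] + beta[1] * x + u               # subject-level predictor
            return np.mean(1.0 / (1.0 + np.exp(-eta)))    # marginal probability

        for x in (-1.0, 0.0, 1.0, 2.0):   # hypothetical centered age values
            print(x, round(marginal_prob(x, beta=(-1.0, 0.8), tau=1.5), 3))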

  15. Stochastic model reduction for robust dynamical characterization of structures with random parameters

    Science.gov (United States)

    Ghienne, Martin; Blanzé, Claude; Laurent, Luc

    2017-12-01

    In this paper, we characterize random eigenspaces with a non-intrusive method based on the decoupling of random eigenvalues from their corresponding random eigenvectors. This method allows us to estimate the first statistical moments of the random eigenvalues of the system with a reduced number of deterministic finite element computations. The originality of this work is to adapt the method used to estimate each random eigenvalue depending on a global accuracy requirement. This allows us to ensure a minimal computational cost. The stochastic model of the structure is thus reduced by exploiting specific properties of random eigenvectors associated with the random eigenfrequencies being sought. An indicator with no additional computation cost is proposed to identify when the method needs to be enhanced. Finally, a simple three-beam frame and an industrial structure illustrate the proposed approach.

  16. Activated aging dynamics and effective trap model description in the random energy model

    Science.gov (United States)

    Baity-Jesi, M.; Biroli, G.; Cammarota, C.

    2018-01-01

    We study the out-of-equilibrium aging dynamics of the random energy model (REM) ruled by a single spin-flip Metropolis dynamics. We focus on the dynamical evolution taking place on time scales diverging with the system size. Our aim is to show to what extent the activated dynamics displayed by the REM can be described in terms of an effective trap model. We identify two time regimes: the first one corresponds to the process of escaping from a basin in the energy landscape and to the subsequent exploration of high-energy configurations, whereas the second one corresponds to the evolution from one deep basin to another. By combining numerical simulations with analytical arguments we show why the trap model description does not hold in the former but becomes exact in the latter.
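
    The effective description referred to above is Bouchaud's trap model: exponentially distributed depths with Arrhenius escape times give heavy-tailed trapping and aging. A minimal simulation sketch (parameters are illustrative, and the REM landscape itself is not simulated):

        import numpy as np

        def depth_at(t_w, n_traps=10_000, T=0.5, rng=np.random.default_rng(4)):
            """Trap depth occupied at waiting time t_w under uniform jumps
            between traps with mean residence times tau = exp(E / T)."""
            E = rng.exponential(1.0, n_traps)   # depths, rho(E) = exp(-E)
            tau = np.exp(E / T)                 # Arrhenius escape times
            t, trap = 0.0, rng.integers(n_traps)
            while True:
                dwell = rng.exponential(tau[trap])
                if t + dwell > t_w:
                    return E[trap]              # trap occupied at time t_w
                t += dwell
                trap = rng.integers(n_traps)    # jump to a random trap

        print(np.mean([depth_at(1e3) for _ in range(50)]))  # deep traps dominate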

  17. Modeling observation error and its effects in a random walk/extinction model.

    Science.gov (United States)

    Buonaccorsi, John P; Staudenmayer, John; Carreras, Maximo

    2006-11-01

    This paper examines the consequences of observation errors for the "random walk with drift", a model that incorporates density independence and is frequently used in population viability analysis. Exact expressions are given for biases in estimates of the mean, variance and growth parameters under very general models for the observation errors. For other quantities, such as the finite rate of increase, and probabilities about population size in the future we provide and evaluate approximate expressions. These expressions explain the biases induced by observation error without relying exclusively on simulations, and also suggest ways to correct for observation error. A secondary contribution is a careful discussion of observation error models, presented in terms of either log-abundance or abundance. This discussion recognizes that the bias and variance in observation errors may change over time, the result of changing sampling effort or dependence on the underlying population being sampled.

  18. Random materials modeling : Statistical approach proposal for recycling materials

    OpenAIRE

    Jeong, Jena; Wang, L.; Schmidt, Franziska; LEKLOU, NORDINE; Ramezani, Hamidreza

    2015-01-01

    The current paper aims to promote the application of demolition waste in civil construction. To achieve this goal, two main physical properties, i.e., the dry density and water absorption of the recycled aggregates, have been chosen and studied in a first stage. The material moduli of the recycled materials, i.e., the Lamé coefficients, strongly depend on the porosity. Moreover, recycled materials should be considered as random materials. As a result, the statistical approach...

  19. Modelling mesoporous alumina microstructure with 3D random models of platelets.

    Science.gov (United States)

    Wang, H; Pietrasanta, A; Jeulin, D; Willot, F; Faessel, M; Sorbier, L; Moreaud, M

    2015-12-01

    This work focuses on a mesoporous material made up of nanometric alumina 'platelets' of unknown shape. We develop a 3D random microstructure to model the porous material, based on 2D transmission electron microscopy (TEM) images, without prior knowledge of the spatial distribution of alumina inside the material. The TEM images, acquired on samples with thickness 300 nm, a scale much larger than the platelets' size, are too blurry and noisy to allow one to distinguish platelets or platelet aggregates individually. In a first step, the TEM image correlation function and integral range are estimated. The presence of long-range fluctuations, due to the inhomogeneous TEM detection, is detected and corrected by filtering. The corrected correlation function is used as a morphological descriptor for the model. After testing a Boolean model of platelets, a two-scale model of the microstructure is introduced to replicate the statistical dispersion of platelets observed in the TEM images. Accordingly, a set of two-scale Boolean models with varying physically admissible platelet shapes is proposed. Upon optimization, the model takes into account the dispersion of platelets in the microstructure as observed in the TEM images. Compared with X-ray diffraction and nitrogen porosimetry data, the model is found to be in good agreement with the material in terms of specific surface area. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  20. A Mixture Proportional Hazards Model with Random Effects for Response Times in Tests

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias

    2016-01-01

    In this article, a new model for test response times is proposed that combines latent class analysis and the proportional hazards model with random effects in a similar vein as the mixture factor model. The model assumes the existence of different latent classes. In each latent class, the response times are distributed according to a…

  1. Numerical Simulation of Entropy Growth for a Nonlinear Evolutionary Model of Random Markets

    Directory of Open Access Journals (Sweden)

    Mahdi Keshtkar

    2016-01-01

    In this communication, the generalized continuous economic model for random markets is revisited. In this model for random markets, agents trade in pairs and exchange their money in a random and conservative way. They display the exponential wealth distribution as the asymptotic equilibrium, independently of the effectiveness of the transactions and of the limitation of the total wealth. In the current work, the entropy of the mentioned model is defined, and some theorems on the entropy growth of this evolutionary problem are given. Furthermore, the entropy increase is verified by simulation on some numerical examples.
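
    The exchange rule and the entropy growth are straightforward to reproduce. A minimal sketch in which two random agents pool and uniformly re-split their money at each step, with a histogram-based entropy estimate (agent count, step count and bin number are arbitrary choices):

        import numpy as np

        def trade(money, n_steps, seed=5):
            """Pairwise conservative random exchange of money between agents."""
            rng = np.random.default_rng(seed)
            m = money.copy()
            for _ in range(n_steps):
                i, j = rng.choice(m.size, 2, replace=False)
                total = m[i] + m[j]                 # pooled money is conserved
                eps = rng.random()                  # random split fraction
                m[i], m[j] = eps * total, (1.0 - eps) * total
            return m

        def entropy(m, bins=50):
            p, _ = np.histogram(m, bins=bins)
            p = p[p > 0] / p.sum()
            return -np.sum(p * np.log(p))

        m0 = np.ones(5000)                        # equal initial wealth
        m1 = trade(m0, 200_000)
        print(entropy(m0), "->", entropy(m1))     # entropy grows toward equilibrium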

  2. A unifying framework for marginalized random intercept models of correlated binary outcomes

    Science.gov (United States)

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.

    2013-01-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871

  3. A unifying framework for marginalized random intercept models of correlated binary outcomes.

    Science.gov (United States)

    Swihart, Bruce J; Caffo, Brian S; Crainiceanu, Ciprian M

    2014-08-01

    We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts.

  4. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models

    Science.gov (United States)

    Wang, Wei; Griswold, Michael E.

    2016-01-01

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. The marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at the population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects, and then use the calculated difference to assess the overall exposure effect. Maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration over the random effects. We use these methods to carefully analyze two real datasets. PMID:27449636
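
    As a rough illustration of the quadrature step mentioned in the abstract, the sketch below (not the authors' code; all parameter values are hypothetical) marginalizes the conditional mean of a left-censored-at-zero outcome over a normal random effect with Gauss-Hermite quadrature, and uses the difference of marginal means as an overall exposure effect.

    ```python
    import numpy as np
    from numpy.polynomial.hermite import hermgauss
    from scipy.stats import norm

    def tobit_mean(mu, tau):
        """E[max(0, mu + eps)] for eps ~ N(0, tau^2): the censored-normal mean."""
        z = mu / tau
        return mu * norm.cdf(z) + tau * norm.pdf(z)

    def marginal_mean(x_beta, sigma_b, tau, n_nodes=20):
        """Marginalize the conditional Tobit mean over b ~ N(0, sigma_b^2) with
        Gauss-Hermite quadrature:
        int f(b) phi(b) db ~ (1/sqrt(pi)) * sum_k w_k f(sqrt(2)*sigma_b*x_k)."""
        nodes, weights = hermgauss(n_nodes)
        b = np.sqrt(2.0) * sigma_b * nodes
        return float((weights * tobit_mean(x_beta + b, tau)).sum() / np.sqrt(np.pi))

    # Population-averaged exposure effect on the original outcome scale,
    # under hypothetical parameter values:
    beta0, beta_exposure, sigma_b, tau = 0.5, 1.0, 1.2, 0.8
    effect = (marginal_mean(beta0 + beta_exposure, sigma_b, tau)
              - marginal_mean(beta0, sigma_b, tau))
    print(f"overall exposure effect: {effect:.3f}")
    ```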

  5. Identification and estimation of nonseparable single-index models in panel data with correlated random effects

    NARCIS (Netherlands)

    Cizek, Pavel; Lei, Jinghua

    The identification in nonseparable single-index models with correlated random effects is considered in panel data with a fixed number of time periods. The identification assumption is based on the correlated random effects structure. Under this assumption, the parameters of interest are identified

  6. Identification and Estimation of Nonseparable Single-Index Models in Panel Data with Correlated Random Effects

    NARCIS (Netherlands)

    Cizek, P.; Lei, J.

    2013-01-01

    Abstract: The identification of parameters in nonseparable single-index models with correlated random effects is considered in the context of panel data with a fixed number of time periods. The identification assumption is based on the correlated random-effect structure: the distribution of

  7. Deriving Genomic Breeding Values for Residual Feed Intake from Covariance Functions of Random Regression Models

    DEFF Research Database (Denmark)

    Strathe, Anders B; Mark, Thomas; Nielsen, Bjarne

    Random regression models were used to estimate covariance functions between cumulated feed intake (CFI) and body weight (BW) in 8424 Danish Duroc pigs. Random regressions on second order Legendre polynomials of age were used to describe genetic and permanent environmental curves in BW and CFI. Ba...

  8. Local lattice relaxations in random metallic alloys: Effective tetrahedron model and supercell approach

    DEFF Research Database (Denmark)

    Ruban, Andrei; Simak, S.I.; Shallcross, S.

    2003-01-01

    We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys as well as the dilute limit of Au-ri...

  9. Modeling high-peak-power few-cycle field waveform generation by optical parametric amplification in the long-wavelength infrared.

    Science.gov (United States)

    Voronin, A A; Lanin, A A; Zheltikov, A M

    2016-10-03

    Extended coupled-wave analysis of optical parametric chirped-pulse amplification (OPCPA) reveals regimes whereby high-peak-power few-cycle pulses can be generated in the long-wavelength infrared (LWIR) spectral range. Broadband OPCPA in suitable nonlinear crystals pumped at around 2 μm and seeded either through the signal or the idler input is shown to enable the generation of high-power field waveforms with pulse widths shorter than two field cycles within the entire LWIR range.

  10. Bayesian Peak Picking for NMR Spectra

    Directory of Open Access Journals (Sweden)

    Yichen Cheng

    2014-02-01

    Full Text Available Protein structure determination is a very important topic in structural genomics, which helps people to understand a variety of biological functions such as protein-protein interactions, protein–DNA interactions and so on. Nowadays, nuclear magnetic resonance (NMR) is often used to determine the three-dimensional structures of proteins in vivo. This study aims to automate the peak picking step, the most important and tricky step in NMR structure determination. We propose to model the NMR spectrum by a mixture of bivariate Gaussian densities and use the stochastic approximation Monte Carlo algorithm as the computational tool to solve the problem. Under the Bayesian framework, the peak picking problem is cast as a variable selection problem. The proposed method can automatically distinguish true peaks from false ones without preprocessing the data. To the best of our knowledge, this is the first effort in the literature that tackles the peak picking problem for NMR spectrum data using a Bayesian method.
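
    A minimal sketch of the spectrum model named in the abstract, a weighted mixture of bivariate Gaussian densities evaluated on a 2D grid, is given below (peak locations, weights and covariances are hypothetical; the stochastic approximation Monte Carlo search itself is not shown).

    ```python
    import numpy as np

    def bivariate_gaussian(x, y, mu, cov):
        """Density of a single 2D Gaussian peak at grid points (x, y)."""
        d = np.stack([x - mu[0], y - mu[1]], axis=-1)
        inv = np.linalg.inv(cov)
        norm_const = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
        return norm_const * np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, inv, d))

    def spectrum_model(x, y, peaks):
        """Mixture model of the spectrum: a sum of weighted Gaussian peaks."""
        z = np.zeros_like(x)
        for w, mu, cov in peaks:
            z += w * bivariate_gaussian(x, y, mu, cov)
        return z

    # Hypothetical two-peak spectrum on a grid
    xx, yy = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
    peaks = [(1.0, (3.0, 4.0), np.array([[0.10, 0.02], [0.02, 0.08]])),
             (0.6, (7.0, 6.5), np.array([[0.05, 0.00], [0.00, 0.05]]))]
    z = spectrum_model(xx, yy, peaks)
    print(z.shape, z.max())
    ```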

  11. Bayesian Peak Picking for NMR Spectra

    KAUST Repository

    Cheng, Yichen

    2014-02-01

    Protein structure determination is a very important topic in structural genomics, which helps people to understand a variety of biological functions such as protein-protein interactions, protein–DNA interactions and so on. Nowadays, nuclear magnetic resonance (NMR) is often used to determine the three-dimensional structures of proteins in vivo. This study aims to automate the peak picking step, the most important and tricky step in NMR structure determination. We propose to model the NMR spectrum by a mixture of bivariate Gaussian densities and use the stochastic approximation Monte Carlo algorithm as the computational tool to solve the problem. Under the Bayesian framework, the peak picking problem is cast as a variable selection problem. The proposed method can automatically distinguish true peaks from false ones without preprocessing the data. To the best of our knowledge, this is the first effort in the literature that tackles the peak picking problem for NMR spectrum data using a Bayesian method.

  12. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    Science.gov (United States)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.

  13. Predicting Peak Flows following Forest Fires

    Science.gov (United States)

    Elliot, William J.; Miller, Mary Ellen; Dobre, Mariana

    2016-04-01

    Following forest fires, peak flows in perennial and ephemeral streams often increase by a factor of 10 or more. This increase in peak flow rate may overwhelm existing downstream structures, such as road culverts, causing serious damage to road fills at stream crossings. In order to predict peak flow rates following wildfires, we have applied two different tools. One is based on the USDA Natural Resources Conservation Service Curve Number (CN) method; the other applies the Water Erosion Prediction Project (WEPP) model to the watershed. In our presentation, we will describe the science behind the two methods and present the main variables for each model. We will then provide an example comparing the two methods on a fire-prone watershed upstream of the City of Flagstaff, Arizona, USA, where a fire spread model was applied for current fuel loads and for likely fuel loads following a fuel reduction treatment. When applying the curve number method, determining the time to peak flow can be problematic for low severity fires because runoff follows both surface paths and shallow lateral flow. The WEPP watershed version incorporates shallow lateral flow into stream channels. However, the version of the WEPP model used for this study did not have channel routing capabilities, but rather relied on regression relationships to estimate peak flows from individual hillslope polygon peak runoff rates. We found that the two methods gave similar results if applied correctly, with the WEPP predictions somewhat greater than the CN predictions. Later releases of the WEPP model have incorporated alternative methods for routing peak flows that need to be evaluated.
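
    For reference, the core of the Curve Number approach is the SCS runoff equation, sketched below in Python. Post-fire conditions are commonly represented by raising CN; converting the runoff depth to a peak flow rate requires additional watershed-specific steps (unit hydrograph, time of concentration) not shown here.

    ```python
    def scs_runoff_depth(p_inches, cn):
        """SCS Curve Number runoff depth Q (inches) for storm rainfall P (inches).
        S = 1000/CN - 10 (potential retention), Ia = 0.2*S (initial abstraction),
        Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
        s = 1000.0 / cn - 10.0
        ia = 0.2 * s
        if p_inches <= ia:
            return 0.0
        return (p_inches - ia) ** 2 / (p_inches - ia + s)

    # A burned watershed is often represented by raising CN, e.g. from 70 to 90:
    for cn in (70, 90):
        print(cn, round(scs_runoff_depth(3.0, cn), 2), "inches of runoff")
    ```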

  14. Implications of random variation in the Stand Prognosis Model

    Science.gov (United States)

    David A. Hamilton

    1991-01-01

    Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...

  15. The Random Walk Model Based on Bipartite Network

    Directory of Open Access Journals (Sweden)

    Zhang Man-Dun

    2016-01-01

    Full Text Available With the continuing development of electronic commerce and the growth of network information, citizens are increasingly likely to be overwhelmed by information. Although traditional information retrieval technology can relieve information overload to some extent, it cannot offer a personalized service targeted to a user's interests and activities. In this context, recommendation algorithms arose. In this paper, building on conventional recommendation methods, we study the random walk scheme based on a bipartite network and its application. We put forward a similarity measurement based on implicit feedback, into which a non-uniform character vector (the weight of each item in the system) is introduced. We also put forward an improved random walk pattern that makes use of partial or incomplete neighbor information to create recommendations. Finally, in an experiment on a real data set, the recommendation accuracy and practicality are shown to be improved.
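
    A generic two-step mass-diffusion random walk on a user-item bipartite network (a plain unweighted sketch, not the paper's weighted variant) can be written as follows.

    ```python
    import numpy as np

    def bipartite_walk_scores(A, user):
        """Two-step random-walk (mass diffusion) scores on a user-item
        bipartite network. A is the binary user-item adjacency matrix."""
        item_deg = A.sum(axis=0)           # k(item): users per item
        user_deg = A.sum(axis=1)           # k(user): items per user
        f0 = A[user].astype(float)         # unit resource on items the user liked
        # items -> users: each item spreads its resource equally to its users
        u = (A / np.maximum(item_deg, 1)) @ f0
        # users -> items: each user spreads the received resource to their items
        f1 = (A / np.maximum(user_deg, 1)[:, None]).T @ u
        f1[A[user] > 0] = -np.inf          # do not re-recommend known items
        return f1

    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]])
    print(bipartite_walk_scores(A, user=0))  # scores for items user 0 hasn't seen
    ```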

  16. Random regression test-day model for the analysis of dairy cattle ...

    African Journals Online (AJOL)

    Random regression test-day model for the analysis of dairy cattle production data in South Africa: Creating the framework. EF Dzomba, KA Nephawe, AN Maiwashe, SWP Cloete, M Chimonyo, CB Banga, CJC Muller, K Dzama ...

  17. Modeling and understanding of effects of randomness in arrays of resonant meta-atoms

    DEFF Research Database (Denmark)

    Tretyakov, Sergei A.; Albooyeh, Mohammad; Alitalo, Pekka

    2013-01-01

    In this review presentation we will discuss approaches to modeling and understanding electromagnetic properties of 2D and 3D lattices of small resonant particles (meta-atoms) in transition from regular (periodic) to random (amorphous) states. Nanostructured metasurfaces (2D) and metamaterials (3D......) are arrangements of optically small but resonant particles (meta-atoms). We will present our results on analytical modeling of metasurfaces with periodical and random arrangements of electrically and magnetically resonant meta-atoms with identical or random sizes, both for the normal and oblique-angle excitations......) of the arrangements of meta-atoms....

  18. Estimating random-intercept models on data streams

    NARCIS (Netherlands)

    Ippel, L.; Kaptein, M.C.; Vermunt, J.K.

    2016-01-01

    Multilevel models are often used for the analysis of grouped data. Grouped data occur for instance when estimating the performance of pupils nested within schools or analyzing multiple observations nested within individuals. Currently, multilevel models are mostly fit to static datasets. However,

  19. Estimating Independent Locally Shifted Random Utility Models for Ranking Data

    Science.gov (United States)

    Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans

    2011-01-01

    We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…

  20. Covariance Functions and Random Regression Models in the ...

    African Journals Online (AJOL)

    ARC-IRENE

    many, highly correlated measures (Meyer, 1998a). Several approaches have been proposed to deal with such data, from simplest repeatability models (SRM) to complex multivariate models (MTM). The SRM considers different measurements at different stages (ages) as a realization of the same genetic trait with constant.

  1. A restricted dimer model on a two-dimensional random causal triangulation

    DEFF Research Database (Denmark)

    Ambjørn, Jan; Durhuus, Bergfinnur; Wheater, J. F.

    2014-01-01

    We introduce a restricted hard dimer model on a random causal triangulation that is exactly solvable and generalizes a model recently proposed by Atkin and Zohren (2012 Phys. Lett. B 712 445–50). We show that the latter model exhibits unusual behaviour at its multicritical point; in particular, its...

  2. Completely random measures for modelling block-structured sparse networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2016-01-01

    Many statistical methods for network data parameterize the edge-probability by attributing latent traits to the vertices such as block structure and assume exchangeability in the sense of the Aldous-Hoover representation theorem. Empirical studies of networks indicate that many real-world networks...... [2014] proposed the use of a different notion of exchangeability due to Kallenberg [2006] and obtained a network model which admits power-law behaviour while retaining desirable statistical properties; however, this model does not capture latent vertex traits such as block-structure. In this work we re-introduce the use of block-structure for network models obeying Kallenberg's notion of exchangeability and thereby obtain a model which admits the inference of block-structure and edge inhomogeneity. We derive a simple expression for the likelihood and an efficient sampling method. The obtained model...

  3. A spatial error model with continuous random effects and an application to growth convergence

    Science.gov (United States)

    Laurini, Márcio Poletti

    2017-10-01

    We propose a spatial error model with continuous random effects based on Matérn covariance functions and apply this model for the analysis of income convergence processes (β -convergence). The use of a model with continuous random effects permits a clearer visualization and interpretation of the spatial dependency patterns, avoids the problems of defining neighborhoods in spatial econometrics models, and allows projecting the spatial effects for every possible location in the continuous space, circumventing the existing aggregations in discrete lattice representations. We apply this model approach to analyze the economic growth of Brazilian municipalities between 1991 and 2010 using unconditional and conditional formulations and a spatiotemporal model of convergence. The results indicate that the estimated spatial random effects are consistent with the existence of income convergence clubs for Brazilian municipalities in this period.
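
    For reference, the Matérn family used for such continuous random effects has a closed form in terms of the modified Bessel function of the second kind; a sketch (parameter values hypothetical) follows.

    ```python
    import numpy as np
    from scipy.special import kv, gamma

    def matern_cov(d, sigma2=1.0, rho=1.0, nu=1.5):
        """Matern covariance C(d) = sigma2 * 2^(1-nu)/Gamma(nu)
           * (sqrt(2*nu)*d/rho)^nu * K_nu(sqrt(2*nu)*d/rho)."""
        d = np.asarray(d, dtype=float)
        c = np.full_like(d, sigma2)                 # C(0) = sigma2
        pos = d > 0
        s = np.sqrt(2.0 * nu) * d[pos] / rho
        c[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * s ** nu * kv(nu, s)
        return c

    print(matern_cov([0.0, 0.5, 1.0, 2.0], sigma2=2.0, rho=1.0, nu=1.5))
    ```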

  4. Local properties of the large-scale peaks of the CMB temperature

    Science.gov (United States)

    Marcos-Caballero, A.; Martínez-González, E.; Vielva, P.

    2017-05-01

    In the present work, we study the largest structures of the CMB temperature measured by Planck in terms of the most prominent peaks on the sky, which, in particular, are located in the southern galactic hemisphere. Besides these large-scale features, the well-known Cold Spot anomaly is included in the analysis. All these peaks would contribute significantly to some of the CMB large-scale anomalies, such as the parity and hemispherical asymmetries, the dipole modulation, the alignment between the quadrupole and the octopole, or, in the case of the Cold Spot, the non-Gaussianity of the field. The analysis of the peaks is performed by using their multipolar profiles, which characterize the local shape of the peaks in terms of the discrete Fourier transform of the azimuthal angle. In order to quantify the local anisotropy of the peaks, the distribution of the phases of the multipolar profiles is studied by using the Rayleigh random walk methodology. Finally, a direct analysis of the 2-dimensional field around the peaks is performed in order to take into account the effect of the galactic mask. The analysis concludes that, once the peak amplitude and its first- and second-order derivatives at the centre are conditioned on, the rest of the field is compatible with the standard model. In particular, it is observed that the Cold Spot anomaly is caused by the large value of curvature at the centre.

  5. Stochastic analysis model for vehicle-track coupled systems subject to earthquakes and track random irregularities

    Science.gov (United States)

    Xu, Lei; Zhai, Wanming

    2017-10-01

    This paper develops a computational model for stochastic analysis and reliability assessment of vehicle-track systems subject to earthquakes and track random irregularities. In this model, the earthquake is expressed as a non-stationary random process simulated by spectral representation and random functions, and the track random irregularities, with ergodic properties in amplitudes, wavelengths and probabilities, are characterized by a track irregularity probabilistic model; the number theoretical method (NTM) is then applied to efficiently select representative samples of earthquakes and track random irregularities. Furthermore, a vehicle-track coupled model is presented to obtain the dynamic responses of vehicle-track systems due to earthquakes and track random irregularities in the time domain, and the probability density evolution method (PDEM) is introduced to describe the evolutionary process of probability from excitation input to response output by assuming the vehicle-track system to be a probabilistically conservative system, which lays the foundation for reliability assessment of vehicle-track systems. The effectiveness of the proposed model is validated by comparison with Monte Carlo results from a statistical viewpoint. As an illustrative example, the random vibrations of a high-speed railway vehicle running on track slabs excited by lateral seismic waves and track random irregularities are analyzed, from which some significant conclusions can be drawn: track irregularities additionally amplify the dynamic influence of earthquakes, especially the maximum values and dispersion of responses; the characteristic frequencies or frequency ranges governed by earthquakes and by track random irregularities differ greatly; moreover, the lateral seismic waves dominate, or even change, the characteristic frequencies of some lateral dynamic response indices at low frequency.
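
    As an illustration of the spectral representation step mentioned above, the following sketch simulates a zero-mean stationary Gaussian process from a prescribed one-sided power spectral density (the PSD here is hypothetical; the paper's earthquake input is non-stationary, which additionally requires time-modulating functions).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def spectral_representation_sample(S, w_max, n_freq, t):
        """Simulate a zero-mean stationary Gaussian process with one-sided
        power spectral density S(w) via the spectral representation method:
        x(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k), phi_k ~ U(0, 2*pi)."""
        dw = w_max / n_freq
        w = (np.arange(n_freq) + 0.5) * dw          # midpoint frequencies
        amp = np.sqrt(2.0 * S(w) * dw)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)
        return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

    # Hypothetical one-sided PSD (a crude track-irregularity-like shape)
    S = lambda w: 1.0 / (1.0 + w ** 2)
    t = np.linspace(0.0, 10.0, 2001)
    x = spectral_representation_sample(S, w_max=50.0, n_freq=512, t=t)
    print(x.std())  # should approach sqrt(int S dw) = sqrt(arctan(50)) ~ 1.25
    ```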

  6. Secure identity-based encryption in the quantum random oracle model

    Science.gov (United States)

    Zhandry, Mark

    2015-04-01

    We give the first proof of security for an identity-based encryption (IBE) scheme in the quantum random oracle model. This is the first proof of security for any scheme in this model that does not rely on the assumed existence of so-called quantum-secure pseudorandom functions (PRFs). Our techniques are quite general and we use them to obtain security proofs for two random oracle hierarchical IBE schemes and a random oracle signature scheme, all of which have previously resisted quantum security proofs, even assuming quantum-secure PRFs. We also explain how to remove quantum-secure PRFs from prior quantum random oracle model proofs. We accomplish these results by developing new tools for arguing that quantum algorithms cannot distinguish between two oracle distributions. Using a particular class of oracle distributions that we call semi-constant distributions, we argue that the aforementioned cryptosystems are secure against quantum adversaries.

  7. Additive and subtractive scrambling in optional randomized response modeling.

    Directory of Open Access Journals (Sweden)

    Zawar Hussain

    Full Text Available This article considers unbiased estimation of the mean, variance and sensitivity level of a sensitive variable via scrambled response modeling; in particular, we focus on estimation of the mean. The idea of using additive and subtractive scrambling has been suggested under a recent scrambled response model. Whether for estimation of the mean, variance or sensitivity level, the proposed estimation scheme is shown to be relatively more efficient than that recent model. As far as estimation of the mean is concerned, the proposed estimators perform better than estimators based on recent additive scrambling models. Relative efficiency comparisons are also made in order to highlight the performance of the proposed estimators under the suggested scrambling technique.
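
    The basic additive-scrambling logic behind such models can be sketched as follows (a plain additive scheme with a fully known scrambling distribution, not the paper's optional randomized response refinements).

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_additive_scrambling(x, mu_s=5.0, sd_s=2.0):
        """Each respondent reports z = x + s, where s is drawn from a known
        scrambling distribution; the interviewer never sees x."""
        return x + rng.normal(mu_s, sd_s, size=x.size)

    def estimate_mean(z, mu_s=5.0):
        """Unbiased mean estimator: E[z] = mu_x + mu_s, so mu_x_hat = z_bar - mu_s."""
        return z.mean() - mu_s

    x = rng.exponential(3.0, size=10_000)      # hypothetical sensitive variable
    z = simulate_additive_scrambling(x)
    print(f"true mean {x.mean():.3f}  estimate {estimate_mean(z):.3f}")
    ```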

  8. Equilibria in a Random Viewer Model of Television Broadcasting

    DEFF Research Database (Denmark)

    Olai Hansen, Bodil; Keiding, Hans

    2014-01-01

    The authors considered a model of commercial television market with advertising with probabilistic viewer choice of channel, where private broadcasters may coexist with a public television broadcaster. The broadcasters influence the probability of getting viewer attention through the amount...

  9. The limiting behavior of the estimated parameters in a misspecified random field regression model

    DEFF Research Database (Denmark)

    Dahl, Christian Møller; Qin, Yu

    This paper examines the limiting properties of the estimated parameters in the random field regression model recently proposed by Hamilton (Econometrica, 2001). Though the model is parametric, it enjoys the flexibility of the nonparametric approach since it can approximate a large collection......, as a consequence the random field model specification introduces non-stationarity and non-ergodicity in the misspecified model and it becomes non-trivial, relative to the existing literature, to establish the limiting behavior of the estimated parameters. The asymptotic results are obtained by applying some...

  10. Restoration of dimensional reduction in the random-field Ising model at five dimensions.

    Science.gov (United States)

    Fytas, Nikolaos G; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas

    2017-04-01

    The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D=5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 to those of the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.

  11. Tectonics, Climate and Earth's highest peaks

    Science.gov (United States)

    Robl, Jörg; Prasicek, Günther; Hergarten, Stefan

    2016-04-01

    Prominent peaks characterized by high relief and steep slopes are among the most spectacular morphological features on Earth. In collisional orogens they result from the interplay of tectonically driven crustal thickening and climatically induced destruction of overthickened crust by erosional surface processes. The glacial buzz-saw hypothesis proposes a superior status of climate in limiting mountain relief and peak altitude due to glacial erosion. It implies that peak altitude declines with duration of glacial occupation, i.e., towards high latitudes. This is in strong contrast with high peaks existing in high latitude mountain ranges (e.g. Mt. St. Elias range) and the idea of peak uplift due to isostatic compensation of spatially variable erosional unloading of an over-thickened orogenic crust. In this study we investigate landscape dissection, crustal thickness and vertical strain rates in tectonically active mountain ranges to evaluate the influence of erosion on (latitudinal) variations in peak altitude. We analyze the spatial distribution of several thousand prominent peaks on Earth extracted from the global ETOPO1 digital elevation model with a novel numerical tool. We compare this dataset to crustal thickness, thickening rate (vertical strain rate) and mean elevation. We use the ratios of mean elevation to peak elevation (landscape dissection) and peak elevation to crustal thickness (long-term impact of erosion on crustal thickness) as indicators for the influence of erosional surface processes on peak uplift, and the vertical strain rate as a proxy for the mechanical state of the orogen. Our analysis reveals that crustal thickness and peak elevation correlate well in orogens that have reached a mechanically limited state (vertical strain rate near zero) where plate convergence is already balanced by lateral extrusion and gravitational collapse and plateaus are formed. On the Tibetan Plateau crustal thickness serves to predict peak elevation up to an altitude

  12. Flood Frequency Analysis for the Annual Peak Flows Simulated by an Event-Based Rainfall-Runoff Model in an Urban Drainage Basin

    Directory of Open Access Journals (Sweden)

    Jeonghwan Ahn

    2014-12-01

    Full Text Available The proper assessment of design flood is a major concern for many hydrological applications in small urban watersheds. A number of approaches can be used, including statistical approaches, continuous simulation, and the design storm method. However, each method has its own limitations and assumptions when applied to the real world. The design storm method has been widely used for a long time because of its simplicity, but it makes three critical assumptions: the equality of the return periods of the rainfall and the corresponding flood quantiles, and the selection of both the rainfall hyetograph and the antecedent soil moisture conditions. Continuous simulation cannot be applied to small urban catchments with quick runoff responses to rainfall. In this paper, a new flood frequency analysis for the simulated annual peak flows (FASAP) is proposed. This method employs candidate rainfall events selected by considering a five-minute time step and a sliding duration, without any of the assumptions of the conventional design storm method in an urban watershed. In addition, the proposed methodology was verified by comparing its results with the conventional method in a real urban watershed.

  13. Encoding Sequential Information in Semantic Space Models: Comparing Holographic Reduced Representation and Random Permutation

    Directory of Open Access Journals (Sweden)

    Gabriel Recchia

    2015-01-01

    Full Text Available Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, “noisy” permutations in which units are mapped to other units arbitrarily (with no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
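
    Both binding operators are straightforward to implement; the sketch below (dimensionality and vectors are arbitrary) builds a holographic-reduced-representation binding via the FFT and a random-permutation encoding of an ordered pair.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1024                                    # dimensionality of the vectors

    def circular_convolution(a, b):
        """HRR-style binding: circular convolution computed via the FFT."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def permute(v, perm):
        """Random-permutation binding/ordering operator."""
        return v[perm]

    a = rng.normal(0, 1 / np.sqrt(n), n)        # random unit-ish vectors
    b = rng.normal(0, 1 / np.sqrt(n), n)

    bound_conv = circular_convolution(a, b)

    # Encode the ordered pair (a, b) with two fixed random permutations:
    perm1, perm2 = rng.permutation(n), rng.permutation(n)
    bound_perm = permute(a, perm1) + permute(b, perm2)

    # Both bindings are nearly orthogonal to their inputs:
    for name, v in [("conv", bound_conv), ("perm", bound_perm)]:
        print(name, round(float(v @ a), 3), round(float(v @ b), 3))
    ```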

  14. Encoding sequential information in semantic space models: comparing holographic reduced representation and random permutation.

    Science.gov (United States)

    Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N

    2015-01-01

    Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.

  15. Spectra of Anderson type models with decaying randomness

    Indian Academy of Sciences (India)


    The literature on the scattering theoretic and commutator methods for the discrete Laplacian includes those of Boutet de Monvel–Sahbani [4, 5], who study deterministic ... the free part in terms of the structure it has in its spectral representation. The models we consider in this paper are related to the discrete ...

  16. Multilevel random effect and marginal models for longitudinal data ...

    African Journals Online (AJOL)

    The models were applied to data obtained from a phase-III clinical trial on a new meningococcal vaccine. The goal is to investigate whether children injected by the candidate vaccine have a lower or higher risk for the occurrence of specific adverse events than children injected with licensed vaccine, and if so, to quantify the ...

  17. Modeling species distribution and change using random forest [Chapter 8

    Science.gov (United States)

    Jeffrey S. Evans; Melanie A. Murphy; Zachary A. Holden; Samuel A. Cushman

    2011-01-01

    Although inference is a critical component in ecological modeling, the balance between accurate predictions and inference is the ultimate goal in ecological studies (Peters 1991; De’ath 2007). Practical applications of ecology in conservation planning, ecosystem assessment, and bio-diversity are highly dependent on very accurate spatial predictions of...

  18. The effect of a trunk release maneuver on Peak Pressure Index, trunk displacement and perceived discomfort in older adults seated in a High Fowler's position: a randomized controlled trial.

    Science.gov (United States)

    Best, Krista L; Desharnais, Guylaine; Boily, Jeanette; Miller, William C; Camp, Pat G

    2012-11-16

    Pressure ulcers pose significant negative individual consequences and financial burden on the healthcare system. Prolonged sitting in High Fowler's position (HF) is common clinical practice for older adults who spend extended periods of time in bed. While HF aids in digestion and respiration, being placed in a HF may increase perceived discomfort and risk of pressure ulcers due to increased pressure magnitude at the sacral and gluteal regions. It is likely that shearing forces could also contribute to risk of pressure ulcers in HF. The purpose of this study was to evaluate the effect of a low-tech and time-efficient Trunk Release Maneuver (TRM) on sacral and gluteal pressure, trunk displacement and perceived discomfort in ambulatory older adults. A randomized controlled trial was used. We recruited community-living adults who were 60 years of age and older using posters, newspaper advertisements and word-of-mouth. Participants were randomly allocated to either the intervention or control group. The intervention group (n = 59) received the TRM, while the control group (n = 58) maintained the standard HF position. The TRM group had significantly lower mean (SD) PPI values post-intervention compared to the control group, 59.6 (30.7) mmHg and 79.9 (36.5) mmHg respectively (p = 0.002). There was also a significant difference in trunk displacement between the TRM and control groups, +3.2 mm and -5.8 mm respectively (p = 0.005). There were no significant differences in perceived discomfort between the groups. The TRM was effective for reducing pressure in the sacral and gluteal regions and for releasing the trunk at the point of contact between the skin and the support surface, but did not have an effect on perceived discomfort. The TRM is a simple method of repositioning which may have important clinical application for the prevention of pressure ulcers that may occur as a result of HF.

  19. The Prediction Model of Dam Uplift Pressure Based on Random Forest

    Science.gov (United States)

    Li, Xing; Su, Huaizhi; Hu, Jiang

    2017-09-01

    The prediction of dam uplift pressure is of great significance in dam safety monitoring. Based on comprehensive consideration of various factors, 18 parameters are selected as the main factors affecting uplift pressure, and actual uplift pressure monitoring data are used as the evaluation factors for the prediction model. Prediction models based on the random forest algorithm and on the support vector machine are built to predict the uplift pressure of the dam, and the predictive performance of the two models is compared and analyzed. At the same time, based on the established random forest prediction model, the significance of each factor is analyzed, and the importance of each factor in the prediction model is calculated by the importance function. Results showed that: (1) the RF prediction model can quickly and accurately predict the uplift pressure value from the influencing factors, with an average prediction accuracy above 96%; compared with the support vector machine (SVM) model, the random forest model has better robustness, better prediction precision and faster convergence, and it is more robust to missing and unbalanced data. (2) Water level has the largest effect on uplift pressure, and rainfall has the smallest influence compared with the other factors.
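
    A minimal version of such a prediction-plus-importance workflow, using scikit-learn on synthetic stand-in data (the 18 real monitored factors are replaced by hypothetical features), might look as follows.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(4)

    # Hypothetical stand-in for the 18 monitored factors (water level, rainfall,
    # temperature, ...) and the observed uplift pressure they help predict.
    n, p = 2000, 18
    X = rng.normal(size=(n, p))
    y = 3.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[:1500], y[:1500])

    print("R^2 on held-out data:", round(model.score(X[1500:], y[1500:]), 3))
    ranking = np.argsort(model.feature_importances_)[::-1]
    print("most important factors:", ranking[:3])  # factor 0, the 'water level'
    ```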

  20. Evolution models with lethal mutations on symmetric or random fitness landscapes.

    Science.gov (United States)

    Kirakosyan, Zara; Saakian, David B; Hu, Chin-Kun

    2010-07-01

    We calculate the mean fitness for evolution models in which the fitness is a function of the Hamming distance from a reference sequence and there is a probability that this fitness is nullified (the Eigen model case) or tends to negative infinity (the Crow-Kimura model case). The mean fitness is also calculated for random fitnesses with a log-normal distribution, which sometimes reasonably describes the situation with RNA viruses.

  1. Entropy, complexity, and Markov diagrams for random walk cancer models.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
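
    The two quantities at the heart of this analysis, the steady-state distribution of a Markov transition matrix and its Shannon entropy, can be computed as in the sketch below (the 4-site transition matrix is hypothetical).

    ```python
    import numpy as np

    def stationary_distribution(P):
        """Left eigenvector of the transition matrix P for eigenvalue 1,
        normalized to a probability vector (assumes an ergodic chain)."""
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return pi / pi.sum()

    def shannon_entropy(pi):
        pi = pi[pi > 0]
        return float(-(pi * np.log2(pi)).sum())

    # Hypothetical 4-site progression model (sites = anatomical locations);
    # each row holds the transition probabilities out of the current site.
    P = np.array([[0.70, 0.15, 0.10, 0.05],
                  [0.20, 0.60, 0.10, 0.10],
                  [0.05, 0.05, 0.80, 0.10],
                  [0.10, 0.10, 0.10, 0.70]])
    pi = stationary_distribution(P)
    print("steady state:", np.round(pi, 3))
    print("entropy (bits):", round(shannon_entropy(pi), 3))
    ```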

  2. Entropy, complexity, and Markov diagrams for random walk cancer models

    Science.gov (United States)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  3. Rgbp: An R Package for Gaussian, Poisson, and Binomial Random Effects Models with Frequency Coverage Evaluations

    Directory of Open Access Journals (Sweden)

    Hyungsuk Tak

    2017-06-01

    Full Text Available Rgbp is an R package that provides estimates and verifiable confidence intervals for random effects in two-level conjugate hierarchical models for overdispersed Gaussian, Poisson, and binomial data. Rgbp models aggregate data from k independent groups summarized by observed sufficient statistics for each random effect, such as sample means, possibly with covariates. Rgbp uses approximate Bayesian machinery with unique improper priors for the hyper-parameters, which leads to good repeated sampling coverage properties for random effects. A special feature of Rgbp is an option that generates synthetic data sets to check whether the interval estimates for random effects actually meet the nominal confidence levels. Additionally, Rgbp provides inference statistics for the hyper-parameters, e.g., regression coefficients.

  4. Possibility/Necessity-Based Probabilistic Expectation Models for Linear Programming Problems with Discrete Fuzzy Random Variables

    Directory of Open Access Journals (Sweden)

    Hideki Katagiri

    2017-10-01

    Full Text Available This paper considers linear programming problems (LPPs) where the objective functions involve discrete fuzzy random variables (fuzzy set-valued discrete random variables). New decision making models, which are useful in fuzzy stochastic environments, are proposed based on both possibility theory and probability theory. In multi-objective cases, Pareto optimal solutions of the proposed models are newly defined. Computational algorithms for obtaining the Pareto optimal solutions of the proposed models are provided. It is shown that problems involving discrete fuzzy random variables can be transformed into deterministic nonlinear mathematical programming problems which can be solved through a conventional mathematical programming solver under practically reasonable assumptions. A numerical example of agriculture production problems is given to demonstrate the applicability of the proposed models to real-world problems in fuzzy stochastic environments.

  5. A discrete random effects probit model with application to the demand for preventive care.

    Science.gov (United States)

    Deb, P

    2001-07-01

    I have developed a random effects probit model in which the distribution of the random intercept is approximated by a discrete density. Monte Carlo results show that only three to four points of support are required for the discrete density to closely mimic normal and chi-squared densities and provide unbiased estimates of the structural parameters and the variance of the random intercept. The empirical application shows that both observed family characteristics and unobserved family-level heterogeneity are important determinants of the demand for preventive care. Copyright 2001 John Wiley & Sons, Ltd.

  6. A new neural network model for solving random interval linear programming problems.

    Science.gov (United States)

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Ultrasonic Transducer Peak-to-Peak Optical Measurement

    Directory of Open Access Journals (Sweden)

    Pavel Skarvada

    2012-01-01

    Full Text Available Possible optical setups for measurement of the peak-to-peak value of an ultrasonic transducer are described in this work. A Michelson interferometer with a calibrated nanopositioner in the reference path and a laser Doppler vibrometer were used for the basic measurement of vibration displacement. A Langevin-type ultrasonic transducer is used for the purposes of Electro-Ultrasonic Nonlinear Spectroscopy (EUNS). The parameters of the produced mechanical vibration have to be well known for EUNS. Moreover, monitoring the mechanical vibration frequency shift with mass load and sample-transducer coupling is important for EUNS measurement.

  8. On the Inference of Spatial Continuity using Spartan Random Field Models

    OpenAIRE

    Elogne, Samuel; Hristopulos, Dionisis

    2006-01-01

    This paper addresses the inference of spatial dependence in the context of a recently proposed framework. More specifically, the paper focuses on the estimation of model parameters for a class of generalized Gibbs random fields, i.e., Spartan Spatial Random Fields (SSRFs). The problem of parameter inference is based on the minimization of a distance metric. The latter involves a specifically designed distance between sample constraints (variance, generalized "gradient" and "curvature") an...

  9. Relationship between flux and concentration gradient of diffusive particles with the usage of random walk model

    Science.gov (United States)

    Ovchinnikov, M. N.

    2017-09-01

    The fundamental solutions of the diffusion equation for the local-equilibrium and nonlocal models are considered as limiting cases of the solution of a problem concerning the random walks of Brownian particles. The differences between the fundamental solutions, fluxes and concentration gradients are studied. A new modified non-local diffusion equation of telegrapher type with a correction function is suggested; it contains only the microparameters of the random walk problem.

  10. The effect of a trunk release maneuver on Peak Pressure Index, trunk displacement and perceived discomfort in older adults seated in a high Fowler’s position: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Best Krista L

    2012-11-01

    Full Text Available Abstract. Background: Pressure ulcers pose significant negative individual consequences and financial burden on the healthcare system. Prolonged sitting in High Fowler's position (HF) is common clinical practice for older adults who spend extended periods of time in bed. While HF aids in digestion and respiration, being placed in a HF may increase perceived discomfort and risk of pressure ulcers due to increased pressure magnitude at the sacral and gluteal regions. It is likely that shearing forces could also contribute to risk of pressure ulcers in HF. The purpose of this study was to evaluate the effect of a low-tech and time-efficient Trunk Release Maneuver (TRM) on sacral and gluteal pressure, trunk displacement and perceived discomfort in ambulatory older adults. Method: A randomized controlled trial was used. We recruited community-living adults who were 60 years of age and older using posters, newspaper advertisements and word-of-mouth. Participants were randomly allocated to either the intervention or control group. The intervention group (n = 59) received the TRM, while the control group (n = 58) maintained the standard HF position. Results: The TRM group had significantly lower mean (SD) PPI values post-intervention compared to the control group, 59.6 (30.7) mmHg and 79.9 (36.5) mmHg respectively (p = 0.002). There was also a significant difference in trunk displacement between the TRM and control groups, +3.2 mm and -5.8 mm respectively (p = 0.005). There were no significant differences in perceived discomfort between the groups. Conclusion: The TRM was effective for reducing pressure in the sacral and gluteal regions and for releasing the trunk at the point of contact between the skin and the support surface, but did not have an effect on perceived discomfort. The TRM is a simple method of repositioning which may have important clinical application for the prevention of pressure ulcers that may occur as a result of HF.

  11. Random regret minimization : Exploration of a new choice model for environmental and resource economics

    NARCIS (Netherlands)

    Thiene, M.; Boeri, M.; Chorus, C.G.

    2011-01-01

    This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the

  12. Random Regret Minimization for consumer choice modelling : Assessment of empirical evidence

    NARCIS (Netherlands)

    Chorus, C.G.; Dekker, T.

    2013-01-01

    This paper introduces to the field of marketing a regret-based discrete choice model for the analysis of multi-attribute consumer choices from multinomial choice sets. This random regret minimization model (RRM), which was introduced in the field of transport two years ago, forms a regret-based

  13. P2 : A random effects model with covariates for directed graphs

    NARCIS (Netherlands)

    van Duijn, M.A.J.; Snijders, T.A.B.; Zijlstra, B.J.H.

    A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node.

  14. Simulation of random set models for unions of discs and the use of power tessellations

    DEFF Research Database (Denmark)

    Møller, Jesper; Helisova, Katerina

    2009-01-01

    The power tessellation (or power diagram or Laguerre diagram) turns out to be particularly useful in connection to a flexible class of random set models specified by an underlying process of interacting discs. We discuss how to simulate these models and calculate various geometric characteristics...

  15. Peak Dose Assessment for Proposed DOE-PPPO Authorized Limits

    Energy Technology Data Exchange (ETDEWEB)

    Maldonado, Delis [Oak Ridge Institute for Science and Education, Oak Ridge, TN (United States). Independent Environmental Assessment and Verification Program

    2012-06-01

    The Oak Ridge Institute for Science and Education (ORISE), a U.S. Department of Energy (DOE) prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct a peak dose assessment in support of the Authorized Limits Request for Solid Waste Disposal at Landfill C-746-U at the Paducah Gaseous Diffusion Plant (DOE-PPPO 2011a). The peak doses were calculated based on the DOE-PPPO Proposed Single Radionuclides Soil Guidelines and the DOE-PPPO Proposed Authorized Limits (AL) Volumetric Concentrations available in DOE-PPPO 2011a. This work is provided as an appendix to the Dose Modeling Evaluations and Technical Support Document for the Authorized Limits Request for the C-746-U Landfill at the Paducah Gaseous Diffusion Plant, Paducah, Kentucky (ORISE 2012). The receptors evaluated in ORISE 2012 were selected by the DOE-PPPO for the additional peak dose evaluations. These receptors included a Landfill Worker, Trespasser, Resident Farmer (onsite), Resident Gardener, Recreational User, Outdoor Worker and an Offsite Resident Farmer. The RESRAD (Version 6.5) and RESRAD-OFFSITE (Version 2.5) computer codes were used for the peak dose assessments. Deterministic peak dose assessments were performed for all the receptors and a probabilistic dose assessment was performed only for the Offsite Resident Farmer at the request of the DOE-PPPO. In a deterministic analysis, a single input value results in a single output value. In other words, a deterministic analysis uses single parameter values for every variable in the code. By contrast, a probabilistic approach assigns parameter ranges to certain variables, and the code randomly selects the values for each variable from the parameter range each time it calculates the dose (NRC 2006). The receptor scenarios, computer codes and parameter input files were previously used in ORISE 2012. A few modifications were made to the parameter input files as appropriate for this effort. Some of these changes

  16. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variables once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates are skewed, the overall mean and conventional I 2 from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences on summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
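
    For reference, the Box-Cox transform and its inverse, used here to normalize the observed treatment effect estimates and to report summaries back on the original scale, are sketched below (the effect values are hypothetical).

    ```python
    import numpy as np

    def box_cox(y, lam):
        """Box-Cox transform: (y^lam - 1)/lam for lam != 0, log(y) for lam = 0.
        Requires y > 0 (a shift is applied first when effects can be negative)."""
        y = np.asarray(y, dtype=float)
        return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

    def inverse_box_cox(z, lam):
        """Back-transform, used to report summaries on the original scale."""
        return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

    # Hypothetical skewed treatment-effect estimates from k studies:
    effects = np.array([0.4, 0.6, 0.7, 0.9, 1.1, 1.6, 2.8, 4.5])
    z = box_cox(effects, lam=0.0)                  # log-transform here
    print("median on original scale:", inverse_box_cox(np.median(z), 0.0))
    ```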

  17. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter.

    Science.gov (United States)

    Huang, Lei

    2015-09-30

    To address the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments, and time-varying estimators are used to obtain the mean and variance of the unknown observation noise. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied wherever a fast and accurate ARMA model of gyro random noise is required.
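
    For contrast, the conventional maximum-likelihood route that the paper improves on can be sketched in a few lines with statsmodels (sample size and coefficients invented; this is the baseline, not the robust Kalman-filter estimator):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        # Simulate a hypothetical ARMA(2,1) gyro-noise record.
        rng = np.random.default_rng(0)
        n = 2000
        e = rng.standard_normal(n)
        y = np.zeros(n)
        for t in range(2, n):
            y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + e[t] + 0.3 * e[t - 1]

        # Conventional maximum-likelihood ARMA fit, the slow-converging
        # baseline that the robust Kalman variant is designed to improve on.
        result = ARIMA(y, order=(2, 0, 1), trend="n").fit()
        print(result.params)   # approximately [0.6, -0.2, 0.3, 1.0]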

  18. A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications

    Science.gov (United States)

    Grauer, Jared A.

    2017-01-01

    Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability for aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was the website Random.org, which creates random numbers from atmospheric noise measured using radios. The third generator was based on Fourier synthesis, where the random number sequences are constructed from prescribed amplitude and phase spectra. For each generator, 200 sequences of 601 random numbers each were collected and analyzed in terms of mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator performed well and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
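
    A minimal sketch of the third approach, assuming a flat (unit) amplitude spectrum and uniformly random phases; summing many equal-amplitude sinusoids with random phases yields an approximately Gaussian white sequence:

        import numpy as np

        def fourier_white_noise(n, rng):
            """~Gaussian white noise from a flat amplitude spectrum with
            uniformly random phases (real signal via the inverse FFT)."""
            half = n // 2 + 1
            phases = rng.uniform(0.0, 2.0 * np.pi, half)
            spec = np.exp(1j * phases)      # unit amplitude at every frequency
            spec[0] = 0.0                   # zero out the mean component
            x = np.fft.irfft(spec, n)
            return x / x.std()              # scale to unit variance

        rng = np.random.default_rng(7)
        x = fourier_white_noise(601, rng)
        print(round(x.mean(), 3), round(x.std(), 3))   # ~0 and 1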

  19. One Model Fits All: Explaining Many Aspects of Number Comparison within a Single Coherent Model-A Random Walk Account

    Science.gov (United States)

    Reike, Dennis; Schwarz, Wolf

    2016-01-01

    The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…

  20. Hubbert's Peak: A Physicist's View

    Science.gov (United States)

    McDonald, Richard

    2011-11-01

    Oil and its by-products, as used in manufacturing, agriculture, and transportation, are the lifeblood of today's 7-billion-person population and our $65 trillion world economy. Despite this importance, estimates of future oil production seem dominated by wishful thinking rather than quantitative analysis. Better studies are needed. In 1956, Dr. M. King Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Thus, the peak of oil production is referred to as ``Hubbert's Peak.'' Prof. Al Bartlett extended this work in publications and lectures on population and oil. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. This paper extends this line of work to include analyses of individual countries, inclusion of multiple Gaussian peaks, and analysis of reserves data. While this is not strictly a predictive theory, we will demonstrate a ``closed'' story connecting production, oil-in-place, and reserves. This gives us the ``most likely'' estimate of future oil availability. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
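
    A single-peak version of such an analysis can be sketched by least-squares fitting a Gaussian pulse to an annual production series (the data below are synthetic; the talk itself uses multiple Gaussian peaks and real country-level data):

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian_peak(t, urr, t_peak, width):
            """Annual production as a Gaussian pulse; its integral over all
            time approximates the ultimately recoverable resource (URR)."""
            return urr / (width * np.sqrt(2.0 * np.pi)) * np.exp(
                -0.5 * ((t - t_peak) / width) ** 2)

        # Synthetic production series (Gb/yr), not real data.
        years = np.arange(1950, 2011, dtype=float)
        rng = np.random.default_rng(3)
        prod = gaussian_peak(years, 250.0, 2005.0, 25.0) \
               + rng.normal(0.0, 0.1, years.size)

        (urr, tp, w), _ = curve_fit(gaussian_peak, years, prod,
                                    p0=(200.0, 2000.0, 20.0))
        print(f"fitted peak year {tp:.1f}, URR ~ {urr:.0f} Gb")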

  1. Bias in peak clad temperature predictions due to uncertainties in modeling of ECC bypass and dissolved non-condensable gas phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Rohatgi, U.S.; Neymotin, L.Y.; Jo, J.; Wulff, W. (Brookhaven National Lab., Upton, NY (USA))

    1990-09-01

    This report describes a general method for estimating the effect on the Reflood Phase PCT of systematic errors (biases) associated with the modelling of the ECCS and dissolved nitrogen, and the application of this method to estimating biases in the Reflood Phase PCT (second PCT) predicted by TRAC/PF1/MOD1, Version 14.3. The bias in the second PCT due to the uncertainty in the existing code models for ECCS-related phenomena is −19 K (−34 °F). The negative bias implies that the code models for these phenomena are conservative. The bias in the second PCT due to the lack of modelling of dissolved N2 in the code is estimated to be 9.9 K (17.8 °F). The positive bias implies that the absence of a dissolved N2 model makes the code prediction of PCT non-conservative. The bias estimation in this report is a major exception among the uncertainty and bias assessments performed in conjunction with the CSAU methodology demonstration, because it benefitted from full-scale test data from the Upper Plenum Test Facility (UPTF). Thus, the bias estimates presented here are unaffected by scale distortions in test facilities. Data from small-size facilities were also available, and an estimate of bias based on these data would be conservative. 35 refs., 18 figs., 5 tabs.

  2. A note on using regression models to analyze randomized trials: asymptotically valid hypothesis tests despite incorrectly specified models.

    Science.gov (United States)

    Kim, Jane Paik

    2013-03-01

    In the context of randomized trials, Rosenblum and van der Laan (2009, Biometrics 63, 937-945) considered the null hypothesis of no treatment effect on the mean outcome within strata of baseline variables. They showed that hypothesis tests based on linear regression models and generalized linear regression models are guaranteed to have asymptotically correct Type I error regardless of the actual data generating distribution, assuming the treatment assignment is independent of covariates. We consider another important outcome in randomized trials, the time from randomization until failure, and the null hypothesis of no treatment effect on the survivor function conditional on a set of baseline variables. By a direct application of arguments in Rosenblum and van der Laan (2009), we show that hypothesis tests based on multiplicative hazards models with an exponential link, i.e., proportional hazards models, and multiplicative hazards models with linear link functions where the baseline hazard is parameterized, are asymptotically valid under model misspecification provided that the censoring distribution is independent of the treatment assignment given the covariates. In the case of the Cox model and linear link model with unspecified baseline hazard function, the arguments in Rosenblum and van der Laan (2009) cannot be applied to show the robustness of a misspecified model. Instead, we adopt an approach used in previous literature (Struthers and Kalbfleisch, 1986, Biometrika 73, 363-369) to show that hypothesis tests based on these models, including models with interaction terms, have correct Type I error. Copyright © 2013, The International Biometric Society.

  3. Modelling a demand driven biogas system for production of electricity at peak demand and for production of biomethane at other times.

    Science.gov (United States)

    O'Shea, R; Wall, D; Murphy, J D

    2016-09-01

    Four feedstocks were assessed for use in a demand driven biogas system. Biomethane potential (BMP) assays were conducted for grass silage, food waste, Laminaria digitata and dairy cow slurry. Semi-continuous trials were undertaken for all feedstocks, assessing biogas and biomethane production. Three kinetic models of the semi-continuous trials were compared; a first order model correlated most accurately with gas production in the pulse-fed semi-continuous system. This model was developed for production of electricity on demand, and for biomethane upgrading. The model examined a theoretical grass silage digester that would produce 435 kWe in a continuously fed system. Adaptation to demand driven biogas required 187 min to produce sufficient methane to run a 2 MWe combined heat and power (CHP) unit for 60 min. The upgrading system was dispatched 71 min following CHP shutdown. Of the biogas produced, 21% was used in the CHP and 79% in the upgrading system. Copyright © 2016 Elsevier Ltd. All rights reserved.
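
    The first-order kinetic model referred to here has the standard closed form B(t) = B0(1 − exp(−kt)); a minimal fitting sketch on invented BMP assay numbers:

        import numpy as np
        from scipy.optimize import curve_fit

        def first_order(t, b0, k):
            """Cumulative methane yield B(t) = B0 * (1 - exp(-k t))."""
            return b0 * (1.0 - np.exp(-k * t))

        # Invented BMP assay data: day vs cumulative CH4 (mL per g VS).
        t = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0, 30.0])
        b = np.array([0.0, 120.0, 230.0, 320.0, 360.0, 380.0, 395.0])

        (b0, k), _ = curve_fit(first_order, t, b, p0=(400.0, 0.1))
        print(f"ultimate yield B0 = {b0:.0f} mL/g VS, rate k = {k:.3f} per day")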

  4. Random aggregation models for the formation and evolution of coding and non-coding DNA

    Science.gov (United States)

    Provata, A.

    A random aggregation model with influx is proposed for the formation of the non-coding DNA regions via random co-aggregation and influx of biological macromolecules such as viruses, parasite DNA, and replication segments. The constant mixing (transpositions) and influx drive the system into an out-of-equilibrium steady state characterised by a power-law size distribution. The model predicts the long-range distributions found in the non-coding eukaryotic DNA and explains the observed correlations. For the formation of coding DNA a random closed aggregation model is proposed which predicts short-range coding size distributions. The closed aggregation process drives the system into an almost “frozen” stable state which is robust to external perturbations and which is characterised by well defined space and time scales, as observed in coding sequences.

  5. Elastic properties of model random three-dimensional open-cell solids

    Science.gov (United States)

    Roberts, A. P.; Garboczi, E. J.

    2002-01-01

    Most cellular solids are random materials, while practically all theoretical structure-property relations are for periodic models. To generate theoretical results for random models, the finite element method (FEM) was used to study the elastic properties of open-cell solids. We have computed the density (ρ) and microstructure dependence of the Young's modulus (E) and Poisson's ratio (ν) for four different isotropic random models. The models were based on Voronoi tessellations, level-cut Gaussian random fields, and nearest-neighbour node-bond rules. These models were chosen to broadly represent the structure of foamed solids and other (non-foamed) cellular materials. At low densities, the Young's modulus can be described by the relation E ∝ ρ^n. The exponent n and the constant of proportionality depend on microstructure; we find values in the range 1.3 < n < 3. At low densities, Poisson's ratio of the Voronoi tessellation, a common model of foams, became approximately incompressible (ν ≈ 0.5). This behaviour is not commonly observed experimentally. Our studies showed the result was robust to polydispersity and that a relatively large number (15%) of the bonds must be broken to significantly reduce the low-density Poisson's ratio to ν ≈ 0.33.

  6. Bayesian phase II adaptive randomization by jointly modeling time-to-event efficacy and binary toxicity.

    Science.gov (United States)

    Lei, Xiudong; Yuan, Ying; Yin, Guosheng

    2011-01-01

    In oncology, toxicity is typically observable shortly after a chemotherapy treatment, whereas efficacy, often characterized by tumor shrinkage, is observable after a relatively long period of time. In a phase II clinical trial design, we propose a Bayesian adaptive randomization procedure that accounts for both efficacy and toxicity outcomes. We model efficacy as a time-to-event endpoint and toxicity as a binary endpoint, sharing common random effects in order to induce dependence between the bivariate outcomes. More generally, we allow the randomization probability to depend on patients' specific covariates, such as prognostic factors. Early stopping boundaries are constructed for toxicity and futility, and a superior treatment arm is recommended at the end of the trial. Following the setup of a recent renal cancer clinical trial at M. D. Anderson Cancer Center, we conduct extensive simulation studies under various scenarios to investigate the performance of the proposed method, and compare it with available Bayesian adaptive randomization procedures.

  7. Phase structure of the O(n) model on a random lattice for n > 2

    DEFF Research Database (Denmark)

    Durhuus, B.; Kristjansen, C.

    1997-01-01

    We show that coarse graining arguments invented for the analysis of multi-spin systems on a randomly triangulated surface apply also to the O(n) model on a random lattice. These arguments imply that if the model has a critical point with diverging string susceptibility, then either γ = +1/2 or there exists a dual critical point with negative string susceptibility exponent γ̃, related to γ by γ = γ̃/(γ̃ − 1), with the possible values given by (γ̃, γ) = (−1/m, 1/(m+1)), m = 2, 3, … We also show that at the critical points with positive string susceptibility exponent the average number of loops on the surface diverges while the average length of a single loop stays finite.

  8. Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental.

    Science.gov (United States)

    Lidauer, M H; Emmerling, R; Mäntysaari, E A

    2008-06-01

    A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Acceleration of variance model solutions after each multiplicative model cycle enabled fast convergence of adjustment factors and significantly reduced total computing time. Maximum likelihood estimation of within-strata residual variances was enhanced by including approximated information on the loss in degrees of freedom due to estimation of location parameters, which improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking.

  9. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  10. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
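
    The two building blocks can be sketched directly in the time domain (synthetic innovations; coefficients invented):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1000
        eps = rng.standard_normal(n)

        # Moving average MA(2): x_t = eps_t + 0.5*eps_(t-1) + 0.25*eps_(t-2)
        ma = eps.copy()
        ma[1:] += 0.5 * eps[:-1]
        ma[2:] += 0.25 * eps[:-2]

        # Autoregressive AR(1): y_t = 0.8*y_(t-1) + eps_t
        ar = np.zeros(n)
        for t in range(1, n):
            ar[t] = 0.8 * ar[t - 1] + eps[t]

        print(round(ma.var(), 2))   # ~ 1 + 0.5**2 + 0.25**2
        print(round(ar.var(), 2))   # ~ 1 / (1 - 0.8**2)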

  11. Women's preferences for cardiac rehabilitation program model: a randomized controlled trial.

    Science.gov (United States)

    Andraos, Christine; Arthur, Heather M; Oh, Paul; Chessex, Caroline; Brister, Stephanie; Grace, Sherry L

    2015-12-01

    Although cardiac rehabilitation (CR) is effective, women often report programs do not meet their needs. Innovative models have been developed that may better suit women. The objectives of the study were to describe: (1) adherence to CR model allocation; (2) satisfaction by model attended; and (3) CR preferences. Tertiary objectives from a randomized controlled trial of female patients randomized to mixed-sex, women-only, or home-based CR were tested. Patients were recruited from six hospitals. Consenting participants were asked to complete a survey and undertook a CR intake assessment. Eligible patients were randomized. Participants were mailed a follow-up survey six months later. Adherence to model allocation was ascertained from CR charts. Overall 169 (18.6%) patients were randomized, of which 116 (68.6%) completed the post-test survey. Forty-five (26.6%) participants did not receive the allocated model, with those referred to home-based CR least likely to attend the allocated model (n = 25; 45.4%). Semi-structured interviews revealed participants also often switched from women-only to mixed-sex CR due to time conflicts. Satisfaction was high across all models (mean = 4.23 ± 1.16/5; p = 0.85) but participants in the women-only program felt significantly more comfortable in their workout attire (p = 0.003) and perceived the environment as less competitive (p = 0.02). Patients equally preferred mixed-sex (n = 44, 41.9%) and women-only (n = 44, 41.9%) CR, over home-based (n = 17, 16.2%), with patients preferring the model they attended. Females were highly satisfied regardless of CR model attended but preferred supervised programs most. Patient preference and session timing should be considered in program model allocation decisions. © The European Society of Cardiology 2014.

  12. A comparison of various approaches to the exponential random graph model : A reanalysis of 102 student networks in school classes

    NARCIS (Netherlands)

    Lubbers, Miranda J.; Snijders, Tom A. B.

    2007-01-01

    This paper describes an empirical comparison of four specifications of the exponential family of random graph models (ERGM), distinguished by model specification (dyadic independence, Markov, partial conditional dependence) and, for the Markov model, by estimation method (Maximum Pseudolikelihood,

  13. A dynamic random effects multinomial logit model of household car ownership

    DEFF Research Database (Denmark)

    Bue Bjørner, Thomas; Leth-Petersen, Søren

    2007-01-01

    Using a large household panel we estimate demand for car ownership by means of a dynamic multinomial model with correlated random effects. Results suggest that the persistence in car ownership observed in the data should be attributed both to true state dependence and to unobserved heterogeneity (random effects). It also appears that random effects related to single and multiple car ownership are correlated, suggesting that the IIA assumption employed in simple multinomial models of car ownership is invalid. Relatively small elasticities with respect to income and car costs are estimated. It should, however, be noted that the level of state dependence is considerably larger for households with single car ownership as compared with multiple car ownership. This suggests that the holding of a second car will be more affected by changes in the socioeconomic conditions of the household…

  14. Using observation-level random effects to model overdispersion in count data in ecology and evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2014-10-01

    Overdispersion is common in models of count data in ecology and evolutionary biology, and can occur due to missing covariates, non-independent (aggregated) data, or an excess frequency of zeroes (zero-inflation). Accounting for overdispersion in such models is vital, as failing to do so can lead to biased parameter estimates, and false conclusions regarding hypotheses of interest. Observation-level random effects (OLRE), where each data point receives a unique level of a random effect that models the extra-Poisson variation present in the data, are commonly employed to cope with overdispersion in count data. However, studies investigating the efficacy of observation-level random effects as a means to deal with overdispersion are scarce. Here I use simulations to show that in cases where overdispersion is caused by random extra-Poisson noise, or aggregation in the count data, observation-level random effects yield more accurate parameter estimates compared to when overdispersion is simply ignored. Conversely, OLRE fail to reduce bias in zero-inflated data, and in some cases increase bias at high levels of overdispersion. There was a positive relationship between the magnitude of overdispersion and the degree of bias in parameter estimates. Critically, the simulations reveal that failing to account for overdispersion in mixed models can erroneously inflate measures of explained variance (r²), which may lead to researchers overestimating the predictive power of variables of interest. This work suggests use of observation-level random effects provides a simple and robust means to account for overdispersion in count data, but also that their ability to minimise bias is not uniform across all types of overdispersion and must be applied judiciously.
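
    The mechanism OLRE target can be shown in a few lines: adding a per-observation normal random effect on the log scale turns a Poisson response into an overdispersed lognormal-Poisson mixture (a sketch with invented parameters; fitting the corresponding GLMM is left to a mixed-model package):

        import numpy as np

        rng = np.random.default_rng(0)
        n, mu, sigma = 5000, 3.0, 0.7

        plain = rng.poisson(mu, n)                 # variance ~ mean
        olre = rng.normal(0.0, sigma, n)           # one random effect per datum
        mixed = rng.poisson(mu * np.exp(olre - sigma**2 / 2), n)

        for name, y in [("poisson", plain), ("lognormal-poisson", mixed)]:
            print(f"{name}: mean {y.mean():.2f}, variance {y.var():.2f}")
        # A GLMM with an observation-level random effect absorbs the extra
        # variance instead of letting it bias the fixed-effect estimates.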

  15. On the Gaussian peak of the product of decay probabilities of the standard model Higgs boson at a mass m_H\\sim125 GeV

    CERN Document Server

    d'Enterria, David

    2012-01-01

    The product of the branching ratios of the standard model (SM) Higgs boson in all decay channels available below the top-antitop threshold is observed to be a Gaussian distribution of the Higgs boson mass with a maximum centred around m_H\sim125 GeV, i.e. in the region of masses where a new boson has been discovered at the Large Hadron Collider. This fact places the Higgs particle at a mass of "maximum opportunity" in terms of the study of its decays and couplings to other particles. Such an intriguing observation is seemingly driven by the different m_H-power dependence of the Higgs decay-widths into gauge bosons and fermions, featuring a steep anticorrelated evolution below the WW decay threshold in the region m_H\sim m_W-2m_W. Speculative consequences of taking this seemingly accidental feature as indicative of an (unknown) underlying dynamics of the SM are also discussed.

  16. 2D stochastic-integral models for characterizing random grain noise in titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H. [Victor Technologies, LLC, PO Box 7706, Bloomington, IN 47407-7706 (United States); Cherry, Matthew [University of Dayton Research Institute, 300 College Park Dr., Dayton, OH 45410 (United States); Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P. [Air Force Research Laboratory (AFRL/RXC), Wright Patterson AFB OH 45433-7817 (United States)

    2014-02-18

    We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appears during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles, θ and φ, that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, which serve as the kernel of the K-L integral equation, and find that of the two the double-exponential appears to match measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.
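
    The sampling machinery can be sketched in one dimension (the 2D case applies the same eigendecomposition to a flattened grid). This is an illustrative K-L sampler with an assumed double-exponential covariance, not the authors' code:

        import numpy as np

        # 1D grid and a double-exponential covariance (correlation length 0.1).
        x = np.linspace(0.0, 1.0, 200)
        C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)

        # Karhunen-Loeve: eigendecomposition, keep the largest modes.
        vals, vecs = np.linalg.eigh(C)
        order = np.argsort(vals)[::-1][:20]
        vals, vecs = vals[order], vecs[:, order]

        # One realization of the random field (e.g., a random Euler angle).
        rng = np.random.default_rng(5)
        xi = rng.standard_normal(vals.size)
        field = vecs @ (np.sqrt(vals) * xi)
        print(field.shape, round(field.std(), 2))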

  17. A random regression model in analysis of litter size in pigs | Luković ...

    African Journals Online (AJOL)

    Dispersion parameters for number of piglets born alive (NBA) were estimated using a random regression model (RRM). Two data sets of litter records from the Nemščak farm in Slovenia were used for analyses. The first dataset (DS1) included records from the first to the sixth parity. The second dataset (DS2) was extended ...

  18. Random regression models for daily feed intake in Danish Duroc pigs

    DEFF Research Database (Denmark)

    Strathe, Anders Bjerring; Mark, Thomas; Jensen, Just

    The objective of this study was to develop random regression models and estimate covariance functions for daily feed intake (DFI) in Danish Duroc pigs. A total of 476201 DFI records were available on 6542 Duroc boars between 70 to 160 days of age. The data originated from the National test statio...

  19. An Interactive Computer Model for Improved Student Understanding of Random Particle Motion and Osmosis

    Science.gov (United States)

    Kottonau, Johannes

    2011-01-01

    Effectively teaching the concepts of osmosis to college-level students is a major obstacle in biological education. Therefore, a novel computer model is presented that allows students to observe the random nature of particle motion simultaneously with the seemingly directed net flow of water across a semipermeable membrane during osmotic…

  20. A comparison of methods for representing random taste heterogeneity in discrete choice models

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Hess, Stephane

    2009-01-01

    This paper reports the findings of a systematic study using Monte Carlo experiments and a real dataset aimed at comparing the performance of various ways of specifying random taste heterogeneity in a discrete choice model. Specifically, the analysis compares the performance of two recent advanced...

  1. Semi-parametric estimation of random effects in a logistic regression model using conditional inference

    DEFF Research Database (Denmark)

    Petersen, Jørgen Holm

    2016-01-01

    This paper describes a new approach to the estimation in a logistic regression model with two crossed random effects where special interest is in estimating the variance of one of the effects while not making distributional assumptions about the other effect. A composite likelihood is studied...

  2. Comparing risk attitudes of organic and non-organic farmers with a Bayesian random coefficient model

    NARCIS (Netherlands)

    Gardebroek, C.

    2006-01-01

    Organic farming is usually considered to be more risky than conventional farming, but the risk aversion of organic farmers compared with that of conventional farmers has not been studied. Using a non-structural approach to risk estimation, a Bayesian random coefficient model is used to obtain

  3. Experimental validation of the stochastic model of a randomly fluctuating transmission-line

    NARCIS (Netherlands)

    Sy, O.O.; Vaessen, J.A.H.M.; Beurden, M.C. van; Michielsen, B.L.; Tijhuis, A.G.; Zwamborn, A.P.M.; Groot, J.S.

    2008-01-01

    A modeling method is proposed to quantify uncertainties affecting electromagnetic interactions. This method treats the uncertainties as random and quantifies them using probability theory. A practical application is considered through the case of a transmission-line of varying geometry,

  4. Firm-Related Training Tracks: A Random Effects Ordered Probit Model

    Science.gov (United States)

    Groot, Wim; van den Brink, Henriette Maassen

    2003-01-01

    A random effects ordered response model of training is estimated to analyze the existence of training tracks and time varying coefficients in training frequency. Two waves of a Dutch panel survey of workers are used covering the period 1992-1996. The amount of training received by workers increased during the period 1994-1996 compared to…

  5. Randomized Controlled Trial of Video Self-Modeling Following Speech Restructuring Treatment for Stuttering

    Science.gov (United States)

    Cream, Angela; O'Brian, Sue; Jones, Mark; Block, Susan; Harrison, Elisabeth; Lincoln, Michelle; Hewat, Sally; Packman, Ann; Menzies, Ross; Onslow, Mark

    2010-01-01

    Purpose: In this study, the authors investigated the efficacy of video self-modeling (VSM) following speech restructuring treatment to improve the maintenance of treatment effects. Method: The design was an open-plan, parallel-group, randomized controlled trial. Participants were 89 adults and adolescents who undertook intensive speech…

  6. The van Hemmen model and effect of random crystalline anisotropy field

    Energy Technology Data Exchange (ETDEWEB)

    Morais, Denes M. de [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900 Cuiabá, Mato Grosso (Brazil); Godoy, Mauricio, E-mail: mgodoy@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900 Cuiabá, Mato Grosso (Brazil); Arruda, Alberto S. de, E-mail: aarruda@fisica.ufmt.br [Instituto de Física, Universidade Federal de Mato Grosso, 78060-900 Cuiabá, Mato Grosso (Brazil); Silva, Jonathas N. da [Universidade Estadual Paulista, 14800-901, Araraquara, São Paulo (Brazil); Ricardo de Sousa, J. [Instituto Nacional de Sistemas Complexos, Departamento de Fisica, Universidade Federal do Amazona, 69077-000, Manaus, Amazonas (Brazil)

    2016-01-15

    In this work, we have presented the generalized phase diagrams of the van Hemmen model for spin S=1 in the presence of an anisotropic term of random crystalline field. In order to study the critical behavior of the phase transitions, we employed a mean-field Curie–Weiss approach, which allows calculation of the free energy and the equations of state of the model. The phase diagrams obtained here displayed tricritical behavior, with second-order phase transition lines separated from the first-order phase transition lines by a tricritical point. - Highlights: • Several phase diagrams are obtained for the model. • The influence of the random crystalline anisotropy field on the model is investigated. • Three ordered (spin-glass, ferromagnetic and mixed) phases are found. • The tricritical behavior is examined.

  7. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    Science.gov (United States)

    Herschtal, A.; te Marvelde, L.; Mengersen, K.; Hosseinifard, Z.; Foroudi, F.; Devereux, T.; Pham, D.; Ball, D.; Greer, P. B.; Pichler, P.; Eade, T.; Kneebone, A.; Bell, L.; Caine, H.; Hindson, B.; Kron, T.

    2015-02-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts -19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements.
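
    A toy version of the idea, assuming invented inverse-gamma hyperparameters and a simplified coverage criterion based on displacements alone (the paper's recipe works with dose coverage and penumbral width):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n_patients, n_fractions = 1000, 30

        # Patient-specific random-error variances from an inverse gamma
        # (hypothetical shape/scale), instead of one sigma for everyone.
        sig2 = stats.invgamma.rvs(a=4.0, scale=9.0, size=n_patients,
                                  random_state=rng)
        disp = rng.standard_normal((n_patients, n_fractions)) \
               * np.sqrt(sig2)[:, None]

        # Toy criterion: each patient needs 95% of fractions inside the
        # margin; choose the margin satisfied by 90% of patients.
        per_patient = np.quantile(np.abs(disp), 0.95, axis=1)
        margin = np.quantile(per_patient, 0.90)
        print(f"margin covering 90% of patients: {margin:.1f} mm")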

  8. First-principles modeling of electromagnetic scattering by discrete and discretely heterogeneous random media

    Energy Technology Data Exchange (ETDEWEB)

    Mishchenko, Michael I., E-mail: michael.i.mishchenko@nasa.gov [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Dlugach, Janna M. [Main Astronomical Observatory of the National Academy of Sciences of Ukraine, 27 Zabolotny Str., 03680, Kyiv (Ukraine); Yurkin, Maxim A. [Voevodsky Institute of Chemical Kinetics and Combustion, SB RAS, Institutskaya str. 3, 630090 Novosibirsk (Russian Federation); Novosibirsk State University, Pirogova 2, 630090 Novosibirsk (Russian Federation); Bi, Lei [Department of Atmospheric Sciences, Texas A&M University, College Station, TX 77843 (United States); Cairns, Brian [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Liu, Li [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Columbia University, 2880 Broadway, New York, NY 10025 (United States); Panetta, R. Lee [Department of Atmospheric Sciences, Texas A&M University, College Station, TX 77843 (United States); Travis, Larry D. [NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025 (United States); Yang, Ping [Department of Atmospheric Sciences, Texas A&M University, College Station, TX 77843 (United States); Zakharova, Nadezhda T. [Trinnovim LLC, 2880 Broadway, New York, NY 10025 (United States)

    2016-05-16

    A discrete random medium is an object in the form of a finite volume of a vacuum or a homogeneous material medium filled with quasi-randomly and quasi-uniformly distributed discrete macroscopic impurities called small particles. Such objects are ubiquitous in natural and artificial environments. They are often characterized by analyzing theoretically the results of laboratory, in situ, or remote-sensing measurements of the scattering of light and other electromagnetic radiation. Electromagnetic scattering and absorption by particles can also affect the energy budget of a discrete random medium and hence various ambient physical and chemical processes. In either case electromagnetic scattering must be modeled in terms of appropriate optical observables, i.e., quadratic or bilinear forms in the field that quantify the reading of a relevant optical instrument or the electromagnetic energy budget. It is generally believed that time-harmonic Maxwell’s equations can accurately describe elastic electromagnetic scattering by macroscopic particulate media that change in time much more slowly than the incident electromagnetic field. However, direct solutions of these equations for discrete random media had been impracticable until quite recently. This has led to a widespread use of various phenomenological approaches in situations when their very applicability can be questioned. Recently, however, a new branch of physical optics has emerged wherein electromagnetic scattering by discrete and discretely heterogeneous random media is modeled directly by using analytical or numerically exact computer solutions of the Maxwell equations. Therefore, the main objective of this Report is to formulate the general theoretical framework of electromagnetic scattering by discrete random media rooted in the Maxwell–Lorentz electromagnetics and discuss its immediate analytical and numerical consequences. Starting from the microscopic Maxwell–Lorentz equations, we trace the development

  9. Random regret-based discrete-choice modelling: an application to healthcare.

    Science.gov (United States)

    de Bekker-Grob, Esther W; Chorus, Caspar G

    2013-07-01

    A new modelling approach for analysing data from discrete-choice experiments (DCEs) has been recently developed in transport economics based on the notion of regret minimization-driven choice behaviour. This so-called Random Regret Minimization (RRM) approach forms an alternative to the dominant Random Utility Maximization (RUM) approach. The RRM approach is able to model semi-compensatory choice behaviour and compromise effects, while being as parsimonious and formally tractable as the RUM approach. Our objectives were to introduce the RRM modelling approach to healthcare-related decisions, and to investigate its usefulness in this domain. Using data from DCEs aimed at determining valuations of attributes of osteoporosis drug treatments and human papillomavirus (HPV) vaccinations, we empirically compared RRM models, RUM models and Hybrid RUM-RRM models in terms of goodness of fit, parameter ratios and predicted choice probabilities. In terms of model fit, the RRM model did not outperform the RUM model significantly in the case of the osteoporosis DCE data (p = 0.21), whereas in the case of the HPV DCE data, the Hybrid RUM-RRM model outperformed the RUM model (p < 0.05). Differences in predicted choice probabilities between RUM models and (Hybrid RUM-) RRM models were small. Derived parameter ratios did not differ significantly between model types, but trade-offs between attributes implied by the two models can vary substantially. Differences in model fit between RUM, RRM and Hybrid RUM-RRM were found to be small. Although our study did not show significant differences in parameter ratios, the RRM and Hybrid RUM-RRM models did feature considerable differences in terms of the trade-offs implied by these ratios. In combination, our results suggest that the RRM and Hybrid RUM-RRM modelling approaches hold the potential to offer new and policy-relevant insights for health researchers and policy makers.
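
    The regret core of RRM can be sketched as follows, using the common log-sum formulation in which the regret of alternative i aggregates ln(1 + exp(beta_m * (x_jm - x_im))) over competing alternatives j and attributes m, and choice probabilities are a logit on negative regret (attribute values and weights invented):

        import numpy as np

        def rrm_probs(X, beta):
            """RRM choice probabilities for one choice set.
            X: (alternatives x attributes) matrix; beta: attribute weights."""
            n = X.shape[0]
            regret = np.zeros(n)
            for i in range(n):
                for j in range(n):
                    if j != i:
                        regret[i] += np.log1p(np.exp(beta * (X[j] - X[i]))).sum()
            w = np.exp(-regret)          # logit on negative regret
            return w / w.sum()

        # Three hypothetical treatments: (effectiveness, side-effect risk).
        X = np.array([[0.8, 0.3],
                      [0.6, 0.1],
                      [0.9, 0.5]])
        beta = np.array([2.0, -3.0])
        print(rrm_probs(X, beta))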

  10. Reduced-Order Monte Carlo Modeling of Radiation Transport in Random Media

    Science.gov (United States)

    Olson, Aaron

    The ability to perform radiation transport computations in stochastic media is essential for predictive capabilities in applications such as weather modeling, radiation shielding involving non-homogeneous materials, atmospheric radiation transport computations, and transport in plasma-air structures. Due to the random nature of such media, it is often not clear how to model or otherwise compute on many forms of stochastic media. Several approaches to evaluating transport quantities for stochastic media exist, though such approaches often either yield considerable error or are quite computationally expensive. We model stochastic media using the Karhunen-Loeve (KL) expansion, seek to improve efficiency through use of stochastic collocation (SC), and provide higher-order information of output values using the polynomial chaos expansion (PCE). We study and demonstrate method convergence and apply the new methods to both spatially continuous and spatially discontinuous stochastic media. The new methods are shown to produce accurate solutions at reasonable computational cost for several problems when compared with existing solution methods. Spatially random media are modeled using transformations of the Gaussian-distributed KL expansion: continuous random media with a lognormal transformation and discontinuous random media with a Nataf transformation. Each transformation preserves second-order statistics for the quantity being modeled (atom density or material index, respectively). The Nystrom method facilitates numerical solution of the KL eigenvalues and eigenvectors, and a variety of methods are investigated for sampling KL eigenfunctions as a function of solved eigenvectors. The infinite KL expansion is truncated to a finite number of terms, each containing a random variable, and material realizations are created by either randomly or deterministically sampling from the random variables. Deterministic sampling is performed with either isotropic or anisotropic

  11. Feature selection using angle modulated simulated Kalman filter for peak classification of EEG signals.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Mubin, Marizan; Saad, Ismail

    2016-01-01

    In existing electroencephalogram (EEG) signal peak classification research, models such as the Dumpala, Acir, Liu, and Dingle peak models employ different sets of features. However, these models may not offer good performance across applications, as performance is found to be problem dependent. Therefore, the objective of this study is to combine all the associated features from the existing models before selecting the best combination of features. A new optimization algorithm, namely the angle modulated simulated Kalman filter (AMSKF), is employed as the feature selector, and the neural network random weight method is utilized as the classifier. In the conducted experiment, 11,781 peak candidate samples are employed for validation purposes. The samples are collected from three different peak event-related EEG signals of 30 healthy subjects: (1) single eye blink, (2) double eye blink, and (3) eye movement signals. The experimental results show that the proposed AMSKF feature selector is able to find the best combination of features and performs on par with existing related studies of epileptic EEG event classification.

  12. Investigating the Dynamic Effects of Counterfeits with a Random Changepoint Simultaneous Equation Model

    OpenAIRE

    Yi Qian; Hui Xie

    2011-01-01

    Using a unique panel dataset and a new model, this article investigates the dynamic effects of counterfeit sales on authentic-product price dynamics. We propose a Bayesian random-changepoint simultaneous equation model that simultaneously takes into account three important features in empirical studies: (1) Endogeneity of a market entry, (2) Nonstationarity of the entry effects and (3) Heterogeneity of the firms' response behaviors. Besides accounting for the endogeneity of counterfeiting, th...

  13. Encoding Sequential Information in Vector Space Models of Semantics: Comparing Holographic Reduced Representation and Random Permutation

    OpenAIRE

    Recchia, Gabriel; Jones, Michael; Sahlgren, Magnus; Kanerva, Pentti

    2010-01-01

    Encoding information about the order in which words typically appear has been shown to improve the performance of high-dimensional semantic space models. This requires an encoding operation capable of binding together vectors in an order-sensitive way, and efficient enough to scale to large text corpora. Although both circular convolution and random permutations have been enlisted for this purpose in semantic models, these operations have never been systematically compared. In Experiment 1 we...
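
    The two encoding operations being compared can be sketched directly (dimension and vectors invented): circular convolution binds two vectors via the FFT and is approximately invertible by correlation, while a random permutation marks order and is exactly invertible:

        import numpy as np

        rng = np.random.default_rng(0)
        d = 1024
        a = rng.standard_normal(d) / np.sqrt(d)
        b = rng.standard_normal(d) / np.sqrt(d)

        # HRR: bind two vectors with circular convolution (via the FFT).
        bound = np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), d)

        # Approximate unbinding: convolve with the involution of a.
        a_inv = np.concatenate(([a[0]], a[1:][::-1]))
        b_hat = np.fft.irfft(np.fft.rfft(a_inv) * np.fft.rfft(bound), d)
        cos = b_hat @ b / (np.linalg.norm(b_hat) * np.linalg.norm(b))
        print(f"HRR recovery cosine: {cos:.2f}")   # noisy but far above chance

        # Random permutation: mark order by permuting; exactly invertible.
        perm = rng.permutation(d)
        encoded = b[perm]
        decoded = np.empty(d)
        decoded[perm] = encoded
        print("permutation recovery exact:", np.allclose(decoded, b))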

  14. UA(1) breaking and phase transition in chiral random matrix model

    OpenAIRE

    Sano, T.; Fujii, H.; Ohtani, M

    2009-01-01

    We propose a chiral random matrix model which properly incorporates the flavor-number dependence of the phase transition owing to the U_A(1) anomaly term. At finite temperature, the model shows the second-order phase transition with mean-field critical exponents for two massless flavors, while in the case of three massless flavors the transition turns out to be of the first order. The topological susceptibility satisfies the anomalous U_A(1) Ward identity and decreases gradually with the temp...

  15. TESTING BRAND VALUE MEASUREMENT METHODS IN A RANDOM COEFFICIENT MODELING FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Szõcs Attila

    2014-07-01

    Our objective is to provide a framework for measuring brand equity, that is, the added value endowed to the product by the brand. Based on a demand and supply model, we propose a structural model that enables testing the structural effect of brand equity (a demand side effect) on brand value (a supply side effect), using Monte Carlo simulation. Our main research question is which of the three brand value measurement methods (price premium, revenue premium and profit premium) is more suitable from the perspective of the structural link between brand equity and brand value. Our model is based on recent developments in random coefficients model applications.

  16. A Statistical Model Updating Method of Beam Structures with Random Parameters under Static Load

    Directory of Open Access Journals (Sweden)

    Zhifeng Wu

    2017-06-01

    This paper presents a new statistical model updating method for beam structures with random parameters under static load. The new updating method considers structural parameters and measurement errors to be random. To reduce the unmeasured degrees of freedom in the finite element model, a static condensation technique is used in this method. A statistical model updating equation with respect to element updated factors is established afterwards. The element updated factors are expanded as random multivariate power series. Using a high-order perturbation technique, the statistical model updating equation can be solved to obtain the coefficients of the power series expansions of the element updated factors. The results of two numerical examples show that, for the solution of the statistical model updating equation, the accuracy of the proposed method agrees very well with that of the Monte Carlo simulation method. The static responses obtained by the updated finite element model coincide with the measured results very well. Finally, a series of static load tests of a concrete beam are conducted to verify the effectiveness of the proposed method.

  17. Drivers of peak sales for pharmaceutical brands

    NARCIS (Netherlands)

    Fischer, Marc; Leeflang, Peter S. H.; Verhoef, Peter C.

    2010-01-01

    Peak sales are an important metric in the pharmaceutical industry. Specifically, managers are focused on the height-of-peak-sales and the time required achieving peak sales. We analyze how order of entry and quality affect the level of peak sales and the time-to-peak-sales of pharmaceutical brands.

  18. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles at a grid node are simultaneously scattered using the Bernoulli distribution. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
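
    A minimal sketch of the idea for simple 1D diffusion on a periodic grid: a single binomial draw per node scatters that node's whole population at once, rather than looping over particles:

        import numpy as np

        rng = np.random.default_rng(1)
        n_cells, steps = 101, 100
        grid = np.zeros(n_cells, dtype=np.int64)
        grid[n_cells // 2] = 1_000_000     # all particles start at the centre

        for _ in range(steps):
            # One binomial draw per node scatters its whole population at
            # once (half left on average), instead of a per-particle loop.
            left = rng.binomial(grid, 0.5)
            right = grid - left
            grid = np.roll(left, -1) + np.roll(right, 1)   # periodic grid

        x = np.arange(n_cells)
        mean = (grid * x).sum() / grid.sum()
        var = (grid * (x - mean) ** 2).sum() / grid.sum()
        print(f"sample variance {var:.1f} vs diffusion prediction {steps}")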

  19. A theory of solving TAP equations for Ising models with general invariant random matrices

    DEFF Research Database (Denmark)

    Opper, Manfred; Çakmak, Burak; Winther, Ole

    2016-01-01

    We consider the problem of solving TAP mean field equations by iteration for Ising models with coupling matrices that are drawn at random from general invariant ensembles. We develop an analysis of iterative algorithms using a dynamical functional approach that, in the thermodynamic limit, renders the iteration dependent on a Gaussian distributed field only. The TAP magnetizations are stable fixed points if a de Almeida–Thouless stability criterion is fulfilled. We illustrate our method explicitly for coupling matrices drawn from the random orthogonal ensemble.

  20. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    Science.gov (United States)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
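
    The sparse-representation step can be sketched with scikit-learn's l1-penalised logistic regression (synthetic stand-in features; the paper's randomness features and the FGMM classifier are not reproduced here):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Synthetic stand-ins for per-stream randomness features
        # (e.g., NIST-style test statistics); labels: 1 = encrypted.
        rng = np.random.default_rng(0)
        X = rng.random((400, 16))
        y = (X[:, :4].mean(axis=1)
             + 0.1 * rng.standard_normal(400) > 0.5).astype(int)

        # The l1 penalty drives uninformative weights to exactly zero,
        # yielding a sparse representation of the feature set.
        clf = LogisticRegression(penalty="l1", solver="liblinear",
                                 C=0.5).fit(X, y)
        print("nonzero coefficients:", int(np.count_nonzero(clf.coef_)))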

  1. Study on the Business Cycle Model with Fractional-Order Time Delay under Random Excitation

    Directory of Open Access Journals (Sweden)

    Zifei Lin

    2017-07-01

    Time delay in economic policy and the memory property of a real economy system are omnipresent and inevitable. In this paper, a business cycle model with fractional-order time delay, which describes the delay and memory property of economic control, is investigated. The stochastic averaging method is applied to obtain an approximate analytical solution. Numerical simulations are done to verify the method. The effects of fractional order, time delay, economic control and random excitation on the amplitude of the economic system are investigated. The results show that time delay, fractional order and the intensity of random excitation can all magnify the amplitude and increase the volatility of the economic system.

  2. Crash Frequency Analysis Using Hurdle Models with Random Effects Considering Short-Term Panel Data.

    Science.gov (United States)

    Chen, Feng; Ma, Xiaoxiang; Chen, Suren; Yang, Lin

    2016-10-26

    Random effect panel data hurdle models are established to research daily crash frequency on a mountainous section of highway I-70 in Colorado. Road Weather Information System (RWIS) real-time traffic, weather and road surface conditions are merged into the models incorporating road characteristics. The random effect hurdle negative binomial (REHNB) model is developed to study daily crash frequency along with three other competing models. The proposed model considers the serial correlation of observations, the unbalanced panel-data structure, and dominating zeroes. Based on several statistical tests, the REHNB model is identified as the most appropriate among the four candidate models for a typical mountainous highway. The results show that: (1) the presence of over-dispersion in the short-term crash frequency data is due to both excess zeros and unobserved heterogeneity in the crash data; and (2) the REHNB model is suitable for this type of data. Moreover, time-varying variables including weather conditions, road surface conditions and traffic conditions are found to play important roles in crash frequency. Besides the methodological advancements, the proposed approach bears great potential for engineering applications to develop short-term crash frequency models by utilizing detailed field monitoring data such as RWIS, which is becoming more accessible around the world.

  3. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Science.gov (United States)

    Chlumecký, Martin; Buchtele, Josef; Richta, Karel

    2017-10-01

    The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists. Therefore, fast and high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff modelling using a genetic algorithm (GA) with a newly prepared concept of a random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality random number generator. The new HRNG generates random numbers based on hydrological information and provides better numbers compared to pure software generators. The GA enhances the model calibration very well, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and offers an improvement in rainfall-runoff modelling.

  4. Modeling longitudinal data with nonparametric multiplicative random effects jointly with survival data.

    Science.gov (United States)

    Ding, Jimin; Wang, Jane-Ling

    2008-06-01

    In clinical studies, longitudinal biomarkers are often used to monitor disease progression and failure time. Joint modeling of longitudinal and survival data has certain advantages and has emerged as an effective way to mutually enhance information. Typically, a parametric longitudinal model is assumed to facilitate the likelihood approach. However, the choice of a proper parametric model turns out to be more elusive than models for standard longitudinal studies in which no survival endpoint occurs. In this article, we propose a nonparametric multiplicative random effects model for the longitudinal process, which has many applications and leads to a flexible yet parsimonious nonparametric random effects model. A proportional hazards model is then used to link the biomarkers and event time. We use B-splines to represent the nonparametric longitudinal process, and select the number of knots and degrees based on a version of the Akaike information criterion (AIC). Unknown model parameters are estimated through maximizing the observed joint likelihood, which is iteratively maximized by the Monte Carlo Expectation Maximization (MCEM) algorithm. Due to the simplicity of the model structure, the proposed approach has good numerical stability and compares well with the competing parametric longitudinal approaches. The new approach is illustrated with primary biliary cirrhosis (PBC) data, aiming to capture nonlinear patterns of serum bilirubin time courses and their relationship with survival time of PBC patients.

  5. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Science.gov (United States)

    Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian

    2016-01-01

    Randomized response is a research method to get accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations. On the other hand, more sophisticated sampling regimes and corresponding formulas are seldom employed in sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models by using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas in a survey of premarital sex and cheating on exams at Soochow University is also provided. The reliability of the survey methods and formulas for sensitive question surveys was found to be high.

  6. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Directory of Open Access Journals (Sweden)

    Xiangke Pu

    Randomized response is a research method to get accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large target populations; on the other hand, more sophisticated sampling regimes and the corresponding formulas are seldom employed in sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models, using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas was assessed in a survey of premarital sex and cheating on exams at Soochow University. The reliability of the survey methods and formulas for sensitive question surveys was found to be high.

  7. Random cascades on wavelet trees and their use in analyzing and modeling natural images

    Science.gov (United States)

    Wainwright, Martin J.; Simoncelli, Eero P.; Willsky, Alan S.

    2000-12-01

    We develop a new class of non-Gaussian multiscale stochastic processes defined by random cascades on trees of wavelet or other multiresolution coefficients. These cascades reproduce a rich semi-parametric class of random variables known as Gaussian scale mixtures. We demonstrate that this model class can accurately capture the remarkably regular and non-Gaussian features of natural images in a parsimonious fashion, involving only a small set of parameters. In addition, this model structure leads to efficient algorithms for image processing. In particular, we develop a Newton-like algorithm for MAP estimation that exploits very fast algorithms for linear-Gaussian estimation on trees, and hence is efficient. On the basis of this MAP estimator, we develop and illustrate a denoising technique that is based on a global prior model and preserves the structure of natural images.
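
    A minimal numerical illustration of a Gaussian scale mixture: a Gaussian coefficient multiplied by the square root of a random positive scalar, which produces the heavy tails the model exploits. The lognormal choice of mixer is an arbitrary assumption for the sketch.

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(3)
        n = 200_000
        u = rng.normal(0.0, 1.0, n)          # underlying Gaussian coefficient
        z = rng.lognormal(0.0, 0.7, n)       # positive scalar multiplier (mixer)
        x = np.sqrt(z) * u                   # Gaussian scale mixture sample

        print("excess kurtosis, Gaussian:", kurtosis(u))  # ~0
        print("excess kurtosis, GSM:     ", kurtosis(x))  # clearly > 0: heavy tails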

  8. Random Regression Models Based On The Skew Elliptically Contoured Distribution Assumptions With Applications To Longitudinal Data

    Science.gov (United States)

    Zheng, Shimin; Rao, Uma; Bartolucci, Alfred A.; Singh, Karan P.

    2011-01-01

    Bartolucci et al. (2003) extended the distribution assumption from the normal (Lyles et al., 2000) to the elliptically contoured distribution (ECD) for random regression models used in the analysis of longitudinal data, accounting for both undetectable values and informative drop-outs. In this paper, the random regression models are constructed on the multivariate skew ECD. A real data set is used to illustrate that skew ECDs can fit some unimodal continuous data better than Gaussian distributions or more general continuous symmetric distributions when the symmetry assumption is violated. A simulation study also illustrates the model fit for a variety of skew ECDs. The software used was SAS/STAT, V. 9.13. PMID:21637734

  9. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Light emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. Serving different functions (i.e., illumination and color), a lamp may have two or more performance characteristics. When the multiple performance characteristics are dependent, accurately analyzing the system reliability becomes a challenging problem. In this paper, we assume that the system has two performance characteristics and that each is governed by a random effects Gamma process, where the random effects capture unit-to-unit differences. The dependency of the performance characteristics is described by a Frank copula function, through which the reliability assessment model is built. Because the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data demonstrates the usefulness and validity of the proposed model and method.
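
    The copula step can be sketched directly: given the marginal probabilities that each performance characteristic is still within its failure threshold at time t, a Frank copula couples them into a joint (system) reliability. The marginal values and the dependence parameter theta below are made up for illustration.

        import numpy as np

        def frank_copula(u, v, theta):
            # Frank copula C(u, v); theta != 0 controls the dependence strength
            num = np.expm1(-theta * u) * np.expm1(-theta * v)
            return -np.log1p(num / np.expm1(-theta)) / theta

        # illustrative marginal reliabilities of the two performance characteristics
        p_lumen, p_color = 0.95, 0.90
        for theta in (0.01, 2.0, 8.0):   # near-independence up to strong positive dependence
            print(theta, frank_copula(p_lumen, p_color, theta))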

  10. Bayesian analysis for exponential random graph models using the adaptive exchange sampler

    KAUST Repository

    Jin, Ick Hoon

    2013-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of the existence of intractable normalizing constants. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the issue of intractable normalizing constants encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm: it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated on several social networks, including the Florentine business network, a molecule synthetic network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
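
    A toy sketch of the plain (non-adaptive) exchange step on the simplest possible ERGM, with only an edge-count statistic, so the auxiliary network can be simulated exactly; the adaptive, importance-sampling machinery of the paper is not reproduced here, and a flat prior is assumed.

        import numpy as np

        rng = np.random.default_rng(4)
        n_nodes, n_pairs = 20, 20 * 19 // 2

        def simulate_edge_count(theta):
            # edges-only ERGM: every dyad is an independent Bernoulli(sigmoid(theta)) edge
            p = 1.0 / (1.0 + np.exp(-theta))
            return int((rng.random(n_pairs) < p).sum())

        y_obs = simulate_edge_count(-1.0)   # observed sufficient statistic: edge count

        theta, chain = 0.0, []
        for _ in range(5000):
            theta_prop = theta + rng.normal(0, 0.3)
            y_aux = simulate_edge_count(theta_prop)     # auxiliary network draw
            # exchange ratio: the intractable normalizing constants cancel
            log_r = (theta_prop - theta) * (y_obs - y_aux)
            if np.log(rng.random()) < log_r:
                theta = theta_prop
            chain.append(theta)

        print("posterior mean of theta:", np.mean(chain[1000:]))   # near -1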

  11. Model for continuously scanning ultrasound vibrometer sensing displacements of randomly rough vibrating surfaces.

    Science.gov (United States)

    Ratilal, Purnima; Andrews, Mark; Donabed, Ninos; Galinde, Ameya; Rappaport, Carey; Fenneman, Douglas

    2007-02-01

    An analytic model is developed for the time-dependent ultrasound field reflected off a randomly rough vibrating surface for a continuously scanning ultrasound vibrometer system in a bistatic configuration. Kirchhoff's approximation to Green's theorem is applied to model the three-dimensional scattering interaction of the ultrasound wave field with the vibrating rough surface. The model incorporates the beam patterns of both the transmitting and receiving ultrasound transducers and the statistical properties of the rough surface. Two methods are applied to the ultrasound system for estimating displacement and velocity amplitudes of an oscillating surface: incoherent Doppler shift spectra and coherent interferometry. Motion of the vibrometer over the randomly rough surface leads to time-dependent scattering noise that randomizes the received signal spectrum. Simulations with the model indicate that surface displacement and velocity estimates depend strongly on the scan velocity and projected wavelength of the ultrasound vibrometer relative to the roughness height standard deviation and correlation length scales of the rough surface. The model is applied to determine limiting scan speeds for an ultrasound vibrometer measuring ground displacements arising from acoustic or seismic excitation, to be used in acoustic landmine confirmation sensing.

  12. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    ... an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and the sampling error ... and is interpreted using several simulations and clinical examples. In addition, we show mathematically that diversity is equal to or greater than inconsistency, that is, D2 >= I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects model meta-analysis.
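
    A sketch of the D2 computation as defined above, using the DerSimonian-Laird estimate of the between-trial variance; the trial effect estimates and standard errors are invented for illustration.

        import numpy as np

        # invented trial effects (log odds ratios) and standard errors
        eff = np.array([-0.30, -0.10, -0.45, 0.05, -0.25])
        se = np.array([0.15, 0.20, 0.25, 0.18, 0.22])

        w = 1 / se**2                                   # fixed-effect weights
        mu_f = np.sum(w * eff) / w.sum()                # fixed-effect pooled estimate
        q = np.sum(w * (eff - mu_f) ** 2)               # Cochran's Q
        df = len(eff) - 1
        tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))  # DerSimonian-Laird

        v_fixed = 1 / w.sum()                           # pooled-estimate variance, fixed-effect model
        v_random = 1 / np.sum(1 / (se**2 + tau2))       # ... under the random-effects model
        d2 = (v_random - v_fixed) / v_random            # diversity: relative variance reduction
        i2 = max(0.0, (q - df) / q)                     # inconsistency, for comparison
        print(f"D2 = {d2:.2f}  I2 = {i2:.2f}")          # D2 >= I2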

  13. Random effects modeling of multiple binomial responses using the multivariate binomial logit-normal distribution.

    Science.gov (United States)

    Coull, B A; Agresti, A

    2000-03-01

    The multivariate binomial logit-normal distribution is a mixture distribution for which, (i) conditional on a set of success probabilities and sample size indices, the components of a vector of counts are independent binomial variates, and (ii) the vector of logits of the parameters has a multivariate normal distribution. We use this distribution to model multivariate binomial-type responses using a vector of random effects. The vector of logits of the parameters has a mean that is a linear function of explanatory variables and has an unspecified or partly specified covariance matrix. The model generalizes and provides greater flexibility than the univariate model that uses a normal random effect to account for positive correlations in clustered data. The multivariate model is useful when different elements of the response vector refer to different characteristics, each of which may naturally have its own random effect. It is also useful for repeated binary measurements of a single response when there is a nonexchangeable association structure, such as one often expects with longitudinal data or when negative association exists for at least one pair of responses. We apply the model to an influenza study with repeated responses in which some pairs are negatively associated, and to a developmental toxicity study with continuation-ratio logits applied to an ordinal response with clustered observations.
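
    A simulation sketch of the mixture construction in (i)-(ii): multivariate normal logits induce correlated binomial counts, here with a negative association. The mean vector, covariance matrix and trial count are arbitrary illustrative values.

        import numpy as np

        rng = np.random.default_rng(5)
        mu = np.array([-1.0, 0.5])                      # means of the logits
        Sigma = np.array([[1.0, -0.6], [-0.6, 1.0]])    # negative association between responses
        n_trials = 20

        z = rng.multivariate_normal(mu, Sigma, size=10_000)  # (ii) normal logits
        p = 1 / (1 + np.exp(-z))                             # success probabilities
        y = rng.binomial(n_trials, p)                        # (i) conditional binomial counts

        print("correlation of counts:", np.corrcoef(y.T)[0, 1])  # inherits the negative sign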

  14. Nakagami Markov random field as texture model for ultrasound RF envelope image.

    Science.gov (United States)

    Bouhlel, N; Sevestre-Ghalila, S

    2009-06-01

    The aim of this paper is to propose a new Markov random field (MRF) model for the backscattered ultrasonic echo in order to extract information about backscatter characteristics, such as scatterer density, amplitude and spacing. The model combines the Nakagami distribution, which describes the envelope of the backscattered echo, with spatial interaction modeled by an MRF. The parameters of the model and the parameter estimation method are introduced. Computer simulations using an ultrasound radio-frequency (RF) simulator and experiments on choroidal malignant melanoma were undertaken to test the validity of the model. The relationship between the parameters of the MRF model and the backscatter characteristics is established, and the ability of the model to distinguish between normal and abnormal tissue is demonstrated. Together, these results show the success of the model.

  15. Peak Power Output Test on a Rowing Ergometer: A Methodological Study.

    Science.gov (United States)

    Metikos, Boris; Mikulic, Pavle; Sarabon, Nejc; Markovic, Goran

    2015-10-01

    We aimed to examine the reliability and validity of the peak power output test on a rowing ergometer (Concept2 Model D) and to establish the "optimal resistance" at which this peak power output is observed, in 87 participants with varying levels of physical activity and rowing expertise: 15 male and 12 female physically inactive students (age: 21 ± 2 years), 16 male and 20 female physically active students (age: 23 ± 2 years), and 15 male and 9 female trained rowers (age: 19 ± 2 years). The participants performed a countermovement jump (CMJ) test on a force plate, followed by 3 maximal-effort rowing trials using the lowest, medium, and highest adjustable resistance settings (i.e., "1", "5," and "10" on the resistance control dial of the ergometer) in randomized order. The test proved to be reliable (coefficients of variation: 2.6-6.5%; intraclass correlation coefficients: 0.87-0.98). The correlation coefficients between CMJ peak power and rowing peak power (both in watts per kilogram) were fairly consistent across all 3 groups of participants and resistance levels, ranging between r = 0.70 and r = 0.78. Finally, the highest power output was observed at the highest resistance setting in the 2 nonathletic groups (p < 0.05). We conclude that the peak power output test on a rowing ergometer may serve as a reliable and valid tool for assessing whole-body peak power output in untrained individuals and rowing athletes.

  16. SHER: a colored petri net based random mobility model for wireless communications.

    Directory of Open Access Journals (Sweden)

    Naeem Akhtar Khan

    In wireless network research, simulation is the most important technique for investigating and validating network behavior. Wireless networks typically consist of mobile hosts, so the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is integral, and the key role of a mobility model is to mimic the real-life traveling patterns under study. The performance of routing protocols and of mobility management strategies, e.g., paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model that exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator that exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling: users can extract meaningful information with a single mouse click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the errors.

  17. SHER: a colored petri net based random mobility model for wireless communications.

    Science.gov (United States)

    Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal

    2015-01-01

    In wireless network research, simulation is the most important technique for investigating and validating network behavior. Wireless networks typically consist of mobile hosts, so the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is integral, and the key role of a mobility model is to mimic the real-life traveling patterns under study. The performance of routing protocols and of mobility management strategies, e.g., paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model that exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator that exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling: users can extract meaningful information with a single mouse click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the errors.

  18. Bayesian hierarchical models for cost-effectiveness analyses that use data from cluster randomized trials.

    Science.gov (United States)

    Grieve, Richard; Nixon, Richard; Thompson, Simon G

    2010-01-01

    Cost-effectiveness analyses (CEA) may be undertaken alongside cluster randomized trials (CRTs) where randomization is at the level of the cluster (for example, the hospital or primary care provider) rather than the individual. Costs (and outcomes) within clusters may be correlated so that the assumption made by standard bivariate regression models, that observations are independent, is incorrect. This study develops a flexible modeling framework to acknowledge the clustering in CEA that use CRTs. The authors extend previous Bayesian bivariate models for CEA of multicenter trials to recognize the specific form of clustering in CRTs. They develop new Bayesian hierarchical models (BHMs) that allow mean costs and outcomes, and also variances, to differ across clusters. They illustrate how each model can be applied using data from a large (1732 cases, 70 primary care providers) CRT evaluating alternative interventions for reducing postnatal depression. The analyses compare cost-effectiveness estimates from BHMs with standard bivariate regression models that ignore the data hierarchy. The BHMs show high levels of cost heterogeneity across clusters (intracluster correlation coefficient, 0.17). Compared with standard regression models, the BHMs yield substantially increased uncertainty surrounding the cost-effectiveness estimates, and altered point estimates. The authors conclude that ignoring clustering can lead to incorrect inferences. The BHMs that they present offer a flexible modeling framework that can be applied more generally to CEA that use CRTs.

  19. Modelling and Simulation of Photosynthetic Microorganism Growth: Random Walk vs. Finite Difference Method

    Czech Academy of Sciences Publication Activity Database

    Papáček, Š.; Matonoha, Ctirad; Štumbauer, V.; Štys, D.

    2012-01-01

    Roč. 82, č. 10 (2012), s. 2022-2032 ISSN 0378-4754. [Modelling 2009. IMACS Conference on Mathematical Modelling and Computational Methods in Applied Sciences and Engineering /4./. Rožnov pod Radhoštěm, 22.06.2009-26.06.2009] Grant - others:CENAKVA(CZ) CZ.1.05/2.1.00/01.0024; GA JU(CZ) 152//2010/Z Institutional research plan: CEZ:AV0Z10300504 Keywords : multiscale modelling * distributed parameter system * boundary value problem * random walk * photosynthetic factory Subject RIV: EI - Biotechnology ; Bionics Impact factor: 0.836, year: 2012

  20. Secular evolution of the vertical column abundances of CHClF2 (HCFC-22) in the Earth's atmosphere inferred from ground-based IR solar observations at the Jungfraujoch and at Kitt Peak, and comparison with model calculations

    Science.gov (United States)

    Zander, R.; Mahieu, E.; Demoulin, PH.; Rinsland, C. P.; Weisenstein, D. K.; Ko, M. K. W.; Sze, N. D.; Gunson, M. R.

    1994-01-01

    Series of high-resolution infrared solar spectra recorded at the International Scientific Station of the Jungfraujoch, Switzerland, between 06/1986 and 11/1992, and at Kitt Peak National Observatory, Tucson, Arizona (U.S.A.), from 12/1980 to 04/1992, have been analyzed to provide a comprehensive ensemble of vertical column abundances of CHClF2 (HCFC-22; Freon-22) above the European and North American continents. The columns were derived from nonlinear least-squares curve fittings between synthetic spectra and the observations containing the unresolved 2 nu(sub 6) Q-branch absorption of CHClF2 at 829.05/cm. The changes versus time observed in these columns were modeled assuming both an exponential and a linear increase with time. The exponential rates of increase, with one-sigma uncertainties, were found to be (7.0 +/- 0.35)%/yr for the Jungfraujoch data and (7.0 +/- 0.23)%/yr for the Kitt Peak data. The exponential trend of 7.0%/yr found at the two widely separated stations can be considered representative of the global increase of the CHClF2 burden in the Earth's atmosphere during the period 1980 to 1992. When assuming two realistic vertical volume mixing ratio profiles for CHClF2 in the troposphere, one quasi-constant and the other decreasing by about 13% from the ground to the tropopause, the concentrations for mid-1990 were found to lie between 97 and 111 pptv (parts per trillion by volume) at the 3.58 km altitude of the Jungfraujoch and between 97 and 103 pptv at Kitt Peak, 2.09 km above sea level. Corresponding values derived from calculations using a high-vertical-resolution 2D model and recently compiled HCFC-22 releases to the atmosphere were equal to 107 and 105 pptv, respectively, in excellent agreement with the measurements. The model-calculated lifetime of CHClF2 was found to be 15.6 years. The present results are compared critically with similar data found in the literature; on average, the concentrations found here are lower by 15%.
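
    The reported trend estimation amounts to a regression of log column abundance on time; a sketch with a synthetic column series grown at 7%/yr (the data values below are fabricated for illustration, not the Jungfraujoch or Kitt Peak measurements):

        import numpy as np

        rng = np.random.default_rng(6)
        years = np.linspace(1980.9, 1992.3, 60)            # synthetic observation dates
        col = 2.0e14 * np.exp(0.07 * (years - 1980)) * rng.lognormal(0, 0.03, 60)

        # slope of ln(column) vs time is the exponential rate of increase
        slope, intercept = np.polyfit(years - 1980, np.log(col), 1)
        print(f"exponential rate of increase: {100 * slope:.1f} %/yr")   # ~7.0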

  1. A model for a correlated random walk based on the ordered extension of pseudopodia.

    Directory of Open Access Journals (Sweden)

    Peter J M Van Haastert

    Cell migration in the absence of external cues is well described by a correlated random walk. Most single cells move by extending protrusions called pseudopodia. To deduce how cells walk, we have analyzed the formation of pseudopodia by Dictyostelium cells. We observed that pseudopod formation is highly ordered, with two types of pseudopodia: first, de novo formation of pseudopodia at random positions on the cell body, and therefore in random directions; second, pseudopod splitting near the tip of the current pseudopod in alternating right/left directions, leading to a persistent zig-zag trajectory. Here we analyzed the probability frequency distributions of the angles between pseudopodia and used this information to design a stochastic model for cell movement. Monte Carlo simulations show that the critical elements are the ratio of persistent splitting pseudopodia to random de novo pseudopodia, the left/right alternation, the angle between pseudopodia, and the variance of this angle. Experiments confirm predictions of the model, showing reduced persistence in mutants that are defective in pseudopod splitting and in mutants with an irregular cell surface.
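
    A stochastic sketch in the spirit of the described model: with some probability a pseudopod splits at an alternating left/right angle, otherwise a de novo pseudopod picks a uniformly random direction. The numerical parameter values are placeholders, not the fitted Dictyostelium values.

        import numpy as np

        rng = np.random.default_rng(7)
        p_split = 0.8                          # fraction of splitting (persistent) pseudopodia
        split_angle = np.deg2rad(55)           # mean angle between parent and split pseudopod
        angle_sd = np.deg2rad(15)              # variance of that angle (as an SD)

        theta, sign = 0.0, 1.0
        pos = np.zeros((501, 2))
        for i in range(1, 501):
            if rng.random() < p_split:         # splitting pseudopod
                theta += sign * split_angle + rng.normal(0, angle_sd)
                sign = -sign                   # alternate right/left -> zig-zag trajectory
            else:                              # de novo pseudopod, random direction
                theta = rng.uniform(0, 2 * np.pi)
            pos[i] = pos[i - 1] + [np.cos(theta), np.sin(theta)]

        net = np.linalg.norm(pos[-1])
        print(f"net displacement after 500 steps: {net:.1f} (grows with p_split)")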

  2. The episodic random utility model unifies time trade-off and discrete choice approaches in health state valuation

    NARCIS (Netherlands)

    B.M. Craig (Benjamin); J.J. van Busschbach (Jan)

    2009-01-01

    BACKGROUND: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. METHODS: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common ...

  3. Equivalence of effective medium and random resistor network models for disorder-induced unsaturating linear magnetoresistance

    Science.gov (United States)

    Ramakrishnan, Navneeth; Lai, Ying Tong; Lara, Silvia; Parish, Meera M.; Adam, Shaffique

    2017-12-01

    A linear unsaturating magnetoresistance at high perpendicular magnetic fields, together with a quadratic positive magnetoresistance at low fields, has been seen in many different experimental materials, ranging from silver chalcogenides and thin films of InSb to topological materials like graphene and Dirac semimetals. In the literature, two very different theoretical approaches have been used to explain this classical magnetoresistance as a consequence of sample disorder. The phenomenological random resistor network model constructs a grid of four terminal resistors, each with a varying random resistance. The effective medium theory model imagines a smoothly varying disorder potential that causes a continuous variation of the local conductivity. Here, we demonstrate numerically that both models belong to the same universality class and that a restricted class of the random resistor network is actually equivalent to the effective medium theory. Both models are also in good agreement with experiments on a diverse range of materials. Moreover, we show that in both cases, a single parameter, i.e., the ratio of the fluctuations in the carrier density to the average carrier density, completely determines the magnetoresistance profile.

  4. Random regression models for milk, fat and protein in Colombian Buffaloes

    Directory of Open Access Journals (Sweden)

    Naudin Hurtado-Lugo

    2015-01-01

    Objective. To estimate covariance functions for additive genetic and permanent environmental effects and, subsequently, genetic parameters for test-day milk (MY), fat (FY) and protein (PY) yields and mozzarella cheese (MP) in buffaloes from Colombia, using random regression models (RRM) with Legendre polynomials (LP). Materials and Methods. Test-day records of MY, FY, PY and MP from 1884 first lactations of buffalo cows by 228 sires were analyzed. The animals belonged to 14 herds in Colombia between 1995 and 2011. Ten monthly classes of days in milk were considered for the test-day yields. Contemporary groups were defined as herd-year-month of the milk test day. Random additive genetic, permanent environmental and residual effects were included in the model. Fixed effects included the contemporary group, linear and quadratic effects of age at calving, and the average lactation curve of the population, which was modeled by third-order LP. Random additive genetic and permanent environmental effects were estimated by RRM using third- to sixth-order LP. Residual variances were modeled using homogeneous and heterogeneous structures. Results. The heritabilities for MY, FY, PY and MP ranged from 0.38 to 0.05, 0.67 to 0.11, 0.50 to 0.07 and 0.50 to 0.11, respectively. Conclusions. In general, RRM are adequate for describing the genetic variation in test-day MY, FY, PY and MP in Colombian buffaloes.
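
    The Legendre-polynomial part of such a random regression model reduces to a design matrix evaluated at standardized days in milk; a sketch of that construction (the order-3 choice and the monthly test-day grid are illustrative):

        import numpy as np
        from numpy.polynomial import legendre

        dim = np.array([15, 45, 75, 105, 135, 165, 195, 225, 255, 285])  # monthly test days
        x = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1          # map DIM to [-1, 1]

        # columns P0..P3 are the covariates multiplying the random regression coefficients
        Z = legendre.legvander(x, 3)
        print(Z.round(3))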

  5. Identification of a potential fibromyalgia diagnosis using random forest modeling applied to electronic medical records

    Directory of Open Access Journals (Sweden)

    Emir B

    2015-06-01

    Background: Diagnosis of fibromyalgia (FM), a chronic musculoskeletal condition characterized by widespread pain and a constellation of symptoms, remains challenging and is often delayed. Methods: Random forest modeling of electronic medical records was used to identify variables that may facilitate earlier FM identification and diagnosis. Subjects aged ≥18 years with two or more listings of the International Classification of Diseases, Ninth Revision (ICD-9) code for FM (ICD-9 729.1) ≥30 days apart during the 2012 calendar year were defined as cases among subjects associated with an integrated delivery network and who had one or more health care provider encounters in the Humedica database in calendar years 2011 and 2012. Controls were those without the FM ICD-9 codes. Seventy-two demographic, clinical, and health care resource utilization variables were entered into a random forest model with downsampling to account for cohort imbalances (<1% of subjects had FM). The importance of the top ten variables was ranked based on normalization to 100% for the variable with the largest loss in predictive performance upon its omission from the model. Since random forest is a complex prediction method, a set of simple rules was derived to help understand what factors drive individual predictions. Results: The ten variables identified by the model were: number of visits where laboratory/non-imaging diagnostic tests were ordered; number of outpatient visits excluding office visits; age; number of office visits; number of opioid prescriptions; number of medications prescribed; number of pain medications excluding opioids; number of medications administered/ordered; number of emergency room visits; and number of musculoskeletal conditions. A receiver operating ...
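
    A schematic of the downsampled random forest step on fabricated data; the feature set, prevalence and library calls are illustrative stand-ins, not the study's pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(8)
        n = 50_000
        X = rng.poisson(3.0, size=(n, 10)).astype(float)   # 10 fake utilization counts
        y = rng.random(n) < 0.01 * (1 + 0.2 * X[:, 0])     # rare label, roughly 1-2% prevalence

        # downsample controls to the number of cases to balance the training cohorts
        cases = np.flatnonzero(y)
        controls = rng.choice(np.flatnonzero(~y), size=cases.size, replace=False)
        idx = np.concatenate([cases, controls])

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[idx], y[idx])
        top = np.argsort(clf.feature_importances_)[::-1]
        # normalize importances to 100% for the most important variable
        print((100 * clf.feature_importances_[top] / clf.feature_importances_[top[0]]).round(1))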

  6. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    This study explores the plausibility of regret minimization as a behavioral paradigm underlying the choice of crash avoidance maneuvers. In contrast to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers seek to minimize their anticipated regret from their corrective actions. The model accounts for driver attributes and behavior, critical events that made the crash imminent, vehicle and road characteristics, and environmental conditions. Analyzed data are retrieved from the General Estimates System (GES) crash database for the period between 2005 and 2009. The predictive ability of the RRM-based model is slightly superior to that of its RUM-based counterpart, the multinomial logit (MNL) model. The marginal effects predicted by the RRM-based model are greater than those predicted by the RUM-based model.
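
    For reference, a sketch of the Chorus-type RRM kernel that such a model typically builds on; the maneuver attributes and coefficients are invented, and the paper's exact specification may differ.

        import numpy as np

        X = np.array([[1.0, 0.2],     # maneuver 1: attribute levels (e.g., effort, risk)
                      [0.4, 0.8],     # maneuver 2
                      [0.7, 0.5]])    # maneuver 3
        beta = np.array([-0.9, -1.4])

        n_alt = X.shape[0]
        R = np.zeros(n_alt)
        for i in range(n_alt):
            for j in range(n_alt):
                if j != i:            # regret of i against every competing maneuver j
                    R[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))

        P = np.exp(-R) / np.exp(-R).sum()   # regret-minimizing choice probabilities
        print(P.round(3))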

  7. The application of the random regret minimization model to drivers’ choice of crash avoidance maneuvers

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    This study explores the plausibility of regret minimization as a behavioral paradigm underlying the choice of crash avoidance maneuvers. In contrast to previous studies that considered utility maximization, this study applies the random regret minimization (RRM) model while assuming that drivers seek to minimize their anticipated regret from their corrective actions. The model accounts for driver attributes and behavior, critical events that made the crash imminent, vehicle and road characteristics, and environmental conditions. Analyzed data are retrieved from the General Estimates System (GES) crash database for the period between 2005 and 2009. The predictive ability of the RRM-based model is slightly superior to that of its RUM-based counterpart, the multinomial logit (MNL) model. The marginal effects predicted by the RRM-based model are greater than those predicted by the RUM-based model.

  8. A Collective Study on Modeling and Simulation of Resistive Random Access Memory.

    Science.gov (United States)

    Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen

    2018-01-10

    In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM), which, being a nascent technology, is heavily reliant on accurate models to develop efficient working designs and standardize its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models for RRAM devices. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from memristive systems have been addressed, and the solutions provided by the models to these problems have been shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively on various parameters. The workings and implementations of window functions such as Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts have been discussed which increase the applicability and accuracy of the models; the use of these concepts brings forth several improvements in the existing models, which have been enumerated in this work. Following the template presented, highly accurate models can be developed, which will vastly help future model developers and the modeling community.
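
    Two of the window functions compared in the review have simple closed forms; a sketch with the state variable x in [0, 1] and order parameter p (the plotting grid and p = 2 are arbitrary choices):

        import numpy as np

        def joglekar_window(x, p=2):
            # Joglekar: f(x) = 1 - (2x - 1)^(2p); vanishes at both boundaries
            return 1.0 - (2.0 * x - 1.0) ** (2 * p)

        def biolek_window(x, i, p=2):
            # Biolek: f(x, i) = 1 - (x - step(-i))^(2p); depends on current direction i
            stp = np.where(i < 0, 1.0, 0.0)    # step(-i)
            return 1.0 - (x - stp) ** (2 * p)

        x = np.linspace(0, 1, 5)
        print(joglekar_window(x))
        print(biolek_window(x, i=+1.0))   # positive current: window vanishes at x = 1
        print(biolek_window(x, i=-1.0))   # negative current: window vanishes at x = 0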

  9. A Collective Study on Modeling and Simulation of Resistive Random Access Memory

    Science.gov (United States)

    Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen

    2018-01-01

    In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM), which, being a nascent technology, is heavily reliant on accurate models to develop efficient working designs and standardize its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models for RRAM devices. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from memristive systems have been addressed, and the solutions provided by the models to these problems have been shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively on various parameters. The workings and implementations of window functions such as Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts have been discussed which increase the applicability and accuracy of the models; the use of these concepts brings forth several improvements in the existing models, which have been enumerated in this work. Following the template presented, highly accurate models can be developed, which will vastly help future model developers and the modeling community.

  10. Managing Information Uncertainty in Wave Height Modeling for the Offshore Structural Analysis through Random Set

    Directory of Open Access Journals (Sweden)

    Keqin Yan

    2017-01-01

    This chapter presents a reliability study for an offshore jacket structure, with emphasis on the features of nonconventional modeling. Firstly, a random set model is formulated for modeling the random waves at an ocean site. Then, a jacket structure is investigated in a pushover analysis to identify the critical wave direction and key structural elements, based on the ultimate base shear strength. The selected probabilistic models are adopted for the important structural members, and the wave direction is specified along the weakest direction of the structure for a conservative safety analysis. The wave height model is processed in a P-box format when used in the numerical analysis. The models are applied to find bounds on the failure probabilities of the jacket structure. The propagation of this wave model into the uncertainty of the results is investigated with both an interval analysis and Monte Carlo simulation, and the results are compared in terms of information content and numerical accuracy. Further, the failure probability bounds are compared with those of the conventional probabilistic approach.

  11. A Random-Walk-Model for heavy metal particles in natural waters; Ein Random-Walk-Modell fuer Schwermetallpartikel in natuerlichen Gewaessern

    Energy Technology Data Exchange (ETDEWEB)

    Wollschlaeger, A.

    1996-12-31

    The presented particle tracking model is for the numerical calculation of heavy metal transport in natural waters. The Navier-Stokes equations are solved with the finite element method. The advective movement of the particles is interpolated from the velocities on the discrete mesh. The influence of turbulence is simulated with a random walk model in which particles are distributed according to a given probability function. Both contributions are added and yield the new particle position. The characteristics of the heavy metals are assigned to the particles as attributes. Dissolved heavy metals are transported only by the flow; heavy metals bound to particulate matter have an additional settling velocity. The sorption and remobilization processes are approximated through a probability law which maintains the proportionality between dissolved heavy metals and those bound to particulate matter. At the bed, heavy metals bound to particulate matter are subject to deposition and erosion processes; the model treats these by considering the absorption intensity of the heavy metals to the bottom sediments. Calculations of the Weser estuary show that the particle tracking model allows the simulation of heavy metal behaviour even under complex flow conditions.
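
    The advection-plus-random-walk update described above has a compact form; a sketch of the particle step for passive (dissolved) particles, with a uniform velocity and diffusivity standing in for the finite-element flow field.

        import numpy as np

        rng = np.random.default_rng(9)
        n, dt, D = 10_000, 1.0, 0.5            # particles, time step [s], diffusivity [m^2/s]
        u = np.array([0.3, 0.05])              # depth-averaged velocity [m/s] (placeholder field)

        pos = np.zeros((n, 2))
        for _ in range(200):
            # advection from the (here: constant) flow field + random-walk dispersion
            pos += u * dt + rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 2))

        print("mean drift:", pos.mean(axis=0))    # ~ u * t
        print("spread (std):", pos.std(axis=0))   # ~ sqrt(2 * D * t)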

  12. Ten Reasons to Take Peak Oil Seriously

    Directory of Open Access Journals (Sweden)

    Robert J. Brecha

    2013-02-01

    Forty years ago, the results of modeling, as presented in The Limits to Growth, reinvigorated a discussion about exponentially growing consumption of natural resources, ranging from metals to fossil fuels to atmospheric capacity, and how such consumption could not continue far into the future. Fifteen years earlier, M. King Hubbert had made the projection that petroleum production in the continental United States would likely reach a maximum around 1970, followed by a world production maximum a few decades later. The debate about "peak oil", as it has come to be called, is accompanied by some of the same vociferous denials, myths and ideological polemicizing that have surrounded later representations of The Limits to Growth. In this review, we present several lines of evidence as to why arguments for a near-term peak in world conventional oil production should be taken seriously—both in the sense that there is strong evidence for peak oil and in the sense that being societally unprepared for declining oil production will have serious consequences.

  13. Revisiting Twomey's approximation for peak supersaturation

    Directory of Open Access Journals (Sweden)

    B. J. Shipway

    2015-04-01

    Twomey's seminal 1959 paper provided lower and upper bound approximations to the estimation of peak supersaturation within an updraft, and thus provided the first closed expression for the number of nucleated cloud droplets. The form of this approximation is simple, but it provides a surprisingly good estimate and has subsequently been employed in more sophisticated treatments of nucleation parametrization. In the current paper, we revisit the lower bound approximation of Twomey and make a small adjustment that can be used to obtain a more accurate calculation of peak supersaturation under all potential aerosol loadings and thermodynamic conditions. In order to make full use of this improved approximation, the underlying integro-differential equation for supersaturation evolution and the condition for calculating peak supersaturation are examined. A simple rearrangement of the algebra allows an expression to be written down that can then be solved with a single lookup table with only one independent variable for an underlying lognormal aerosol population. While a multimodal aerosol with N different dispersion characteristics requires 2N+1 inputs to calculate the activation fraction, only N of these one-dimensional lookup tables are needed. No additional information is required in the lookup table to deal with additional chemical, physical or thermodynamic properties. The resulting implementation provides a relatively simple, yet computationally cheap, physically based parametrization of droplet nucleation for use in climate and Numerical Weather Prediction models.

  14. Human behavioral complexity peaks at age 25

    Science.gov (United States)

    Brugger, Peter

    2017-01-01

    Random Item Generation tasks (RIG) are commonly used to assess high cognitive abilities such as inhibition or sustained attention. They also draw upon our approximate sense of complexity. A detrimental effect of aging on pseudo-random productions has been demonstrated for some tasks, but little is as yet known about the developmental curve of cognitive complexity over the lifespan. We investigate the complexity trajectory across the lifespan of human responses to five common RIG tasks, using a large sample (n = 3429). Our main finding is that the developmental curve of the estimated algorithmic complexity of responses is similar to what may be expected of a measure of higher cognitive abilities, with a performance peak around age 25 and a decline starting around 60, suggesting that RIG tasks yield good estimates of such cognitive abilities. Our study illustrates that very short strings of about 10 items are sufficient for their complexity to be reliably estimated and for documenting an age-dependent decline in the approximate sense of complexity. PMID:28406953

  15. Parsimonious Continuous Time Random Walk Models and Kurtosis for Diffusion in Magnetic Resonance of Biological Tissue

    Directory of Open Access Journals (Sweden)

    Carson eIngo

    2015-03-01

    In this paper, we provide a context for the modeling approaches that have been developed to describe non-Gaussian diffusion behavior, which is ubiquitous in diffusion weighted magnetic resonance imaging of water in biological tissue. Subsequently, we focus on the formalism of the continuous time random walk theory to extract properties of subdiffusion and superdiffusion through novel simplifications of the Mittag-Leffler function. For the case of time-fractional subdiffusion, we compute the kurtosis for the Mittag-Leffler function, which provides both a connection and physical context to the much-used approach of diffusional kurtosis imaging. We provide Monte Carlo simulations to illustrate the concepts of anomalous diffusion as stochastic processes of the random walk. Finally, we demonstrate the clinical utility of the Mittag-Leffler function as a model to describe tissue microstructure through estimations of subdiffusion and kurtosis with diffusion MRI measurements in the brain of a chronic ischemic stroke patient.
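
    The Mittag-Leffler function central to this formalism can be evaluated by its power series for moderate arguments; a sketch (the truncation at 100 terms is a pragmatic choice, not the authors' numerical scheme):

        from math import gamma

        def mittag_leffler(z, alpha, n_terms=100):
            # E_alpha(z) = sum_k z^k / Gamma(alpha * k + 1); valid for moderate |z|
            return sum(z**k / gamma(alpha * k + 1) for k in range(n_terms))

        # alpha = 1 recovers the exponential; alpha < 1 gives subdiffusive relaxation
        print(mittag_leffler(-1.0, 1.0))    # ~ exp(-1) = 0.3679
        print(mittag_leffler(-1.0, 0.7))    # slower, heavier-tailed decay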

  16. Joint random field model for all-weather moving vehicle detection.

    Science.gov (United States)

    Wang, Yang

    2010-09-01

    This paper proposes a joint random field (JRF) model for moving vehicle detection in video sequences. The JRF model extends the conditional random field (CRF) by introducing auxiliary latent variables to characterize the structure and evolution of the visual scene. Hence, detection labels (e.g., vehicle/roadway) and hidden variables (e.g., pixel intensity under shadow) are jointly estimated to enhance vehicle segmentation in video sequences. Data-dependent contextual constraints among both detection labels and latent variables are integrated during the detection process. The proposed method handles both moving cast shadows/lights and various weather conditions. A computationally efficient algorithm has been developed for real-time vehicle detection in video streams. Experimental results show that the approach effectively deals with various illumination conditions and robustly detects moving vehicles even in grayscale video.

  17. Derrida's Generalized Random Energy models; 4, Continuous state branching and coalescents

    CERN Document Server

    Bovier, A

    2003-01-01

    In this paper we conclude our analysis of Derrida's Generalized Random Energy Models (GREM) by identifying the thermodynamic limit with a one-parameter family of probability measures related to a continuous state branching process introduced by Neveu. Using a construction introduced by Bertoin and Le Gall in terms of a coherent family of subordinators related to Neveu's branching process, we show how the Gibbs geometry of the limiting Gibbs measure is given in terms of the genealogy of this process via a deterministic time-change. This construction is fully universal in that all different models (characterized by the covariance of the underlying Gaussian process) differ only through that time change, which in turn is expressed in terms of Parisi's overlap distribution. The proof uses strongly the Ghirlanda-Guerra identities that impose the structure of Neveu's process as the only possible asymptotic random mechanism.

  18. Parsimonious continuous time random walk models and kurtosis for diffusion in magnetic resonance of biological tissue.

    Science.gov (United States)

    Ingo, Carson; Sui, Yi; Chen, Yufen; Parrish, Todd B; Webb, Andrew G; Ronen, Itamar

    2015-03-01

    In this paper, we provide a context for the modeling approaches that have been developed to describe non-Gaussian diffusion behavior, which is ubiquitous in diffusion weighted magnetic resonance imaging of water in biological tissue. Subsequently, we focus on the formalism of the continuous time random walk theory to extract properties of subdiffusion and superdiffusion through novel simplifications of the Mittag-Leffler function. For the case of time-fractional subdiffusion, we compute the kurtosis for the Mittag-Leffler function, which provides both a connection and physical context to the much-used approach of diffusional kurtosis imaging. We provide Monte Carlo simulations to illustrate the concepts of anomalous diffusion as stochastic processes of the random walk. Finally, we demonstrate the clinical utility of the Mittag-Leffler function as a model to describe tissue microstructure through estimations of subdiffusion and kurtosis with diffusion MRI measurements in the brain of a chronic ischemic stroke patient.

  19. Parsimonious Continuous Time Random Walk Models and Kurtosis for Diffusion in Magnetic Resonance of Biological Tissue

    Science.gov (United States)

    Ingo, Carson; Sui, Yi; Chen, Yufen; Parrish, Todd; Webb, Andrew; Ronen, Itamar

    2015-03-01

    In this paper, we provide a context for the modeling approaches that have been developed to describe non-Gaussian diffusion behavior, which is ubiquitous in diffusion weighted magnetic resonance imaging of water in biological tissue. Subsequently, we focus on the formalism of the continuous time random walk theory to extract properties of subdiffusion and superdiffusion through novel simplifications of the Mittag-Leffler function. For the case of time-fractional subdiffusion, we compute the kurtosis for the Mittag-Leffler function, which provides both a connection and physical context to the much-used approach of diffusional kurtosis imaging. We provide Monte Carlo simulations to illustrate the concepts of anomalous diffusion as stochastic processes of the random walk. Finally, we demonstrate the clinical utility of the Mittag-Leffler function as a model to describe tissue microstructure through estimations of subdiffusion and kurtosis with diffusion MRI measurements in the brain of a chronic ischemic stroke patient.

  20. Short communication: Alteration of priors for random effects in Gaussian linear mixed model

    DEFF Research Database (Denmark)

    Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas

    2014-01-01

    Linear mixed models, for which the prior multivariate normal distributions of random effects are assumed to have a mean equal to 0, are commonly used in animal breeding. However, some statistical analyses (e.g., the consideration of a population under selection in a genomic breeding scheme ..., multiple-trait predictions of lactation yields, and Bayesian approaches integrating external information into genetic evaluations) need to alter both the mean and (co)variance of the prior distributions and, to our knowledge, most software packages available in the animal breeding community do not permit such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3 ...

  1. A Bayesian Analysis of a Random Effects Small Business Loan Credit Scoring Model

    Directory of Open Access Journals (Sweden)

    Patrick J. Farrell

    2011-09-01

    One of the most important aspects of credit scoring is constructing a model that has low misclassification rates and is also flexible enough to allow for random variation. It is also well known that, when there are a large number of highly correlated variables, as is typical in studies involving questionnaire data, a method must be found to reduce the number of variables to those that have high predictive power. Here we propose a Bayesian multivariate logistic regression model with both fixed and random effects for small business loan credit scoring, together with a variable reduction method using Bayes factors. The method is illustrated on an interesting data set based on questionnaires sent to loan officers in Canadian banks and venture capital companies.

  2. Collocation methods for uncertainty quantification in PDE models with random data

    KAUST Repository

    Nobile, Fabio

    2014-01-06

    In this talk we consider Partial Differential Equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (Polynomial Chaos expansion) by collocation methods. We first address interpolatory approximations, where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained are interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches, where the PDE is evaluated on randomly chosen points in the probability space and a polynomial approximation is constructed by the least squares method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions on the number of sampling points relative to the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that this condition is sharp in the univariate case but seems to be over-constraining in higher dimensions. The regression technique therefore seems attractive in higher dimensions.
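
    A sketch of the regression approach in one random dimension: sample a model at random points, fit a Legendre polynomial chaos expansion by least squares, and oversample quadratically in the basis size as the stability condition suggests. The target function below is a smooth stand-in for a PDE output functional, not one of the talk's examples.

        import numpy as np
        from numpy.polynomial import legendre

        rng = np.random.default_rng(10)
        deg = 8                                  # polynomial chaos order
        n_samples = (deg + 1) ** 2               # quadratic oversampling rule

        def model(xi):
            return np.exp(0.5 * xi) / (1.2 + xi)     # stand-in PDE output functional

        xi = rng.uniform(-1, 1, n_samples)       # uniform random inputs
        coef = legendre.legfit(xi, model(xi), deg)   # least-squares Legendre expansion

        xi_test = rng.uniform(-1, 1, 1000)
        err = np.max(np.abs(legendre.legval(xi_test, coef) - model(xi_test)))
        print(f"max test error at degree {deg}: {err:.2e}")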

  3. Randomly interacting bosons on two spin levels

    Science.gov (United States)

    Mulhall, D.

    2017-12-01

    The problem of random interactions leading to regular spectra in shell-model-type simulations is described, and the key results are reviewed along with a selection of the proposed explanations. A model system of N particles on 2 spin levels undergoing random 2-body collisions that conserve angular momentum is examined. Preliminary results are described, including ground state spin distributions peaked at extreme values of angular momentum, signatures of rotational bands, and smooth parabolic yrast lines. A simple random matrix theory analysis shows signatures of quantum chaos in the level spacing distribution and the Δ3 statistic.

  4. Redshift space correlations and scale-dependent stochastic biasing of density peaks

    Science.gov (United States)

    Desjacques, Vincent; Sheth, Ravi K.

    2010-01-01

    We calculate the redshift space correlation function and the power spectrum of density peaks of a Gaussian random field. Our derivation, which is valid on linear scales k ≲ 0.1 h Mpc^-1, is based on the peak biasing relation given by Desjacques [Phys. Rev. D 78, 103503 (2008); doi:10.1103/PhysRevD.78.103503]. In linear theory, the redshift space power spectrum is P^s_pk(k,μ) = exp(−f²σ_vel²k²μ²)[b_pk(k) + b_vel(k)fμ²]²P_δ(k), where μ is the cosine of the angle with respect to the line of sight, σ_vel is the one-dimensional velocity dispersion, f is the growth rate, and b_pk(k) and b_vel(k) are k-dependent linear spatial and velocity bias factors. For peaks, the value of σ_vel depends upon the functional form of b_vel. When the k dependence is absent from the square brackets and b_vel is set to unity, the resulting expression is assumed to describe models where the bias is linear and deterministic but the velocities are unbiased. The peak model is remarkable because it has unbiased velocities in this same sense (peak motions are driven by dark matter flows) but, in order to achieve this, b_vel must be k dependent. We speculate that this is true in general: k dependence of the spatial bias will lead to k dependence of b_vel even if the biased tracers flow with the dark matter. Because of the k dependence of the linear bias parameters, standard manipulations applied to the peak model will lead to k-dependent estimates of the growth factor that could erroneously be interpreted as a signature of modified dark energy or gravity. We use the Fisher formalism to show that the constraint on the growth rate f is degraded by a factor of 2 if one allows for a k-dependent velocity bias of the peak type. Our analysis also demonstrates that the Gaussian smoothing term is part and parcel of linear theory. We discuss a simple estimate of nonlinear evolution and illustrate the effect of the peak bias on the redshift space multipoles. For k ≲ 0.1 h Mpc^-1, the peak bias is deterministic but k ...
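
    The quoted linear-theory expression is straightforward to evaluate; a sketch with toy choices for P_δ(k) and the k-dependent bias factors (the real peak biases follow from the peak constraint, not from these assumed forms):

        import numpy as np

        f, sigma_vel = 0.5, 4.0                  # growth rate, 1D velocity dispersion [Mpc/h]

        def P_delta(k):
            return 1e4 * k / (1 + (k / 0.02) ** 2.6)   # toy matter power spectrum

        def b_pk(k):
            return 1.5 + 8.0 * k**2                    # toy k-dependent spatial bias

        def b_vel(k):
            return 1.0 - 15.0 * k**2                   # toy k-dependent velocity bias

        def P_pk_s(k, mu):
            # P^s_pk(k, mu) = exp(-f^2 sigma^2 k^2 mu^2) [b_pk + b_vel f mu^2]^2 P_delta(k)
            damping = np.exp(-(f * sigma_vel * k * mu) ** 2)
            return damping * (b_pk(k) + b_vel(k) * f * mu**2) ** 2 * P_delta(k)

        k = np.array([0.01, 0.05, 0.1])
        print(P_pk_s(k, mu=0.0))    # transverse modes: pure spatial bias
        print(P_pk_s(k, mu=1.0))    # line of sight: velocity bias and damping enter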

  5. On the effect of random inhomogeneities in Kerr media modelled by a nonlinear Schroedinger equation

    Energy Technology Data Exchange (ETDEWEB)

    Villarroel, Javier [Facultad de Ciencias, Universidad de Salamanca, Plaza Merced s/n, E-37008 Salamanca (Spain); Montero, Miquel, E-mail: javier@usal.e, E-mail: miquel.montero@ub.ed [Departament de FIsica Fonamental, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)

    2010-07-14

    We consider the propagation of optical beams under the interplay of dispersion and Kerr nonlinearity in optical fibres with impurities distributed uniformly at random along the fibre. Using a model based on the nonlinear Schroedinger equation, we clarify how such inhomogeneities affect different aspects of propagation, such as the number of solitons present and the intensity of the signal. We also obtain the mean distance for the signal to dissipate to a given level.

  6. Spatial Random Field Models Inspired from Statistical Physics with Applications in the Geosciences

    OpenAIRE

    Hristopulos, D. T.

    2005-01-01

    The spatial structure of fluctuations in spatially inhomogeneous processes can be modeled in terms of Gibbs random fields. A local low energy estimator (LLEE) is proposed for the interpolation (prediction) of such processes at points where observations are not available. The LLEE approximates the spatial dependence of the data and the unknown values at the estimation points by low-lying excitations of a suitable energy functional. It is shown that the LLEE is a linear, unbiased, non-exact est...

  7. Weight evaluation of Tabapuã cattle raised in northeastern Brazil using random-regression models

    Directory of Open Access Journals (Sweden)

    M.R. Oliveira

    Full Text Available ABSTRACT The objective of this study was to compare random-regression models used to describe changes in growth parameters of Tabapuã cattle raised in northeastern Brazil. The M4532-5 random-regression model was found to be best for estimating the variation and heritability of growth characteristics in the animals evaluated. Estimates of direct additive genetic variance increased with age, while the maternal additive genetic variance increased from birth up to nearly 420 days of age. The genetic correlations between the first four characteristics were positive and of moderate to high magnitude. The greatest genetic correlation was observed between birth weight and weight at 240 days of age (0.82). The phenotypic correlation between birth weight and the other characteristics was low. The M4532-5 model, with 39 parameters, was found to be best for describing the growth curve of the animals evaluated, allowing improved selection for heavier animals when performed after weaning. The interpretation of genetic parameters to predict the growth curve of cattle may allow the selection of animals to accelerate slaughter procedures.

  8. ESTIMATION OF GENETIC PARAMETERS IN TROPICARNE CATTLE WITH RANDOM REGRESSION MODELS USING B-SPLINES

    Directory of Open Access Journals (Sweden)

    Joel Domínguez Viveros

    2015-04-01

    Full Text Available The objectives were to estimate variance components and direct (h2) and maternal (m2) heritability for growth of Tropicarne cattle, based on a random regression model using B-splines to model the random effects. Information from 12,890 monthly weighings of 1,787 calves, from birth to 24 months of age, was analyzed. The pedigree included 2,504 animals. The random effects model included genetic and permanent environmental effects (direct and maternal) of cubic order, and residuals. The fixed effects included contemporary groups (year-season of weighing), sex, and the covariate age of the cow (linear and quadratic). The B-splines were defined on four knots across the growth period analyzed. Analyses were performed with the software Wombat. The phenotypic and residual variances behaved similarly: they showed a negative trend from 7 to 12 months of age, a positive trend from birth to 6 months and from 13 to 18 months, and remained constant after 19 months. The m2 estimates were low and near zero, averaging 0.06 over an interval of 0.04 to 0.11; the h2 estimates were also close to zero, averaging 0.10 over an interval of 0.03 to 0.23.

  9. Stochastic modeling for starting-time of phase evolution of random seismic ground motions

    Directory of Open Access Journals (Sweden)

    Yongbo Peng

    2014-01-01

    Full Text Available In response to the challenge inherent in classical high-dimensional models of random ground motions, a family of simulation methods for non-stationary seismic ground motions was developed previously, employing a wave-group propagation formulation with a phase spectrum model built on the frequency components' starting-time of phase evolution. The present paper aims at extending the formulation to the simulation of non-stationary random seismic ground motions. The ground motion records associated with the N-S component of the Northridge Earthquake at a type-II site are investigated. The starting-time of phase evolution of the frequency components is identified from the ground motion records and shown, through data fitting, to follow a Gamma distribution. Numerical results indicate that the simulated random ground motion features zero-mean, non-stationary, and non-Gaussian behaviors, and that a phase spectrum model with only a few starting-times of phase evolution provides a sound basis for the simulation.
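
    A sketch of the distribution-fitting step described above, assuming SciPy; the synthetic "identified" starting-times below stand in for values extracted from ground motion records, and the Gamma parameters are placeholders.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      starting_times = rng.gamma(shape=2.5, scale=1.8, size=500)  # placeholder data

      # Fit with location fixed at zero, as is natural for a non-negative time.
      shape, loc, scale = stats.gamma.fit(starting_times, floc=0.0)
      print(f"shape={shape:.2f}, scale={scale:.2f}")

      # Goodness of fit via a Kolmogorov-Smirnov test against the fitted Gamma.
      print(stats.kstest(starting_times, "gamma", args=(shape, loc, scale)))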

  10. Spatial panel data models of aquaculture production in West Sumatra province with random-effects

    Science.gov (United States)

    Sartika, Wimi; Susetyo, Budi; Syafitri, Utami Dyah

    2017-03-01

    Spatial panel regression is a statistical model used to analyze the effect of several independent variables on a dependent variable using panel data, while taking spatial effects into account. There are two approaches to modeling spatial panel data: the Fixed Effect Spatial Autoregressive model (SAR-FE) and the Random Effect Spatial Autoregressive model (SAR-RE). SAR-FE assumes that the intercept varies across spatial units, while SAR-RE assumes that the unit-specific intercepts are absorbed into the residual, leaving only a general intercept. The purpose of this study is to model aquaculture production in West Sumatra using spatial panel regression. The model uses secondary data on aquaculture production in West Sumatra published by Badan Pusat Statistik. The test results show that West Sumatra aquaculture production in 2004-2012 was best modeled by the Spatial Autoregressive Random Effect approach. From the SAR-RE model, the most influential factors on aquaculture production in West Sumatra province in 2004-2012 were the number of motor boats, the area devoted to fish seed, fish seed production, and the number of fishermen in public waters.

  11. Random intermittent search and the tug-of-war model of motor-driven transport

    KAUST Repository

    Newby, Jay

    2010-04-16

    We formulate the 'tug-of-war' model of microtubule cargo transport by multiple molecular motors as an intermittent random search for a hidden target. A motor complex consisting of multiple molecular motors with opposing directional preference is modeled using a discrete Markov process. The motors randomly pull each other off of the microtubule so that the state of the motor complex is determined by the number of bound motors. The tug-of-war model prescribes the state transition rates and corresponding cargo velocities in terms of experimentally measured physical parameters. We add space to the resulting Chapman-Kolmogorov (CK) equation so that we can consider delivery of the cargo to a hidden target at an unknown location along the microtubule track. The target represents some subcellular compartment such as a synapse in a neuron's dendrites, and target delivery is modeled as a simple absorption process. Using a quasi-steady-state (QSS) reduction technique we calculate analytical approximations of the mean first passage time (MFPT) to find the target. We show that there exists an optimal adenosine triphosphate (ATP) concentration that minimizes the MFPT for two different cases: (i) the motor complex is composed of equal numbers of kinesin motors bound to two different microtubules (symmetric tug-of-war model) and (ii) the motor complex is composed of different numbers of kinesin and dynein motors bound to a single microtubule (asymmetric tug-of-war model). © 2010 IOP Publishing Ltd.
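
    For any finite-state Markov model of this kind, the MFPT calculation reduces to a linear solve; the sketch below shows that identity on a toy 3-state continuous-time chain whose generator is an illustrative stand-in, not the paper's tug-of-war rates.

      import numpy as np

      # States 0,1 are transient (e.g. different numbers of bound motors);
      # state 2 is the absorbing "target found" state.
      Q = np.array([[-1.0,  0.8,  0.2],
                    [ 0.5, -0.9,  0.4],
                    [ 0.0,  0.0,  0.0]])   # generator matrix, rows sum to zero

      Q_tt = Q[:2, :2]                     # transient-to-transient block
      # The MFPTs T from the transient states solve Q_tt @ T = -1
      # (the standard first-passage identity for continuous-time chains).
      T = np.linalg.solve(Q_tt, -np.ones(2))
      print(T)                             # expected time to absorption from each state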

  12. Markov Random Field Restoration of Point Correspondences for Active Shape Modelling

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2004-01-01

    In this paper it is described how to build a statistical shape model using a training set with a sparse set of landmarks. A well defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped...... model mesh to the target shapes. When this is done by a nearest neighbour projection it can result in folds and inhomogeneities in the correspondence vector field. The novelty in this paper is the use and extension of a Markov random field regularisation of the correspondence field. The correspondence...... model that produces highly homogeneous polygonised shapes with improved reconstruction capabilities of the training data. Furthermore, the method leads to an overall reduction in the total variance of the resulting point distribution model. The method is demonstrated on a set of human ear canals...

  13. A continuous-time model of centrally coordinated motion with random switching.

    Science.gov (United States)

    Dallon, J C; Despain, Lynnae C; Evans, Emily J; Grant, Christopher P; Smith, W V

    2017-02-01

    This paper considers differential problems with random switching, with specific applications to the motion of cells and centrally coordinated motion. Starting with a differential-equation model of cell motion that was proposed previously, we set the relaxation time to zero and consider the simpler model that results. We prove that this model is well-posed, in the sense that it corresponds to a pure jump-type continuous-time Markov process (without explosion). We then describe the model's long-time behavior, first by specifying an attracting steady-state distribution for a projection of the model, then by examining the expected location of the cell center when the initial data is compatible with that steady-state. Under such conditions, we present a formula for the expected velocity and give a rigorous proof of that formula's validity. We conclude the paper with a comparison between these theoretical results and the results of numerical simulations.

  14. Statistical Shape Modelling and Markov Random Field Restoration (invited tutorial and exercise)

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen

    This tutorial focuses on statistical shape analysis using point distribution models (PDM), which are widely used in modelling biological shape variability over a set of annotated training data. Furthermore, Active Shape Models (ASM) and Active Appearance Models (AAM) are based on PDMs and have proven...... themselves a generic holistic tool in various segmentation and simulation studies. Finding a basis of homologous points is a fundamental issue in PDMs which affects both alignment and decomposition of the training data, and may be aided by Markov Random Field Restoration (MRF) of the correspondence...... deformation field between shapes. The tutorial demonstrates both generative active shape and appearance models, and MRF restoration on 3D polygonised surfaces. ''Exercise: Spectral-Spatial classification of multivariate images'' From annotated training data this exercise applies spatial image restoration...

  15. A multiscale Markov random field model in wavelet domain for image segmentation

    Science.gov (United States)

    Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan

    2017-07-01

    The human vision system has abilities for feature detection, learning and selective attention, realized by neural populations with hierarchical and bidirectional connections. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image processing functions of the vision system. For an input scene, our model provides its sparse representations using wavelet transforms and extracts its topological organization using the MRF. In addition, the hierarchy property of the vision system is simulated using a pyramid framework in our model. There are two information flows in our model, i.e., a bottom-up procedure to extract input features and a top-down procedure to provide feedback controls. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, our model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.

  16. Bayesian informative dropout model for longitudinal binary data with random effects using conditional and joint modeling approaches.

    Science.gov (United States)

    Chan, Jennifer S K

    2016-05-01

    Dropouts are common in longitudinal studies. If the dropout probability depends on the missing observations at or after dropout, this type of dropout is called informative (or nonignorable) dropout (ID). Failure to accommodate such a dropout mechanism in the model will bias the parameter estimates. We propose a conditional autoregressive model for longitudinal binary data with an ID model such that the probabilities of positive outcomes, as well as the dropout indicator, on each occasion are logit-linear in some covariates and outcomes. This model, adopting a marginal model for outcomes and a conditional model for dropouts, is called a selection model. To allow for heterogeneity and clustering effects, the outcome model is extended to incorporate mixture and random effects. Lastly, the model is further extended to a novel model that models the outcome and dropout jointly, such that their dependency is formulated through an odds ratio function. Parameters are estimated by a Bayesian approach implemented using the user-friendly Bayesian software WinBUGS. A methadone clinic dataset is analyzed to illustrate the proposed models. Results show that the treatment time effect is still significant, but weaker, after allowing for an ID process in the data. Finally, the effect of dropout on parameter estimates is evaluated through simulation studies. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Energy peaks: A high energy physics outlook

    Science.gov (United States)

    Franceschini, Roberto

    2017-12-01

    Energy distributions of decay products carry information on the kinematics of the decay in ways that are at the same time straightforward and quite hidden. I will review these properties and discuss their early historical applications, as well as more recent ones in the context of (i) methods for the measurement of the masses of new physics particles with semi-invisible decays, (ii) the characterization of Dark Matter particles produced at colliders, and (iii) precision mass measurements of Standard Model particles, in particular of the top quark. Finally, I will give an outlook of further developments and applications of the energy peak method for high energy physics at colliders and beyond.

  18. Cooling rates of LL, L and H chondrites and constraints on the duration of peak thermal conditions: Diffusion kinetic modeling and implications for fragmentation of asteroids and impact resetting of petrologic types

    Science.gov (United States)

    Ganguly, Jibamitra; Tirone, Massimiliano; Domanik, Kenneth

    2016-11-01

    We have carried out detailed thermometric and cooling history studies of several LL-, L- and H-chondrites of petrologic types 5 and 6. Among the selected samples, the low-temperature cooling of St. Séverin (LL6) has been constrained in an earlier study by thermochronological data to an average rate of ∼2.6 °C/My below 500 °C. However, numerical simulations of the development of Fe-Mg profiles in Opx-Cpx pairs using this cooling rate grossly misfit the measured compositional profiles. Satisfactory simulation of the latter and of the low-temperature thermochronological constraints requires a two-stage cooling model, with a cooling rate of ∼50-200 °C/ky from the peak metamorphic temperature of ∼875 °C down to 450 °C, transitioning to very slow cooling at an average rate of ∼2.6 °C/My. Similar rapid high-temperature cooling rates (200-600 °C/ky) are also required to successfully model the compositional profiles in the Opx-Cpx pairs in the other L5 and L6 chondrite samples. For the H-chondrite samples, the low-temperature cooling rates were determined earlier to be 10-20 °C/My by the metallographic method. As in St. Séverin, these cooling rates grossly misfit the compositional profiles in the Opx-Cpx pairs. Modeling of these profiles requires very rapid cooling, ∼200-400 °C/ky, from the peak temperatures (∼810-830 °C), transitioning to the metallographic rates at ∼450-500 °C. We attribute the rapid high-temperature cooling rates to exposure of the samples to surface or near-surface conditions as a result of fragmentation of the parent body by asteroidal impacts. Using the thermochronological data, the timing of the presumed impact is constrained to be ∼4555-4560 My before present for St. Séverin. We also deduced similar two-stage cooling models in earlier studies of H-chondrites and mesosiderites that could be explained, using the available geochronological data, by impact-induced fragmentation at around the same time. Diffusion kinetic

  19. Method for evaluating prediction models that apply the results of randomized trials to individual patients

    Directory of Open Access Journals (Sweden)

    Kattan Michael W

    2007-06-01

    Full Text Available Abstract Introduction The clinical significance of a treatment effect demonstrated in a randomized trial is typically assessed by reference to differences in event rates at the group level. An alternative is to make individualized predictions for each patient based on a prediction model. This approach is growing in popularity, particularly for cancer. Despite its intuitive advantages, it remains plausible that some prediction models may do more harm than good. Here we present a novel method for determining whether predictions from a model should be used to apply the results of a randomized trial to individual patients, as opposed to using group level results. Methods We propose applying the prediction model to a data set from a randomized trial and examining the results of patients for whom the treatment arm recommended by a prediction model is congruent with allocation. These results are compared with the strategy of treating all patients through use of a net benefit function that incorporates both the number of patients treated and the outcome. We examined models developed using data sets regarding adjuvant chemotherapy for colorectal cancer and Dutasteride for benign prostatic hypertrophy. Results For adjuvant chemotherapy, we found that patients who would opt for chemotherapy even for small risk reductions, and, conversely, those who would require a very large risk reduction, would on average be harmed by using a prediction model; those with intermediate preferences would on average benefit by allowing such information to help their decision making. Use of prediction could, at worst, lead to the equivalent of an additional death or recurrence per 143 patients; at best it could lead to the equivalent of a reduction in the number of treatments of 25% without an increase in event rates. In the Dutasteride case, where the average benefit of treatment is more modest, there is a small benefit of prediction modelling, equivalent to a reduction of

  20. X-ray diffraction peak profiles from threading dislocations in GaN epitaxial films

    OpenAIRE

    Kaganer, V M; Brandt, O.; Trampert, A.; Ploog, K. H.

    2004-01-01

    We analyze the lineshape of x-ray diffraction profiles of GaN epitaxial layers with large densities of randomly distributed threading dislocations. The peaks are Gaussian only in the central, most intense part of the peak, while the tails obey a power law. The $q^{-3}$ decay typical for random dislocations is observed in double-crystal rocking curves. The entire profile is well fitted by a restricted random dislocation distribution. The densities of both edge and screw threading dislocations ...

  1. Neurofeedback training for peak performance.

    Science.gov (United States)

    Graczyk, Marek; Pąchalska, Maria; Ziółkowski, Artur; Mańko, Grzegorz; Łukaszewska, Beata; Kochanowicz, Kazimierz; Mirski, Andrzej; Kropotov, Iurii D

    2014-01-01

    One of the applications of the neurofeedback methodology is peak performance in sport. Neurofeedback protocols are usually based on an assessment of the spectral parameters of spontaneous EEG in resting-state conditions. The aim of the paper was to study whether intensive neurofeedback training of a well-functioning Olympic athlete who had lost his performance confidence after a sport injury could change brain functioning, as reflected in changes in spontaneous EEG and event-related potentials (ERPs). The case is presented of an Olympic athlete who lost his performance confidence after a sport injury. He wanted to resume his activities by means of neurofeedback training. His QEEG/ERP parameters were assessed before and after 4 intensive sessions of neurotherapy. Dramatic and statistically significant changes that could not be explained by measurement error were observed in the patient. Neurofeedback training in the subject under study increased the amplitude of the monitoring component of ERPs generated in the anterior cingulate cortex, accompanied by an increase in beta activity over the medial prefrontal cortex. Taking these changes together, it can be concluded that even a few sessions of neurofeedback in a high-performance brain can significantly activate the prefrontal cortical areas associated with increasing confidence in sport performance.

  2. Neurofeedback training for peak performance

    Directory of Open Access Journals (Sweden)

    Marek Graczyk

    2014-11-01

    Full Text Available Aim. One of the applications of the neurofeedback methodology is peak performance in sport. Neurofeedback protocols are usually based on an assessment of the spectral parameters of spontaneous EEG in resting-state conditions. The aim of the paper was to study whether intensive neurofeedback training of a well-functioning Olympic athlete who had lost his performance confidence after a sport injury could change brain functioning, as reflected in changes in spontaneous EEG and event-related potentials (ERPs). Case study. The case is presented of an Olympic athlete who lost his performance confidence after a sport injury. He wanted to resume his activities by means of neurofeedback training. His QEEG/ERP parameters were assessed before and after 4 intensive sessions of neurotherapy. Dramatic and statistically significant changes that could not be explained by measurement error were observed in the patient. Conclusion. Neurofeedback training in the subject under study increased the amplitude of the monitoring component of ERPs generated in the anterior cingulate cortex, accompanied by an increase in beta activity over the medial prefrontal cortex. Taking these changes together, it can be concluded that even a few sessions of neurofeedback in a high-performance brain can significantly activate the prefrontal cortical areas associated with increasing confidence in sport performance.

  3. Effect of natural hirudin on random pattern skin flap survival in a porcine model.

    Science.gov (United States)

    Zhao, H; Shi, Q; Sun, Z Y; Yin, G Q; Yang, H L

    2012-01-01

    The effect of local administration of hirudin on random pattern skin flap survival was investigated in a porcine model. Three random pattern skin flaps (4 × 14 cm) were created on each flank of five Chinese minipigs. The experimental group (10 flaps) received 20 antithrombin units of hirudin, injected subdermally into the distal half immediately after surgery and on days 1 and 2; a control group (10 flaps) was injected with saline and a sham group (10 flaps) was not injected. All flaps were followed for 10 days postoperatively. Macroscopically, the congested/necrotic length in the experimental group was significantly decreased compared with the other two groups by day 3. Histopathological evaluation revealed venous congestion and inflammation in the control and sham groups from day 1, but minimal changes in the experimental group. By day 10, the mean ± SD surviving area was significantly greater in the experimental group (67.6 ± 2.1%) than in the control (45.2 ± 1.4%) or sham (48.3 ± 1.1%) groups. Local administration of hirudin can significantly increase the surviving area in overdimensioned random pattern skin flaps, in a porcine model.

  4. Effect of overpasses in the Biham-Middleton-Levine traffic flow model with random and parallel update rule

    Science.gov (United States)

    Ding, Zhong-Jun; Jiang, Rui; Gao, Zi-You; Wang, Bing-Hong; Long, Jiancheng

    2013-08-01

    The effect of overpasses in the Biham-Middleton-Levine traffic flow model with random and parallel update rules has been studied. An overpass is a site that can be occupied simultaneously by an eastbound car and a northbound one. Under periodic boundary conditions, both self-organized and random patterns are observed in the free-flowing phase of the parallel update model, while only the random pattern is observed in the random update model. We have developed a mean-field analysis for the moving phase of the random update model, which agrees well with the simulation results. An intermediate phase is observed in which some cars can pass through the jamming cluster due to the existence of free paths in the random update model. Two intermediate states are observed in the parallel update model, which have been ignored in previous studies. The intermediate phases in which the jamming skeleton is oriented only along the diagonal line have been analyzed for both models, with the analyses agreeing well with the simulation results. With the increase of the overpass ratio, the jamming phase and the intermediate phases disappear in succession for both models. Under open boundary conditions, the system exhibits only two phases when the ratio of overpasses is below a threshold in the random update model. When the ratio of overpasses is close to 1, three phases can be observed, similar to the totally asymmetric simple exclusion process model. The dependence of the average velocity, the density, and the flow rate on the injection probability in the moving phase has also been obtained through mean-field analysis. The results of the parallel model under open boundary conditions are similar to those of the random update model.
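
    A compact sketch of one random-update BML sweep with overpass sites, under periodic boundaries as in the study; the lattice size, car density and overpass ratio below are illustrative choices, not the paper's parameters.

      import numpy as np

      rng = np.random.default_rng(7)
      L = 32
      E = rng.random((L, L)) < 0.15            # eastbound cars
      N = (~E) & (rng.random((L, L)) < 0.15)   # northbound cars, no initial overlap
      over = rng.random((L, L)) < 0.2          # overpass sites hold one car of each kind

      def sweep(E, N, over):
          cars = [(0, i, j) for i, j in zip(*np.nonzero(E))] + \
                 [(1, i, j) for i, j in zip(*np.nonzero(N))]
          moved = 0
          for idx in rng.permutation(len(cars)):   # random update order
              kind, i, j = cars[idx]
              if kind == 0:                        # eastbound: try to move right
                  ti, tj = i, (j + 1) % L
                  if not E[ti, tj] and (not N[ti, tj] or over[ti, tj]):
                      E[i, j], E[ti, tj], moved = False, True, moved + 1
              else:                                # northbound: try to move up
                  ti, tj = (i + 1) % L, j
                  if not N[ti, tj] and (not E[ti, tj] or over[ti, tj]):
                      N[i, j], N[ti, tj], moved = False, True, moved + 1
          return moved / max(len(cars), 1)

      for _ in range(5):
          print(round(sweep(E, N, over), 3))       # average velocity per sweep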

  5. General Merrill A. McPeak: An Effective Change Agent?

    National Research Council Canada - National Science Library

    Hopper, Tim

    1997-01-01

    .... It also looks at the role he played as a change agent. Current models of how organizational change should be implemented are compared to how General McPeak implemented his organizational change...

  6. PReFerSim: fast simulation of demography and selection under the Poisson Random Field model.

    Science.gov (United States)

    Ortega-Del Vecchyo, Diego; Marsden, Clare D; Lohmueller, Kirk E

    2016-11-15

    The Poisson Random Field (PRF) model has become an important tool in population genetics to study weakly deleterious genetic variation under complicated demographic scenarios. Currently, there are no freely available software applications that allow simulation of genetic variation data under this model. Here we present PReFerSim, an ANSI C program that performs forward simulations under the PRF model. PReFerSim models changes in population size, arbitrary amounts of inbreeding, dominance and distributions of selective effects. Users can track summaries of genetic variation over time and output trajectories of selected alleles. PReFerSim is freely available at: https://github.com/LohmuellerLab/PReFerSim. Contact: klohmueller@ucla.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. A trophallaxis inspired model for distributed transport between randomly interacting agents

    CERN Document Server

    Gräwer, Johannes; Mazza, Marco G; Katifori, Eleni

    2016-01-01

    Trophallaxis, the regurgitation and mouth to mouth transfer of liquid food between members of eusocial insect societies, is an important process that allows the fast and efficient dissemination of food in the colony. Trophallactic systems are typically treated as a network of agent interactions. This approach, though valuable, does not easily lend itself to analytic predictions. In this work we consider a simple trophallactic system of randomly interacting agents with finite carrying capacity, and calculate analytically and via a series of simulations the global food intake rate for the whole colony as well as observables describing how uniformly the food is distributed within the nest. Our work serves as a stepping stone to describing the collective properties of more complex trophallactic systems, such as those including division of labor between foragers and workers.

  8. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby extracting spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering based on a second order conditional random field. Furthermore, the globally optimal inference of each pixel's classified posterior probability can be obtained using loopy belief propagation. The experiment shows that the proposed algorithm can effectively maintain the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.

  9. Sample distribution in peak mode isotachophoresis

    Energy Technology Data Exchange (ETDEWEB)

    Rubin, Shimon [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel); Schwartz, Ortal [Russel Berrie Nanotechnology Institute, Technion - Israel Institute of Technology, Haifa (Israel); Bercovici, Moran, E-mail: mberco@technion.ac.il [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel); Russel Berrie Nanotechnology Institute, Technion - Israel Institute of Technology, Haifa (Israel)

    2014-01-15

    We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes, which better represent real buffer systems, and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of the focused sample based on known buffer and analyte properties, and show that it differs significantly from commonly used approximations based on the interface width alone. We further apply our model to studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective on-rate which depends on the reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that the maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.

  10. Modeling and predicting the Spanish Bachillerato academic results over the next few years using a random network model

    Science.gov (United States)

    Cortés, J.-C.; Colmenar, J.-M.; Hidalgo, J.-I.; Sánchez-Sánchez, A.; Santonja, F.-J.; Villanueva, R.-J.

    2016-01-01

    Academic performance is a concern of paramount importance in Spain, where around 30% of the students in the last two years of high school, before entering the labor market or university, do not achieve the minimum knowledge required by the Spanish educational law in force. In order to analyze this problem, we propose a random network model to study the dynamics of academic performance in Spain. Our approach is based on the idea that both good and bad study habits are a mixture of personal decisions and the influence of classmates. Moreover, in order to account for uncertainty in the estimation of model parameters, we perform a large number of simulations, taking as model parameters the best-fitting sets returned by the Differential Evolution algorithm. This technique permits forecasting of model trends in the next few years using confidence intervals.
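
    A sketch of the parameter-fitting strategy described above, using SciPy's differential_evolution; the one-line "model", the bounds, and the data series are placeholders, not the paper's network model or its observations.

      import numpy as np
      from scipy.optimize import differential_evolution

      years = np.arange(2010, 2016)
      observed = np.array([0.30, 0.31, 0.29, 0.30, 0.32, 0.31])  # toy fractions underperforming

      def model(params, t):
          a, b = params
          return a + b * (t - t[0])        # placeholder dynamics, not the network model

      def loss(params):
          # Sum of squared residuals between model output and observed data.
          return np.sum((model(params, years) - observed) ** 2)

      result = differential_evolution(loss, bounds=[(0, 1), (-0.05, 0.05)], seed=0)
      print(result.x, result.fun)          # best-fitting parameters and residual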

  11. A transdisciplinary model to inform randomized clinical trial methods for electronic cigarette evaluation

    Directory of Open Access Journals (Sweden)

    Alexa A. Lopez

    2016-03-01

    Full Text Available Abstract Background This study is a systematic evaluation of a novel tobacco product, electronic cigarettes (ECIGs), using a two-site, four-arm, 6-month, parallel-group randomized controlled trial (RCT) with a follow-up to 9 months. Virginia Commonwealth University is the primary site and Penn State University is the secondary site. This RCT design is important because it is informed by analytical work, clinical laboratory results, and qualitative/quantitative findings regarding the specific ECIG products used. Methods Participants (N = 520) will be randomized across sites and must be healthy smokers of >9 cigarettes for at least one year, who have not made a quit attempt in the prior month, are not planning to quit in the next 6 months, and are interested in reducing cigarette intake. Participants will be randomized into one of four 24-week conditions: a cigarette substitute that does not produce an inhalable aerosol, or one of three ECIG conditions that differ by nicotine concentration (0, 8, or 36 mg/ml). Blocked randomization will be accomplished with a 1:1:1:1 ratio of condition assignments at each site. Specific aims are to: characterize ECIG influence on toxicants, biomarkers, health indicators, and disease risk; determine the tobacco abstinence symptom and adverse event profile associated with real-world ECIG use; and examine the influence of ECIG use on conventional tobacco product use. Liquid nicotine concentration-related differences on these study outcomes are predicted. Participants and research staff in contact with participants will be blinded to the nicotine concentration in the ECIG conditions. Discussion Results from this study will inform knowledge concerning ECIG use as well as demonstrate a model that may be applied to other novel tobacco products. The model of using prior empirical testing of ECIG devices should be considered in other RCT evaluations. Trial registration TRN: NCT02342795, registered December 16, 2014.

  12. Interpretation of TOVS Water Vapor Radiances Using a Random Strong Line Model

    CERN Document Server

    Soden, B J; Soden, Brian J.; Bretherton, Francis P.

    1995-01-01

    This study illustrates the application of a random strong line (RSL) model of radiative transfer to the interpretation of satellite observations of the upwelling radiation in the 6.3 micron water vapor absorption band. The model, based upon an assemblage of randomly overlapped, strongly absorbing, pressure broadened lines, is compared to detailed radiative transfer calculations of the upper (6.7 micron) tropospheric water vapor radiance and demonstrated to be accurate to within ~ 1.2 K. Similar levels of accuracy are found when the model is compared to detailed calculations of the middle (7.3 micron) and lower (8.3 micron) tropospheric water vapor radiance, provided that the emission from the underlying surface is taken into account. Based upon these results, the RSL model is used to interpret TOVS-observed water vapor radiances in terms of the relative humidity averaged over deep layers of the upper, middle, and lower troposphere. We then present near-global maps of the geographic distribution and climatolog...

  13. Estimation of retinal vessel caliber using model fitting and random forests

    Science.gov (United States)

    Araújo, Teresa; Mendonça, Ana Maria; Campilho, Aurélio

    2017-03-01

    Retinal vessel caliber changes are associated with several major diseases, such as diabetes and hypertension. These caliber changes can be evaluated using eye fundus images. However, clinical assessment is tiresome and prone to errors, motivating the development of automatic methods. An automatic method based on vessel cross-section intensity profile model fitting for the estimation of vessel caliber in retinal images is herein proposed. First, vessels are segmented from the image, vessel centerlines are detected, and individual segments are extracted and smoothed. Intensity profiles are extracted perpendicularly to the vessel, and the profile lengths are determined. Then, model fitting is applied to the smoothed profiles. A novel parametric model (DoG-L7) is used, consisting of a Difference-of-Gaussians multiplied by a line, which is able to describe profile asymmetry. Finally, the parameters of the best-fit model are used for determining the vessel width through regression using ensembles of bagged regression trees with random sampling of the predictors (random forests). The method is evaluated on the REVIEW public dataset. A precision close to that of the observers is achieved, outperforming other state-of-the-art methods. The method is robust and reliable for width estimation in images with pathologies and artifacts, with performance independent of the range of diameters.
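
    A sketch of the profile-model-fitting step, using a Difference-of-Gaussians multiplied by a linear factor in the spirit of the DoG-L7 model; the exact DoG-L7 parameterization is not given here, so the 6-parameter form below and the synthetic profile are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def dog_line(x, a1, s1, a2, s2, mu, m):
          # Difference of two Gaussians sharing a centre mu...
          dog = a1 * np.exp(-(x - mu)**2 / (2 * s1**2)) \
              - a2 * np.exp(-(x - mu)**2 / (2 * s2**2))
          return dog * (1.0 + m * x)       # ...times a linear factor for asymmetry

      x = np.linspace(-10, 10, 81)
      true = dog_line(x, 1.0, 3.0, 0.4, 1.2, 0.5, 0.02)
      noisy = true + np.random.default_rng(2).normal(scale=0.02, size=x.size)

      p0 = [1.0, 2.0, 0.3, 1.0, 0.0, 0.0]  # rough initial guess
      popt, _ = curve_fit(dog_line, x, noisy, p0=p0)
      print(popt)                          # fitted parameters feed the width regressor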

  14. Random Neighborhood Graphs as Models of Fracture Networks on Rocks: Structural and Dynamical Analysis

    CERN Document Server

    Estrada, Ernesto

    2016-01-01

    We propose a new model to account for the main structural characteristics of rock fracture networks (RFNs). The model is based on a generalization of the random neighborhood graphs to consider fractures embedded into rectangular spaces. We study a series of 29 real-world RFNs and find the best fit with the random rectangular neighborhood graphs (RRNGs) proposed here. We show that this model captures most of the structural characteristics of the RFNs and allows a distinction between small and more spherical rocks and large and more elongated ones. We use a diffusion equation on the graphs in order to model diffusive processes taking place through the channels of the RFNs. We find a small set of structural parameters that highly correlates with the average diffusion time in the RFNs. In particular, the second smallest eigenvalue of the Laplacian matrix is a good predictor of the average diffusion time on RFNs, showing a Pearson correlation coefficient larger than $0.99$ with the average diffusion time on RFNs. ...
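
    The predictor highlighted above, the second-smallest Laplacian eigenvalue (algebraic connectivity), is straightforward to compute; the sketch below uses a toy random neighbourhood graph in a rectangle as a stand-in for a fracture network, with size and radius as illustrative choices.

      import numpy as np

      rng = np.random.default_rng(3)
      pts = rng.random((60, 2)) * np.array([2.0, 1.0])   # points in a rectangle
      d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      A = ((d < 0.35) & (d > 0)).astype(float)           # neighbourhood-graph adjacency

      Lap = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
      eigvals = np.sort(np.linalg.eigvalsh(Lap))
      lambda2 = eigvals[1]                               # zero if the graph is disconnected
      print(f"second-smallest Laplacian eigenvalue: {lambda2:.4f}")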

  15. Reactive Power Pricing Model Considering the Randomness of Wind Power Output

    Science.gov (United States)

    Dai, Zhong; Wu, Zhou

    2018-01-01

    With the increase of wind power capacity integrated into grid, the influence of the randomness of wind power output on the reactive power distribution of grid is gradually highlighted. Meanwhile, the power market reform puts forward higher requirements for reasonable pricing of reactive power service. Based on it, the article combined the optimal power flow model considering wind power randomness with integrated cost allocation method to price reactive power. Meanwhile, considering the advantages and disadvantages of the present cost allocation method and marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model realized the optimal power flow distribution of reactive power with the minimal integrated cost and wind power integration, under the premise of guaranteeing the balance of reactive power pricing. Finally, through the analysis of multi-scenario calculation examples and the stochastic simulation of wind power outputs, the article compared the results of the model pricing and the marginal cost pricing, which proved that the model is accurate and effective.

  16. Facility Location with Double-peaked Preferences

    DEFF Research Database (Denmark)

    Filos-Ratsikas, Aris; Li, Minming; Zhang, Jie

    2015-01-01

    We study the problem of locating a single facility on a real line based on the reports of self-interested agents, when agents have double-peaked preferences, with the peaks being on opposite sides of their locations. We observe that double-peaked preferences capture real-life scenarios and thus...... complement the well-studied notion of single-peaked preferences. We mainly focus on the case where peaks are equidistant from the agents’ locations and discuss how our results extend to more general settings. We show that most of the results for single-peaked preferences do not directly apply to this setting...

  17. Estimating safety effects of pavement management factors utilizing Bayesian random effect models.

    Science.gov (United States)

    Jiang, Ximiao; Huang, Baoshan; Zaretzki, Russell L; Richards, Stephen; Yan, Xuedong

    2013-01-01

    Previous studies of pavement management factors that relate to the occurrence of traffic-related crashes are rare. Traditional research has mostly employed summary statistics of bidirectional pavement quality measurements in extended longitudinal road segments over a long time period, which may cause a loss of important information and result in biased parameter estimates. The research presented in this article focuses on the crash risk of roadways with overall fair to good pavement quality. Real-time and location-specific data were employed to estimate the effects of pavement management factors on the occurrence of crashes. This research is based on the crash data and corresponding pavement quality data for the Tennessee state route highways from 2004 to 2009. The potential temporal and spatial correlations among observations caused by unobserved factors were considered. Overall, six models were built, accounting for no correlation, temporal correlation only, and both temporal and spatial correlations. These models included Poisson, negative binomial (NB), one random effect Poisson and negative binomial (OREP, ORENB), and two random effect Poisson and negative binomial (TREP, TRENB) models. The Bayesian method was employed to construct these models. The inference is based on the posterior distribution from the Markov chain Monte Carlo (MCMC) simulation. These models were compared using the deviance information criterion. Analysis of the posterior distribution of parameter coefficients indicates that the pavement management factors indexed by Present Serviceability Index (PSI) and Pavement Distress Index (PDI) had significant impacts on the occurrence of crashes, whereas the variable rutting depth was not significant. Among other factors, lane width, median width, type of terrain, and posted speed limit were significant in affecting crash frequency. The findings of this study indicate that a reduction in pavement roughness would reduce the likelihood of traffic
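
    A sketch of a one-random-effect Poisson (OREP-style) specification on synthetic data; PyMC is assumed as the MCMC engine, and the variable names, priors, and PSI-like covariate values are illustrative, not the paper's specification.

      import numpy as np
      import pymc as pm

      rng = np.random.default_rng(8)
      n_seg, n_obs = 30, 300
      seg = rng.integers(n_seg, size=n_obs)          # road-segment index per record
      psi = rng.normal(3.5, 0.5, size=n_obs)         # PSI-like pavement quality score
      crashes = rng.poisson(np.exp(0.5 - 0.3 * (psi - 3.5)))  # synthetic crash counts

      with pm.Model():
          beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=2)
          sigma_u = pm.HalfNormal("sigma_u", sigma=1.0)
          u = pm.Normal("u", mu=0.0, sigma=sigma_u, shape=n_seg)   # segment random effect
          mu = pm.math.exp(beta[0] + beta[1] * psi + u[seg])
          pm.Poisson("y", mu=mu, observed=crashes)
          idata = pm.sample(500, tune=500, chains=2, progressbar=False)

      print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)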

  18. A Markov model for the temporal dynamics of balanced random networks of finite size

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between
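
    A sketch of a population of two-state (active/refractory) Markov neurons with activity-dependent transition rates, the kind of finite-size stochastic model described above; the rate functions, population size, and time step are illustrative assumptions, not values fit to integrate-and-fire simulations.

      import numpy as np

      rng = np.random.default_rng(4)
      N, T, dt = 1000, 2000, 0.1          # neurons, steps, nominal time step
      active = np.zeros(N, dtype=bool)
      trace = np.empty(T)

      for t in range(T):
          frac = active.mean()
          # Activation rate grows with recurrent drive; recovery rate is constant.
          p_on = 1 - np.exp(-dt * (0.5 + 4.0 * frac))
          p_off = 1 - np.exp(-dt * 1.0)
          flips_on = (~active) & (rng.random(N) < p_on)    # refractory -> active
          flips_off = active & (rng.random(N) < p_off)     # active -> refractory
          active ^= flips_on | flips_off                   # disjoint sets, so XOR flips both
          trace[t] = frac

      print(trace[-10:])                  # population activity fluctuates about a fixed point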

  19. Modelling heat transfer during flow through a random packed bed of spheres

    Science.gov (United States)

    Burström, Per E. C.; Frishfelds, Vilnis; Ljung, Anna-Lena; Lundström, T. Staffan; Marjavaara, B. Daniel

    2017-11-01

    Heat transfer in a random packed bed of monosized iron ore pellets is modelled with both a discrete three-dimensional system of spheres and a continuous Computational Fluid Dynamics (CFD) model. Results show good agreement between the two models for average values over a cross section of the bed, given an even temperature profile at the inlet. The advantage of the discrete model is that it captures local effects, such as decreased heat transfer in sections with low speed. The disadvantage is that it is computationally heavy for larger systems of pellets. If averaged values are sufficient, the CFD model is an attractive alternative that is easy to couple to the physics up- and downstream of the packed bed. The good agreement between the discrete and continuous models furthermore indicates that the discrete model may also be used for non-Stokesian flow in the transitional region between laminar and turbulent flow, as turbulent effects show little influence on the overall heat transfer rates in the continuous model.

  20. Impact of Smart Grid Technologies on Peak Load to 2050

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    The IEA's Smart Grids Technology Roadmap identified five global trends that could be effectively addressed by deploying smart grids. These are: increasing peak load (the maximum power that the grid delivers during peak hours), rising electricity consumption, electrification of transport, deployment of variable generation technologies (e.g. wind and solar PV) and ageing infrastructure. Along with this roadmap, a new working paper -- Impact of Smart Grid Technologies on Peak Load to 2050 -- develops a methodology to estimate the evolution of peak load until 2050. It also analyses the impact of smart grid technologies in reducing peak load for four key regions; OECD North America, OECD Europe, OECD Pacific and China. This working paper is a first IEA effort in an evolving modelling process of smart grids that is considering demand response in residential and commercial sectors as well as the integration of electric vehicles.

  1. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
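
    A sketch of backward elimination driven by RF importance, assuming scikit-learn and synthetic data; note the abstract's caveat that the internal OOB estimate becomes optimistic when reused inside the selection loop, so an external validation fold is needed for honest accuracy.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                                 random_state=0)
      kept = list(range(X.shape[1]))

      while len(kept) > 10:                       # stopping rule is a choice, not a given
          rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
          rf.fit(X[:, kept], y)
          print(len(kept), "vars, OOB accuracy:", round(rf.oob_score_, 3))
          worst = int(np.argmin(rf.feature_importances_))
          kept.pop(worst)                         # drop the least important variable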

  2. Equilibrium Model of Discrete Dynamic Supply Chain Network with Random Demand and Advertisement Strategy

    Directory of Open Access Journals (Sweden)

    Guitao Zhang

    2014-01-01

    Full Text Available Advertising can increase consumer demand, and is therefore one of the most important marketing strategies in the operations management of enterprises. This paper analyzes the impact of advertising investment on a discrete dynamic supply chain network consisting of suppliers, manufacturers, retailers, and demand markets associated at different tiers, under random demand. Due to the delay effect, the impact of advertising investment lasts several planning periods beyond the current one. Based on noncooperative game theory, variational inequalities, and Lagrange dual theory, the optimal economic behaviors of the suppliers, the manufacturers, the retailers, and the consumers in the demand markets are modeled. In turn, the supply chain network equilibrium model is proposed and computed by a modified projection-contraction algorithm with fixed step size. The effectiveness of the model is illustrated by numerical examples, and managerial insights are obtained through the analysis of advertising investment in multiple periods and of the advertising delay effect across periods.

  3. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling.

    Science.gov (United States)

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A; Wong, Alexander

    2016-06-27

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging.
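
    A sketch of the demultiplexing idea, assuming scikit-learn and a toy Gaussian forward model for the colour-filter sensitivities; the real method builds its forward model from a comprehensive spectral characterization of the sensor, which is not available here.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(5)
      wl = np.linspace(400, 700, 16)                        # nm, discrete wavelengths
      sens = np.stack([np.exp(-(wl - c)**2 / (2 * 40**2))   # Gaussian R/G/B sensitivities
                       for c in (610, 540, 465)])

      spectra = rng.random((2000, wl.size))                 # random training reflectances
      rgb = spectra @ sens.T                                # forward model: sensor reading

      demux = RandomForestRegressor(n_estimators=100, random_state=0)
      demux.fit(rgb, spectra)                               # inverse mapping, RGB -> spectrum

      test = rng.random((1, wl.size))
      print(np.abs(demux.predict(test @ sens.T) - test).mean())  # reconstruction error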

  4. When human walking becomes random walking: fractal analysis and modeling of gait rhythm fluctuations

    Science.gov (United States)

    Hausdorff, Jeffrey M.; Ashkenazy, Yosef; Peng, Chang-K.; Ivanov, Plamen Ch.; Stanley, H. Eugene; Goldberger, Ary L.

    2001-12-01

    We present a random walk, fractal analysis of the stride-to-stride fluctuations in the human gait rhythm. The gait of healthy young adults is scale-free with long-range correlations extending over hundreds of strides. This fractal scaling changes characteristically with maturation in children and older adults and becomes almost completely uncorrelated with certain neurologic diseases. Stochastic modeling of the gait rhythm dynamics, based on transitions between different “neural centers”, reproduces distinctive statistical properties of the gait pattern. By tuning one model parameter, the hopping (transition) range, the model can describe alterations in gait dynamics from childhood to adulthood - including a decrease in the correlation and volatility exponents with maturation.
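
    A sketch of detrended fluctuation analysis (DFA), the standard tool for exposing the long-range correlations described above; the Brownian-like surrogate series below stands in for real stride-interval data (for such a surrogate the scaling exponent should come out near 1.5).

      import numpy as np

      rng = np.random.default_rng(6)
      x = np.cumsum(rng.normal(size=4096))      # surrogate series (Brownian-like)

      def dfa(series, scales):
          y = np.cumsum(series - series.mean()) # integrated profile
          F = []
          for n in scales:
              segs = len(y) // n
              cut = y[:segs * n].reshape(segs, n)
              t = np.arange(n)
              # Detrend each window with a linear fit, then take the rms residual.
              res = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in cut]
              F.append(np.sqrt(np.mean(np.square(res))))
          return np.array(F)

      scales = np.array([16, 32, 64, 128, 256])
      F = dfa(x, scales)
      alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
      print(f"DFA scaling exponent alpha ~= {alpha:.2f}")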

  5. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains...... a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids...... the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  6. Role Analysis in Networks using Mixtures of Exponential Random Graph Models.

    Science.gov (United States)

    Salter-Townshend, Michael; Murphy, Thomas Brendan

    2015-06-01

    A novel and flexible framework for investigating the roles of actors within a network is introduced. Particular interest is in roles as defined by local network connectivity patterns, identified using the ego-networks extracted from the network. A mixture of Exponential-family Random Graph Models is developed for these ego-networks in order to cluster the nodes into roles. We refer to this model as the ego-ERGM. An Expectation-Maximization algorithm is developed to infer the unobserved cluster assignments and to estimate the mixture model parameters using a maximum pseudo-likelihood approximation. The flexibility and utility of the method are demonstrated on examples of simulated and real networks.

  7. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description … at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental … by the variability in the model input. Finally, we present a synthetic experiment studying the variance-based sensitivity of the wave load on an offshore structure to a number of input uncertainties. In the numerical examples presented the PC methods exhibit fast convergence, suggesting that the problem is amenable …
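
    Non-intrusive uncertainty propagation of the kind referenced here evaluates the deterministic solver at chosen nodes of the random input and reassembles output statistics. Below is a toy sketch under stated assumptions: the "wave load" function, the Gaussian input, and the quadrature order are all invented stand-ins for the paper's expensive model.

```python
# Toy non-intrusive uncertainty propagation: evaluate a model at Gauss-Hermite
# quadrature nodes of a Gaussian input and recover the output mean/variance.
import numpy as np

nodes, weights = np.polynomial.hermite_e.hermegauss(8)  # probabilists' rule

def wave_load(h):                 # hypothetical stand-in model: load vs height
    return 0.5 * h**2 + np.sin(h)

mu, sigma = 2.0, 0.3              # assumed: wave height ~ N(mu, sigma^2)
samples = wave_load(mu + sigma * nodes)
w = weights / np.sqrt(2 * np.pi)  # normalize to probability weights
mean = np.sum(w * samples)
var = np.sum(w * samples**2) - mean**2
print(mean, var)                  # statistics of the uncertain wave load
```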

  8. Numerical Demultiplexing of Color Image Sensor Measurements via Non-linear Random Forest Modeling

    Science.gov (United States)

    Deglint, Jason; Kazemzadeh, Farnoud; Cho, Daniel; Clausi, David A.; Wong, Alexander

    2016-06-01

    The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral image devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from light spectra hitting the sensor is constructed based on a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can then demultiplex simultaneously-acquired measurements made by the color image sensor into reflectance intensities at discrete selectable wavelengths, resulting in a higher resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for the purpose of simultaneous multispectral imaging.
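
    To make the demultiplexing idea concrete, the sketch below trains a random forest to invert a synthetic forward model mapping spectra to three filter responses. The Gaussian sensitivity curves, band count, noise level, and training spectra are invented for illustration and stand in for the paper's measured sensor characterization.

```python
# Learn a numerical demultiplexer: invert a (synthetic) forward model
# mapping spectra -> RGB readings with a multi-output random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 31)            # nm, 31 bands

def sensitivity(mu, sigma):                        # assumed filter shape
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Forward model A: rows are assumed R, G, B filter sensitivities.
A = np.stack([sensitivity(610, 30), sensitivity(540, 35), sensitivity(450, 30)])

# Training data: random monotone spectra and their simulated sensor readings.
spectra = np.cumsum(rng.random((2000, wavelengths.size)), axis=1)
spectra /= spectra.max(axis=1, keepdims=True)
rgb = spectra @ A.T + 0.01 * rng.standard_normal((2000, 3))  # sensor noise

demux = RandomForestRegressor(n_estimators=50, random_state=0).fit(rgb, spectra)
estimated_spectrum = demux.predict(rgb[:1])        # demultiplexed spectrum
print(estimated_spectrum.shape)                    # (1, 31)
```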

  9. A random walk evolution model of wireless sensor networks and virus spreading

    Science.gov (United States)

    Wang, Ya-Qi; Yang, Xiao-Yuan

    2013-01-01

    In this paper, considering both cluster heads and sensor nodes, we propose a novel evolving network model based on a random walk to study the loss of fault tolerance in wireless sensor networks (WSNs) due to node failure, and discuss the spreading dynamics of viruses in the evolving network. A theoretical analysis shows that a WSN generated by such an evolution model not only has strong fault tolerance, but can also dynamically balance the energy loss of the entire network. It is also found that although increasing the density of cluster heads in the network reduces the network efficiency, it can effectively inhibit the spread of viruses. In addition, the heterogeneity of the network improves the network efficiency and enhances the virus prevalence. We confirm all the theoretical results with sufficient numerical simulations.
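
    A stripped-down version of growth-by-random-walk can be simulated in a few lines: each arriving node attaches to the endpoint of a short random walk, which implicitly favors well-connected nodes. This is a simplified stand-in, not the paper's exact cluster-head/sensor-node model; the seed graph and walk length are arbitrary.

```python
# Simplified network growth by random walk attachment.
import random
import networkx as nx

def grow(n_nodes=200, walk_len=3, seed=0):
    random.seed(seed)
    G = nx.complete_graph(3)                  # small seed network
    for new in range(3, n_nodes):
        cur = random.choice(list(G.nodes))    # random entry point
        for _ in range(walk_len):             # short walk picks the target
            cur = random.choice(list(G.neighbors(cur)))
        G.add_edge(new, cur)                  # new node attaches at endpoint
    return G

G = grow()
print(nx.degree_histogram(G)[:10])            # degree distribution head
```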

  10. Effects of combined harmonic and random excitations on a Brusselator model

    Science.gov (United States)

    Xu, Yong; Ma, Jinzhong; Wang, Haiyan; Li, Yongge; Kurths, Jürgen

    2017-10-01

    We discuss the constructive role of combined harmonic and random excitation on stochastic resonance (SR) in a Brusselator model. We first investigate SR numerically in this model, quantified by the signal-to-noise ratio (SNR), and describe the effects of different parameters on SR in detail. Our simulation results show that the intensity of the Gaussian colored noise and the amplitude of the periodic force can enhance SR. Moreover, an analytical framework is presented for the SNR of the Brusselator model, leading to a theoretical expression for the SNR. We observe good agreement between the theoretical and numerical results, verifying the effectiveness of the proposed theoretical method. This analysis provides a global view of how the dynamics of a periodically forced system with noise changes in the vicinity of a Hopf bifurcation.
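
    The numerical part of such a study can be sketched with an Euler-Maruyama integration of the Brusselator under weak periodic forcing plus noise; the SNR is then read off the power spectrum at the driving frequency. For brevity the sketch uses Gaussian white noise rather than the colored noise of the paper, and all parameter values are arbitrary choices near the Hopf bifurcation.

```python
# Brusselator with weak harmonic forcing plus noise (Euler-Maruyama).
import numpy as np

a, b = 0.4, 1.2                   # assumed kinetics, just above the Hopf point
A, omega, D = 0.05, 0.2, 0.01     # forcing amplitude/frequency, noise strength
dt, n = 0.01, 200_000
x, y = 1.0, 1.0
rng = np.random.default_rng(2)
xs = np.empty(n)
for i in range(n):
    t = i * dt
    noise = np.sqrt(2 * D * dt) * rng.standard_normal()
    dx = (a - (b + 1) * x + x * x * y + A * np.cos(omega * t)) * dt + noise
    dy = (b * x - x * x * y) * dt
    x, y = x + dx, y + dy
    xs[i] = x

# Spectral power at the driving frequency; SNR would compare this peak
# against the surrounding noise background.
freqs = np.fft.rfftfreq(n, dt) * 2 * np.pi
power = np.abs(np.fft.rfft(xs - xs.mean())) ** 2
print(power[np.argmin(np.abs(freqs - omega))])
```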

  11. Comparison of Consent Models in a Randomized Trial of Corticosteroids in Pediatric Septic Shock.

    Science.gov (United States)

    Menon, Kusum; O'Hearn, Katharine; McNally, James Dayre; Acharya, Anand; Wong, Hector R; Lawson, Margaret; Ramsay, Tim; McIntyre, Lauralyn; Gilfoyle, Elaine; Tucci, Marisa; Wensley, David; Gottesman, Ronald; Morrison, Gavin; Choong, Karen

    2017-11-01

    To describe the use of deferred and prior informed consent models in the context of a placebo-controlled randomized controlled trial of corticosteroids in pediatric septic shock posing low additional risk relative to standard of care. An observational substudy of consent processes in a randomized controlled trial of hydrocortisone versus placebo. Seven tertiary-level PICUs in Canada. Children newborn to 17 years inclusive admitted to a PICU with suspected septic shock between July 2014 and March 2016. None. Information on the number of families approached, consent rates obtained, and spontaneously volunteered reasons for nonparticipation was collected for both deferred and prior informed consent. The research ethics boards of five of the seven centers approved a deferred consent model; however, implementation criteria for use of this model varied across sites. The consent rate using deferred versus prior informed consent was significantly higher (83%; 35/42 vs 58%; 15/26; p = 0.02). The mean times from meeting inclusion criteria to randomization (1.8 ± 1.8 vs 3.6 ± 2.1 hr; p = 0.007) and to study drug administration (3.4 ± 2.7 vs 4.8 ± 2.1 hr; p = 0.05) were significantly shorter with deferred consent than with prior informed consent. No family member or research ethics board expressed concern following the use of deferred consent. Deferred consent was acceptable in time-sensitive critical care research to most research ethics boards, families, and healthcare providers, and resulted in higher consent rates and more efficient recruitment. Larger studies on deferred consent and consistency in interpreting jurisdictional guidelines are needed to advance pediatric acute care.

  12. Fitting parametric random effects models in very large data sets with application to VHA national data.

    Science.gov (United States)

    Gebregziabher, Mulugeta; Egede, Leonard; Gilbert, Gregory E; Hunt, Kelly; Nietert, Paul J; Mauldin, Patrick

    2012-10-24

    With the current focus on personalized medicine, patient/subject-level inference is often of key interest in translational research. As a result, random effects models (REM) are becoming popular for patient-level inference. However, for very large data sets characterized by large sample size, it can be difficult to fit a REM using commonly available statistical software such as SAS, since such fits require inordinate amounts of computing time and memory, often preventing model convergence. For example, in a retrospective cohort study of over 800,000 Veterans with type 2 diabetes with longitudinal data over 5 years, fitting a REM via generalized linear mixed modeling using currently available standard procedures in SAS (e.g., PROC GLIMMIX) was very difficult, and the same problems arise with Stata's gllamm and R's lme packages. This study therefore proposes and assesses the performance of a meta-regression approach and compares it with methods based on sampling of the full data. We use both simulated and real data from a national cohort of Veterans with type 2 diabetes (n = 890,394), created by linking multiple patient and administrative files into a cohort with longitudinal data collected over 5 years. The outcome of interest was mean annual HbA1c measured over a 5-year period. Using this outcome, we compared parameter estimates from the proposed random effects meta regression (REMR) with estimates based on simple random sampling and on VISN (Veterans Integrated Service Networks) based stratified sampling of the full data. Our results indicate that REMR provides parameter estimates that are less likely to be biased, with tighter confidence intervals, when the VISN-level estimates are homogeneous. When the interest is to fit a REM to repeated-measures data with very large sample size, REMR can be used as a good alternative. It leads to reasonable inference for both Gaussian and non-Gaussian responses if parameter estimates are …
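
    The divide-and-combine idea behind such a meta-regression can be sketched as: fit the regression within each stratum, then pool the stratum-level coefficients with random-effects (DerSimonian-Laird) weights. The toy data, stratum count, and single covariate below are invented, and the real method pools mixed-model fits rather than plain least squares.

```python
# Divide-and-combine sketch: per-stratum fits pooled by random-effects weights.
import numpy as np

rng = np.random.default_rng(3)
betas, ses = [], []
for _ in range(10):                       # 10 strata of a "huge" data set
    x = rng.standard_normal(1000)
    yv = 2.0 + 0.5 * x + rng.standard_normal(1000)
    X = np.column_stack([np.ones_like(x), x])
    coef, res, *_ = np.linalg.lstsq(X, yv, rcond=None)
    sigma2 = res[0] / (len(yv) - 2)       # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    betas.append(coef[1]); ses.append(se)

b, v = np.array(betas), np.array(ses) ** 2
w = 1 / v
q = np.sum(w * (b - np.sum(w * b) / w.sum()) ** 2)        # Cochran's Q
tau2 = max(0.0, (q - (len(b) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (v + tau2)                     # DerSimonian-Laird weights
print(np.sum(w_re * b) / w_re.sum())      # pooled slope, approx 0.5
```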

  13. Genetic evaluation of egg production curve in Thai native chickens by random regression and spline models.

    Science.gov (United States)

    Mookprom, S; Boonkum, W; Kunhareang, S; Siripanya, S; Duangjinda, M

    2017-02-01

    The objective of this research is to investigate appropriate random regression models, with various covariance functions, for the genetic evaluation of test-day egg production. Data included 7,884 monthly egg production records from 657 Thai native chickens (Pradu Hang Dam) obtained during the first to sixth generations, born during 2007 to 2014 at the Research and Development Network Center for Animal Breeding (Native Chickens), Khon Kaen University. Average annual and monthly egg productions were 117 ± 41 and 10.20 ± 6.40 eggs, respectively. Nine random regression models were analyzed, using the Wilmink function (WM), the Koops and Grossman function (KG), Legendre polynomial functions of second, third, and fourth order (LG2, LG3, LG4), and spline functions with 4, 5, 6, and 8 knots (SP4, SP5, SP6, and SP8). All covariance functions were nested within the same additive genetic and permanent environmental random effects, and the variance components were estimated by restricted maximum likelihood (REML). In model comparisons, mean square error (MSE) and the coefficient of determination (R2) measured goodness of fit, and the correlation between observed and predicted values [Formula: see text] was used to assess cross-validated predictive ability. We found that the covariance functions of SP5, SP6, and SP8 proved appropriate for the genetic evaluation of egg production curves for Thai native chickens. The estimated heritability of monthly egg production ranged from 0.07 to 0.39, and the highest heritability was found during the first to third months of egg production. In conclusion, spline functions of monthly egg production can be applied to breeding programs for the improvement of both egg number and persistence of egg production. © 2016 Poultry Science Association Inc.

  14. Preterm Birth: Analysis of Longitudinal Data on Siblings Based on Random-Effects Logit Models.

    Science.gov (United States)

    Bacci, Silvia; Bartolucci, Francesco; Minelli, Liliana; Chiavarini, Manuela

    2016-01-01

    The literature about the determinants of preterm birth is still controversial. We approach the analysis of these determinants by distinguishing between a woman's observable characteristics, which may change over time, and her unobservable characteristics, which are time invariant and explain the dependence between the typology (normal or preterm) of consecutive births. We rely on a longitudinal dataset of 28,603 women who delivered for the first time in the period 2005-2013 in the Umbria Region (Italy). We consider singleton physiological pregnancies originating from natural conceptions with birthweight of at least 500 g and gestational age between 24 and 42 weeks; the overall number of deliveries is 34,224. The dataset is based on the Standard Certificates of Live Birth collected in the region in the same period. We estimate two types of logit model for the event that a birth is preterm. The first model is pooled and accounts for information about possible previous preterm deliveries by including the lagged response among the covariates. The second model explicitly takes into account the longitudinal structure of the data through the introduction of a random effect that summarizes all the (time invariant) unobservable characteristics of a woman affecting the probability of a preterm birth. The estimated models provide evidence that the probability of a preterm birth depends on certain demographic and socioeconomic characteristics of the woman, as well as on her previous history of miscarriages and the baby's gender. Moreover, as the random-effects model fits significantly better than the pooled model with lagged response, we conclude that the state dependence between repeated preterm deliveries is spurious. The proposed analysis represents a useful tool to detect profiles of women with a high risk of preterm delivery. Such profiles are detected taking into account observable demographic and socioeconomic characteristics as well as unobservable and …

  15. Fluorescence microscopy image noise reduction using a stochastically-connected random field model.

    Science.gov (United States)

    Haider, S A; Cameron, A; Siva, P; Lui, D; Shafiee, M J; Boroomand, A; Haider, N; Wong, A

    2016-02-17

    Fluorescence microscopy is an essential part of a biologist's toolkit, allowing assaying of many parameters such as subcellular localization of proteins, changes in cytoskeletal dynamics, protein-protein interactions, and the concentration of specific cellular ions. A fundamental challenge in fluorescence microscopy is the presence of noise. This study introduces a novel approach to reducing noise in fluorescence microscopy images. The noise reduction problem is posed as a Maximum A Posteriori estimation problem and solved using a novel random field model called a stochastically-connected random field (SRF), which combines random graph and field theory. Experimental results using synthetic and real fluorescence microscopy data show the proposed approach achieving strong noise reduction performance, by quantitative metrics, when compared to several other noise reduction algorithms. The SRF approach achieved strong signal-to-noise ratio in the synthetic results and high signal-to-noise and contrast-to-noise ratios on the real fluorescence microscopy data, while maintaining cell structure and subtle details and reducing background and intra-cellular noise.

  16. Micromechanical Modeling of Fiber-Reinforced Composites with Statistically Equivalent Random Fiber Distribution

    Directory of Open Access Journals (Sweden)

    Wenzhi Wang

    2016-07-01

    Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs) with a statistically equivalent fiber distribution relative to the actual material microstructure. Realistic statistical data are utilized as inputs to the new method, which is achieved through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. The proposed method presents a good match with experimental results in all aspects, including the nearest-neighbor distance, nearest-neighbor orientation, Ripley's K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insight into the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs not only for fiber-reinforced composites but also for other materials such as foam materials and particle-reinforced composites.
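
    The baseline against which such generators are usually compared is plain random sequential adsorption (RSA): drop fibers one at a time and reject overlaps. The sketch below implements only this baseline in 2-D with invented fiber count and radius; the paper's algorithm additionally matches measured descriptors such as nearest-neighbor distances and Ripley's K function.

```python
# Random sequential adsorption (RSA) baseline for RVE fiber placement.
import numpy as np

def rsa_fibers(n_fibers=50, radius=0.03, size=1.0, seed=4, max_tries=100_000):
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(max_tries):
        p = rng.random(2) * size                  # candidate fiber center
        if all(np.hypot(*(p - c)) >= 2 * radius for c in centers):
            centers.append(p)                     # accepted: no overlap
            if len(centers) == n_fibers:
                break
    return np.array(centers)

centers = rsa_fibers()
print(len(centers), "fibers placed")
```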

  17. A lattice-model representation of continuous-time random walks

    Energy Technology Data Exchange (ETDEWEB)

    Campos, Daniel [School of Mathematics, Department of Applied Mathematics, University of Manchester, Manchester M60 1QD (United Kingdom); Mendez, Vicenc [Grup de Fisica Estadistica, Departament de Fisica, Universitat Autonoma de Barcelona, 08193 Bellaterra (Barcelona) (Spain)], E-mail: daniel.campos@uab.es, E-mail: vicenc.mendez@uab.es

    2008-02-29

    We report some ideas for constructing lattice models (LMs) as a discrete approach to reaction-dispersal (RD) or reaction-random walk (RRW) models. The analysis of a rather general class of Markovian and non-Markovian processes, from the point of view of their wavefront solutions, lets us show that in some regimes their macroscopic dynamics (front speed) differs from that of classical reaction-diffusion equations, which are often used as a mean-field approximation to the problem. This argues for a more general framework, such as that given by continuous-time random walks (CTRW). Here we use LMs as a numerical approach to support that idea, whereas in previous works our discussion was restricted to analytical models. For the two specific cases studied here, we derive and analyze the mean-field expressions for our LMs. As a result, we are able to provide some links between the numerical and analytical approaches studied.
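
    A minimal CTRW simulation illustrates the regime where lattice dynamics departs from classical diffusion: unit jumps separated by heavy-tailed waiting times give subdiffusive spreading. The waiting-time law, its exponent, and the time horizon below are arbitrary illustrative choices.

```python
# Continuous-time random walk on a 1D lattice with heavy-tailed waiting times.
import numpy as np

rng = np.random.default_rng(5)

def ctrw_position(t_max, alpha=0.7):
    t, x = 0.0, 0
    while True:
        t += rng.pareto(alpha) + 1.0      # heavy-tailed waiting time (>= 1)
        if t > t_max:
            return x
        x += rng.choice((-1, 1))          # nearest-neighbour jump

positions = np.array([ctrw_position(1000.0) for _ in range(2000)])
print(positions.var())                    # subdiffusive: variance grows ~ t^alpha
```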

  18. Selection of locations of knots for linear splines in random regression test-day models.

    Science.gov (United States)

    Jamrozik, J; Bohmanova, J; Schaeffer, L R

    2010-04-01

    Using spline functions (segmented polynomials) in regression models requires the knowledge of the location of the knots. Knots are the points at which independent linear segments are connected. Optimal positions of knots for linear splines of different orders were determined in this study for different scenarios, using existing estimates of covariance functions and an optimization algorithm. The traits considered were test-day milk, fat and protein yields, and somatic cell score (SCS) in the first three lactations of Canadian Holsteins. Two ranges of days in milk (from 5 to 305 and from 5 to 365) were taken into account. In addition, four different populations of Holstein cows, from Australia, Canada, Italy and New Zealand, were examined with respect to first lactation (305 days) milk only. The estimates of genetic and permanent environmental covariance functions were based on single- and multiple-trait test-day models, with Legendre polynomials of order 4 as random regressions. A differential evolution algorithm was applied to find the best location of knots for splines of orders 4 to 7 and the criterion for optimization was the goodness-of-fit of the spline covariance function. Results indicated that the optimal position of knots for linear splines differed between genetic and permanent environmental effects, as well as between traits and lactations. Different populations also exhibited different patterns of optimal knot locations. With linear splines, different positions of knots should therefore be used for different effects and traits in random regression test-day models when analysing milk production traits.
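
    The optimization step described here can be reproduced in miniature with scipy's differential evolution: choose interior knot positions of a linear spline to minimize a goodness-of-fit criterion. The target curve, knot count, and bounds below are invented, and the criterion is a plain sum of squared errors rather than the covariance-function fit used in the study.

```python
# Knot placement for a linear spline via differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

days = np.linspace(5, 305, 100)
target = days**0.2 * np.exp(-0.002 * days)        # hypothetical curve shape

def sse(knots):
    kx = np.sort(np.concatenate(([5.0], knots, [305.0])))
    ky = np.interp(kx, days, target)              # spline values at the knots
    fit = np.interp(days, kx, ky)                 # linear spline through knots
    return np.sum((target - fit) ** 2)

res = differential_evolution(sse, bounds=[(6, 304)] * 3, seed=6)
print(np.sort(res.x))                             # optimized knot locations
```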

  19. A study of factors affecting highway accident rates using the random-parameters tobit model.

    Science.gov (United States)

    Anastasopoulos, Panagiotis Ch; Mannering, Fred L; Shankar, Venky N; Haddock, John E

    2012-03-01

    A large body of previous literature has used a variety of count-data modeling techniques to study factors that affect the frequency of highway accidents over some time period on roadway segments of a specified length. An alternative approach views vehicle accident rates (accidents per mile driven) directly instead of frequencies. Treating the data as continuous rather than as counts, however, means that roadway segments with no observed accidents over the study period yield observations left-censored at zero. Past research has appropriately applied a tobit regression model to address this censoring, but it has been limited in accounting for unobserved heterogeneity because parameter estimates were assumed fixed over roadway-segment observations. Using 9 years of data from urban interstates in Indiana, this paper employs a random-parameters tobit regression to account for unobserved heterogeneity in the study of motor-vehicle accident rates. The empirical results show that the random-parameters tobit model outperforms its fixed-parameters counterpart and has the potential to provide a fuller understanding of the factors determining accident rates on specific roadway segments. Published by Elsevier Ltd.
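
    The building block of this analysis is the tobit likelihood for data left-censored at zero; the sketch below fits only the fixed-parameters version by maximum likelihood on synthetic data. The paper's random-parameters extension would additionally let the coefficients vary across observations.

```python
# Fixed-parameters tobit (left-censored at zero) by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.standard_normal(500)
latent = 0.5 + 1.0 * x + rng.standard_normal(500)
yobs = np.maximum(latent, 0.0)                    # censoring at zero

def negll(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)                             # keep sigma positive
    mu = b0 + b1 * x
    cens = yobs == 0
    ll = np.sum(norm.logcdf(-mu[cens] / s))       # censored contribution
    ll += np.sum(norm.logpdf(yobs[~cens], mu[~cens], s))
    return -ll

print(minimize(negll, x0=[0.0, 0.0, 0.0]).x)      # approx [0.5, 1.0, log 1]
```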

  20. A Random Walk in the Park: An Individual-Based Null Model for Behavioral Thermoregulation.

    Science.gov (United States)

    Vickers, Mathew; Schwarzkopf, Lin

    2016-04-01

    Behavioral thermoregulators leverage environmental temperature to control their body temperature. Habitat thermal quality therefore dictates the difficulty and necessity of precise thermoregulation, and the quality of behavioral thermoregulation in turn impacts organism fitness via the thermal dependence of performance. Comparing the body temperature of a thermoregulator with a null (non-thermoregulating) model allows us to estimate habitat thermal quality and the effect of behavioral thermoregulation on body temperature. We define a null model for behavioral thermoregulation that is a random walk in a temporally and spatially explicit thermal landscape. Predicted body temperature is also integrated through time, so recent body temperature history, environmental temperature, and movement influence current body temperature; there is no particular reliance on an organism's equilibrium temperature. We develop a metric called thermal benefit that equates body temperature to thermally dependent performance as a proxy for fitness. We measure thermal quality of two distinct tropical habitats as a temporally dynamic distribution that is an ergodic property of many random walks, and we compare it with the thermal benefit of real lizards in both habitats. Our simple model focuses on transient body temperature; as such, using it we observe such subtleties as shifts in the thermoregulatory effort and investment of lizards throughout the day, from thermoregulators to thermoconformers.
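
    The null model lends itself to a compact simulation: a random walk across a spatial temperature field, with body temperature relaxing toward the local environmental temperature so that recent thermal history matters. The one-dimensional landscape, heating rate, and step rule below are invented simplifications of the paper's temporally and spatially explicit landscape.

```python
# Null thermoregulator: random walk on a thermal landscape with
# Newtonian heating/cooling of body temperature.
import numpy as np

rng = np.random.default_rng(8)
n_sites, n_steps, k = 100, 5000, 0.05             # k: heating/cooling rate
env = 25 + 10 * np.sin(np.linspace(0, 3 * np.pi, n_sites))  # thermal landscape

pos = n_sites // 2
Tb = env[pos]                                     # start at local temperature
body = np.empty(n_steps)
for i in range(n_steps):
    pos = int(np.clip(pos + rng.choice((-1, 0, 1)), 0, n_sites - 1))
    Tb += k * (env[pos] - Tb)                     # transient body temperature
    body[i] = Tb

print(body.mean(), body.std())                    # null distribution summary
```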

  1. Osteoporosis: Peak Bone Mass in Women

    Science.gov (United States)

    Fact sheet from the NIH Osteoporosis and Related Bone Diseases ~ National Resource Center (No. 15-7891, last reviewed June 2015).

  2. A stylistic classification of Russian-language texts based on the random walk model

    Science.gov (United States)

    Kramarenko, A. A.; Nekrasov, K. A.; Filimonov, V. V.; Zhivoderov, A. A.; Amieva, A. A.

    2017-09-01

    A formal approach to text analysis is suggested that is based on the random walk model. The frequencies and reciprocal positions of vowel letters are mapped onto a process of quasi-particle migration. A statistically significant difference in the migration parameters is found for texts of different functional styles, demonstrating that texts can be classified using the suggested method. Five groups of texts are identified that can be distinguished from one another by the parameters of the quasi-particle migration process.

  3. A Note on the Existence of the Posteriors for One-way Random Effect Probit Models.

    Science.gov (United States)

    Lin, Xiaoyan; Sun, Dongchu

    2010-01-01

    The existence of the posterior distribution for one-way random effect probit models has been investigated when the uniform prior is applied to the overall mean and a class of noninformative priors are applied to the variance parameter. The sufficient conditions to ensure the propriety of the posterior are given for the cases with replicates at some factor levels. It is shown that the posterior distribution is never proper if there is only one observation at each factor level. For this case, however, a class of proper priors for the variance parameter can provide the necessary and sufficient conditions for the propriety of the posterior.

  4. Spatial random field models inspired from statistical physics with applications in the geosciences

    Science.gov (United States)

    Hristopulos, Dionissios T.

    2006-06-01

    The spatial structure of fluctuations in spatially inhomogeneous processes can be modeled in terms of Gibbs random fields. A local low energy estimator (LLEE) is proposed for the interpolation (prediction) of such processes at points where observations are not available. The LLEE approximates the spatial dependence of the data and the unknown values at the estimation points by low-lying excitations of a suitable energy functional. It is shown that the LLEE is a linear, unbiased, non-exact estimator. In addition, an expression for the uncertainty (standard deviation) of the estimate is derived.

  5. Random matrix theory and higher genus integrability: the quantum chiral Potts model

    Energy Technology Data Exchange (ETDEWEB)

    Angles d' Auriac, J.Ch. [Centre de Recherches sur les Tres Basses Temperatures, BP 166, Grenoble (France)]. E-mail: dauriac@polycnrs-gre.fr; Maillard, J.M.; Viallet, C.M. [LPTHE, Tour 16, Paris (France)]. E-mails: maillard@lpthe.jussieu.fr; viallet@lpthe.jussieu.fr

    2002-06-14

    We perform a random matrix theory (RMT) analysis of the quantum four-state chiral Potts chain for different sizes of the chain up to size L = 8. Our analysis gives clear evidence of Gaussian orthogonal ensemble (GOE) statistics, suggesting the existence of a generalized time-reversal invariance. Furthermore, a change from the (generic) GOE distribution to a Poisson distribution occurs when the integrability conditions are met. The chiral Potts model is known to correspond to a (star-triangle) integrability associated with curves of genus higher than zero or one. Therefore, the RMT analysis can also be seen as a detector of 'higher genus integrability'. (author)
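
    The RMT diagnostic used here, nearest-neighbor level-spacing statistics, is easy to demonstrate: GOE spectra show level repulsion (very few small spacings), while integrable, Poisson-like spectra do not. The sketch below compares a random real symmetric matrix with uniform random levels; the matrix size and the crude mean-spacing unfolding are illustrative shortcuts.

```python
# Level-spacing statistics: GOE-like level repulsion vs Poisson spacings.
import numpy as np

rng = np.random.default_rng(9)
n = 400
M = rng.standard_normal((n, n))
H = (M + M.T) / 2                                  # GOE-like real symmetric
ev = np.sort(np.linalg.eigvalsh(H))
bulk = ev[n // 4: 3 * n // 4]                      # keep the spectral bulk
s = np.diff(bulk)
s /= s.mean()                                      # crude unfolding
print("P(s < 0.1), GOE-like:", np.mean(s < 0.1))   # small: level repulsion
poisson = np.diff(np.sort(rng.random(n))) * n      # uncorrelated levels
print("P(s < 0.1), Poisson :", np.mean(poisson < 0.1))  # approx 1 - e^-0.1
```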

  6. Quantized gauge theory on the fuzzy sphere as random matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Steinacker, Harold E-mail: harold.steinacker@physik.uni-muenchen.de

    2004-02-16

    U(n) Yang-Mills theory on the fuzzy sphere S^2_N is quantized using random matrix methods. The gauge theory is formulated as a matrix model for a single Hermitian matrix subject to a constraint, with a potential having two degenerate minima. This makes it possible to reduce the path integral over the gauge fields to an integral over eigenvalues, which can be evaluated for large N. The partition function of U(n) Yang-Mills theory on the classical sphere is recovered in the large-N limit as a sum over instanton contributions. The monopole solutions are found explicitly.

  7. Random field Ising model in a uniform magnetic field: Ground states, pinned clusters and scaling laws.

    Science.gov (United States)

    Kumar, Manoj; Banerjee, Varsha; Puri, Sanjay

    2017-11-08

    In this paper, we study the random field Ising model (RFIM) in an external magnetic field h. A computationally efficient graph-cut method is used to study ground state (GS) morphologies in this system for three different disorder types: Gaussian, uniform, and bimodal. We obtain the critical properties of this system and find that they are independent of the disorder type. We also study GS morphologies via pinned-cluster distributions, which are scale-free at criticality. The spin-spin correlation functions (and structure factors) are characterized by a roughness exponent [Formula: see text]. The corresponding scaling function is universal for all disorder types and independent of h.

  8. A new mean estimator using auxiliary variables for randomized response models

    Science.gov (United States)

    Ozgul, Nilgun; Cingi, Hulya

    2013-10-01

    Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions, such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion, to ensure interviewee anonymity and to reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of the RRM have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. Its mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRMs.
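
    The simplest quantitative randomized-response scheme, additive scrambling with noise from a known distribution, shows how a population mean stays estimable while individual answers are masked. The sketch below is this textbook baseline with invented data, not the auxiliary-variable estimator proposed in the paper.

```python
# Additive randomized response: mask individual answers, keep the mean.
import numpy as np

rng = np.random.default_rng(10)
true_income = rng.lognormal(mean=10, sigma=0.5, size=2000)   # sensitive y
scramble = rng.normal(loc=0.0, scale=5000.0, size=2000)      # known-mean noise
reported = true_income + scramble                            # what is observed

est = reported.mean() - 0.0        # subtract the known scramble mean (0 here)
print(est, true_income.mean())     # unbiased estimate vs the true mean
```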

  9. A theory of solving TAP equations for Ising models with general invariant random matrices

    DEFF Research Database (Denmark)

    Opper, Manfred; Çakmak, Burak; Winther, Ole

    2016-01-01

    We consider the problem of solving TAP mean field equations by iteration for Ising models with coupling matrices that are drawn at random from general invariant ensembles. We develop an analysis of iterative algorithms using a dynamical functional approach that in the thermodynamic limit yields … an effective dynamics of a single-variable trajectory. Our main novel contribution is the expression for the implicit memory term of the dynamics for general invariant ensembles. By subtracting these terms, which depend on magnetizations at previous time steps, the implicit memory terms cancel, making …

  10. Application of random number generators in genetic algorithms to improve rainfall-runoff modelling

    Czech Academy of Sciences Publication Activity Database

    Chlumecký, M.; Buchtele, Josef; Richta, K.

    2017-01-01

    Roč. 553, October (2017), s. 350-355 ISSN 0022-1694 Institutional support: RVO:67985874 Keywords : genetic algorithm * optimisation * rainfall-runoff modeling * random generator Subject RIV: DA - Hydrology ; Limnology Impact factor: 3.483, year: 2016 https://ac.els-cdn.com/S0022169417305516/1-s2.0-S0022169417305516-main.pdf

  11. A Certificateless Ring Signature Scheme with High Efficiency in the Random Oracle Model

    Directory of Open Access Journals (Sweden)

    Yingying Zhang

    2017-01-01

    Ring signature is a kind of digital signature that can protect the identity of the signer. Certificateless public key cryptography overcomes the key escrow problem while retaining some advantages of identity-based cryptography, and certificateless ring signature integrates ring signatures with certificateless public key cryptography. In this paper, we propose an efficient certificateless ring signature scheme whose verify algorithm requires only three bilinear pairing operations. The scheme is proved to be unforgeable in the random oracle model.

  12. Random gauge models of the superconductor-insulator transition in two-dimensional disordered superconductors

    Science.gov (United States)

    Granato, Enzo

    2017-11-01

    We study numerically the superconductor-insulator transition in two-dimensional inhomogeneous superconductors with gauge disorder, described by four different quantum rotor models: a gauge glass, a flux glass, a binary phase glass, and a Gaussian phase glass. The first two models describe the combined effect of geometrical disorder in the array of local superconducting islands and a uniform external magnetic field, while the last two describe the effects of random negative Josephson-junction couplings or π junctions. Monte Carlo simulations in the path-integral representation of the models are used to determine the critical exponents and the universal conductivity at the quantum phase transition. The gauge- and flux-glass models display the same critical behavior, within the estimated numerical uncertainties. Similar agreement is found for the binary and Gaussian phase-glass models. Despite the different symmetries and disorder correlations, we find that the universal conductivity of these models is approximately the same. In particular, the ratio of this value to that of the pure model agrees with recent experiments on nanohole thin-film superconductors in a magnetic field, in the large disorder limit.

  13. Network motif identification and structure detection with exponential random graph models

    Directory of Open Access Journals (Sweden)

    Munni Begum

    2014-12-01

    Local regulatory motifs are identified in the transcription regulatory network of the most-studied model organism, Escherichia coli (E. coli), through graphical models. Network motifs are small structures in a network that appear more frequently than expected by chance alone. We apply social network methodologies, such as p* models, also known as Exponential Random Graph Models (ERGMs), to identify statistically significant network motifs. In particular, we generate directed graphical models that can be applied to study interaction networks in a broad range of databases. Markov Chain Monte Carlo (MCMC) algorithms are implemented to estimate the model parameters for the corresponding network statistics. A variety of ERGMs are fitted to identify statistically significant network motifs in the transcription regulatory networks of E. coli: nine ERGMs for transcription factor - transcription factor interactions and eleven for transcription factor - operon interactions. For both interaction networks, arc (a directed edge in a directed network) and k-istar (incoming star) structures, for values of k between 2 and 10, are found to be statistically significant local structures, or network motifs. Goodness-of-fit statistics are provided to assess the quality of these models.

  14. Novel Complete Probabilistic Models of Random Variation in High Frequency Performance of Nanoscale MOSFET

    Directory of Open Access Journals (Sweden)

    Rawid Banchuin

    2013-01-01

    Novel probabilistic models of the random variations in a nanoscale MOSFET's high-frequency performance, defined in terms of gate capacitance and transition frequency, are proposed. Because the variation in transition frequency is also considered, the proposed models are complete, unlike their predecessor, which took only the gate capacitance variation into account. The proposed models are both analytic and physical-level oriented, being precise mathematical expressions in terms of physical parameters. Since an up-to-date model of the variation in MOSFET characteristics induced by physical-level fluctuation has been used, the part of the proposed models covering gate capacitance is more accurate and more physical-level oriented than its predecessor. The proposed models have been verified on 65 nm CMOS technology using Monte Carlo SPICE simulations of benchmark circuits and Kolmogorov-Smirnov tests, and found to be highly accurate: they fit the Monte Carlo analysis results with 99% confidence. These models are therefore versatile for statistical/variability-aware analysis and design of nanoscale MOSFET-based analog/mixed-signal circuits and systems.

  15. A comparison of random draw and locally neutral models for the avifauna of an English woodland

    Directory of Open Access Journals (Sweden)

    Blackburn Tim M

    2004-06-01

    Background: Explanations for patterns observed in the structure of local assemblages are frequently sought with reference to interactions between species, and between species and their local environment. However, analyses of null models, where non-interactive local communities are assembled from regional species pools, have demonstrated that much of the structure of local assemblages remains in simulated assemblages from which local interactions have been excluded. Here we compare the ability of two null models to reproduce the breeding bird community of Eastern Wood, a 16-hectare woodland in England, UK. A random draw model, in which there is complete annual replacement of the community by immigrants from the regional pool, is compared to a locally neutral community model, which has two additional parameters describing the proportion of the community replaced annually (per capita death rate) and the proportion of individuals recruited locally rather than as immigrants from the regional pool. Results: Both the random draw and locally neutral models are capable of reproducing with significant accuracy several features of the observed structure of the annual Eastern Wood breeding bird community, including species relative abundances, species richness, and species composition. The two additional parameters of the neutral model yield a qualitatively more realistic representation of the Eastern Wood breeding bird community, particularly of its dynamics through time, and because they can be varied, a close quantitative fit between model and observed communities can be achieved, particularly with respect to annual species richness and species accumulation through time. Conclusion: The presence of additional free parameters does not detract from the qualitative improvement in the model, and the neutral model remains a model of local community structure that is null with respect to species differences at the …
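
    The random draw model is straightforward to simulate: each "year" the local community is drawn afresh from the regional relative abundances, with no local dynamics at all. The regional abundance vector and community size below are hypothetical placeholders for the Eastern Wood data.

```python
# Random draw null model: annual resampling from the regional species pool.
import numpy as np

rng = np.random.default_rng(11)
regional = rng.pareto(1.0, size=60) + 1           # regional species abundances
p = regional / regional.sum()                     # regional relative abundances
J = 150                                           # individuals in the wood

richness = []
for year in range(50):                            # complete annual turnover
    counts = rng.multinomial(J, p)                # draw the local community
    richness.append(np.count_nonzero(counts))
print(np.mean(richness), np.std(richness))        # null expectation of richness
```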

  16. NITPICK: peak identification for mass spectrometry data

    Directory of Open Access Journals (Sweden)

    Steen Hanno

    2008-08-01

    Background: The reliable extraction of features from mass spectra is a fundamental step in the automated analysis of proteomic mass spectrometry (MS) experiments. Results: This contribution proposes a sparse template regression approach to peak picking called NITPICK. NITPICK is a Non-greedy, Iterative Template-based peak PICKer that deconvolves complex overlapping isotope distributions in multicomponent mass spectra. NITPICK is based on fractional averagine, a novel extension to Senko's well-known averagine model, and on a modified version of sparse, non-negative least angle regression, for which a suitable, statistically motivated early stopping criterion has been derived. The strength of NITPICK is the deconvolution of overlapping mixture mass spectra. Conclusion: Extensive comparative evaluation has been carried out, and results are provided for simulated and real-world data sets. NITPICK outperforms pepex, to date the only alternative, publicly available, non-greedy feature extraction routine. NITPICK is available as a software package for the R programming language and can be downloaded from http://hci.iwr.uni-heidelberg.de/mip/proteomics/.
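
    A simplified cousin of this template-regression idea can be written with non-negative least squares: express the spectrum as a non-negative combination of peak templates on a candidate grid and keep the large coefficients. Unlike NITPICK, there is no isotope-distribution model or sparsity-tuned stopping rule here, and the synthetic spectrum is invented.

```python
# Template-based peak picking via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

mz = np.linspace(0, 100, 1000)

def template(center, width=0.5):                  # Gaussian peak template
    return np.exp(-0.5 * ((mz - center) / width) ** 2)

spectrum = 3 * template(30) + 1.5 * template(55)  # synthetic two-peak data
spectrum += 0.05 * np.abs(np.random.default_rng(12).standard_normal(mz.size))

centers = np.arange(5, 96, 1.0)                   # candidate peak positions
T = np.column_stack([template(c) for c in centers])
coef, _ = nnls(T, spectrum)                       # non-negative coefficients
print(centers[coef > 0.5])                        # detected peaks near 30, 55
```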

  17. Stoichiometry Calculation in BaxSr1−xTiO3 Solid Solution Thin Films, Prepared by RF Cosputtering, Using X-Ray Diffraction Peak Positions and Boltzmann Sigmoidal Modelling

    Directory of Open Access Journals (Sweden)

    J. Reséndiz-Muñoz

    2017-01-01

    A novel procedure based on the use of the Boltzmann equation to model the x parameter, the film deposition rate, and the optical band gap of BaxSr1−xTiO3 thin films is proposed. The BaxSr1−xTiO3 films were prepared by RF cosputtering from BaTiO3 and SrTiO3 targets, changing the power applied to each magnetron to obtain different Ba/Sr contents. The method to calculate x consists of fitting the angular shift of the (110), (111), and (211) diffraction peaks observed as the density of substitutional Ba2+ in the solid solution increases with the applied RF power, followed by a scale transformation from applied power to the x parameter using the Boltzmann equation. The Ba/Sr ratio was obtained from X-ray energy-dispersive spectroscopy; the comparison with the X-ray diffraction-derived composition shows a remarkable coincidence, while the discrepancies offer a valuable diagnosis of the sputtering flux and phase composition. The proposed method allows a quick setup of the RF cosputtering system to control film composition, providing a versatile tool for optimization of the process.
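
    The Boltzmann sigmoidal step itself is a routine curve fit. The sketch below fits the standard four-parameter Boltzmann function to hypothetical (power, composition) pairs and then maps an arbitrary power setting to an estimated x; all data points and starting values are made up.

```python
# Fit a Boltzmann sigmoid mapping applied RF power to Ba content x.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(p, a1, a2, p0, dp):
    # a1: low-power asymptote, a2: high-power asymptote,
    # p0: midpoint power, dp: transition width.
    return a2 + (a1 - a2) / (1 + np.exp((p - p0) / dp))

power = np.array([20, 40, 60, 80, 100, 120, 140.0])    # W, hypothetical
x_ba = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.95])

popt, _ = curve_fit(boltzmann, power, x_ba, p0=[0.0, 1.0, 80.0, 20.0])
print(boltzmann(90.0, *popt))                          # estimated x at 90 W
```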

  18. Random-field Ising model: Insight from zero-temperature simulations

    Directory of Open Access Journals (Sweden)

    P.E. Theodorakis

    2014-12-01

    We examine some critical aspects of the three-dimensional (d=3) random-field Ising model (RFIM) via simulations performed at zero temperature. We consider two versions of the model that differ in their field distribution: a Gaussian RFIM and an equal-weight trimodal RFIM. Implementing a computational approach that maps the ground state of the system to the maximum-flow optimization problem of a network, we employ the most up-to-date version of the push-relabel algorithm and simulate large ensembles of disorder realizations of both models for a broad range of random-field values and system sizes V = L x L x L, where L denotes the linear lattice size and Lmax = 156. Using as finite-size measures the sample-to-sample fluctuations of various quantities of physical and technical origin, and the primitive operations of the push-relabel algorithm, we propose, for both types of distribution, estimates of the critical field hmax and the critical exponent ν of the correlation length, the latter clearly suggesting that both models share the same universality class. Additional simulations of the Gaussian RFIM at the best-known value of the critical field provide the magnetic exponent ratio β/ν with high accuracy and clear up the controversial issue of the critical exponent α of the specific heat. Finally, we discuss the infinite-size extrapolation of energy- and order-parameter-based noise-to-signal ratios related to the self-averaging properties of the model, as well as the critical slowing-down aspects of the algorithm.

  19. The effect of local application of natural hirudin on random pattern skin flap microcirculation in a porcine model.

    Science.gov (United States)

    Yin, Guo-Qian; Sun, Zhi-Yong; Wang, Gang

    2014-07-01

    The aim of this study was to investigate the effect of local administration of hirudin on random pattern skin flap microcirculation in a porcine model. Five Chinese minipigs were used, and six dorsal random pattern skin flaps (4 × 14 cm) were elevated in each animal. All flaps (n = 30) were assigned to experimental (n = 10), control (n = 10), and sham (n = 10) groups. Flap edema measurement showed that edema in experimental flaps was more severe (P < 0.05) […] skin flap microcirculation in over-dimensioned random pattern skin flaps in a porcine model.

  20. Existence Results for a Michaud Fractional, Nonlocal, and Randomly Position Structured Fragmentation Model

    Directory of Open Access Journals (Sweden)

    Emile Franc Doungmo Goufo

    2014-01-01

    Until now, classical models of cluster fission have remained unable to fully explain strange phenomena like the phenomenon of shattering (Ziff and McGrady, 1987) and the sudden appearance of infinitely many particles in some systems with an initially finite number of particles. That is why there is a need to extend classical models to models with fractional derivative order and to use new and various techniques to analyze them. In this paper, we prove the existence of strongly continuous solution operators for nonlocal fragmentation models with a Michaud time derivative of fractional order (Samko et al., 1993). We focus on the case where the splitting rate depends on size and position and where new particles generated by fragmentation are distributed in space randomly according to some probability density. In the analysis, we make use of substochastic semigroup theory, the subordination principle for differential equations of fractional order (Prüss, 1993; Bazhlekova, 2000), the analogue of the Hille-Yosida theorem for the fractional model (Prüss, 1993), and useful properties of the Mittag-Leffler relaxation function (Berberan-Santos, 2005). We are then able to show that the solution operator to the full model is positive and contractive.