WorldWideScience

Sample records for model predicts strong

  1. Prediction of strong earthquake motions on rock surface using evolutionary process models

    International Nuclear Information System (INIS)

    Kameda, H.; Sugito, M.

    1984-01-01

    Stochastic process models are developed for prediction of strong earthquake motions for engineering design purposes. Earthquake motions with nonstationary frequency content are modeled by using the concept of evolutionary processes. Discussion is focused on the earthquake motions on bedrock, which are important for construction of nuclear power plants in seismic regions. On this basis, two earthquake motion prediction models are developed, one (EMP-IB Model) for prediction with given magnitude and epicentral distance, and the other (EMP-IIB Model) to account for the successive fault ruptures and the site location relative to the fault of great earthquakes. (Author)

  2. Prediction of strongly-heated gas flows in a vertical tube using explicit algebraic stress/heat-flux models

    International Nuclear Information System (INIS)

    Baek, Seong Gu; Park, Seung O.

    2003-01-01

    This paper assesses the prediction performance of explicit algebraic stress and heat-flux models under conditions of mixed convective gas flows in a strongly-heated vertical tube. Two explicit algebraic stress models and four algebraic heat-flux models are selected for assessment. Eight combinations of explicit algebraic stress and heat-flux models are used in predicting the flows experimentally studied by Shehata and McEligot (IJHMT 41(1998) p. 4333), in which property variation was significant. Among the various model combinations, the Wallin and Johansson (JFM 403(2000) p. 89) explicit algebraic stress model combined with the Abe, Kondo, and Nagano (IJHFF 17(1996) p. 228) algebraic heat-flux model is found to perform best. We also found that the dimensionless wall distance y+ should be calculated based on the local property rather than the property at the wall for property-variation flows. When the buoyancy or the property variation effects are so strong that the flow may relaminarize, the choice of the basic platform two-equation model is the most important factor in improving the predictions.
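
    As an illustration of the last point about evaluating y+ with local rather than wall properties, the following sketch contrasts the two choices for a strongly heated gas. It is a hedged example, not taken from the paper: the Sutherland and ideal-gas property relations are standard for air, but the wall distance, friction velocity and temperatures are assumed values.

        # Minimal sketch (assumed values): y+ computed with wall vs. local gas properties.
        def sutherland_viscosity(T):
            """Dynamic viscosity of air [Pa s] from Sutherland's law."""
            mu_ref, T_ref, S = 1.716e-5, 273.15, 110.4
            return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

        def y_plus(y, u_tau, T, p=101325.0, R=287.0):
            """Dimensionless wall distance with density and viscosity evaluated at temperature T."""
            rho = p / (R * T)                  # ideal-gas density
            return y * u_tau * rho / sutherland_viscosity(T)

        y, u_tau = 1.0e-3, 0.5                 # wall distance [m], friction velocity [m/s] (assumed)
        T_wall, T_local = 800.0, 500.0         # heated wall vs. cooler local gas [K] (assumed)
        print("y+ with wall properties:  %.1f" % y_plus(y, u_tau, T_wall))
        print("y+ with local properties: %.1f" % y_plus(y, u_tau, T_local))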

  3. Sequence-Specific Model for Peptide Retention Time Prediction in Strong Cation Exchange Chromatography.

    Science.gov (United States)

    Gussakovsky, Daniel; Neustaeter, Haley; Spicer, Victor; Krokhin, Oleg V

    2017-11-07

    The development of a peptide retention prediction model for strong cation exchange (SCX) separation on a Polysulfoethyl A column is reported. Off-line 2D LC-MS/MS analysis (SCX-RPLC) of S. cerevisiae whole cell lysate was used to generate a retention dataset of ∼30 000 peptides, sufficient for identifying the major sequence-specific features of peptide retention mechanisms in SCX. In contrast to RPLC/hydrophilic interaction liquid chromatography (HILIC) separation modes, where retention is driven by hydrophobic/hydrophilic contributions of all individual residues, SCX interactions depend mainly on peptide charge (number of basic residues at acidic pH) and size. An additive model (incorporating the contributions of all 20 residues into the peptide retention) combined with a peptide length correction produces a prediction accuracy of R² = 0.976, significantly higher than the additive models for either HILIC or RPLC. Position-dependent effects on peptide retention for different residues were driven by the spatial orientation of tryptic peptides upon interaction with the negatively charged surface functional groups. The positively charged N-termini serve as a primary point of interaction. For example, basic residues (Arg, His, Lys) increase peptide retention when located closer to the N-terminus. We also found that hydrophobic interactions, which could lead to a mixed-mode separation mechanism, are largely suppressed at 20-30% of acetonitrile in the eluent. The accuracy of the final Sequence-Specific Retention Calculator (SSRCalc) SCX model (R² ≈ 0.99) exceeds all previously reported predictors for peptide LC separations. This also provides a solid platform for method development in 2D LC-MS protocols in proteomics and for peptide retention prediction filtering of false positive identifications.
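
    The additive structure described above (a per-residue sum plus a peptide-length correction) is straightforward to sketch. The coefficient values and the logarithmic form of the length term below are illustrative assumptions, not the fitted SSRCalc SCX parameters.

        # Hedged sketch of an additive retention model with a length correction (assumed numbers).
        import math

        residue_coeff = {aa: 0.0 for aa in "ACDEFGHIKLMNPQRSTVWY"}
        residue_coeff.update({"K": 3.0, "R": 3.2, "H": 1.5})   # basic residues retained most (assumed values)

        def predict_scx_retention(peptide, a=1.0, b=-2.0):
            """Additive per-residue model plus a logarithmic peptide-length correction (assumed form)."""
            additive = sum(residue_coeff[aa] for aa in peptide)
            return additive + a * math.log(len(peptide)) + b

        print(predict_scx_retention("LKPNMVAR"))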

  4. Strong ground motion prediction using virtual earthquakes.

    Science.gov (United States)

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach thus provides a new framework for predicting long-period strong ground motion.

  5. The hadronic standard model for strong and electroweak interactions

    Energy Technology Data Exchange (ETDEWEB)

    Raczka, R. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)]

    1993-12-31

    We propose a new model for strong and electro-weak interactions. First, we review various QCD predictions for hadron-hadron and lepton-hadron processes. We indicate that the present formulation of strong interactions in the framework of Quantum Chromodynamics encounters serious conceptual and numerical difficulties in a reliable description of hadron-hadron and lepton-hadron interactions. Next, we propose to replace the strong sector of the Standard Model, based on unobserved quarks and gluons, by a strong sector based on the set of observed baryons and mesons determined by the spontaneously broken SU(6) gauge field theory model. We analyse various properties of this model, such as asymptotic freedom, Reggeization of gauge bosons and fundamental fermions, baryon-baryon and meson-baryon high-energy scattering, generation of {Lambda}-polarization in inclusive processes, and others. Finally, we extend this model with an electro-weak sector. We demonstrate a remarkable lepton and hadron anomaly cancellation and analyse a series of important lepton-hadron and hadron-hadron processes such as e{sup +} + e{sup -} {yields} hadrons, e{sup +} + e{sup -} {yields} W{sup +} + W{sup -}, e{sup +} + e{sup -} {yields} p + anti-p, e + p {yields} e + p and p + anti-p {yields} p + anti-p. We obtained a series of interesting new predictions in this model, especially for processes with polarized particles. We estimated the value of the strong coupling constant {alpha}(M{sub z}) and predicted the top baryon mass M{sub {Lambda}{sub t}} {approx_equal} 240 GeV. Since in our model the proton, neutron, {Lambda}-particles, vector mesons such as {rho}, {omega}, {phi}, J/{psi} etc. and leptons are elementary, most of the experimentally analysed lepton-hadron and hadron-hadron processes in the LEP1, LEP2, LEAR, HERA, HERMES, LHC and SSC experiments can be analysed relatively easily in our model. (author). 252 refs, 65 figs, 1 tab.

  6. The hadronic standard model for strong and electroweak interactions

    International Nuclear Information System (INIS)

    Raczka, R.

    1993-01-01

    We propose a new model for strong and electro-weak interactions. First, we review various QCD predictions for hadron-hadron and lepton-hadron processes. We indicate that the present formulation of strong interactions in the framework of Quantum Chromodynamics encounters serious conceptual and numerical difficulties in a reliable description of hadron-hadron and lepton-hadron interactions. Next, we propose to replace the strong sector of the Standard Model, based on unobserved quarks and gluons, by a strong sector based on the set of observed baryons and mesons determined by the spontaneously broken SU(6) gauge field theory model. We analyse various properties of this model, such as asymptotic freedom, Reggeization of gauge bosons and fundamental fermions, baryon-baryon and meson-baryon high-energy scattering, generation of Λ-polarization in inclusive processes, and others. Finally, we extend this model with an electro-weak sector. We demonstrate a remarkable lepton and hadron anomaly cancellation and analyse a series of important lepton-hadron and hadron-hadron processes such as e⁺ + e⁻ → hadrons, e⁺ + e⁻ → W⁺ + W⁻, e⁺ + e⁻ → p + anti-p, e + p → e + p and p + anti-p → p + anti-p. We obtained a series of interesting new predictions in this model, especially for processes with polarized particles. We estimated the value of the strong coupling constant α(M_Z) and predicted the top baryon mass M_Λt ≅ 240 GeV. Since in our model the proton, neutron, Λ-particles, vector mesons such as ρ, ω, φ, J/ψ etc. and leptons are elementary, most of the experimentally analysed lepton-hadron and hadron-hadron processes in the LEP1, LEP2, LEAR, HERA, HERMES, LHC and SSC experiments can be analysed relatively easily in our model. (author). 252 refs, 65 figs, 1 tab

  7. The hadronic standard model for strong and electroweak interactions

    Energy Technology Data Exchange (ETDEWEB)

    Raczka, R. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)]

    1994-12-31

    We propose a new model for strong and electro-weak interactions. First, we review various QCD predictions for hadron-hadron and lepton-hadron processes. We indicate that the present formulation of strong interactions in the framework of Quantum Chromodynamics encounters serious conceptual and numerical difficulties in a reliable description of hadron-hadron and lepton-hadron interactions. Next, we propose to replace the strong sector of the Standard Model, based on unobserved quarks and gluons, by a strong sector based on the set of observed baryons and mesons determined by the spontaneously broken SU(6) gauge field theory model. We analyse various properties of this model, such as asymptotic freedom, Reggeization of gauge bosons and fundamental fermions, baryon-baryon and meson-baryon high-energy scattering, generation of {Lambda}-polarization in inclusive processes, and others. Finally, we extend this model with an electro-weak sector. We demonstrate a remarkable lepton and hadron anomaly cancellation and analyse a series of important lepton-hadron and hadron-hadron processes such as e{sup +} + e{sup -} {yields} hadrons, e{sup +} + e{sup -} {yields} W{sup +} + W{sup -}, e{sup +} + e{sup -} {yields} p + anti-p, e + p {yields} e + p and p + anti-p {yields} p + anti-p. We obtained a series of interesting new predictions in this model, especially for processes with polarized particles. We estimated the value of the strong coupling constant {alpha}(M{sub z}) and predicted the top baryon mass M{sub {Lambda}{sub t}} {approx_equal} 240 GeV. Since in our model the proton, neutron, {Lambda}-particles, vector mesons such as {rho}, {omega}, {phi}, J/{psi} etc. and leptons are elementary, most of the experimentally analysed lepton-hadron and hadron-hadron processes in the LEP1, LEP2, LEAR, HERA, HERMES, LHC and SSC experiments can be analysed relatively easily in our model. (author). 252 refs, 65 figs, 1 tab.

  8. Site-specific strong ground motion prediction using 2.5-D modelling

    Science.gov (United States)

    Narayan, J. P.

    2001-08-01

    An algorithm was developed using the 2.5-D elastodynamic wave equation, based on the displacement-stress relation. One of the most significant advantages of the 2.5-D simulation is that the 3-D radiation pattern can be generated using double-couple point shear-dislocation sources in the 2-D numerical grid. A parsimonious staggered grid scheme was adopted instead of the standard staggered grid scheme, since this is the only scheme suitable for computing the dislocation. This new 2.5-D numerical modelling avoids the extensive computational cost of 3-D modelling. The significance of this exercise is that it makes it possible to simulate strong ground motion (SGM), taking into account the energy released, the 3-D radiation pattern, path effects and local site conditions at any location around the epicentre. The slowness vector (py) was used in the supersonic region for each layer, so that all the components of the inertia coefficient are positive. The double-couple point shear-dislocation source was implemented in the numerical grid using the moment tensor components as the body-force couples. The moment per unit volume was used in both the 3-D and 2.5-D modelling. A good agreement between the 3-D and 2.5-D responses for different grid sizes was obtained when the moment per unit volume was further reduced by a factor equal to the finite-difference grid size in the case of the 2.5-D modelling. The components of the radiation pattern were computed in the xz-plane using the 3-D and 2.5-D algorithms for various focal mechanisms, and the results were in good agreement. A comparative study of the amplitude behaviour of the 3-D and 2.5-D wavefronts in a layered medium reveals the spatially and temporally damped nature of the 2.5-D elastodynamic wave equation. The 3-D and 2.5-D simulated responses at a site using a different strike direction reveal that SGM can be predicted just by rotating the strike of the fault counter-clockwise by the same amount as the azimuth of

  9. Predictions for Boson-Jet Observables and Fragmentation Function Ratios from a Hybrid Strong/Weak Coupling Model for Jet Quenching

    CERN Document Server

    Casalderrey-Solana, Jorge; Milhano, José Guilherme; Pablos, Daniel; Rajagopal, Krishna

    2016-01-01

    We have previously introduced a hybrid strong/weak coupling model for jet quenching in heavy ion collisions that describes the production and fragmentation of jets at weak coupling, using PYTHIA, and describes the rate at which each parton in the jet shower loses energy as it propagates through the strongly coupled plasma, dE/dx, using an expression computed holographically at strong coupling. The model has a single free parameter that we fit to a single experimental measurement. We then confront our model with experimental data on many other jet observables, focusing here on boson-jet observables, finding that it provides a good description of present jet data. Next, we provide the predictions of our hybrid model for many measurements to come, including those for inclusive jet, dijet, photon-jet and Z-jet observables in heavy ion collisions with energy $\sqrt{s}=5.02$ ATeV coming soon at the LHC. As the statistical uncertainties on near-future measurements of photon-jet observables are expected to be much sm...

  10. Predicting long-term recovery of a strongly acidified stream using MAGIC and climate models (Litavka, Czech Republic)

    Directory of Open Access Journals (Sweden)

    D. W. Hardekopf

    2008-03-01

    Two branches forming the headwaters of a stream in the Czech Republic were studied. Both streams have similar catchment characteristics and historical deposition; however, one is rain-fed and strongly affected by acid atmospheric deposition, while the other is spring-fed and only moderately acidified. The MAGIC model was used to reconstruct past stream water and soil chemistry of the rain-fed branch, and to predict future recovery up to 2050 under currently proposed emissions levels. A future increase in air temperature calculated by a regional climate model was then used to derive climate-related scenarios to test possible factors affecting chemical recovery up to 2100. Macroinvertebrates were sampled from both branches, and differences in stream chemistry were reflected in the community structures. According to modelled forecasts, recovery of the rain-fed branch will be gradual and limited, and high levels of sulphate release from the soils will continue to dominate stream water chemistry, while scenarios related to a predicted increase in temperature will have little impact. The likelihood of colonization by species from the spring-fed branch was evaluated considering the predicted extent of chemical recovery. The results suggest that colonization of the rain-fed branch by species from the spring-fed branch will be limited to only the acid-tolerant stonefly, caddisfly and dipteran taxa in the modelled period.

  11. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet, Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  12. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my Ph.D. thesis in 1997. This synthesis covers the two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economical consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for specific structures (conventional structures or high-risk constructions), seismic hazard assessment requires: identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion to which the structure has to resist (including the site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and being part of the IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to this expertise and to participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion in designing or verifying the stability of structures. (author)

  13. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  14. Prediction of North Pacific Height Anomalies During Strong Madden-Julian Oscillation Events

    Science.gov (United States)

    Kai-Chih, T.; Barnes, E. A.; Maloney, E. D.

    2017-12-01

    The Madden-Julian Oscillation (MJO) creates strong variations in extratropical atmospheric circulations that have important implications for subseasonal-to-seasonal prediction. In particular, certain MJO phases are characterized by a consistent modulation of geopotential height in the North Pacific and adjacent regions across different MJO events. Until recently, only limited research has examined the relationship between these robust MJO tropical-extratropical teleconnections and model prediction skill. In this study, reanalysis data (MERRA and ERA-Interim) and ECMWF ensemble hindcasts are used to demonstrate that robust teleconnections in specific MJO phases and time lags are also characterized by excellent agreement in the prediction of geopotential height anomalies across model ensemble members at forecast leads of up to 3 weeks. These periods of enhanced prediction capabilities extend the possibility for skillful extratropical weather prediction beyond traditional 10-13 day limits. Furthermore, we also examine the phase dependency of teleconnection robustness using a Linear Baroclinic Model (LBM), and the result is consistent with the ensemble hindcasts: the anomalous heating of MJO phase 2 (phase 6) can consistently generate positive (negative) geopotential height anomalies around the extratropical Pacific with a lead of 15-20 days, while other phases are more sensitive to the variation of the mean state.

  15. Extension of the Nambu-Jona-Lasinio model predictions at high temperatures and strong external magnetic field

    International Nuclear Information System (INIS)

    Gomes, Karina P.; Farias, R.L.S.; Pinto, M.B.; Krein, G.

    2013-01-01

    Full text: Recently, much attention has been dedicated to understanding the effects of an external magnetic field on the QCD phase diagram. Actually, there is a contradiction in the literature: while effective models of QCD like the Nambu-Jona-Lasinio (NJL) model and the linear sigma model predict an increase of the critical temperature of chiral symmetry restoration as a function of the magnetic field, recent lattice results show the opposite behavior. The NJL model is nonrenormalizable; the high-momentum part of the model therefore has to be regularized in a phenomenological way. The common practice is to regularize the divergent loop amplitudes with a three-dimensional momentum cutoff, which also sets the energy-momentum scale for the validity of the model. That is, the model cannot be used for studying phenomena involving momenta running in loops larger than the cutoff. In particular, the model cannot be used to study quark matter at high densities. One of the symptoms of this problem is the prediction of vanishing superconducting gaps at high baryon densities, a feature of the model that is solely caused by the use of a regularizing momentum cutoff in the divergent vacuum and also in finite loop integrals. In a renormalizable theory all the dependence on the cutoff can be removed in favor of running physical parameters, like the coupling constants of QED and QCD. The running is given by the renormalization group equations of the theory and is controlled by an energy scale that is adjusted to the scale of the experimental conditions under consideration. In a recent publication, Casalbuoni et al. introduced the concept of a running coupling constant for the NJL model to extend the applicability of the model to high density. Their arguments are based on making the cutoff density dependent, using an analogy with the natural cutoff of the Debye frequency of phonon oscillations in an ordinary solid. In the present work we follow such an approach introducing a magnetic field

  16. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidates efficiently based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain - the heat control unit of a spacecraft - where the proposed methods are significantly better than best-first and conflict-directed A* search methods.
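
    The conflict/consistency idea above can be illustrated, in a much simplified form, by brute-force diagnosis of a tiny strong-fault model: each component has a normal mode plus fault modes with their own behaviour, and a candidate mode assignment survives only if its prediction is consistent with the observation. This is a hedged sketch with made-up components, not the paper's LTMS or its search algorithms.

        # Toy strong-fault diagnosis by exhaustive consistency checking (assumed example).
        from itertools import product

        # component behaviours: mode -> function (invented two-component chain)
        comp_a = {"ok": lambda x: 2 * x, "stuck0": lambda x: 0}
        comp_b = {"ok": lambda x: x + 1, "inverted": lambda x: -x + 1}

        def diagnose(x_in, observed):
            candidates = []
            for mode_a, mode_b in product(comp_a, comp_b):
                predicted = comp_b[mode_b](comp_a[mode_a](x_in))
                if predicted == observed:          # keep only mode assignments consistent with the observation
                    candidates.append((mode_a, mode_b))
            # prefer diagnoses with the fewest faulted components
            return sorted(candidates, key=lambda modes: sum(m != "ok" for m in modes))

        print(diagnose(x_in=3, observed=-5))       # here only ('ok', 'inverted') is consistent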

  17. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model’s prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded by Boolean variables and converted into Conjunctive Normal Form (CNF). Then the proposed LTMS is employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches offer the best candidates efficiently based on the reasoning result until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain—the heat control unit of a spacecraft—where the proposed methods are significantly better than best-first and conflict-directed A* search methods.

  18. Prediction of the Midlatitude Response to Strong Madden-Julian Oscillation Events on S2S Time Scales

    Science.gov (United States)

    Tseng, K.-C.; Barnes, E. A.; Maloney, E. D.

    2018-01-01

    The Madden-Julian Oscillation (MJO) forces strong variations in extratropical atmospheric circulations that have important implications for subseasonal-to-seasonal (S2S) prediction. In particular, certain MJO phases are characterized by a consistent modulation of geopotential height in the North Pacific and adjacent regions across different MJO events. Until recently, only limited research has examined the relationship between these robust MJO tropical-extratropical teleconnections and model prediction skill. In this study, reanalysis data and numerical forecast model ensemble hindcasts are used to demonstrate that robust teleconnections in specific MJO phases and time lags are also characterized by excellent agreement in the prediction of geopotential height anomalies across model ensemble members at forecast leads of up to 3 weeks. These periods of enhanced prediction capabilities extend the possibility for skillful extratropical weather prediction beyond traditional 10-13 day limits.

  19. Prediction of the occurrence of related strong earthquakes in Italy

    International Nuclear Information System (INIS)

    Vorobieva, I.A.; Panza, G.F.

    1993-06-01

    In the seismic flow it is often observed that a Strong Earthquake (SE) is followed by Related Strong Earthquakes (RSEs), which occur near the epicentre of the SE with origin time rather close to the origin time of the SE. An algorithm for the prediction of the occurrence of a RSE has been developed and applied for the first time to the seismicity data of the California-Nevada region, and has been successfully tested in several regions of the World, the statistical significance of the result being 97%. So far, it has been possible to make five successful forward predictions, with no false alarms or failures to predict. The algorithm is applied here to the Italian territory, where the occurrence of RSEs is a particularly rare phenomenon. Our results show that the standard algorithm is directly applicable, without any adjustment of the parameters. Eleven SEs are considered. Of them, three are followed by a RSE, as predicted by the algorithm; eight SEs are not followed by a RSE, and the algorithm predicts this behaviour for seven of them, giving rise to only one false alarm. Since, in Italy, the series of strong earthquakes are quite often relatively short, the algorithm has been extended to handle such situations. The result of this experiment indicates that it is possible to attempt to test a SE for the occurrence of a RSE soon after the occurrence of the SE itself, performing timely "preliminary" recognition on reduced data sets. This fact, the high confidence level of the retrospective analysis, and the first successful forward predictions made in different parts of the World indicate that, even if additional tests are desirable, the algorithm can already be considered for routine application to Civil Defence. (author). Refs, 3 figs, 7 tabs

  20. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)]

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
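
    For reference, the image-plane rms quoted above is simply the root-mean-square distance between observed and model-predicted image positions; the sketch below uses made-up positions purely for illustration, not values from the Ares simulations.

        # Image-plane rms between observed and model-predicted image positions (assumed numbers).
        import numpy as np

        observed = np.array([[10.2, 4.1], [-8.7, 6.3], [2.4, -11.0]])    # arcsec (assumed)
        predicted = np.array([[10.5, 3.9], [-8.4, 6.6], [2.1, -10.6]])

        rms = np.sqrt(np.mean(np.sum((predicted - observed) ** 2, axis=1)))
        print(f"image-plane rms = {rms:.2f} arcsec")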

  1. Is It Possible to Predict Strong Earthquakes?

    Science.gov (United States)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using the groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to the local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of the recent scientific reports confirming that rodents sense imminent earthquakes and the population-genetic model of Kirschvink (Bull. Seismol. Soc. Am. 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over the period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal animal behavior observations, enables one to apply the standard "input-sensor-response" approach to determine what input signals trigger specific seismic escape brain activity responses.

  2. Composite control for raymond mill based on model predictive control and disturbance observer

    Directory of Open Access Journals (Sweden)

    Dan Niu

    2016-03-01

    In the Raymond mill grinding process, precise control of the operating load is vital for high product quality. However, strong external disturbances, such as variations in ore size and ore hardness, usually cause great performance degradation, and it is not easy to keep the current of the Raymond mill constant. Several control strategies have been proposed. However, most of them (such as proportional–integral–derivative and model predictive control) reject disturbances only through feedback regulation, which may lead to poor control performance in the presence of strong disturbances. For improving disturbance rejection, a control method based on model predictive control and a disturbance observer is put forward in this article. The scheme employs the disturbance observer as feedforward compensation and the model predictive control controller as feedback regulation. The test results illustrate that, compared with the model predictive control method, the proposed disturbance observer–model predictive control method obtains significant superiority in disturbance rejection, such as shorter settling time and smaller peak overshoot under strong disturbances.
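
    The feedforward/feedback structure described above can be sketched on a toy first-order plant y[k+1] = a*y[k] + b*(u[k] + d[k]): a disturbance observer estimates d from the previous step's model mismatch, and the controller subtracts that estimate. The plant parameters, the step disturbance, and the reduction of the MPC to a one-step receding-horizon law are all assumptions made for illustration, not the article's design.

        # Hedged sketch: disturbance-observer feedforward + one-step predictive feedback (assumed plant).
        a, b = 0.9, 0.5
        ref = 1.0
        y, y_prev, u_prev = 0.0, 0.0, 0.0
        for k in range(50):
            # disturbance observer: estimate d from the previous step's model mismatch
            d_hat = (y - a * y_prev) / b - u_prev if k > 0 else 0.0
            # one-step predictive feedback plus feedforward disturbance compensation
            u = (ref - a * y) / b - d_hat
            d = 0.4 if k >= 25 else 0.0            # step input disturbance (assumed)
            y_prev, u_prev = y, u
            y = a * y + b * (u + d)                # plant update
            if k % 10 == 0 or k == 26:
                print(f"k={k:2d}  y={y:6.3f}  d_hat={d_hat:6.3f}")

    In this toy loop the observer recovers the step disturbance within one sample and the output returns to the reference, which is the qualitative behaviour the composite scheme is meant to provide.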

  3. Prediction of strong ground motion based on scaling law of earthquake

    International Nuclear Information System (INIS)

    Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.

    1991-01-01

    In order to predict strong ground motion more practically, it is important to study how to use a semi-empirical method in cases where no appropriate observation records of actual small events are available as empirical Green's functions. We propose a prediction procedure that uses artificially simulated small ground motions as a substitute for the actual motions. First, we simulate the small-event motion by means of the stochastic simulation method proposed by Boore (1983), empirically taking into account path effects such as attenuation and broadening of the waveform envelope in the target region. Finally, we attempt to predict the strong ground motion due to a future large earthquake (M 7, Δ = 13 km) using the same summation procedure as the empirical Green's function method. We obtained the result that the characteristics of the synthetic motion based on the simulated M 5 motion were in good agreement with those obtained by the empirical Green's function method. (author)
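
    The summation step (an Irikura-type empirical Green's function scheme, here with a simulated small event standing in for an observed record) can be sketched as follows. The waveform, delays and scaling are toy assumptions; a real application would use the rupture geometry, rise time and scaling relations between the large and small events.

        # Hedged sketch: sum delayed, scaled copies of a small-event trace to synthesize a larger event.
        import numpy as np

        dt = 0.01
        t = np.arange(0, 10, dt)
        rng = np.random.default_rng(0)
        small = rng.standard_normal(t.size) * np.exp(-((t - 1.0) ** 2))   # toy small-event trace

        def synthesize_large(small, n=4, subfault_delay=0.3, slip_ratio=1.0):
            """Sum n*n delayed, scaled copies of the small-event motion (crude subfault grid)."""
            big = np.zeros(small.size + int(n * n * subfault_delay / dt))
            for i in range(n):
                for j in range(n):
                    shift = int((i + j) * subfault_delay / dt)   # crude rupture-propagation delay
                    big[shift:shift + small.size] += slip_ratio * small
            return big

        large = synthesize_large(small)
        print("peak small: %.2f   peak synthesized: %.2f" % (np.abs(small).max(), np.abs(large).max()))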

  4. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
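
    The simple linear benchmark mentioned above amounts to regressing seasonal Kiremt rainfall anomalies on a predicted Niño3.4 index. The sketch below uses synthetic data in place of the 1985-2005 observations and hindcasts.

        # Hedged sketch of a linear Nino3.4 benchmark with synthetic placeholder data.
        import numpy as np

        rng = np.random.default_rng(1)
        nino34 = rng.standard_normal(21)                              # predicted Nino3.4 index (assumed)
        rainfall = -0.6 * nino34 + 0.5 * rng.standard_normal(21)     # synthetic Kiremt rainfall anomaly

        slope, intercept = np.polyfit(nino34, rainfall, 1)
        prediction = slope * nino34 + intercept
        skill = np.corrcoef(prediction, rainfall)[0, 1]
        print(f"linear benchmark: slope={slope:.2f}, anomaly correlation={skill:.2f}")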

  5. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ) using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 ml·kg⁻¹·min⁻¹ to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones.
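
    The two validation statistics named above, RMSE against measured VO2 Peak and Cohen's kappa for Healthy Fitness Zone agreement, are easy to compute. In the sketch below the values and the 42 ml/kg/min cut-point are made-up assumptions, not the study's data or the FITNESSGRAM standards.

        # Hedged sketch: RMSE and Cohen's kappa for a zone classification (assumed numbers).
        import numpy as np

        measured = np.array([38.0, 44.5, 47.2, 35.1, 51.0, 41.8])    # VO2 peak, ml/kg/min (assumed)
        predicted = np.array([41.0, 43.0, 44.9, 38.5, 47.7, 42.6])

        rmse = np.sqrt(np.mean((predicted - measured) ** 2))

        cut = 42.0                                      # assumed Healthy Fitness Zone cut-point
        m_zone, p_zone = measured >= cut, predicted >= cut
        p_obs = np.mean(m_zone == p_zone)               # observed agreement
        p_exp = np.mean(m_zone) * np.mean(p_zone) + np.mean(~m_zone) * np.mean(~p_zone)
        kappa = (p_obs - p_exp) / (1 - p_exp)
        print(f"RMSE = {rmse:.2f} ml/kg/min, kappa = {kappa:.2f}")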

  6. Deep subsurface structure modeling and site amplification factor estimation in Niigata plain for broadband strong motion prediction

    International Nuclear Information System (INIS)

    Sato, Hiroaki

    2009-01-01

    This report addresses a methodology of deep subsurface structure modeling in the Niigata plain, Japan, to estimate the site amplification factor in the broadband frequency range for broadband strong motion prediction. In order to investigate deep S-wave velocity structures, we conducted microtremor array measurements at nine sites in the Niigata plain, which are important to estimate both long- and short-period ground motion. The estimated depths of the top of the basement layer agree well with those of the Green tuff formation as well as with the Bouguer anomaly distribution. Dispersion characteristics derived from the observed long-period ground motion records are well explained by the theoretical dispersion curves of Love wave group velocities calculated from the estimated subsurface structures. These results demonstrate that the deep subsurface structures from microtremor array measurements make it possible to estimate long-period ground motions in the Niigata plain. Moreover, the applicability of microtremor array exploration to inclined basement structures, such as a folding structure, is shown by two-dimensional finite-difference numerical simulations. The short-period site amplification factors in the Niigata plain are empirically estimated by spectral inversion analysis from the S-wave parts of strong motion data. The resultant site amplification characteristics are relatively large in the frequency range of about 1.5-5 Hz, and decay significantly with increasing frequency above about 5 Hz. However, these features cannot be explained by the calculations from the deep subsurface structures. The estimation of site amplification factors in the frequency range of about 1.5-5 Hz is improved by introducing a detailed shallow structure down to GL-20 m depth at a site. We also propose to consider random fluctuation in the modeling of the deep S-wave velocity structure for broadband site amplification factor estimation. The site amplification in the frequency range higher than about 5 Hz is filtered

  7. Rare Plants of Southeastern Hardwood Forests and the Role of Predictive Modeling

    International Nuclear Information System (INIS)

    Imm, D.W.; Shealy, H.E. Jr.; McLeod, K.W.; Collins, B.

    2001-01-01

    Habitat prediction models for rare plants can be useful when large areas must be surveyed or populations must be established. Investigators developed a habitat prediction model for four species of Southeastern hardwood forests. These four examples suggest that models based on resource and vegetation characteristics can accurately predict habitat, but only when plants are strongly associated with these variables and the scale of modeling coincides with habitat size

  8. Note on the hydrodynamic description of thin nematic films: Strong anchoring model

    KAUST Repository

    Lin, Te-Sheng; Cummings, Linda J.; Archer, Andrew J.; Kondic, Lou; Thiele, Uwe

    2013-01-01

    We discuss the long-wave hydrodynamic model for a thin film of nematic liquid crystal in the limit of strong anchoring at the free surface and at the substrate. We rigorously clarify how the elastic energy enters the evolution equation for the film thickness in order to provide a solid basis for further investigation: several conflicting models exist in the literature that predict qualitatively different behaviour. We consolidate the various approaches and show that the long-wave model derived through an asymptotic expansion of the full nemato-hydrodynamic equations with consistent boundary conditions agrees with the model one obtains by employing a thermodynamically motivated gradient dynamics formulation based on an underlying free energy functional. As a result, we find that in the case of strong anchoring the elastic distortion energy is always stabilising. To support the discussion in the main part of the paper, an appendix gives the full derivation of the evolution equation for the film thickness via asymptotic expansion. © 2013 AIP Publishing LLC.

  9. Modelling and prediction of non-stationary optical turbulence behaviour

    NARCIS (Netherlands)

    Doelman, N.J.; Osborn, J.

    2016-01-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument

  10. Procedure to predict the storey where plastic drift dominates in two-storey building under strong ground motion

    DEFF Research Database (Denmark)

    Hibino, Y.; Ichinose, T.; Costa, J.L.D.

    2009-01-01

    A procedure is presented to predict the storey where plastic drift dominates in two-storey buildings under strong ground motion. The procedure utilizes the yield strength and the mass of each storey as well as the peak ground acceleration. The procedure is based on two different assumptions: (1.... The efficiency of the procedure is verified by dynamic response analyses using an elasto-plastic model.

  11. Enhanced outage prediction modeling for strong extratropical storms and hurricanes in the Northeastern United States

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Wanik, D. W.; Bhuiyan, M. A. E.; Zhang, X.; Yang, J.; Astitha, M.; Frediani, M. E.; Schwartz, C. S.; Pardakhti, M.

    2016-12-01

    The overwhelming majority of human activities need reliable electric power. Severe weather events can cause power outages, resulting in substantial economic losses and a temporary worsening of living conditions. Accurate prediction of these events and the communication of forecasted impacts to the affected utilities are necessary for efficient emergency preparedness and mitigation. The University of Connecticut Outage Prediction Model (OPM) uses regression tree models, high-resolution weather reanalysis and real-time weather forecasts (WRF and the NCAR ensemble), airport station data, vegetation and electric grid characteristics, and historical outage data to forecast the number and spatial distribution of outages in the power distribution grid located within dense vegetation. Recent OPM improvements consist of improved storm classification and the addition of new predictive weather-related variables, and are demonstrated using a leave-one-storm-out cross-validation based on 130 severe extratropical storms and two hurricanes (Sandy and Irene) in the Northeast US. We show that it is possible to predict the number of trouble spots causing outages in the electric grid with a median absolute percentage error as low as 27% for some storm types, and at most around 40%, on a scale that spans four orders of magnitude, from a few outages to tens of thousands. This outage information can be communicated to the electric utility to manage the allocation of crews and equipment and minimize the recovery time for an upcoming storm hazard.
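
    A leave-one-storm-out evaluation of a regression-tree outage model, in the spirit of the OPM validation described above, can be sketched as follows; the features, outage counts and tree settings are synthetic placeholders, not the OPM or any utility's data.

        # Hedged sketch: leave-one-storm-out cross-validation of a regression tree on synthetic data.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(2)
        n_storms, n_features = 30, 5
        X = rng.standard_normal((n_storms, n_features))                          # e.g. wind, gust, precipitation, leaf area (assumed)
        outages = np.exp(2.0 + 1.5 * X[:, 0] + rng.normal(0, 0.3, n_storms))     # synthetic outage counts

        errors = []
        for held_out in range(n_storms):                                         # leave one storm out
            train = np.arange(n_storms) != held_out
            model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X[train], outages[train])
            pred = model.predict(X[held_out][None, :])[0]
            errors.append(abs(pred - outages[held_out]) / outages[held_out])

        print(f"median absolute percentage error: {100 * np.median(errors):.1f}%")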

  12. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a `yes' or `no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.

  13. Prediction of strongly-heated internal gas flows

    International Nuclear Information System (INIS)

    McEligot, D.M.; Shehata, A.M.; Kunugi, Tomoaki

    1997-01-01

    The purposes of the present article are to remind practitioners why the usual textbook approaches may not be appropriate for treating gas flows heated from the surface with large heat fluxes and to review the successes of some recent applications of turbulence models to this case. Simulations from various turbulence models have been assessed by comparison to the measurements of internal mean velocity and temperature distributions by Shehata for turbulent, laminarizing and intermediate flows with significant gas property variation. Of about fifteen models considered, five were judged to provide adequate predictions.

  14. Numerical prediction of local transitional features of turbulent forced gas flows in circular tubes with strong heating

    International Nuclear Information System (INIS)

    Ezato, Koichiro; Kunugi, Tomoaki; Shehata, A.M.; McEligot, D.M.

    1997-03-01

    Previous numerical simulations of laminarization due to heating of turbulent pipe flow were assessed by comparison with only macroscopic characteristics such as the heat transfer coefficient and pressure drop, since no experimental data on the local distributions of velocity and temperature in such flow situations were available. Recently, Shehata and McEligot reported the first measurements of local distributions of velocity and temperature for turbulent forced air flow in a vertical circular tube with strong heating. They carried out the experiments in three situations, from turbulent flow to laminarizing flow, according to the heating rate. In the present study, we numerically analyzed the local transitional features of turbulent flow undergoing laminarization due to strong heating in their experiments by using an advanced low-Re two-equation turbulence model. As a result, we successfully predicted the local distributions of velocity and temperature as well as the macroscopic characteristics in the three turbulent flow conditions. Through the present study, a numerical procedure has been established to predict with sufficient accuracy the local characteristics, such as the velocity distribution, of turbulent flow with large thermal-property variation and of laminarizing flow due to strong heating. (author). 60 refs

  15. Modelling decreased food chain accumulation of HOCs due to strong sorption to carbonaceous materials and metabolic transformation

    NARCIS (Netherlands)

    Moermond, C.T.A.; Traas, T.P.; Roessink, I.; Veltman, K.; Hendriks, A.J.; Koelmans, A.A.

    2007-01-01

    The predictive power of bioaccumulation models may be limited when they do not account for strong sorption of organic contaminants to carbonaceous materials (CM) such as black carbon, and when they do not include metabolic transformation. We tested a food web accumulation model, including sorption

  16. Strong ground motion prediction applying dynamic rupture simulations for Beppu-Haneyama Active Fault Zone, southwestern Japan

    Science.gov (United States)

    Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.

    2017-12-01

    We conducted strong ground motion prediction for the active Beppu-Haneyama Fault zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through Oita and Beppu cities, strong ground motion as well as fault displacement may strongly affect the cities. We constructed a 3-dimensional velocity structure model of a sedimentary basin, the Beppu Bay basin, through which the fault zone runs and in which Oita and Beppu cities are located. The minimum shear wave velocity of the 3-D model is 500 m/s. An additional 1-D structure is modeled for sites with softer sediment in the Holocene plain area. We observed, collected, and compiled data obtained from microtremor surveys, ground motion observations, boreholes, etc., including phase velocities and H/V ratios. A finer structure of the Oita Plain is modeled as a 250 m-mesh model, with an empirical relation among N-value, lithology, depth and Vs using borehole data, and then validated with the phase velocity data obtained by the dense microtremor array observation (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (for the high-frequency waves), a 3-D finite-difference method (for the low-frequency waves) and a 1-D amplification calculation. The fault geometry has been determined based on reflection surveys and the active fault map. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from the source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU 2017). Fault parameters such as the average stress drop, the size of the asperities, etc., are determined based on an empirical relation proposed by Irikura and Miyake (2001). As a result, strong ground motion exceeding 100 cm/s is predicted on the hanging-wall side of the Oita Plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.

  17. Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century.

    Science.gov (United States)

    Ganusov, Vitaly V

    2016-01-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to logically follow from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, may it allow one to understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows inconsistency of the model (defined by a specific set of assumptions) and data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why rejected models failed to explain the data; and (4) to suggest experiments which would allow one to discriminate between the remaining alternative models. The use of strong inference is likely to provide better robustness of predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the Twenty-First century.
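
    The workflow above (develop alternative models, confront each with the data, reject the inconsistent ones) can be illustrated with a minimal sketch: two candidate growth models fitted to the same synthetic dataset and compared by AIC. The models and data are generic stand-ins, not examples from the paper.

        # Hedged sketch: compare alternative models against the same data (synthetic example).
        import numpy as np
        from scipy.optimize import curve_fit

        t = np.linspace(0, 10, 25)
        rng = np.random.default_rng(3)
        data = 100 / (1 + 99 * np.exp(-t)) + rng.normal(0, 2, t.size)   # synthetic "observations"

        def aic(residuals, n_params):
            """Akaike information criterion for a least-squares fit."""
            n = residuals.size
            return n * np.log(np.sum(residuals ** 2) / n) + 2 * n_params

        # alternative model 1: straight line (ordinary least squares)
        slope, intercept = np.polyfit(t, data, 1)
        aic_line = aic(data - (slope * t + intercept), 2)

        # alternative model 2: logistic growth (nonlinear least squares)
        def logistic(t, k, n0, r):
            return k / (1 + (k / n0 - 1) * np.exp(-r * t))

        params, _ = curve_fit(logistic, t, data, p0=[90.0, 2.0, 0.5], maxfev=10000)
        aic_logistic = aic(data - logistic(t, *params), 3)

        print(f"AIC line: {aic_line:.1f}   AIC logistic: {aic_logistic:.1f}   (lower is better)")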

  18. Strong inference in mathematical modeling: a method for robust science in the 21st century

    Directory of Open Access Journals (Sweden)

    Vitaly V. Ganusov

    2016-07-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to logically follow from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, may it allow one to understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers [1], the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows inconsistency of the model (defined by a specific set of assumptions) and data. Following the principle of strong inference for experimental sciences proposed by Platt [2], I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why rejected models failed to explain the data; and (4) to suggest experiments which would allow one to discriminate between the remaining alternative models. The use of strong inference is likely to provide better robustness of predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the 21st century.

  19. Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century

    Science.gov (United States)

    Ganusov, Vitaly V.

    2016-01-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to logically follow from a set of assumptions. Only when this tool is applied appropriately, just as a microscope is used to look at small objects, can it help us understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model reveals an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest “strong inference in mathematical modeling” as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why the rejected models failed to explain the data, and (4) to suggest experiments which would allow discrimination between the remaining alternative models. The use of strong inference is likely to improve the robustness of the predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the twenty-first century. PMID:27499750

  20. Catalytic cracking models developed for predictive control purposes

    Directory of Open Access Journals (Sweden)

    Dag Ljungqvist

    1993-04-01

    Full Text Available The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.

  1. Strongly coupled models at the LHC

    International Nuclear Information System (INIS)

    Vries, Maikel de

    2014-10-01

    In this thesis strongly coupled models where the Higgs boson is composite are discussed. These models provide an explanation for the origin of electroweak symmetry breaking including a solution for the hierarchy problem. Strongly coupled models provide an alternative to the weakly coupled supersymmetric extensions of the Standard Model and lead to different and interesting phenomenology at the Large Hadron Collider (LHC). This thesis discusses two particular strongly coupled models, a composite Higgs model with partial compositeness and the Littlest Higgs model with T-parity - a composite model with collective symmetry breaking. The phenomenology relevant for the LHC is covered and the applicability of effective operators for these types of strongly coupled models is explored. First, a composite Higgs model with partial compositeness is discussed. In this model right-handed light quarks could be significantly composite, yet compatible with experimental searches at the LHC and precision tests on Standard Model couplings. In these scenarios, which are motivated by flavour physics, large cross sections for the production of new resonances coupling to light quarks are expected. Experimental signatures of right-handed compositeness at the LHC are studied, and constraints on the parameter space of these models are derived using recent results by ATLAS and CMS. Furthermore, dedicated searches for multi-jet signals at the LHC are proposed which could significantly improve the sensitivity to signatures of right-handed compositeness. The Littlest Higgs model with T-parity, providing an attractive solution to the fine-tuning problem, is discussed next. This solution is only natural if its intrinsic symmetry breaking scale f is relatively close to the electroweak scale. The constraints from the latest results of the 8 TeV run at the LHC are examined. The model's parameter space is being excluded based on a combination of electroweak precision observables, Higgs precision

  2. South African seasonal rainfall prediction performance by a coupled ocean-atmosphere model

    CSIR Research Space (South Africa)

    Landman, WA

    2010-12-01

    Full Text Available Evidence is presented that coupled ocean-atmosphere models can already outscore computationally less expensive atmospheric models. However, if the atmospheric models are forced with highly skillful SST predictions, they may still be a very strong...

  3. Partial widths of boson resonances in the quark-gluon model of strong interactions

    International Nuclear Information System (INIS)

    Kaidalov, A.B.; Volkovitsky, P.E.

    1981-01-01

    The quark-gluon model of strong interactions, based on the topological expansion and the string model, is used for the calculation of the partial widths of boson resonances in channels with two pseudoscalar mesons. The partial widths of mesons with arbitrary spins lying on the vector and tensor Regge trajectories are expressed in terms of the rho-meson width alone. The violation of SU(3) symmetry increases with the growth of the spin of the resonance. The theoretical predictions are in good agreement with experimental data [ru

  4. Nonelectrolyte NRTL-NRF model to study thermodynamics of strong and weak electrolyte solutions

    Energy Technology Data Exchange (ETDEWEB)

    Haghtalab, Ali, E-mail: haghtala@modares.ac.i [Department of Chemical Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of); Shojaeian, Abolfazl; Mazloumi, Seyed Hossein [Department of Chemical Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of)

    2011-03-15

    An electrolyte activity coefficient model is proposed by combining the non-electrolyte NRTL-NRF local composition model and the Pitzer-Debye-Hückel equation as short-range and long-range contributions, respectively. With two adjustable parameters per electrolyte, the present model is applied to the correlation of the mean activity coefficients of more than 150 strong aqueous electrolyte solutions at 298.15 K. The results of the present model are also compared with other local composition models such as the electrolyte-NRTL, electrolyte-NRTL-NRF and electrolyte-Wilson-NRF models. Moreover, the present model is used for prediction of the osmotic coefficient of several aqueous binary electrolyte systems at 298.15 K. The present activity coefficient model is also adopted for representation of the nonideality of acid gases, as weak electrolytes, soluble in alkanolamine solutions. The model is applied to the calculation of solubility and heat of absorption (enthalpy of solution) of acid gas in the two {(H₂O + MDEA + CO₂) and (H₂O + MDEA + H₂S)} systems at different conditions. The results demonstrate that the present model can be successfully applied to study the thermodynamic properties of both strong and weak electrolyte solutions.

  5. Strong exploration of a cast iron pipe failure model

    International Nuclear Information System (INIS)

    Moglia, M.; Davis, P.; Burn, S.

    2008-01-01

    A physical probabilistic failure model for buried cast iron pipes is described, which is based on the fracture mechanics of the pipe failure process. Such a model is useful in the asset management of buried pipelines. The model is then applied within a Monte-Carlo simulation framework after adding stochasticity to input variables. Historical failure rates are calculated based on a database of 81,595 pipes and their recorded failures, and model parameters are chosen to provide the best fit between historical and predicted failure rates. This provides an estimated corrosion rate distribution, which agrees well with experimental results. The first model design was chosen in a deliberately simplistic fashion in order to allow for further strong exploration of model assumptions. Accordingly, first runs of the initial model resulted in a poor quantitative and qualitative fit with regard to failure rates. However, by exploring natural additional assumptions, such as those relating to stochastic loads, a set of assumptions was chosen which improved the model to a stage where an acceptable fit was achieved. The model bridges the gap between the micro- and macro-level, and this is the novelty of the approach. In this model, data can be used both from the macro-level, in terms of failure rates, and from the micro-level, for example in terms of corrosion rates

  6. Empirical equations for the prediction of PGA and pseudo spectral accelerations using Iranian strong-motion data

    Science.gov (United States)

    Zafarani, H.; Luzi, Lucia; Lanzano, Giovanni; Soghrat, M. R.

    2018-01-01

    A recently compiled, comprehensive, and good-quality strong-motion database of Iranian earthquakes has been used to develop local empirical equations for the prediction of peak ground acceleration (PGA) and 5%-damped pseudo-spectral accelerations (PSA) up to 4.0 s. The equations account for style of faulting and four site classes, and use the horizontal distance from the surface projection of the rupture plane as the distance measure. The model predicts the geometric mean of the horizontal components and the vertical-to-horizontal ratio. A total of 1551 free-field acceleration time histories recorded at distances of up to 200 km from 200 shallow earthquakes (depth < 30 km) with moment magnitudes ranging from Mw 4.0 to 7.3 are used to perform regression analysis using the random effects algorithm of Abrahamson and Youngs (Bull Seism Soc Am 82:505-510, 1992), which considers between-event as well as within-event errors. Due to the limited data used in the development of previous Iranian ground motion prediction equations (GMPEs) and the strong trade-offs between different terms of GMPEs, the coefficients of the previously determined models are likely to be less precisely constrained than those of the current study. The richer database of the current study allows improving on prior works by considering additional variables that could not previously be adequately constrained. Here, a functional form used by Boore and Atkinson (Earthquake Spect 24:99-138, 2008) and Bindi et al. (Bull Seism Soc Am 9:1899-1920, 2011) has been adopted that allows accounting for the saturation of ground motions at close distances. A regression has also been performed for the V/H ratio in order to retrieve vertical components by scaling horizontal spectra. In order to take into account epistemic uncertainty, the new model can be used along with other appropriate GMPEs through a logic tree framework for seismic hazard assessment in Iran and the Middle East region.
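
    As a rough illustration of the kind of functional form described (magnitude scaling plus near-distance saturation), a GMPE evaluation might look like the sketch below; the function name and every coefficient value are placeholders, not the regressed coefficients of this study.

        import numpy as np

        def log10_pga(mag, r_jb, c=None):
            # Illustrative GMPE: log10(PGA in g); c holds placeholder coefficients only.
            if c is None:
                c = {"e1": -1.0, "b1": 0.5, "b2": -0.06, "c1": -1.2, "h": 7.0, "m_ref": 5.5}
            dm = mag - c["m_ref"]
            # Effective distance sqrt(Rjb^2 + h^2) keeps motions finite as Rjb -> 0,
            # i.e. the "saturation of ground motions at close distances" mentioned above.
            r_eff = np.sqrt(r_jb**2 + c["h"]**2)
            return c["e1"] + c["b1"] * dm + c["b2"] * dm**2 + c["c1"] * np.log10(r_eff)

        # Example: an Mw 6.5 event recorded 10 km from the surface projection of the rupture.
        print(10 ** log10_pga(6.5, 10.0), "g (illustrative value only)")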

  7. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...

  8. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  9. Evaluating Predictive Models of Software Quality

    Science.gov (United States)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we concluded by suggesting directions for further studies.

  10. Modeling loss and backscattering in a photonic-bandgap fiber using strong perturbation

    Science.gov (United States)

    Zamani Aghaie, Kiarash; Digonnet, Michel J. F.; Fan, Shanhui

    2013-02-01

    We use coupled-mode theory with strong perturbation to model the loss and backscattering coefficients of a commercial hollow-core fiber (NKT Photonics' HC-1550-02 fiber) induced by the frozen-in longitudinal perturbations of the fiber cross section. Strong perturbation is used, for the first time to the best of our knowledge, because the large difference between the refractive indices of the two fiber materials (silica and air) makes the conventional weak-perturbation approach less accurate. We first study the loss and backscattering using the mathematical description of conventional surface-capillary waves (SCWs). This model implicitly assumes that the mechanical waves on the core wall of a PBF have the same power spectral density (PSD) as the waves that develop on an infinitely thick cylindrical tube with the same diameter as the PBF core. The loss and backscattering coefficients predicted with this thick-wall SCW roughness are 0.5 dB/km and 1.1×10⁻¹⁰ mm⁻¹, respectively. These values are more than one order of magnitude smaller than the measured values (20-30 dB/km and ~1.5×10⁻⁹ mm⁻¹, respectively). This result suggests that the thick-wall SCW PSD is not representative of the roughness of our fiber. We found that this discrepancy occurs at least in part because the effect of the finite thickness of the silica membranes (only ~120 nm) is neglected. We present a new expression for the PSD that takes into account this finite thickness and demonstrate that the finite thickness substantially increases the roughness. The loss and backscattering coefficients predicted with this thin-film SCW PSD are 30 dB/km and 1.3×10⁻⁹ mm⁻¹, which are both close to the measured values. We also show that the thin-film SCW PSD accurately predicts the roughness PSD measured by others in a solid-core photonic-crystal fiber.

  11. Test of 1-D transport models, and their predictions for ITER

    International Nuclear Information System (INIS)

    Mikkelsen, D.; Bateman, G.; Boucher, D.

    2001-01-01

    A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼ 1GW (Q=10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)

  12. Tests of 1-D transport models, and their predictions for ITER

    International Nuclear Information System (INIS)

    Mikkelsen, D.R.; Bateman, G.; Boucher, D.

    1999-01-01

    A number of proposed tokamak thermal transport models are tested by comparing their predictions with measurements from several tokamaks. The necessary data have been provided for a total of 75 discharges from C-mod, DIII-D, JET, JT-60U, T10, and TFTR. A standard prediction methodology has been developed, and three codes have been benchmarked; these 'standard' codes have been relied on for testing most of the transport models. While a wide range of physical transport processes has been tested, no single model has emerged as clearly superior to all competitors for simulating H-mode discharges. In order to winnow the field, further tests of the effect of sheared flows and of the 'stiffness' of transport are planned. Several of the models have been used to predict ITER performance, with widely varying results. With some transport models ITER's predicted fusion power depends strongly on the 'pedestal' temperature, but ∼ 1GW (Q=10) is predicted for most models if the pedestal temperature is at least 4 keV. (author)

  13. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows the model quality to be evaluated in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
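
    The idea of scoring a distribution model by the likelihood of held-out measurements can be sketched as follows; the Gaussian predictive density and the toy numbers are assumptions made purely for illustration.

        import numpy as np

        def mean_nlpd(y_obs, mu_pred, var_pred):
            # Average negative log predictive density under a Gaussian predictive model.
            # Lower values indicate that the predicted means *and* variances explain the
            # held-out gas concentration measurements better.
            var_pred = np.maximum(var_pred, 1e-12)  # guard against zero variance
            return np.mean(0.5 * np.log(2 * np.pi * var_pred)
                           + 0.5 * (y_obs - mu_pred) ** 2 / var_pred)

        # Toy comparison: same mean predictions, two different variance estimates.
        y = np.array([0.2, 0.8, 0.1, 1.5])
        mu = np.array([0.3, 0.6, 0.2, 1.1])
        print(mean_nlpd(y, mu, np.full(4, 0.005)))  # overconfident variances -> larger (worse) NLPD
        print(mean_nlpd(y, mu, np.full(4, 0.05)))   # variances matching the residual scale -> smaller NLPD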

  14. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  15. Predictive models for pressure ulcers from intensive care unit electronic health records using Bayesian networks.

    Science.gov (United States)

    Kaewprag, Pacharmon; Newton, Cheryl; Vermillion, Brenda; Hyun, Sookyung; Huang, Kun; Machiraju, Raghu

    2017-07-05

    We develop predictive models enabling clinicians to better understand and explore patient clinical data along with risk factors for pressure ulcers in intensive care unit patients from electronic health record data. Identifying accurate risk factors for pressure ulcers is essential to determining appropriate prevention strategies; in this work we examine medication, diagnosis, and traditional Braden pressure ulcer assessment scale measurements as patient features. In order to predict pressure ulcer incidence and better understand the structure of related risk factors, we construct Bayesian networks from patient features. Bayesian network nodes (features) and edges (conditional dependencies) are simplified with statistical network techniques. Upon reviewing a network visualization of our model, our clinician collaborators were able to identify strong relationships between risk factors widely recognized as associated with pressure ulcers. We present a three-stage framework for predictive analysis of patient clinical data: 1) developing electronic health record feature extraction functions with the assistance of clinicians, 2) simplifying features, and 3) building Bayesian network predictive models. We evaluate all combinations of Bayesian network models from different search algorithms, scoring functions, prior structure initializations, and sets of features. From the EHRs of 7,717 ICU patients, we construct Bayesian network predictive models from 86 medication, diagnosis, and Braden scale features. Our model not only identifies known and suspected high-risk factors for pressure ulcers, but also substantially increases the sensitivity of the prediction - nearly three times higher compared to logistic regression models - without sacrificing overall accuracy. We visualize a representative model with which our clinician collaborators identify strong relationships between risk factors widely recognized as associated with pressure ulcers. Given the strong adverse effect of pressure ulcers

  16. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and computing event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
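
    A minimal sketch of the kind of abstaining decision rule described is shown below; the threshold values and labels are illustrative assumptions, and the confidence criteria actually derived in the paper are not reproduced here.

        def decide(event_prob, alarm_threshold=0.7, clear_threshold=0.2):
            # Trade off delayed detection against incorrect assessment by abstaining
            # when the estimated event probability is too uncertain to act on.
            if event_prob >= alarm_threshold:
                return "raise alarm"
            if event_prob <= clear_threshold:
                return "no alarm"
            return "abstain (collect more observations)"

        for p in (0.85, 0.5, 0.1):
            print(p, "->", decide(p))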

  17. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media.

    Science.gov (United States)

    Ma, Manman; Xu, Zhenli

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy-modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. An asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating the ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond the mean-field theory, and also show a strong effect of the space- or field-dependent dielectric permittivity on the double-layer structure.

  18. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Manman, E-mail: mmm@sjtu.edu.cn; Xu, Zhenli, E-mail: xuzl@sjtu.edu.cn [Department of Mathematics, Institute of Natural Sciences, and MoE Key Laboratory of Scientific and Engineering Computing, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy-modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. An asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulating the ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond the mean-field theory, and also show a strong effect of the space- or field-dependent dielectric permittivity on the double-layer structure.

  19. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    Science.gov (United States)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
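
    One hedged way to mimic the model-class comparison described (plausibility of each candidate formula given the data) is a BIC-style approximation to the evidence for two candidate attenuation formulas; the functional forms, coefficients, and synthetic data below are placeholders, not the Tangshan database or the paper's Monte Carlo estimator.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic "records": magnitude, distance, and noisy log-PGA (placeholders).
        n = 200
        mag = rng.uniform(4.0, 7.0, n)
        dist = rng.uniform(5.0, 150.0, n)
        y = 0.9 * mag - 1.1 * np.log10(dist) - 1.5 + rng.normal(0, 0.3, n)

        def bic(design, y):
            # Least-squares fit plus the Bayesian information criterion (lower is better).
            beta, *_ = np.linalg.lstsq(design, y, rcond=None)
            resid = y - design @ beta
            k = design.shape[1] + 1          # +1 for the error variance
            return n * np.log(np.mean(resid**2)) + k * np.log(n)

        ones = np.ones(n)
        formula_simple = np.column_stack([ones, mag, np.log10(dist)])
        formula_extra = np.column_stack([ones, mag, mag**2, np.log10(dist), dist])

        print("simple formula BIC:", round(bic(formula_simple, y), 1))
        print("extra-term formula BIC:", round(bic(formula_extra, y), 1))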

  20. Strong coupling electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Barklow, T.L. [Stanford Linear Accelerator Center, Menlo Park, CA (United States); Burdman, G. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Chivukula, R.S. [Boston Univ., MA (United States). Dept. of Physics

    1997-04-01

    The authors review models of electroweak symmetry breaking due to new strong interactions at the TeV energy scale and discuss the prospects for their experimental tests. They emphasize the direct observation of the new interactions through high-energy scattering of vector bosons. They also discuss indirect probes of the new interactions and exotic particles predicted by specific theoretical models.

  1. Strong coupling electroweak symmetry breaking

    International Nuclear Information System (INIS)

    Barklow, T.L.; Burdman, G.; Chivukula, R.S.

    1997-04-01

    The authors review models of electroweak symmetry breaking due to new strong interactions at the TeV energy scale and discuss the prospects for their experimental tests. They emphasize the direct observation of the new interactions through high-energy scattering of vector bosons. They also discuss indirect probes of the new interactions and exotic particles predicted by specific theoretical models

  2. Quantitative prediction of strong motion for a potential earthquake fault

    Directory of Open Access Journals (Sweden)

    Shamita Das

    2010-02-01

    Full Text Available This paper describes a new method for calculating strong motion records for a given seismic region on the basis of the laws of physics, using information on the tectonics and physical properties of the earthquake fault. Our method is based on an earthquake model, called a «barrier model», which is characterized by five source parameters: fault length, width, maximum slip, rupture velocity, and barrier interval. The first three parameters may be constrained from plate tectonics, and the fourth parameter is roughly a constant. The most important parameter controlling the earthquake strong motion is the last parameter, the «barrier interval». There are three methods to estimate the barrier interval for a given seismic region: (1) surface measurement of slip across fault breaks, (2) model fitting with observed near- and far-field seismograms, and (3) scaling law data for small earthquakes in the region. The barrier intervals were estimated for a dozen earthquakes and four seismic regions by the above three methods. Our preliminary results for California suggest that the barrier interval may be determined if the maximum slip is given. The relation between the barrier interval and maximum slip varies from one seismic region to another. For example, the interval appears to be unusually long for Kilauea, Hawaii, which may explain why only scattered evidence of strong ground shaking was observed in the epicentral area of the Island of Hawaii earthquake of November 29, 1975. The stress drop associated with an individual fault segment estimated from the barrier interval and maximum slip lies between 100 and 1000 bars. These values are about one order of magnitude greater than those estimated earlier by the use of crack models without barriers. Thus, the barrier model can resolve, at least partially, the well known discrepancy between the stress drops measured in the laboratory and those estimated for earthquakes.
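
    A back-of-the-envelope check of the quoted stress-drop range can be made by assuming a crack-like scaling of stress drop with maximum slip over the barrier interval; the scaling constant, shear modulus, and sample values below are assumptions for illustration, not parameters taken from the paper.

        # Order-of-magnitude stress drop for a single fault segment:
        # delta_sigma ~ C * mu * D_max / L_barrier, with C taken as 1 here (assumption).
        mu = 3.0e10          # crustal shear modulus, Pa (typical value)
        d_max = 2.0          # maximum slip on the segment, m (illustrative)
        l_barrier = 3.0e3    # barrier interval, m (illustrative)

        delta_sigma_pa = mu * d_max / l_barrier
        print(delta_sigma_pa / 1.0e5, "bars")   # 1 bar = 1e5 Pa; ~200 bars here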

  3. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.
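
    A simplified stand-in for this scheme is sketched below, assuming a plain mutual-information screen in place of the causal preselection step and nearest-neighbor regression as the model-free predictor; the data, number of retained predictors, and neighbor count are synthetic choices, not those of the paper.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)

        # Synthetic multivariate predictors; only columns 0 and 3 actually drive the target.
        X = rng.normal(size=(500, 8))
        y = 0.8 * X[:, 0] - 0.6 * X[:, 3] + 0.1 * rng.normal(size=500)

        # Preselection step (here: univariate mutual information instead of causal drivers).
        mi = mutual_info_regression(X, y, random_state=0)
        selected = np.argsort(mi)[-2:]          # keep the two most informative predictors
        print("selected predictor indices:", sorted(selected.tolist()))

        # Model-free prediction on the reduced predictor set.
        knn = KNeighborsRegressor(n_neighbors=10).fit(X[:400][:, selected], y[:400])
        print("test R^2:", round(knn.score(X[400:][:, selected], y[400:]), 3))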

  4. Model for prediction of strip temperature in hot strip steel mill

    International Nuclear Information System (INIS)

    Panjkovic, Vladimir

    2007-01-01

    Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, heat of scale formation and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested with plant measurements and benchmarked against other models in the literature, and its performance was very good

  5. Model for prediction of strip temperature in hot strip steel mill

    Energy Technology Data Exchange (ETDEWEB)

    Panjkovic, Vladimir [BlueScope Steel, TEOB, 1 Bayview Road, Hastings Vic. 3915 (Australia)]. E-mail: Vladimir.Panjkovic@BlueScopeSteel.com

    2007-10-15

    Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, heat of scale formation and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested with plant measurements and benchmarked against other models in the literature, and its performance was very good.
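
    A crude sketch of the radiative-plus-convective cooling terms mentioned is given below, integrating strip temperature over a short travel time; the emissivity, heat-transfer coefficient, and strip dimensions are assumed values, and the other heat sources and sinks in the mill model (roll-gap deformation, water boiling, scale formation, roll contact) are omitted.

        import numpy as np

        # Assumed properties (illustrative, not the mill model's values).
        eps, sigma = 0.8, 5.67e-8               # emissivity, Stefan-Boltzmann constant (W/m^2K^4)
        h_conv = 15.0                           # convective coefficient to air, W/m^2K
        rho, cp, thick = 7800.0, 650.0, 0.030   # density, heat capacity, strip thickness (m)
        t_air = 300.0                           # ambient temperature, K

        def cool(temp_k, seconds, dt=0.1):
            # Explicit integration of strip cooling from both surfaces (radiation + convection).
            for _ in np.arange(0.0, seconds, dt):
                q = eps * sigma * (temp_k**4 - t_air**4) + h_conv * (temp_k - t_air)  # W/m^2 per face
                temp_k -= 2.0 * q * dt / (rho * cp * thick)   # factor 2: top and bottom surfaces
            return temp_k

        print(round(cool(1300.0, 20.0), 1), "K after 20 s (illustrative)")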

  6. The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model

    Science.gov (United States)

    Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.

    2015-05-01

    A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.
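
    The skill measures referred to (anomaly correlation and root-mean-square error of, for example, Niño3.4 hindcasts) can be computed as in the sketch below; the arrays are placeholders standing in for bias-corrected ensemble-mean hindcasts and observed anomalies.

        import numpy as np

        def anomaly_correlation(forecast_anom, observed_anom):
            f = forecast_anom - forecast_anom.mean()
            o = observed_anom - observed_anom.mean()
            return float(np.sum(f * o) / np.sqrt(np.sum(f**2) * np.sum(o**2)))

        def rmse(forecast_anom, observed_anom):
            return float(np.sqrt(np.mean((forecast_anom - observed_anom) ** 2)))

        # Placeholder Nino3.4 anomalies (degC) for a handful of start dates.
        obs = np.array([1.2, -0.5, 0.3, 2.0, -1.1, 0.0])
        fcst = np.array([0.9, -0.2, 0.4, 1.6, -0.8, 0.2])
        print("ACC:", round(anomaly_correlation(fcst, obs), 2), " RMSE:", round(rmse(fcst, obs), 2))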

  7. Monitoring of the future strong Vrancea events by using the CN formal earthquake prediction algorithm

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Novikova, O.V.; Panza, G.F.; Radulian, M.

    2003-06-01

    The preparation process of the strong subcrustal events originating in the Vrancea region, Romania, is monitored using an intermediate-term medium-range earthquake prediction method - the CN algorithm (Keilis-Borok and Rotwain, 1990). We present the results of the monitoring of the preparation of future strong earthquakes for the time interval from January 1, 1994 (1994.1.1), to January 1, 2003 (2003.1.1), using the updated catalogue of the Romanian local network. The database considered for the CN monitoring of the preparation of future strong earthquakes in Vrancea covers the period from 1966.3.1 to 2003.1.1 and the geographical rectangle 44.8 deg - 48.4 deg N, 25.0 deg - 28.0 deg E. The algorithm correctly identifies, by retrospective prediction, the TIPs (times of increased probability) for all three strong earthquakes (Mo=6.4) that occurred in Vrancea during this period. The cumulative duration of the TIPs represents 26.5% of the total period of time considered (1966.3.1-2003.1.1). The monitoring of current seismicity using the CN algorithm has been carried out since 1994. No strong earthquakes occurred from 1994.1.1 to 2003.1.1, but CN declared an extended false alarm from 1999.5.1 to 2000.11.1. No alarm has currently been declared in the region (on January 1, 2003), as can be seen from the TIP diagram shown. (author)

  8. Swarm Intelligence-Based Hybrid Models for Short-Term Power Load Prediction

    Directory of Open Access Journals (Sweden)

    Jianzhou Wang

    2014-01-01

    Full Text Available Swarm intelligence (SI) is widely and successfully applied in the engineering field to solve practical optimization problems, and various hybrid models based on SI algorithms and statistical models have been developed to further improve predictive ability. In this paper, hybrid intelligent forecasting models based on cuckoo search (CS) as well as singular spectrum analysis (SSA), time series, and machine learning methods are proposed for short-term power load prediction. The forecasting performance of the proposed models is augmented by a rolling multistep strategy over the prediction horizon. The test results demonstrate the effectiveness of SSA and CS in tuning the seasonal autoregressive integrated moving average (SARIMA) and support vector regression (SVR) models for load forecasting, which indicates that both the SSA-based data denoising and the SI-based intelligent optimization strategy can effectively improve the model's predictive performance. Additionally, the proposed CS-SSA-SARIMA and CS-SSA-SVR models provide very impressive forecasting results, demonstrating their strong robustness and universal forecasting capacity for short-term power load prediction 24 hours in advance.
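
    A bare-bones version of the rolling one-step-ahead strategy around a seasonal ARIMA model is sketched below using statsmodels; the (p,d,q)(P,D,Q,s) orders and the synthetic hourly load series are placeholders, and the CS tuning and SSA denoising stages of the hybrid are omitted.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(2)
        # Synthetic hourly load with a daily (24 h) cycle plus noise (placeholder data).
        t = np.arange(24 * 14, dtype=float)
        load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

        history, test = list(load[:-24]), load[-24:]
        preds = []
        for actual in test:
            # Re-fit on the growing history and forecast one step ahead (rolling strategy).
            res = SARIMAX(history, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
            preds.append(float(res.forecast(steps=1)[0]))
            history.append(actual)

        print("rolling 1-step MAE:", round(float(np.mean(np.abs(np.array(preds) - test))), 2))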

  9. A strong viscous–inviscid interaction model for rotating airfoils

    DEFF Research Database (Denmark)

    Ramos García, Néstor; Sørensen, Jens Nørkær; Shen, Wen Zhong

    2014-01-01

    Two-dimensional (2D) and quasi-three dimensional (3D), steady and unsteady, viscous–inviscid interactive codes capable of predicting the aerodynamic behavior of wind turbine airfoils are presented. The model is based on a viscous–inviscid interaction technique using strong coupling between... a boundary-layer trip or computed using an e^n envelope transition method. Validation of the incompressible 2D version of the code is carried out against measurements and other numerical codes for different airfoil geometries at various Reynolds numbers, ranging from 0.9·10⁶ to 8.2·10⁶. In the quasi-3D... version, a parametric study on rotational effects induced by the Coriolis and centrifugal forces in the boundary-layer equations shows that the effects of rotation are to decrease the growth of the boundary-layer and delay the onset of separation, hence increasing the lift coefficient slightly while...

  10. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
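
    An illustrative analogue of the described network is sketched below, assuming scikit-learn's MLPRegressor in place of the authors' back-propagation implementation; the mix-proportion features and strength targets are synthetic, not the datasets gathered from the literature.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        # Synthetic features: cement, water, aggregate (kg/m^3), MAS (mm), slump (mm).
        X = rng.uniform([250, 140, 1700, 10, 50], [450, 220, 2000, 40, 200], size=(300, 5))
        wc = X[:, 1] / X[:, 0]                                            # water/cement ratio
        y = 80 - 70 * wc + 0.002 * X[:, 2] + rng.normal(0, 2, 300)        # fake strength, MPa

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0))
        model.fit(X[:250], y[:250])
        print("hold-out R^2:", round(model.score(X[250:], y[250:]), 3))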

  11. Predictability in the Epidemic-Type Aftershock Sequence model of interacting triggered seismicity

    Science.gov (United States)

    Helmstetter, Agnès; Sornette, Didier

    2003-10-01

    As part of an effort to develop a systematic methodology for earthquake forecasting, we use a simple model of seismicity on the basis of interacting events which may trigger a cascade of earthquakes, known as the Epidemic-Type Aftershock Sequence model (ETAS). The ETAS model is constructed on a bare (unrenormalized) Omori law, the Gutenberg-Richter law, and the idea that large events trigger more numerous aftershocks. For simplicity, we do not use the information on the spatial location of earthquakes and work only in the time domain. We demonstrate the essential role played by the cascade of triggered seismicity in controlling the rate of aftershock decay as well as the overall level of seismicity in the presence of a constant external seismicity source. We offer an analytical approach to account for the yet unobserved triggered seismicity adapted to the problem of forecasting future seismic rates at varying horizons from the present. Tests presented on synthetic catalogs validate strongly the importance of taking into account all the cascades of still unobserved triggered events in order to predict correctly the future level of seismicity beyond a few minutes. We find a strong predictability if one accepts to predict only a small fraction of the large-magnitude targets. Specifically, we find a prediction gain (defined as the ratio of the fraction of predicted events over the fraction of time in alarms) equal to 21 for a fraction of alarm of 1%, a target magnitude M ≥ 6, an update time of 0.5 days between two predictions, and for realistic parameters of the ETAS model. However, the probability gains degrade fast when one attempts to predict a larger fraction of the targets. This is because a significant fraction of events remain uncorrelated from past seismicity. This delineates the fundamental limits underlying forecasting skills, stemming from an intrinsic stochastic component in these interacting triggered seismicity models. Quantitatively, the fundamental
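
    The temporal ETAS rate used in such forecasts can be written down compactly; in the sketch below, the parameter values are typical orders of magnitude chosen for illustration, not the calibrated values of this study.

        import numpy as np

        def etas_rate(t, event_times, event_mags, mu=0.2, K=0.02, alpha=0.8, c=0.01, p=1.1, m0=3.0):
            # Conditional intensity lambda(t) = mu + sum_i K*exp(alpha*(M_i - m0)) / (t - t_i + c)^p
            # over all past events (t_i < t); parameter values are illustrative only.
            past = event_times < t
            dt = t - event_times[past] + c
            return mu + np.sum(K * np.exp(alpha * (event_mags[past] - m0)) / dt**p)

        times = np.array([0.0, 0.5, 0.6, 2.0])     # days
        mags = np.array([5.5, 3.2, 3.0, 4.1])
        print("events/day at t=2.1:", round(etas_rate(2.1, times, mags), 3))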

  12. A nonlinear efficient layerwise finite element model for smart piezolaminated composites under strong applied electric field

    International Nuclear Information System (INIS)

    Kapuria, S; Yaqoob Yasin, M

    2013-01-01

    In this work, we present an electromechanically coupled efficient layerwise finite element model for the static response of piezoelectric laminated composite and sandwich plates, considering the nonlinear behavior of piezoelectric materials under strong electric field. The nonlinear model is developed consistently using a variational principle, considering a rotationally invariant second order nonlinear constitutive relationship, and full electromechanical coupling. In the piezoelectric layer, the electric potential is approximated to have a quadratic variation across the thickness, as observed from exact three dimensional solutions, and the equipotential condition of electroded piezoelectric surfaces is modeled using the novel concept of an electric node. The results predicted by the nonlinear model compare very well with the experimental data available in the literature. The effect of the piezoelectric nonlinearity on the static response and deflection/stress control is studied for piezoelectric bimorph as well as hybrid laminated plates with isotropic, angle-ply composite and sandwich substrates. For high electric fields, the difference between the nonlinear and linear predictions is large, and cannot be neglected. The error in the prediction of the smeared counterpart of the present theory with the same number of primary displacement unknowns is also examined. (paper)

  13. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult due to features of the vehicles such as strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative sum idea from gray theory, which weakens the effect of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online by gray identification. Finally, the mixed H2/H∞ robust predictive control law is derived based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. By actively handling system constraints within the MPC framework, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically using Lyapunov theory and illustrated by simulation results.

  14. Right Heart End-Systolic Remodeling Index Strongly Predicts Outcomes in Pulmonary Arterial Hypertension: Comparison With Validated Models.

    Science.gov (United States)

    Amsallem, Myriam; Sweatt, Andrew J; Aymami, Marie C; Kuznetsova, Tatiana; Selej, Mona; Lu, HongQuan; Mercier, Olaf; Fadel, Elie; Schnittger, Ingela; McConnell, Michael V; Rabinovitch, Marlene; Zamanian, Roham T; Haddad, Francois

    2017-06-01

    Right ventricular (RV) end-systolic dimensions provide information on both size and function. We investigated whether an internally scaled index of end-systolic dimension is incremental to well-validated prognostic scores in pulmonary arterial hypertension. From 2005 to 2014, 228 patients with pulmonary arterial hypertension were prospectively enrolled. The RV end-systolic remodeling index (RVESRI) was defined as lateral length divided by septal height. The incremental value of RV free wall longitudinal strain and RVESRI over the risk scores was determined. Mean age was 49±14 years, 78% were female, 33% had connective tissue disease, 52% were in New York Heart Association class ≥III, and mean pulmonary vascular resistance was 11.2±6.4 WU. RVESRI and right atrial area were strongly connected to the other right heart metrics. Three zones of adaptation (adapted, maladapted, and severely maladapted) were identified based on the RVESRI to RV systolic pressure relationship. During a mean follow-up of 3.9±2.4 years, the primary end point of death, transplant, or admission for heart failure was reached in 88 patients. RVESRI was incremental to risk prediction scores in pulmonary arterial hypertension, including the Registry to Evaluate Early and Long-Term PAH Disease Management score, the Pulmonary Hypertension Connection equation, and the Mayo Clinic model. In multivariable analysis, New York Heart Association class III/IV, RVESRI, and log NT-proBNP (N-terminal pro-B-type natriuretic peptide) were retained (χ², 62.2; P<0.001). Among right heart metrics, RVESRI demonstrated the best test-retest characteristics. RVESRI is a simple, reproducible prognostic marker in patients with pulmonary arterial hypertension. © 2017 American Heart Association, Inc.

  15. CN earthquake prediction algorithm and the monitoring of the future strong Vrancea events

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Radulian, M.; Novikova, O.V.; Panza, G.F.

    2002-01-01

    The strong earthquakes originating at intermediate depth in the Vrancea region (located in the SE corner of the highly bent Carpathian arc) represent one of the most important natural disasters able to induce heavy effects (a high toll of casualties and extensive damage) in the Romanian territory. The occurrence of these earthquakes is irregular, but not infrequent. Their effects are felt over a large territory, from Central Europe to Moscow and from Greece to Scandinavia. The largest cultural and economical center exposed to the seismic risk due to the Vrancea earthquakes is Bucharest. This metropolitan area (230 km² wide) is characterized by the presence of 2.5 million inhabitants (10% of the country's population) and by a considerable number of high-risk structures and infrastructures. The best way to face strong earthquakes is to mitigate the seismic risk by using the two possible complementary approaches represented by (a) the antiseismic design of structures and infrastructures (able to withstand strong earthquakes without significant damage), and (b) strong earthquake prediction (in terms of alarm intervals declared for long-, intermediate- or short-term space and time windows). The intermediate-term medium-range earthquake prediction represents the most realistic target to be reached at the present state of knowledge. The alarm declared in this case extends over a time window of about one year or more, and a space window of a few hundred kilometers. In the case of Vrancea events the spatial uncertainty is much smaller, being about 100 km. The main measures for the mitigation of the seismic risk allowed by intermediate-term medium-range prediction are: (a) verification of the stability of buildings and infrastructures, and reinforcement measures when required, (b) elaboration of emergency plans of action, (c) scheduling of the main actions required in order to restore the normality of social and economic life after the earthquake. The paper presents the

  16. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies focusing on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forests) for predicting bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be modelled according to new trends by calculating the influence of the elimination of selected variables on the overall prediction ability of these models.
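
    A compact illustration of the kind of comparison described is given below, pitting a logistic regression against a random forest on synthetic financial-ratio data; the features, sample size, and resulting accuracy figures are placeholders rather than the Slovak company dataset.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        # Synthetic financial ratios: liquidity, leverage, profitability, activity.
        X = rng.normal(size=(1000, 4))
        # Fake bankruptcy flag driven mostly by leverage and profitability.
        logit = -1.0 + 1.5 * X[:, 1] - 2.0 * X[:, 2]
        y = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                          ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
            clf.fit(X_tr, y_tr)
            print(name, "accuracy:", round(clf.score(X_te, y_te), 3))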

  17. Prediction of strong acceleration motion dependent on focal mechanism; Shingen mechanism wo koryoshita jishindo yosoku ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Kaneda, Y; Ejiri, J [Obayashi Corp., Tokyo (Japan)

    1996-10-01

    This paper describes simulation results of strong acceleration motion with varying uncertain fault parameters, mainly for a fault model of the Hyogo-ken Nanbu earthquake. Based on the fault parameters, strong acceleration motion was simulated using the radiation patterns and the rupture time differences of composite faults as parameters. A statistical waveform composition method was used for the simulation. For the theoretical radiation patterns, directivity depending on the strike of the faults was emphasized, and the maximum acceleration exceeded 220 gal. For the homogeneous radiation patterns, by contrast, the maximum accelerations were distributed isotropically around the fault. For variations in the maximum acceleration and the predominant frequency due to the rupture time differences of three faults, the ratio of maximum to minimum response spectral values was about 1.7. From the viewpoint of seismic disaster prevention, underground structures including potential faults and irregular properties can be characterized using this simulation. The significance of strong acceleration motion prediction using uncertain factors, such as the rupture times of composite faults, as parameters was also demonstrated through this simulation. 4 refs., 4 figs., 1 tab.

  18. Strong-Weak CP Hierarchy from Non-Renormalization Theorems

    Energy Technology Data Exchange (ETDEWEB)

    Hiller, Gudrun

    2002-01-28

    We point out that the hierarchy between the measured values of the CKM phase and the strong CP phase has a natural origin in supersymmetry with spontaneous CP violation and low energy supersymmetry breaking. The underlying reason is simple and elegant: in supersymmetry the strong CP phase is protected by an exact non-renormalization theorem while the CKM phase is not. We present explicit examples of models which exploit this fact and discuss corrections to the non-renormalization theorem in the presence of supersymmetry breaking. This framework for solving the strong CP problem has generic predictions for the superpartner spectrum, for CP and flavor violation, and predicts a preferred range of values for electric dipole moments.

  19. The Cornwall-Norton model in the strong coupling regime

    International Nuclear Information System (INIS)

    Natale, A.A.

    1991-01-01

    The Cornwall-Norton model is studied in the strong coupling regime. It is shown that the fermionic self-energy at large momenta behaves as Σ(p) ∼ (m²/p) ln(p/m). We verify that in the strong coupling phase the dynamically generated masses of gauge and scalar bosons are of the same order, and the essential features of the model remain intact. (author)

  20. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  1. Strong convective storm nowcasting using a hybrid approach of convolutional neural network and hidden Markov model

    Science.gov (United States)

    Zhang, Wei; Jiang, Ling; Han, Lei

    2018-04-01

    Convective storm nowcasting refers to the prediction of convective weather initiation, development, and decay on a very short term (typically 0-2 h). Despite marked progress over the past years, severe convective storm nowcasting still remains a challenge. With the boom of machine learning, techniques such as the convolutional neural network (CNN) have been applied successfully in many fields. In this paper, we build a severe convective weather nowcasting system based on a CNN and a hidden Markov model (HMM) using reanalysis meteorological data. The goal of convective storm nowcasting here is to predict whether a convective storm will occur within 30 min. We compress the VDRAS reanalysis data into low-dimensional features by CNN to serve as the observation vectors of the HMM, and then obtain the development trend of strong convective weather in the form of a time series. The results show that our method can extract robust features without any manual feature selection and can capture the development trend of strong convective storms.
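
    A minimal sketch of the two-stage idea described above, assuming hypothetical array shapes for the VDRAS-like reanalysis fields and using PyTorch and hmmlearn as stand-in libraries (the record does not name the software): a small, untrained CNN encoder compresses each gridded field into a short feature vector, and a Gaussian HMM fitted to the feature sequence exposes a latent development trend.

        import numpy as np
        import torch
        import torch.nn as nn
        from hmmlearn.hmm import GaussianHMM

        # Hypothetical sequence of gridded reanalysis fields: (time, channels, ny, nx).
        fields = torch.randn(200, 4, 64, 64)

        # Small CNN encoder that compresses each field to an 8-dimensional feature vector.
        # In a real system the encoder would be trained; here it only illustrates the flow.
        encoder = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 8),
        )
        with torch.no_grad():
            features = encoder(fields).numpy()      # (200, 8) HMM observation vectors

        # Fit an HMM on the feature sequence; hidden states stand for storm regimes
        # (e.g. "no convection" vs. "developing convection").
        hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
        hmm.fit(features)
        states = hmm.predict(features)              # latent development trend over time
        print(states[-10:])                         # most recent regime estimates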

  2. Developing models for the prediction of hospital healthcare waste generation rate.

    Science.gov (United States)

    Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe

    2016-01-01

    An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to an increase in the healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generated beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, no mathematical model has been developed for Ethiopian hospitals to predict the healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of the healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the numbers of inpatients and outpatients were revealed to be significant factors for the quantity of waste generated. The influence of the numbers of inpatients and outpatients treated varies between hospitals. Therefore, different models were developed based on the type of hospital. © The Author(s) 2015.
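
    A sketch of the kind of simple linear model the record describes, using made-up inpatient counts and waste quantities; the reported R² values and hospital-specific coefficients come from the study's own data, not from this example.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        # Hypothetical records: number of inpatients vs. solid waste generated (kg/day).
        inpatients = np.array([[120], [150], [180], [210], [260], [300]])
        waste_kg = np.array([85.0, 102.0, 121.0, 139.0, 170.0, 195.0])

        model = LinearRegression().fit(inpatients, waste_kg)
        r2 = model.score(inpatients, waste_kg)
        print(f"waste ~ {model.intercept_:.1f} + {model.coef_[0]:.2f} * inpatients (R2 = {r2:.3f})")

        # Predict waste generation for a planned occupancy level.
        print(model.predict([[240]]))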

  3. Testing strong factorial invariance using three-level structural equation modeling

    Directory of Open Access Journals (Sweden)

    Suzanne eJak

    2014-07-01

    Full Text Available Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias) across groups. Although this approach is possible in principle, it is hardly practical when the number of groups is large or when the group size is relatively small. Jak, Oort and Dolan (2013) showed how strong factorial invariance across large numbers of groups can be tested in a multilevel structural equation modeling framework, by treating group as a random instead of a fixed variable. In the present study, this model is extended for use with three-level data. The proposed method is illustrated with an investigation of strong factorial invariance across 156 school classes and 50 schools in a Dutch dyscalculia test, using three-level structural equation modeling.

  4. A predictive model for the tokamak density limit

    International Nuclear Information System (INIS)

    Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; Gates, D. A.; Swerdlow, J.; White, R. B.

    2016-01-01

    We reproduce the Greenwald density limit in all tokamak experiments by using a phenomenologically correct model with parameters in the range of experiments. A simple model of equilibrium evolution and local power balance inside the island has been implemented to calculate the radiation-driven thermo-resistive tearing mode growth and explain the density limit. Strong destabilization of the tearing mode due to an imbalance of local Ohmic heating and radiative cooling in the island predicts the density limit to within a few percent. Furthermore, we find that the density limit is a local edge limit and is only weakly dependent on impurity densities. Our results are robust to substantial variation of the model parameters within the range of experiments.

  5. Modeling and synthesis of strong ground motion

    Indian Academy of Sciences (India)

    There have been many developments in modeling techniques for strong ground motion, which can damage life and property in a city or region. The earthquake of 26 January 2001 is used as a case study.

  6. The ordered network structure of M ≥ 6 strong earthquakes and its prediction in the Jiangsu-South Yellow Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Men, Ke-Pei [Nanjing Univ. of Information Science and Technology (China). College of Mathematics and Statistics; Cui, Lei [California Univ., Santa Barbara, CA (United States). Applied Probability and Statistics Dept.

    2013-05-15

    The Jiangsu-South Yellow Sea region is one of the key seismic monitoring defence areas in the eastern part of China. Since 1846, M ≥ 6 strong earthquakes have shown an obvious commensurability and orderliness in this region. The main orderly values are 74-75 a, 57-58 a, 11-12 a, and 5-6 a, of which 74-75 a and 57-58 a play an outstanding predictive role. According to the information prediction theory of Wen-Bo Weng, we constructed the ordered network structure of M ≥ 6 strong earthquakes for the South Yellow Sea and the whole region. Based on this, we analyzed and discussed the variation of seismicity in detail and also made a trend prediction of M ≥ 6 strong earthquakes in the future. The results show that since 1998 the region has entered a new quiet episode, which may continue until about 2042; the first M ≥ 6 strong earthquake of the next active episode will probably occur around 2053, most likely in the sea area of the South Yellow Sea; the second and third events, or a strong earthquake swarm, will probably occur around 2058 and 2070. (orig.)

  7. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  8. Strong plasma turbulence in the earth's electron foreshock

    Science.gov (United States)

    Robinson, P. A.; Newman, D. L.

    1991-01-01

    A quantitative model is developed to account for the distribution in magnitude and location of the intense plasma waves observed in the earth's electron foreshock given the observed rms levels of waves. In this model, nonlinear strong-turbulence effects cause solitonlike coherent wave packets to form and decouple from incoherent background beam-excited weak turbulence, after which they convect downstream with the solar wind while collapsing to scales as short as 100 m and fields as high as 2 V/m. The existence of waves with energy densities above the strong-turbulence wave-collapse threshold is inferred from observations from IMP 6 and ISEE 1 and quantitative agreement is found between the predicted distribution of fields in an ensemble of such wave packets and the actual field distribution observed in situ by IMP 6. Predictions for the polarization of plasma waves and the bandwidth of ion-sound waves are also consistent with the observations. It is shown that strong-turbulence effects must be incorporated in any comprehensive theory of the propagation and evolution of electron beams in the foreshock. Previous arguments against the existence of strong turbulence in the foreshock are refuted.

  9. Strong plasma turbulence in the earth's electron foreshock

    International Nuclear Information System (INIS)

    Robinson, P.A.; Newman, D.L.

    1991-01-01

    A quantitative model is developed to account for the distribution in magnitude and location of the intense plasma waves observed in the Earth's electron foreshock given the observed rms levels of waves. In this model, nonlinear strong-turbulence effects cause solitonlike coherent wave packets to form and decouple from incoherent background beam-excited weak turbulence, after which they convect downstream with the solar wind while collapsing to scales as short as 100 m and fields as high as 2 V m⁻¹. The existence of waves with energy densities above the strong-turbulence wave-collapse threshold is inferred from observations from IMP 6 and ISEE 1 and quantitative agreement is found between the predicted distribution of fields in an ensemble of such wave packets and the actual field distribution observed in situ by IMP 6. Predictions for the polarization of plasma waves and the bandwidth of ion-sound waves are also consistent with the observations. It is shown that strong-turbulence effects must be incorporated in any comprehensive theory of the propagation and evolution of electron beams in the foreshock. Previous arguments against the existence of strong turbulence in the foreshock are refuted.

  10. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of seismic risk, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for specific structures (conventional structures or high-risk facilities), seismic hazard assessment needs to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion which the structure has to withstand (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and being part of IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expert practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical facilities), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motions for designing or verifying the stability of structures. (author)

  11. Seasonal prediction of East Asian summer rainfall using a multi-model ensemble system

    Science.gov (United States)

    Ahn, Joong-Bae; Lee, Doo-Young; Yoo, Jin‑Ho

    2015-04-01

    Using the retrospective forecasts of seven state-of-the-art coupled models and their multi-model ensemble (MME) for boreal summers, the prediction skills of climate models in the western tropical Pacific (WTP) and the East Asian region are assessed. The prediction of summer rainfall anomalies in East Asia is difficult, whereas in the WTP model predictions correlate strongly with observations. We focus on developing a new approach to further enhance the seasonal prediction skill for summer rainfall in East Asia and investigate the influence of convective activity in the WTP on East Asian summer rainfall. By analyzing the characteristics of WTP convection, two distinct patterns associated with the developing and decaying modes of the El Niño-Southern Oscillation are identified. Based on the multiple linear regression method, the East Asia Rainfall Index (EARI) is developed using the interannual variability of the normalized Maritime continent-WTP Indices (MPIs), obtained from the above two main patterns, as potentially useful predictors for rainfall prediction over East Asia. For East Asian summer rainfall, the EARI shows superior performance to the East Asian summer monsoon index or to each MPI individually. Accordingly, the rainfall regressed from the EARI also shows a strong relationship with the observed East Asian summer rainfall pattern. In addition, we evaluate the prediction skill of East Asian rainfall reconstructed by a hybrid dynamical-statistical approach using the cross-validated EARI from the individual models and their MME. The results show that the reconstructed rainfalls capture the general features of observed precipitation in East Asia quite well. This study convincingly demonstrates that rainfall prediction skill is considerably improved by using a hybrid dynamical-statistical approach compared with the dynamical forecast alone. Acknowledgements This work was carried out with the support of Rural Development Administration Cooperative Research
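
    A toy sketch of the hybrid statistical step described above, with synthetic data standing in for the model-predicted MPIs and the observed East Asian rainfall anomalies; it only illustrates how a multiple-linear-regression index such as the EARI could be built and cross-validated, not the study's actual data or skill.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        # Hypothetical yearly samples: two normalized WTP/Maritime-continent indices (MPIs)
        # taken from coupled-model forecasts, and observed East Asian summer rainfall anomalies.
        rng = np.random.default_rng(0)
        mpi = rng.standard_normal((30, 2))                  # 30 years x 2 predictor indices
        rain_obs = 0.8 * mpi[:, 0] - 0.5 * mpi[:, 1] + 0.3 * rng.standard_normal(30)

        # Multiple linear regression of observed rainfall on the MPIs defines an EARI-like
        # index; leave-one-out cross-validation mimics a retrospective hindcast evaluation.
        rain_hat = cross_val_predict(LinearRegression(), mpi, rain_obs, cv=LeaveOneOut())
        skill = np.corrcoef(rain_obs, rain_hat)[0, 1]
        print(f"cross-validated anomaly correlation: {skill:.2f}")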

  12. Lexical prediction via forward models: N400 evidence from German Sign Language.

    Science.gov (United States)

    Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-09-01

    Models of language processing in the human brain often emphasize the prediction of upcoming input-for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which either ended with an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.

  13. On the model dependence of the determination of the strong coupling constant in second order QCD from e+e--annihilation into hadrons

    International Nuclear Information System (INIS)

    Achterberg, O.; D'Agostini, G.; Apel, W.D.; Engler, J.; Fluegge, G.; Forstbauer, B.; Fries, D.C.; Fues, W.; Gamerdinger, K.; Henkes, T.; Hopp, G.; Krueger, M.; Kuester, H.; Mueller, H.; Randoll, H.; Schmidt, G.; Schneider, H.; Boer, W. de; Buschhorn, G.; Grindhammer, G.; Grosse-Wiesmann, P.; Gunderson, B.; Kiesling, C.; Kotthaus, R.; Kruse, U.; Lierl, H.; Lueers, D.; Oberlack, H.; Schacht, P.; Bonneaud, G.; Colas, P.; Cordier, A.; Davier, M.; Fournier, D.; Grivaz, J.F.; Haissinski, J.; Journe, V.; Laplanche, F.; Le Diberder, F.; Mallik, U.; Ros, E.; Veillet, J.J.; Behrend, H.J.; Fenner, H.; Schachter, M.J.; Schroeder, V.; Sindt, H.

    1983-12-01

    Hadronic events obtained with the CELLO detector at PETRA are compared with second order QCD predictions using different models for the fragmentation of quarks and gluons into hadrons. We find that the model dependence in the determination of the strong coupling constant persists when going from first to second order QCD calculations. (orig.)

  14. Prediction of Coal Face Gas Concentration by Multi-Scale Selective Ensemble Hybrid Modeling

    Directory of Open Access Journals (Sweden)

    WU Xiang

    2014-06-01

    Full Text Available A selective ensemble hybrid modeling prediction method based on the wavelet transform is proposed to improve the fitting and generalization capability of existing prediction models for the coal face gas concentration, which exhibits strong stochastic volatility. The Mallat algorithm was employed for multi-scale decomposition and single-scale reconstruction of the gas concentration time series. Each subsequence was then predicted by sparsely weighted multiple unstable ELM (extreme learning machine) predictors within the SERELM (sparse ensemble regressors of ELM) method. Finally, the predicted values of these models were superimposed to obtain the predicted values of the original sequence. The proposed method takes advantage of the multi-scale analysis of the wavelet transform, the accuracy and speed of ELM prediction, and the generalization ability of the L1-regularized selective ensemble learning method. The results show that the forecast accuracy increases substantially with the proposed method: the average relative error is 0.65%, the maximum relative error is 4.16%, and the probability of a relative error of less than 1% reaches 0.785.
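
    A compact sketch of the decompose-predict-superimpose idea, assuming PyWavelets for the Mallat decomposition and a hand-rolled extreme learning machine; the sparse (L1-regularized) ensemble weighting of SERELM is replaced here by a plain sum over scales, and the gas-concentration series is synthetic.

        import numpy as np
        import pywt

        def elm_fit_predict(x_train, y_train, x_test, n_hidden=20, seed=0):
            """Basic extreme learning machine: random hidden layer + least-squares output."""
            rng = np.random.default_rng(seed)
            w = rng.standard_normal((x_train.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            beta, *_ = np.linalg.lstsq(np.tanh(x_train @ w + b), y_train, rcond=None)
            return np.tanh(x_test @ w + b) @ beta

        def lagged(series, lags=4):
            """Build (X, y) pairs from a 1-D series using the previous `lags` values."""
            X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
            return X, series[lags:]

        # Synthetic gas-concentration series with noisy fluctuations.
        rng = np.random.default_rng(1)
        gas = 0.6 + 0.1 * np.sin(np.arange(512) / 20.0) + 0.05 * rng.standard_normal(512)

        # Mallat-style multi-scale decomposition; each single-scale reconstruction is
        # predicted separately and the per-scale predictions are superimposed.
        coeffs = pywt.wavedec(gas, "db4", level=3)
        preds = []
        for k in range(len(coeffs)):
            keep = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
            sub = pywt.waverec(keep, "db4")[: len(gas)]
            X, y = lagged(sub)
            preds.append(elm_fit_predict(X[:-1], y[:-1], X[-1:]))  # predict last sample per scale
        print(f"superimposed prediction: {float(np.sum(preds)):.3f}, actual: {gas[-1]:.3f}")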

  15. A high and low noise model for strong motion accelerometers

    Science.gov (United States)

    Clinton, J. F.; Cauzzi, C.; Olivieri, M.

    2010-12-01

    We present reference noise models for high-quality strong motion accelerometer installations. We use continuous accelerometer data acquired by the Swiss Seismological Service (SED) since 2006 and other international high-quality accelerometer network data to derive very broadband (50 Hz-100 s) high and low noise models. The proposed noise models are compared to the Peterson (1993) low and high noise models designed for broadband seismometers; the datalogger self-noise; background noise levels at existing Swiss strong motion stations; and typical earthquake signals recorded in Switzerland and worldwide. The standard strong motion station operated by the SED consists of a Kinemetrics Episensor (2g clip level; flat acceleration response from 200 Hz to DC) with insulated sensor/datalogger systems placed at vault-quality sites. At all frequencies, there is at least one order of magnitude between the ALNM and the AHNM; at high frequencies (> 1 Hz) this extends to 2 orders of magnitude. This study provides remarkable confirmation of the capability of modern strong motion accelerometers to record low-amplitude ground motions with seismic observation quality. In particular, an accelerometric station operating at the ALNM is capable of recording the full spectrum of near-source earthquakes, out to 100 km, down to M2. Of particular interest for the SED, this study provides acceptable noise limits for candidate sites in the ongoing Strong Motion Network modernisation.

  16. Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.

    Science.gov (United States)

    Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias

    2015-06-25

    Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.

  17. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  18. In silico modeling predicts drug sensitivity of patient-derived cancer cells.

    Science.gov (United States)

    Pingle, Sandeep C; Sultana, Zeba; Pastorino, Sandra; Jiang, Pengfei; Mukthavaram, Rajesh; Chao, Ying; Bharati, Ila Sri; Nomura, Natsuko; Makale, Milan; Abbasi, Taher; Kapoor, Shweta; Kumar, Ansu; Usmani, Shahabuddin; Agrawal, Ashish; Vali, Shireen; Kesari, Santosh

    2014-05-21

    Glioblastoma (GBM) is an aggressive disease associated with poor survival. It is essential to account for the complexity of GBM biology to improve diagnostic and therapeutic strategies. This complexity is best represented by the increasing amounts of profiling ("omics") data available due to advances in biotechnology. The challenge of integrating these vast genomic and proteomic data can be addressed by a comprehensive systems modeling approach. Here, we present an in silico model, where we simulate GBM tumor cells using genomic profiling data. We use this in silico tumor model to predict responses of cancer cells to targeted drugs. Initially, we probed the results from a recent hypothesis-independent, empirical study by Garnett and co-workers that analyzed the sensitivity of hundreds of profiled cancer cell lines to 130 different anticancer agents. We then used the tumor model to predict sensitivity of patient-derived GBM cell lines to different targeted therapeutic agents. Among the drug-mutation associations reported in the Garnett study, our in silico model accurately predicted ~85% of the associations. While testing the model in a prospective manner using simulations of patient-derived GBM cell lines, we compared our simulation predictions with experimental data using the same cells in vitro. This analysis yielded a ~75% agreement of in silico drug sensitivity with in vitro experimental findings. These results demonstrate a strong predictability of our simulation approach using the in silico tumor model presented here. Our ultimate goal is to use this model to stratify patients for clinical trials. By accurately predicting responses of cancer cells to targeted agents a priori, this in silico tumor model provides an innovative approach to personalizing therapy and promises to improve clinical management of cancer.

  19. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.

  20. Prediction and design of first super-strong liquid-crystalline polymers

    International Nuclear Information System (INIS)

    Dowell, F.

    1989-01-01

    This paper presents the details of the theoretical prediction and design (atom by atom, bond by bond) of the molecular chemical structures of the first candidate super-strong liquid-crystalline polymers (SS LCPs). These LCPs are the first LCPs designed to have good compressive strengths, as well as tensile strengths and tensile moduli significantly larger than those of existing strong LCPs (such as Kevlar). The key feature of this new class of LCPs is that the exceptional strength is three-dimensional on a microscopic, molecular level (and thus on a macroscopic level), in contrast to present LCPs (such as Kevlar) with their one-dimensional exceptional strength. These SS LCPs also have some solubility and processing advantages over existing strong LCPs. These SS LCPs are specially designed combined LCPs such that the side chains of a molecule interdigitate with the side chains of other molecules. This paper also presents other essential general and specific features required for SS LCPs. Considerations in the design of SS LCPs include the spacing distance between side chains along the backbone, the need for rigid sections in the backbone and side chains, the degree of polymerization, the length of the side chains, the regularity of spacing of the side chains along the backbone, the interdigitation of side chains in submolecular strips, the packing of the side chains on one or two sides of the backbone, the symmetry of the side chains, the points of attachment of the side chains to the backbone, the flexibility and size of the chemical group connecting each side chain to the backbone, the effect of semiflexible sections in the backbone and side chains, and the choice of types of dipolar and/or hydrogen-bonding forces in the backbones and side chains for easy alignment.

  1. Robust human body model injury prediction in simulated side impact crashes.

    Science.gov (United States)

    Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D

    2016-01-01

    This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using Toyota RAV4 (bullet vehicle) and Ford Taurus (struck vehicle) FE models and a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments, as sketched below. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age correlated strongly with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations best matching the crush profile, and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk of pelvic injury, and high risk of thoracic injury, rib fractures and high lung strains, with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.
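
    A brief sketch of a Latin hypercube design over the five varied parameters named in the record, using scipy's quasi-Monte Carlo module; the bounds below are illustrative placeholders, not the study's actual ranges.

        import numpy as np
        from scipy.stats import qmc

        # Five varied parameters: bullet speed, impact location, impact angle,
        # occupant seat position and occupant age. Bounds are illustrative only.
        names = ["speed_kph", "location_m", "angle_deg", "seat_pos_m", "age_yr"]
        l_bounds = [30.0, -0.3, 60.0, -0.10, 20.0]
        u_bounds = [70.0, 0.3, 120.0, 0.10, 80.0]

        sampler = qmc.LatinHypercube(d=5, seed=42)
        unit = sampler.random(n=120)                    # 120 simulations, as in the study
        design = qmc.scale(unit, l_bounds, u_bounds)    # map unit cube to physical ranges

        for row in design[:3]:                          # first few simulation set-ups
            print(dict(zip(names, np.round(row, 2))))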

  2. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  3. An Improved Car-Following Model Accounting for Impact of Strong Wind

    Directory of Open Access Journals (Sweden)

    Dawei Liu

    2017-01-01

    Full Text Available In order to investigate the effect of strong wind on the dynamic characteristics of traffic flow, an improved car-following model based on the full velocity difference model is developed in this paper. Wind force is introduced as an influence factor of car-following behavior; of the three components of wind force, the lift force and the side force are taken into account. A linear stability analysis is carried out and the stability condition of the newly developed model is derived. Numerical analysis explores the effect of strong wind on the spatial-temporal evolution of a small perturbation. The results show that strong wind can significantly affect the stability of traffic flow. Driving safety in strong wind is also studied by comparing the lateral force under different wind speeds with the side friction of the vehicles. Finally, the fuel consumption of vehicles in strong wind conditions is explored; the results show that fuel consumption decreases with increasing wind speed.
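
    A toy numerical sketch of a full velocity difference (FVD) car-following model with a crude wind-induced deceleration added; the constant wind term and all parameter values are illustrative placeholders and are not the paper's formulation of the lift and side forces.

        import numpy as np

        def optimal_velocity(gap, v_max=30.0, gap_c=25.0):
            """Optimal-velocity function commonly used with the FVD model."""
            return 0.5 * v_max * (np.tanh(0.1 * (gap - gap_c)) + np.tanh(0.1 * gap_c))

        def fvd_with_wind(n_cars=20, steps=3000, dt=0.1, kappa=0.4, lam=0.3, wind_dec=0.0):
            """Euler integration of the FVD model plus a constant wind deceleration
            standing in (very roughly) for the lift/side-force effect of strong wind."""
            x = np.arange(n_cars)[::-1] * 30.0      # initial positions, leader first
            v = np.full(n_cars, 15.0)
            for _ in range(steps):
                gap = np.empty(n_cars)
                dv = np.empty(n_cars)
                gap[1:] = x[:-1] - x[1:]            # headway to the car ahead
                dv[1:] = v[:-1] - v[1:]             # velocity difference to the car ahead
                gap[0], dv[0] = 1e3, 0.0            # unobstructed leader
                acc = kappa * (optimal_velocity(gap) - v) + lam * dv - wind_dec
                v = np.clip(v + acc * dt, 0.0, None)
                x = x + v * dt
            return v

        print("mean speed, calm wind:  ", fvd_with_wind(wind_dec=0.0).mean())
        print("mean speed, strong wind:", fvd_with_wind(wind_dec=0.5).mean())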

  4. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion

    Science.gov (United States)

    Hirata, N.; Sato, H.; Koketsu, K.; Umeda, Y.; Iwata, T.; Kasahara, K.

    2003-12-01

    Introduction: After the 1995 Kobe earthquake, the Japanese government increased its focus on, and funding of, earthquake hazard evaluation, studies of the integrity of man-made structures, and emergency response planning in the major urban centers. A new agency, the Ministry of Education, Science, Sports and Culture (MEXT), started a five-year program in 2002 titled the Special Project for Earthquake Disaster Mitigation in Urban Areas (abbreviated to Dai-dai-toku in Japanese). The project includes four programs: I. Regional characterization of the crust in metropolitan areas for prediction of strong ground motion. II. Significant improvement of seismic performance of structures. III. Advanced disaster management systems. IV. Investigation of earthquake disaster mitigation research results. We will present the results from the first program, conducted in 2002 and 2003. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion: A long-term goal is to produce maps of reliable estimates of strong ground motion. This requires accurate determination of the ground motion response, which includes the source process, the effect of the propagation path, and the near-surface response. The new five-year project aims to characterize the source and propagation path in the Kanto (Tokyo) and Kinki (Osaka) regions. The 1923 Kanto Earthquake is one of the important targets addressed in the project. The proximity of the subducting Pacific and Philippine Sea plates requires study of the relationship between earthquakes and regional tectonics. This project focuses on the identification and geometry of: 1) source faults, 2) subducting plates and mega-thrust faults, 3) crustal structure, 4) the seismogenic zone, 5) sedimentary basins, and 6) 3D velocity properties. We have conducted a series of seismic reflection and refraction experiments in the Kanto region. In 2002 we completed the deployment of seismic profiling lines in the Boso peninsula (112 km) and the

  5. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of seismic risk, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for specific structures (conventional structures or high-risk facilities), seismic hazard assessment needs to identify and locate the seismic sources (zones or faults), to characterize their activity, and to evaluate the seismic motion which the structure has to withstand (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and being part of IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expert practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical facilities), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motions for designing or verifying the stability of structures. (author)

  6. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    Science.gov (United States)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October-January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March-June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March-June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968-99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.

  7. Comparing an Annual and a Daily Time-Step Model for Predicting Field-Scale Phosphorus Loss.

    Science.gov (United States)

    Bolster, Carl H; Forsberg, Adam; Mittelstet, Aaron; Radcliffe, David E; Storm, Daniel; Ramirez-Avila, John; Sharpley, Andrew N; Osmond, Deanna

    2017-11-01

    A wide range of mathematical models are available for predicting phosphorus (P) losses from agricultural fields, ranging from simple, empirically based annual time-step models to more complex, process-based daily time-step models. In this study, we compare field-scale P-loss predictions between the Annual P Loss Estimator (APLE), an empirically based annual time-step model, and the Texas Best Management Practice Evaluation Tool (TBET), a process-based daily time-step model based on the Soil and Water Assessment Tool. We first compared predictions of field-scale P loss from both models using field and land management data collected from 11 research sites throughout the southern United States. We then compared predictions of P loss from both models with measured P-loss data from these sites. We observed a strong and statistically significant correlation between the P-loss predictions of the two models; however, APLE predicted, on average, 44% greater dissolved P loss, whereas TBET predicted, on average, 105% greater particulate P loss for the conditions simulated in our study. When we compared model predictions with measured P-loss data, neither model consistently outperformed the other, indicating that more complex models do not necessarily produce better predictions of field-scale P loss. Our results also highlight limitations of both models and the need for continued efforts to improve their accuracy. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  8. Modeling long period swell in Southern California: Practical boundary conditions from buoy observations and global wave model predictions

    Science.gov (United States)

    Crosby, S. C.; O'Reilly, W. C.; Guza, R. T.

    2016-02-01

    Accurate, unbiased, high-resolution (in space and time) nearshore wave predictions are needed to drive models of beach erosion, coastal flooding, and alongshore transport of sediment, biota and pollutants. On highly sheltered shorelines, wave predictions are sensitive to the directions of onshore propagating waves, and nearshore model prediction error is often dominated by uncertainty in the offshore boundary conditions. Offshore islands and shoals, and coastline curvature, create complex sheltering patterns over the 250 km span of the southern California (SC) shoreline. Here, regional wave model skill in SC was compared for different offshore boundary conditions created using offshore buoy observations and global wave model hindcasts (National Oceanic and Atmospheric Administration WAVEWATCH III, WW3). Spectral ray-tracing methods were used to transform incident offshore swell (0.04-0.09 Hz) energy at high directional resolution (1 deg). Model skill is assessed for predictions (wave height, direction, and alongshore radiation stress) at 16 nearshore buoy sites between 2000 and 2009. Model skill using buoy-derived boundary conditions is higher than with WW3-derived boundary conditions. Buoy-driven nearshore model results are similar under various assumptions about the true offshore directional distribution (maximum entropy, Bayesian direct, and 2nd-derivative smoothness). Two methods combining offshore buoy observations with WW3 predictions in the offshore boundary condition did not improve nearshore skill above buoy-only methods. A case example at Oceanside harbor shows strong sensitivity of alongshore sediment transport predictions to the different offshore boundary conditions. Despite this uncertainty in alongshore transport magnitude, alongshore gradients in transport (e.g. the location of modeled accretion and erosion zones) are determined by the local bathymetry, and are similar for all predictions.

  9. Researches of fruit quality prediction model based on near infrared spectrum

    Science.gov (United States)

    Shen, Yulin; Li, Lian

    2018-04-01

    With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits; therefore the measurement of fruit internal quality is increasingly imperative. In general, nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim at establishing a novel fruit internal quality prediction model based on SSC and TAC from near-infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP AdaBoost strong classifier, PCA + ELM and PCA + LS_SVM classifiers are designed and implemented; then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples; thirdly, we obtain the optimal models by comparing 15 kinds of prediction models using a multi-classifier competition mechanism. Specifically, nonparametric estimation is introduced to measure the effectiveness of the proposed models, with the reliability and variance of the nonparametric estimate used to evaluate each model's predictions and the estimated value and confidence interval serving as references. The experimental results demonstrate that this approach can better achieve an optimal evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to optimize the two best models obtained from the nonparametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
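
    A much-reduced sketch of the "PCA + regressor" pipeline comparison described above, using scikit-learn models (an MLP standing in for the BP network and an SVR standing in for LS_SVM) on synthetic spectra; it illustrates the comparison mechanics only, not the paper's models, preprocessing or data.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        # Synthetic NIR spectra (200 fruit samples x 600 wavelengths) and SSC values.
        rng = np.random.default_rng(0)
        spectra = rng.standard_normal((200, 600))
        ssc = spectra[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)

        candidates = {
            "PCA + MLP (BP-style network)": make_pipeline(
                PCA(n_components=10),
                MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)),
            "PCA + SVR (LS_SVM stand-in)": make_pipeline(PCA(n_components=10), SVR(C=10.0)),
        }
        for name, model in candidates.items():
            r2 = cross_val_score(model, spectra, ssc, cv=5, scoring="r2").mean()
            print(f"{name}: mean cross-validated R2 = {r2:.2f}")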

  10. Unification predictions

    International Nuclear Information System (INIS)

    Ghilencea, D.; Ross, G.G.; Lanzagorta, M.

    1997-07-01

    The unification of gauge couplings suggests that there is an underlying (supersymmetric) unification of the strong, electromagnetic and weak interactions. The prediction of the unification scale may be the first quantitative indication that this unification may extend to unification with gravity. We make a precise determination of these predictions for a class of models which extend the multiplet structure of the Minimal Supersymmetric Standard Model to include the heavy states expected in many Grand Unified and/or superstring theories. We show that there is a strong cancellation between the 2-loop and threshold effects. As a result the net effect is smaller than previously thought, giving a small increase in both the unification scale and the value of the strong coupling at low energies. (author). 15 refs, 5 figs

  11. Protein (multi-)location prediction: utilizing interdependencies via a generative model

    Science.gov (United States)

    Shatkay, Hagit

    2015-01-01

    Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505

  12. Protein (multi-)location prediction: utilizing interdependencies via a generative model.

    Science.gov (United States)

    Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit

    2015-06-15

    Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it-which we call MDLoc-that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.

  13. Reproducing tailing in breakthrough curves: Are statistical models equally representative and predictive?

    Science.gov (United States)

    Pedretti, Daniele; Bianchi, Marco

    2018-03-01

    Breakthrough curves (BTCs) observed during tracer tests in highly heterogeneous aquifers display strong tailing. Power laws are popular models both for the empirical fitting of these curves and for the prediction of transport using upscaling models based on best-fitted parameters (e.g. the power law slope or exponent). The predictive capacity of power-law-based upscaling models can however be questioned owing to the difficulty of linking model parameters with the aquifers' physical properties. This work analyzes two aspects that can limit the use of power laws as effective predictive tools: (a) the implications of statistical subsampling, which often renders power laws indistinguishable from other heavily tailed distributions, such as the logarithmic (LOG); (b) the difficulty of reconciling fitting parameters obtained from models with different formulations, such as the presence of a late-time cutoff in the power law model. Two rigorous and systematic stochastic analyses, one based on benchmark distributions and the other on BTCs obtained from transport simulations, are considered. It is found that a power law model without cutoff (PL) results in best-fitted exponents (αPL) falling in the range of typical experimental values reported in the literature (1.5 tailing becomes heavier. Strong fluctuations occur when the number of samples is limited, due to the effects of subsampling. On the other hand, when the power law model embeds a cutoff (PLCO), the best-fitted exponent (αCO) is insensitive to the degree of tailing and to the effects of subsampling and tends to a constant αCO ≈ 1. In the PLCO model, the cutoff rate (λ) is the parameter that fully reproduces the persistence of the tailing and is shown to be inversely correlated to the LOG scale parameter (i.e. with the skewness of the distribution). The theoretical results are consistent with the fitting analysis of a tracer test performed during the MADE-5 experiment. It is shown that a simple
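
    A small sketch contrasting the two fitting models discussed in the record, a pure power law (PL) and a power law with an exponential late-time cutoff (PLCO), on a synthetic BTC tail; the functional forms follow the usual conventions and the data are made up, so the fitted numbers carry no physical meaning.

        import numpy as np
        from scipy.optimize import curve_fit

        def pl(t, a, alpha):
            """Pure power-law tail, C(t) = a * t**(-alpha)."""
            return a * t ** (-alpha)

        def plco(t, a, alpha, lam):
            """Power law with exponential late-time cutoff, C(t) = a * t**(-alpha) * exp(-lam*t)."""
            return a * t ** (-alpha) * np.exp(-lam * t)

        # Synthetic late-time BTC tail generated from a power law with cutoff plus noise.
        rng = np.random.default_rng(0)
        t = np.linspace(1.0, 50.0, 200)
        c_obs = plco(t, 1.0, 1.1, 0.05) * np.exp(0.05 * rng.standard_normal(t.size))

        p_pl, _ = curve_fit(pl, t, c_obs, p0=[1.0, 1.5])
        p_plco, _ = curve_fit(plco, t, c_obs, p0=[1.0, 1.5, 0.01])
        print(f"PL fit:   alpha = {p_pl[1]:.2f}")
        print(f"PLCO fit: alpha = {p_plco[1]:.2f}, cutoff lambda = {p_plco[2]:.3f}")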

  14. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  15. Testing strong factorial invariance using three-level structural equation modeling

    NARCIS (Netherlands)

    Jak, Suzanne

    Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias) across groups. Although this approach is

  16. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    Full Text Available The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is commonly used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation (BFG) method and the strong tracking filter (STF) is proposed to address the problem that the tracking error of the GGIW-CPHD algorithm increases when group targets maneuver. The best-fitting Gaussian approximation fuses the multiple models, while the strong tracking filter corrects the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. Simulation results show that the proposed MM-GGIW-CPHD algorithm can effectively handle the combination and spawning of groups and reduces the tracking error of group targets during the maneuvering stage.

  17. Models of alien species richness show moderate predictive accuracy and poor transferability

    Directory of Open Access Journals (Sweden)

    César Capinha

    2018-06-01

    Full Text Available Robust predictions of alien species richness are useful to assess global biodiversity change. Nevertheless, the capacity to predict spatial patterns of alien species richness remains largely unassessed. Using 22 data sets of alien species richness from diverse taxonomic groups and covering various parts of the world, we evaluated whether different statistical models were able to provide useful predictions of absolute and relative alien species richness, as a function of explanatory variables representing geographical, environmental and socio-economic factors. Five state-of-the-art count data modelling techniques were used and compared: Poisson and negative binomial generalised linear models (GLMs), multivariate adaptive regression splines (MARS), random forests (RF) and boosted regression trees (BRT). We found that predictions of absolute alien species richness had a low to moderate accuracy in the region where the models were developed and a consistently poor accuracy in new regions. Predictions of relative richness performed better in both geographical settings, but were still not good. Flexible tree-ensemble techniques (RF and BRT) were shown to be significantly better at modelling alien species richness than parametric linear models (such as GLMs), despite the latter being more commonly applied for this purpose. Importantly, the poor spatial transferability of the models also warrants caution in assuming the generality of the relationships they identify, e.g. by applying projections under future scenario conditions. Ultimately, our results strongly suggest that the predictability of spatial variation in alien species richness is limited. The somewhat more robust ability to rank regions according to the number of aliens they have (i.e. relative richness) suggests that models of alien species richness may be useful for prioritising and comparing regions, but not for predicting exact species numbers.
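
    A minimal sketch of the kind of comparison described above, using simulated count data rather than the 22 empirical data sets: a Poisson GLM and a random forest are fitted to richness counts and evaluated both on new sites from the training region and on a climatically shifted "new region", to illustrate the transferability question. All variable names, coefficients and sample sizes are assumptions.

      # Minimal sketch with simulated data; predictors, sample sizes and the shift
      # between regions are assumptions for illustration only.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import PoissonRegressor
      from sklearn.metrics import mean_absolute_error

      rng = np.random.default_rng(1)

      def make_region(n, temp_shift):
          """Synthetic region: richness counts depend nonlinearly on climate and economy."""
          temp = rng.normal(15 + temp_shift, 5, n)
          gdp = rng.lognormal(1.0, 0.5, n)
          rate = np.exp(0.8 + 0.05 * temp + 0.4 * np.log(gdp) - 0.002 * (temp - 18) ** 2)
          return np.column_stack([temp, gdp]), rng.poisson(rate)

      X_train, y_train = make_region(800, temp_shift=0.0)   # model-building region
      X_same, y_same = make_region(300, temp_shift=0.0)     # new sites, same region
      X_new, y_new = make_region(300, temp_shift=6.0)       # transfer to a new region

      models = {
          "Poisson GLM   ": PoissonRegressor(alpha=1e-6, max_iter=1000),
          "Random forest ": RandomForestRegressor(n_estimators=300, random_state=0),
      }
      for name, model in models.items():
          model.fit(X_train, y_train)
          mae_same = mean_absolute_error(y_same, model.predict(X_same))
          mae_new = mean_absolute_error(y_new, model.predict(X_new))
          print(f"{name} MAE same region: {mae_same:5.2f}   MAE new region: {mae_new:5.2f}")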

  18. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation are uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated, pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently
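
    To make the idea of contagious, locally driven deforestation concrete, the toy cellular automaton below (an independent sketch, not the calibrated probabilistic model of the paper; grid size, rates and the single initial clearing are arbitrary assumptions) lets each forested cell's yearly clearing probability grow with the number of already-deforested neighbours, so clearing spreads outward from existing clearings.

      # Toy "contagious deforestation" sketch; all parameter values are assumptions.
      import numpy as np

      rng = np.random.default_rng(42)
      size, years = 100, 20
      base_rate = 0.002         # yearly clearing probability far from any clearing
      contagion = 0.02          # extra probability per already-deforested neighbour

      forest = np.ones((size, size), dtype=bool)
      forest[size // 2, size // 2] = False        # a single initial clearing

      for _ in range(years):
          cleared = (~forest).astype(int)
          padded = np.pad(cleared, 1)             # zero padding at the grid edges
          neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                        padded[1:-1, :-2] + padded[1:-1, 2:])
          p_clear = np.clip(base_rate + contagion * neighbours, 0.0, 1.0)
          newly_cleared = forest & (rng.random((size, size)) < p_clear)
          forest &= ~newly_cleared

      print(f"forest remaining after {years} years: {forest.mean():.1%}")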

  19. Long-term predictability of regions and dates of strong earthquakes

    Science.gov (United States)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that dates of earthquakes with M>5.5 can be determined several months in advance of the event, and the magnitude and region of an approaching earthquake can be specified within a month before the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity over a century-long time frame; this analysis can be performed 15-20 years in advance and is verified against a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods with different prediction horizons. Days of potential earthquakes with M5.5+ are determined using astronomical data: earthquakes occur on days of oppositions of Solar System planets (arranged in a single line), and the strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed in practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease in minimum daily temperature, an increase in relative humidity, and an abrupt change of atmospheric pressure (RAMES method). The difference between predicted and actual dates is no more than one day. This indicator is registered 104 days before the earthquake, so it was named Harmonic 104, or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give it a physical basis. Also, 104 days is a quarter of a Chandler period, so this fact gives insight into the correlation between the anomalies of Earth orientation

  20. Quantum field model of strong-coupling binucleon

    International Nuclear Information System (INIS)

    Amirkhanov, I.V.; Puzynin, I.V.; Puzynina, T.P.; Strizh, T.A.; Zemlyanaya, E.V.; Lakhno, V.D.

    1996-01-01

    The quantum field binucleon model for the case of the nucleon spot interaction with the scalar and pseudoscalar meson fields is considered. It is shown that the nonrelativistic problem of the two nucleon interaction reduces to the one-particle problem. For the strong coupling limit the nonlinear equations describing two nucleons in the meson field are developed [ru

  1. Prediction and discovery of extremely strong hydrodynamic instabilities due to a velocity jump: theory and experiments

    International Nuclear Information System (INIS)

    Fridman, A M

    2008-01-01

    The theory and the experimental discovery of extremely strong hydrodynamic instabilities are described, viz. the Kelvin-Helmholtz, centrifugal, and superreflection instabilities. The discovery of the last two instabilities was predicted and the Kelvin-Helmholtz instability in real systems was revised by us. (reviews of topical problems)

  2. Radial Distribution Functions of Strongly Coupled Two-Temperature Plasmas

    Science.gov (United States)

    Shaffer, Nathaniel R.; Tiwari, Sanat Kumar; Baalrud, Scott D.

    2017-10-01

    We present tests of three theoretical models for the radial distribution functions (RDFs) in two-temperature strongly coupled plasmas. RDFs are useful in extending plasma thermodynamics and kinetic theory to strong coupling, but they are usually known only for thermal equilibrium or for approximate one-component model plasmas. Accurate two-component modeling is necessary to understand the impact of strong coupling on inter-species transport, e.g., ambipolar diffusion and electron-ion temperature relaxation. We demonstrate that the Seuferling-Vogel-Toeppfer (SVT) extension of the hypernetted chain equations not only gives accurate RDFs (as compared with classical molecular dynamics simulations), but also has a simple connection with the Yukawa OCP model. This connection gives a practical means to recover the structure of the electron background from knowledge of the ion-ion RDF alone. Using the model RDFs in Effective Potential Theory, we report the first predictions of inter-species transport coefficients of strongly coupled plasmas far from equilibrium. This work is supported by NSF Grant No. PHY-1453736, AFSOR Award No. FA9550-16-1-0221, and used XSEDE computational resources.

  3. Influence of the Human Skin Tumor Type in Photodynamic Therapy Analysed by a Predictive Model

    Directory of Open Access Journals (Sweden)

    I. Salas-García

    2012-01-01

    Full Text Available Photodynamic Therapy (PDT) modeling allows the prediction of the treatment results depending on the lesion properties, the photosensitizer distribution, or the optical source characteristics. We employ a predictive PDT model and apply it to different skin tumors. It takes into account the optical radiation distribution, a nonhomogeneous spatio-temporal distribution of the topical photosensitizer, and the time-dependent photochemical interaction. The predicted singlet oxygen molecular concentrations with varying optical irradiance are compared and can be directly related to the necrosis area. The results show a strong dependence on the particular lesion. This suggests the need to design optimal PDT treatment protocols adapted to the specific patient and lesion.

  4. Strongly coupled models with a Higgs-like boson

    International Nuclear Information System (INIS)

    Pich, A.; Rosell, I.; Sanz-Cillero, J. J.

    2013-01-01

    Considering the one-loop calculation of the oblique S and T parameters, we have presented a study of the viability of strongly-coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation has been done using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimation. Contrary to a widespread belief, we have demonstrated that strongly coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is strongly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule. (authors)

  5. In silico and cell-based analyses reveal strong divergence between prediction and observation of T-cell-recognized tumor antigen T-cell epitopes.

    Science.gov (United States)

    Schmidt, Julien; Guillaume, Philippe; Dojcinovic, Danijel; Karbach, Julia; Coukos, George; Luescher, Immanuel

    2017-07-14

    Tumor exomes provide comprehensive information on mutated, overexpressed genes and aberrant splicing, which can be exploited for personalized cancer immunotherapy. Of particular interest are mutated tumor antigen T-cell epitopes, because neoepitope-specific T cells often are tumoricidal. However, identifying tumor-specific T-cell epitopes is a major challenge. A widely used strategy relies on initial prediction of human leukocyte antigen-binding peptides by in silico algorithms, but the predictive power of this approach is unclear. Here, we used the human tumor antigen NY-ESO-1 (ESO) and the human leukocyte antigen variant HLA-A*0201 (A2) as a model and predicted in silico the 41 highest-affinity, A2-binding 8-11-mer peptides and assessed their binding, kinetic complex stability, and immunogenicity in A2-transgenic mice and on peripheral blood mononuclear cells from ESO-vaccinated melanoma patients. We found that 19 of the peptides strongly bound to A2, 10 of which formed stable A2-peptide complexes and induced CD8 + T cells in A2-transgenic mice. However, only 5 of the peptides induced cognate T cells in humans; these peptides exhibited strong binding and complex stability and contained multiple large hydrophobic and aromatic amino acids. These results were not predicted by in silico algorithms and provide new clues to improving T-cell epitope identification. In conclusion, our findings indicate that only a small fraction of in silico -predicted A2-binding ESO peptides are immunogenic in humans, namely those that have high peptide-binding strength and complex stability. This observation highlights the need for improving in silico predictions of peptide immunogenicity. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  6. Predictive modeling of reactive wetting and metal joining.

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B.

    2013-09-01

    The performance, reproducibility and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitute an intrinsically challenging multi-physics problem involving heating and melting a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic mass transport (flow) inside the liquid, followed by cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity, and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.

  7. A molecular prognostic model predicts esophageal squamous cell carcinoma prognosis.

    Directory of Open Access Journals (Sweden)

    Hui-Hui Cao

    Full Text Available Esophageal squamous cell carcinoma (ESCC) has the highest mortality rates in China. The 5-year survival rate of ESCC remains dismal despite improvements in treatments such as surgical resection and adjuvant chemoradiation, and current clinical staging approaches are limited in their ability to effectively stratify patients for treatment options. The aim of the present study, therefore, was to develop an immunohistochemistry-based prognostic model to improve clinical risk assessment for patients with ESCC. We developed a molecular prognostic model based on the combined expression of the epidermal growth factor receptor (EGFR), phosphorylated Specificity protein 1 (p-Sp1), and Fascin protein axis. The presence of this prognostic model and associated clinical outcomes were analyzed for 130 formalin-fixed, paraffin-embedded esophageal curative resection specimens (generation dataset) and validated using an independent cohort of 185 specimens (validation dataset). The expression of these three genes at the protein level was used to build a molecular prognostic model that was highly predictive of ESCC survival in both generation and validation datasets (P = 0.001). Regression analysis showed that this molecular prognostic model was strongly and independently predictive of overall survival (hazard ratio = 2.358 [95% CI, 1.391-3.996], P = 0.001 in the generation dataset; hazard ratio = 1.990 [95% CI, 1.256-3.154], P = 0.003 in the validation dataset). Furthermore, the predictive ability of these 3 biomarkers in combination was more robust than that of each individual biomarker. This technically simple immunohistochemistry-based molecular model accurately predicts ESCC patient survival and thus could serve as a complement to current clinical risk stratification approaches.
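
    For readers unfamiliar with this type of analysis, the sketch below fits a Cox proportional-hazards model to simulated survival data in which a combined three-marker score drives the hazard; it is a generic illustration of the regression step described above, not the EGFR/p-Sp1/Fascin data, and the marker names, effect sizes and censoring scheme are assumptions.

      # Generic Cox proportional-hazards sketch on simulated data (requires the
      # third-party lifelines package); marker names and effects are placeholders.
      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(7)
      n = 300
      m1, m2, m3 = (rng.integers(0, 2, n) for _ in range(3))   # binary IHC markers
      combined_high = ((m1 + m2 + m3) >= 2).astype(int)        # combined-model indicator

      # survival times whose hazard doubles for "combined high" patients
      time = rng.exponential(60, n) / np.where(combined_high == 1, 2.0, 1.0)
      event = (rng.random(n) < 0.7).astype(int)                # ~30% censored (simplified)

      df = pd.DataFrame({"time": time, "event": event, "combined_high": combined_high})
      cph = CoxPHFitter()
      cph.fit(df, duration_col="time", event_col="event")
      cph.print_summary()    # reports the hazard ratio and 95% CI for combined_high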

  8. Classical trajectory perspective of atomic ionization in strong laser fields. Semiclassical modeling

    International Nuclear Information System (INIS)

    Liu, Jie

    2014-01-01

    Deals with timely and interesting issues in strong laser physics. Illustrates complex strong-field atomic ionization with a simple semiclassical model based on the classical trajectory perspective for the first time. Provides a theoretical model that can be used to account for recent experiments. The ionization of atoms and molecules in strong laser fields is an active field in modern physics and has versatile applications in areas such as attosecond physics, X-ray generation, inertial confinement fusion (ICF), and medical science. Classical Trajectory Perspective of Atomic Ionization in Strong Laser Fields covers the basic concepts in this field and discusses many interesting topics using the semiclassical model of classical trajectory ensemble simulation, which is one of the most successful ionization models and has the advantages of a clear physical picture, feasible computation, and quantitative agreement with many refined experiments. The book also presents many applications of the model to topics such as single ionization, double ionization, neutral-atom acceleration and other timely issues in strong-field physics, and delivers useful messages to readers by presenting the classical trajectory perspective on strong-field atomic ionization. The book is intended for graduate students and researchers in laser physics, atomic and molecular physics, and theoretical physics. Dr. Jie Liu is a professor at the Institute of Applied Physics and Computational Mathematics, China, and Peking University.

  9. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care likewise draw on new technologies to predict different disease outcomes. However, existing predictive models still suffer from limitations in predictive performance. To improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate this model, the paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs more attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI dataset was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.
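
    The performance measures named at the end of the abstract are straightforward to compute from a confusion matrix; the short sketch below uses made-up labels and predictions (not the TBI data) purely to show the definitions.

      # Accuracy, sensitivity and specificity from a confusion matrix (toy labels).
      from sklearn.metrics import confusion_matrix

      y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = unfavourable outcome (placeholder)
      y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]   # classifier output (placeholder)

      tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
      accuracy = (tp + tn) / (tp + tn + fp + fn)
      sensitivity = tp / (tp + fn)    # true-positive rate
      specificity = tn / (tn + fp)    # true-negative rate
      print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")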

  10. Linking spring phenology with mechanistic models of host movement to predict disease transmission risk

    Science.gov (United States)

    Merkle, Jerod A.; Cross, Paul C.; Scurlock, Brandon M.; Cole, Eric K.; Courtemanch, Alyson B.; Dewey, Sarah R.; Kauffman, Matthew J.

    2018-01-01

    Disease models typically focus on temporal dynamics of infection, while often neglecting environmental processes that determine host movement. In many systems, however, temporal disease dynamics may be slow compared to the scale at which environmental conditions alter host space-use and accelerate disease transmission. Using a mechanistic movement modelling approach, we made space-use predictions of a mobile host (elk [Cervus canadensis] carrying the bacterial disease brucellosis) under environmental conditions that change daily and annually (e.g., plant phenology, snow depth), and we used these predictions to infer how spring phenology influences the risk of brucellosis transmission from elk (through aborted foetuses) to livestock in the Greater Yellowstone Ecosystem. Using data from 288 female elk monitored with GPS collars, we fit step selection functions (SSFs) during the spring abortion season and then implemented a master equation approach to translate SSFs into predictions of daily elk distribution for five plausible winter weather scenarios (from a heavy snow year to an extreme winter drought year). We predicted abortion events by combining elk distributions with empirical estimates of daily abortion rates, spatially varying elk seroprevalence and elk population counts. Our results reveal strong spatial variation in disease transmission risk at daily and annual scales that is strongly governed by variation in host movement in response to spring phenology. For example, in comparison with an average snow year, years with early snowmelt are predicted to see 64% of the abortions that would occur on feedgrounds shift mainly to public lands and, to a lesser extent, to private lands. Synthesis and applications. Linking mechanistic models of host movement with disease dynamics leads to a novel bridge between movement and disease ecology. Our analysis framework offers new avenues for predicting disease spread, while providing managers tools to proactively mitigate

  11. A network security situation prediction model based on wavelet neural network with optimized parameters

    Directory of Open Access Journals (Sweden)

    Haibo Zhang

    2016-08-01

    Full Text Available Security incidents on networks are sudden and uncertain, so it is very hard to precisely predict the network security situation by traditional methods. In order to improve the prediction accuracy of the network security situation, we build a network security situation prediction model based on a Wavelet Neural Network (WNN) with parameters optimized by an Improved Niche Genetic Algorithm (INGA). The proposed model adopts a WNN, which has strong nonlinear ability and fault tolerance. The parameters of the WNN are optimized through an adaptive genetic algorithm (GA) so that the WNN searches more effectively. Considering that the adaptive GA converges slowly and easily falls into premature convergence, we introduce a novel niche technique with a dynamic fuzzy clustering and elimination mechanism to solve the premature convergence of the GA. Our simulation results show that the proposed INGA-WNN prediction model is more reliable and effective, and achieves faster convergence and higher prediction accuracy than the Genetic Algorithm-Wavelet Neural Network (GA-WNN), Genetic Algorithm-Back Propagation Neural Network (GA-BPNN) and plain WNN models.

  12. The influence of fragmentation models on the determination of the strong coupling constant in e+e- annihilation into hadrons

    International Nuclear Information System (INIS)

    Behrend, H.J.; Chen, C.; Fenner, H.; Schachter, M.J.; Schroeder, V.; Sindt, H.; D'Agostini, G.; Apel, W.D.; Banerjee, S.; Bodenkamp, J.; Chrobaczek, D.; Engler, J.; Fluegge, G.; Fries, D.C.; Fues, W.; Gamerdinger, K.; Hopp, G.; Kuester, H.; Mueller, H.; Randoll, H.; Schmidt, G.; Schneider, H.; Boer, W. de; Buschhorn, G.; Grindhammer, G.; Grosse-Wiesmann, P.; Gunderson, B.; Kiesling, C.; Kotthaus, R.; Kruse, U.; Lierl, H.; Lueers, D.; Oberlack, H.; Schacht, P.; Colas, P.; Cordier, A.; Davier, M.; Fournier, D.; Grivaz, J.F.; Haissinski, J.; Journe, V.; Klarsfeld, A.; Laplanche, F.; Le Diberder, F.; Mallik, U.; Veillet, J.J.; Field, J.H.; George, R.; Goldberg, M.; Grossetete, B.; Hamon, O.; Kapusta, F.; Kovacs, F.; London, G.; Poggioli, L.; Rivoal, M.; Aleksan, R.; Bouchez, J.; Carnesecchi, G.; Cozzika, G.; Ducros, Y.; Gaidot, A.; Jadach, S.; Lavagne, Y.; Pamela, J.; Pansart, J.P.; Pierre, F.

    1983-01-01

    Hadronic events obtained with the CELLO detector at PETRA were compared with first-order QCD predictions using two different models for the fragmentation of quarks and gluons, the Hoyer model and the Lund model. Both models are in reasonable agreement with the data, although they do not completely reproduce the details of many distributions. Several methods have been applied to determine the strong coupling constant α_s. Although within one model the value of α_s varies by 20% among the different methods, the values determined using the Lund model are 30% or more larger (depending on the method used) than the values determined with the Hoyer model. Our results using the Hoyer model are in agreement with previous results based on this approach. (orig.)

  13. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained in this way therefore do not present the full range of possible streamflow outcomes, producing ensembles that demonstrate errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These will also be combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months using a PIT Histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model combination or the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT Histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
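
    The PIT histogram mentioned above has a simple construction: for each forecast, record the fraction of ensemble members falling below the observation; a flat histogram indicates a well-dispersed ensemble, while a U shape indicates under-dispersion. The sketch below builds one for a deliberately under-dispersed synthetic ensemble (all numbers are assumptions, not the study's flow data).

      # PIT histogram sketch for a synthetic, deliberately under-dispersed ensemble.
      import numpy as np

      rng = np.random.default_rng(3)
      n_forecasts, n_members = 500, 20
      truth = rng.normal(100, 15, n_forecasts)                      # observed volumes
      centre = truth + rng.normal(0, 12, n_forecasts)               # forecast error
      ensemble = centre[:, None] + rng.normal(0, 4, (n_forecasts, n_members))  # too little spread

      pit = (ensemble < truth[:, None]).mean(axis=1)                # PIT value per forecast

      counts, edges = np.histogram(pit, bins=10, range=(0.0, 1.0))
      for lo, hi, c in zip(edges[:-1], edges[1:], counts):          # expect a U shape
          print(f"[{lo:.1f}, {hi:.1f})  {'#' * (c // 5)}  {c}")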

  14. Strongly interacting W's and Z's

    International Nuclear Information System (INIS)

    Gaillard, M.K.

    1984-01-01

    The study focussed primarily on the dynamics of a strongly interacting W, Z (SIW) sector, with the aim of sharpening predictions for the total W, Z yield and W, Z multiplicities expected from WW fusion for various scenarios. Specific issues raised in the context of the general problem of modeling SIW included the specificity of the technicolor (or, equivalently, QCD) model, whether or not a composite scalar model can be evaded, and whether the standard model necessarily implies an I = J = 0 state (≅ Higgs particle) that is relatively 'light' (M ≤ hundreds of TeV). The consensus on the last issue was that existing arguments are inconclusive. While the author briefly addresses compositeness and alternatives to the technicolor model, quantitative estimates will of necessity be based on technicolor or an extrapolation of pion data

  15. Using Predictability for Lexical Segmentation.

    Science.gov (United States)

    Çöltekin, Çağrı

    2017-09-01

    This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
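
    The core of the predictability cue can be illustrated in a few lines: estimate forward transitional probabilities between adjacent sub-lexical units and posit a word boundary where that probability dips to a local minimum. The sketch below (a toy syllable stream, not the paper's corpus or its incremental learner) shows the idea.

      # Toy transitional-probability segmenter; the syllable "language" is made up.
      from collections import defaultdict

      # stream generated from the toy words "ba-bi", "go-la-tu", "da-ku"
      corpus = "ba bi go la tu ba bi da ku go la tu da ku ba bi".split()

      pair_counts = defaultdict(int)
      unit_counts = defaultdict(int)
      for a, b in zip(corpus, corpus[1:]):
          pair_counts[(a, b)] += 1
          unit_counts[a] += 1

      def tp(a, b):
          """Forward transitional probability P(b | a) estimated from the stream."""
          return pair_counts[(a, b)] / unit_counts[a]

      boundaries = set()
      for i in range(1, len(corpus) - 1):
          prev_tp = tp(corpus[i - 1], corpus[i])
          this_tp = tp(corpus[i], corpus[i + 1])
          next_tp = tp(corpus[i + 1], corpus[i + 2]) if i + 2 < len(corpus) else 1.0
          if this_tp < prev_tp and this_tp <= next_tp:
              boundaries.add(i)          # boundary between corpus[i] and corpus[i + 1]

      out = []
      for i, syllable in enumerate(corpus):
          out.append(syllable)
          if i in boundaries:
              out.append("|")
      print(" ".join(out))               # ba bi | go la tu | ba bi | da ku | ...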

  16. Strong coupling and quasispinor representations of the SU(3) rotor model

    International Nuclear Information System (INIS)

    Rowe, D.J.; De Guise, H.

    1992-01-01

    We define a coupling scheme, in close parallel to the coupling scheme of Elliott and Wilsdon, in which nucleonic intrinsic spins are strongly coupled to SU(3) spatial wave functions. The scheme is proposed for shell-model calculations in strongly deformed nuclei and for semimicroscopic analyses of rotations in odd-mass nuclei and other nuclei for which the spin-orbit interaction is believed to play an important role. The coupling scheme extends the domain of utility of the SU(3) model, and the symplectic model, to heavy nuclei and odd-mass nuclei. It is based on the observation that the low angular-momentum states of an SU(3) irrep have properties that mimic those of a corresponding irrep of the rotor algebra. Thus, we show that strongly coupled spin-SU(3) bands behave like strongly coupled rotor bands with properties that approach those of irreducible representations of the rigid-rotor algebra in the limit of large SU(3) quantum numbers. Moreover, we determine that the low angular-momentum states of a strongly coupled band of states of half-odd integer angular momentum behave to a high degree of accuracy as if they belonged to an SU(3) irrep. These are the quasispinor SU(3) irreps referred to in the title. (orig.)

  17. Finding the strong CP problem at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    D'Agnolo, Raffaele Tito, E-mail: dagnolo@ias.edu; Hook, Anson

    2016-11-10

    We show that a class of parity based solutions to the strong CP problem predicts new colored particles with mass at the TeV scale, due to constraints from Planck suppressed operators. The new particles are copies of the Standard Model quarks and leptons. The new quarks can be produced at the LHC and are either collider stable or decay into Standard Model quarks through a Higgs, a W or a Z boson. We discuss some simple but generic predictions of the models for the LHC and find signatures not related to the traditional solutions of the hierarchy problem. We thus provide alternative motivation for new physics searches at the weak scale. We also briefly discuss the cosmological history of these models and how to obtain successful baryogenesis.

  18. Finding the strong CP problem at the LHC

    Science.gov (United States)

    D'Agnolo, Raffaele Tito; Hook, Anson

    2016-11-01

    We show that a class of parity based solutions to the strong CP problem predicts new colored particles with mass at the TeV scale, due to constraints from Planck suppressed operators. The new particles are copies of the Standard Model quarks and leptons. The new quarks can be produced at the LHC and are either collider stable or decay into Standard Model quarks through a Higgs, a W or a Z boson. We discuss some simple but generic predictions of the models for the LHC and find signatures not related to the traditional solutions of the hierarchy problem. We thus provide alternative motivation for new physics searches at the weak scale. We also briefly discuss the cosmological history of these models and how to obtain successful baryogenesis.

  19. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers’ training events and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
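
    A minimal sketch of the general technique named in the abstract, LASSO regression with quadratic terms selected by leave-one-out cross-validation; the simulated training-load predictors, coefficients and regularization strength are assumptions, not the race-walking data.

      # LASSO with quadratic terms evaluated by leave-one-out CV (simulated data).
      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import LeaveOneOut, cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures, StandardScaler

      rng = np.random.default_rng(5)
      n = 122                                   # e.g. one row per training plan
      X = rng.normal(size=(n, 4))               # four training-load predictors
      y = 720 - 8 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 5, n)   # 3 km time [s]

      model = make_pipeline(
          PolynomialFeatures(degree=2, include_bias=False),   # adds quadratic terms
          StandardScaler(),
          Lasso(alpha=0.5),                                   # sparsity drops weak predictors
      )
      scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                               scoring="neg_mean_absolute_error")
      print(f"leave-one-out mean absolute error: {-scores.mean():.1f} s")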

  20. Strongly correlating liquids and their isomorphs

    OpenAIRE

    Pedersen, Ulf R.; Gnan, Nicoletta; Bailey, Nicholas P.; Schröder, Thomas B.; Dyre, Jeppe C.

    2010-01-01

    This paper summarizes the properties of strongly correlating liquids, i.e., liquids with strong correlations between virial and potential energy equilibrium fluctuations at constant volume. We proceed to focus on the experimental predictions for strongly correlating glass-forming liquids. These predictions include i) density scaling, ii) isochronal superposition, iii) that there is a single function from which all frequency-dependent viscoelastic response functions may be calculated, iv) that...

  1. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models, and propensity scores failed to improve predictive performance when added to full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
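
    The comparison can be reproduced in miniature on simulated data: fit one logistic model with all covariates and one that replaces them with an estimated propensity score, then compare discrimination (AUC, a concordance index for binary outcomes) and Brier score on held-out data. Everything below (covariates, effect sizes, split) is an assumption for illustration, not the study's 500-fold cross-validation design.

      # Full-covariate logistic model vs. propensity-score-only model (simulated data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import brier_score_loss, roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(11)
      n = 4000
      X = rng.normal(size=(n, 5))                                    # baseline covariates
      p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
      treat = rng.binomial(1, p_treat)                               # confounded exposure
      p_out = 1 / (1 + np.exp(-(-1.0 + 0.6 * treat + X[:, 2] - 0.7 * X[:, 3])))
      y = rng.binomial(1, p_out)                                     # binary outcome

      X_tr, X_te, t_tr, t_te, y_tr, y_te = train_test_split(X, treat, y, random_state=0)

      ps_model = LogisticRegression(max_iter=1000).fit(X_tr, t_tr)   # propensity model
      ps_tr = ps_model.predict_proba(X_tr)[:, 1]
      ps_te = ps_model.predict_proba(X_te)[:, 1]

      designs = {
          "all covariates       ": (np.column_stack([t_tr, X_tr]), np.column_stack([t_te, X_te])),
          "propensity score only": (np.column_stack([t_tr, ps_tr]), np.column_stack([t_te, ps_te])),
      }
      for name, (Z_tr, Z_te) in designs.items():
          clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
          prob = clf.predict_proba(Z_te)[:, 1]
          print(f"{name}  AUC={roc_auc_score(y_te, prob):.3f}  Brier={brier_score_loss(y_te, prob):.3f}")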

  2. Atomic excitation and acceleration in strong laser fields

    International Nuclear Information System (INIS)

    Zimmermann, H; Eichmann, U

    2016-01-01

    Atomic excitation in the tunneling regime of a strong-field laser–matter interaction has been recently observed. It is conveniently explained by the concept of frustrated tunneling ionization (FTI), which naturally evolves from the well-established tunneling picture followed by classical dynamics of the electron in the combined laser field and Coulomb field of the ionic core. Important predictions of the FTI model such as the n distribution of Rydberg states after strong-field excitation and the dependence on the laser polarization have been confirmed in experiments. The model also establishes a sound basis to understand strong-field acceleration of neutral atoms in strong laser fields. The experimental observation has become possible recently and initiated a variety of experiments such as atomic acceleration in an intense standing wave and the survival of Rydberg states in strong laser fields. Furthermore, the experimental investigations on strong-field dissociation of molecules, where neutral excited fragments after the Coulomb explosion of simple molecules have been observed, can be explained. In this review, we introduce the subject and give an overview over relevant experiments supplemented by new results. (paper)

  3. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
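
    To make the distinction concrete, the sketch below computes both kinds of reward prediction error in a toy two-step task: the model-free error compares the outcome with a cached action value, while the model-based error compares it with a value computed from the (known) transition structure and learned state values. The environment, learning rate and trial count are assumptions, not the EEG paradigm used above.

      # Model-free vs. model-based reward prediction errors in a toy two-step task.
      import numpy as np

      rng = np.random.default_rng(2)
      n_trials, alpha = 200, 0.2
      transition = np.array([[0.7, 0.3],        # P(second-stage state | action 0)
                             [0.3, 0.7]])       # P(second-stage state | action 1)
      reward_prob = np.array([0.8, 0.2])        # P(reward | second-stage state)

      q_mf = np.zeros(2)                        # cached (model-free) action values
      v_state = np.zeros(2)                     # learned second-stage state values
      rpe_mf_all, rpe_mb_all = [], []

      for _ in range(n_trials):
          action = rng.integers(2)
          state = rng.choice(2, p=transition[action])
          reward = float(rng.random() < reward_prob[state])

          rpe_mf = reward - q_mf[action]                    # model-free RPE
          q_mf[action] += alpha * rpe_mf

          q_mb = transition[action] @ v_state               # value from the world model
          rpe_mb = reward - q_mb                            # model-based RPE
          v_state[state] += alpha * (reward - v_state[state])

          rpe_mf_all.append(rpe_mf)
          rpe_mb_all.append(rpe_mb)

      print("final model-free Q:  ", np.round(q_mf, 2))
      print("final model-based Q: ", np.round(transition @ v_state, 2))
      print("MF/MB RPE correlation:", np.round(np.corrcoef(rpe_mf_all, rpe_mb_all)[0, 1], 2))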

  4. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea and compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
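
    The core of such a predictor, time-delay embedding followed by a local model built from dynamical neighbours, fits in a short sketch. Below, a chaotic logistic-map series stands in for the surge observations, and the embedding dimension, delay and neighbour count are arbitrary assumptions.

      # Delay-embedding + local-neighbour one-step prediction on a chaotic series.
      import numpy as np

      def logistic_series(n, x0=0.4, r=3.9):
          x = np.empty(n)
          x[0] = x0
          for i in range(1, n):
              x[i] = r * x[i - 1] * (1.0 - x[i - 1])
          return x

      series = logistic_series(2000)            # stand-in for an observed time series
      m, tau, k = 3, 1, 10                      # embedding dimension, delay, neighbours

      idx = np.arange((m - 1) * tau, len(series) - 1)
      vectors = np.column_stack([series[idx - j * tau] for j in range(m)])   # delay vectors
      futures = series[idx + 1]                                              # one-step targets

      train, test = slice(0, 1500), slice(1500, None)
      errors = []
      for v, truth in zip(vectors[test], futures[test]):
          d = np.linalg.norm(vectors[train] - v, axis=1)
          nn = np.argsort(d)[:k]                                  # dynamical neighbours
          errors.append(abs(futures[train][nn].mean() - truth))   # local (average) model
      print(f"mean absolute one-step prediction error: {np.mean(errors):.4f}")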

  5. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  6. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  7. Predicting adsorptive removal of chlorophenol from aqueous solution using artificial intelligence based modeling approaches.

    Science.gov (United States)

    Singh, Kunwar P; Gupta, Shikha; Ojha, Priyanka; Rai, Premanjali

    2013-04-01

    The research aims to develop an artificial intelligence (AI)-based model to predict the adsorptive removal of 2-chlorophenol (CP) in aqueous solution by coconut shell carbon (CSC) using four operational variables (pH of solution, adsorbate concentration, temperature, and contact time), and to investigate their effects on the adsorption process. Accordingly, based on a factorial design, 640 batch experiments were conducted. Nonlinearities in experimental data were checked using Brock-Dechert-Scheinkman (BDS) statistics. Five nonlinear models were constructed to predict the adsorptive removal of CP in aqueous solution by CSC using the four variables as input. Performances of the constructed models were evaluated and compared using statistical criteria. BDS statistics revealed strong nonlinearity in the experimental data. Performance of all the models constructed here was satisfactory. Radial basis function network (RBFN) and multilayer perceptron network (MLPN) models performed better than generalized regression neural network, support vector machines, and gene expression programming models. Sensitivity analysis revealed that the contact time had the highest effect on adsorption, followed by the solution pH, temperature, and CP concentration. The study concluded that all the models constructed here were capable of capturing the nonlinearity in the data. The better generalization and predictive performance of the RBFN and MLPN models suggested that they can be used to predict the adsorption of CP in aqueous solution using CSC.

  8. Predicting 30-Day Readmissions in an Asian Population: Building a Predictive Model by Incorporating Markers of Hospitalization Severity.

    Directory of Open Access Journals (Sweden)

    Lian Leng Low

    Full Text Available To reduce readmissions, it may be cost-effective to consider risk stratification, with intervention programs targeted to patients at high risk of readmission. In this study, we aimed to derive and validate a prediction model including several novel markers of hospitalization severity, and compare the model with the LACE index (Length of stay, Acuity of admission, Charlson comorbidity index, Emergency department visits in past 6 months), an established risk stratification tool. This was a retrospective cohort study of all patients ≥ 21 years of age who were admitted to a tertiary hospital in Singapore from January 1, 2013 through May 31, 2015. Data were extracted from the hospital's electronic health records. The outcome was defined as unplanned readmissions within 30 days of discharge from the index hospitalization. Candidate predictive variables were broadly grouped into five categories: Patient demographics, social determinants of health, past healthcare utilization, medical comorbidities, and markers of hospitalization severity. Multivariable logistic regression was used to predict the outcome, and receiver operating characteristic analysis was performed to compare our model with the LACE index. 74,102 cases were enrolled for analysis. Of these, 11,492 patient cases (15.5%) were readmitted within 30 days of discharge. A total of fifteen predictive variables were strongly associated with the risk of 30-day readmissions, including number of emergency department visits in the past 6 months, Charlson Comorbidity Index, and markers of hospitalization severity such as 'requiring inpatient dialysis during index admission' and 'treatment with intravenous furosemide 40 milligrams or more' during index admission. Our predictive model outperformed the LACE index by achieving larger area under the curve values: 0.78 (95% confidence interval [CI]: 0.77-0.79) versus 0.70 (95% CI: 0.69-0.71). Several factors are important for the risk of 30-day readmissions
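
    The headline comparison, a multivariable logistic model against an additive LACE-style score judged by area under the ROC curve, can be sketched on simulated data as below; the variables, coefficients and point weights are placeholders, not the Singapore cohort or the true LACE scoring.

      # Multivariable logistic model vs. a crude LACE-like additive score (simulated).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(9)
      n = 5000
      los = rng.poisson(5, n)                  # length of stay (days)
      acuity = rng.binomial(1, 0.6, n)         # emergency admission
      charlson = rng.poisson(1.5, n)           # comorbidity index
      ed_visits = rng.poisson(0.8, n)          # ED visits in past 6 months
      dialysis = rng.binomial(1, 0.05, n)      # marker of hospitalization severity
      logit = (-2.5 + 0.05 * los + 0.4 * acuity + 0.25 * charlson
               + 0.35 * ed_visits + 0.9 * dialysis)
      readmit = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X = np.column_stack([los, acuity, charlson, ed_visits, dialysis])
      lace_like = los + 3 * acuity + charlson + ed_visits      # crude additive score

      X_tr, X_te, s_tr, s_te, y_tr, y_te = train_test_split(X, lace_like, readmit, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
      auc_score = roc_auc_score(y_te, s_te)     # a raw score can be ranked directly
      print(f"multivariable model AUC: {auc_model:.3f}   LACE-like score AUC: {auc_score:.3f}")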

  9. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  10. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years improvements to prediction methods have not been significant, and the traditional statistical prediction method suffers from low precision and poor interpretability, so it can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, combining the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large volume of cargo and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating various factors that can affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles and expands logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  11. Towards a non-perturbative study of the strongly coupled standard model

    International Nuclear Information System (INIS)

    Dagotto, E.; Kogut, J.

    1988-01-01

    The strongly coupled standard model of Abbott and Farhi can be a good alternative to the standard model if it has a phase where chiral symmetry is not broken, the SU(2) sector confines and the scalar field is in the symmetric regime. To look for such a phase we did a numerical analysis in the context of lattice gauge theory. To simplify the model we studied a U(1) gauge theory with Higgs fields and four species of dynamical fermions. In this toy model we did not find a phase with the correct properties required by the strongly coupled standard model. We also speculate about a possible solution to this problem using a new phase of the SU(2) gauge theory with a large number of flavors. (orig.)

  12. Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub

    Science.gov (United States)

    Renwick, Katherine M.; Curtis, Caroline; Kleinhesselink, Andrew R.; Schlaepfer, Daniel R.; Bradley, Bethany A.; Aldridge, Cameron L.; Poulter, Benjamin; Adler, Peter B.

    2018-01-01

    A number of modeling approaches have been developed to predict the impacts of climate change on species distributions, performance, and abundance. The stronger the agreement from models that represent different processes and are based on distinct and independent sources of information, the greater the confidence we can have in their predictions. Evaluating the level of confidence is particularly important when predictions are used to guide conservation or restoration decisions. We used a multi-model approach to predict climate change impacts on big sagebrush (Artemisia tridentata), the dominant plant species on roughly 43 million hectares in the western United States and a key resource for many endemic wildlife species. To evaluate the climate sensitivity of A. tridentata, we developed four predictive models, two based on empirically derived spatial and temporal relationships, and two that applied mechanistic approaches to simulate sagebrush recruitment and growth. This approach enabled us to produce an aggregate index of climate change vulnerability and uncertainty based on the level of agreement between models. Despite large differences in model structure, predictions of sagebrush response to climate change were largely consistent. Performance, as measured by change in cover, growth, or recruitment, was predicted to decrease at the warmest sites, but increase throughout the cooler portions of sagebrush's range. A sensitivity analysis indicated that sagebrush performance responds more strongly to changes in temperature than precipitation. Most of the uncertainty in model predictions reflected variation among the ecological models, raising questions about the reliability of forecasts based on a single modeling approach. Our results highlight the value of a multi-model approach in forecasting climate change impacts and uncertainties and should help land managers to maximize the value of conservation investments.

  13. The fitness landscape of HIV-1 gag: advanced modeling approaches and validation of model predictions by in vitro testing.

    Directory of Open Access Journals (Sweden)

    Jaclyn K Mann

    2014-08-01

    Full Text Available Viral immune evasion by sequence variation is a major hindrance to HIV-1 vaccine design. To address this challenge, our group has developed a computational model, rooted in physics, that aims to predict the fitness landscape of HIV-1 proteins in order to design vaccine immunogens that lead to impaired viral fitness, thus blocking viable escape routes. Here, we advance the computational models to address previous limitations, and directly test model predictions against in vitro fitness measurements of HIV-1 strains containing multiple Gag mutations. We incorporated regularization into the model fitting procedure to address finite sampling. Further, we developed a model that accounts for the specific identity of mutant amino acids (Potts model), generalizing our previous approach (Ising model), which is unable to distinguish between different mutant amino acids. Gag mutation combinations (17 pairs, 1 triple, and 25 single mutations within these) predicted to be either harmful to HIV-1 viability or fitness-neutral were introduced into HIV-1 NL4-3 by site-directed mutagenesis, and replication capacities of these mutants were assayed in vitro. The predicted and measured fitness of the corresponding mutants for the original Ising model (r = -0.74, p = 3.6×10^-6) are strongly correlated, and this was further strengthened in the regularized Ising model (r = -0.83, p = 3.7×10^-12). Performance of the Potts model (r = -0.73, p = 9.7×10^-9) was similar to that of the Ising model, indicating that the binary approximation is sufficient for capturing fitness effects of common mutants at sites of low amino acid diversity. However, we show that the Potts model is expected to improve predictive power for more variable proteins. Overall, our results support the ability of the computational models to robustly predict the relative fitness of mutant viral strains, and indicate the potential value of this approach for understanding viral immune evasion
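
    The scoring step of such a landscape model can be illustrated independently of the real inference: the sketch below defines an Ising-like energy (fields plus pairwise couplings over a binary wild-type/mutant encoding), scores random single-to-triple mutants, and correlates the energies with a simulated replication capacity. Fields, couplings and "measured" fitness values are all invented for the demonstration, not the HIV-1 Gag parameters.

      # Ising-like energy scoring of mutants vs. simulated fitness (all values invented).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      L = 12                                        # toy sequence length (0 = wild type, 1 = mutant)
      h = rng.normal(1.0, 0.3, L)                   # field: cost of mutating each site
      J = np.triu(rng.normal(0.0, 0.2, (L, L)), 1)  # pairwise (epistatic) couplings, i < j

      def energy(seq):
          """Higher energy = less prevalent = predicted to be less fit."""
          return float(h @ seq + seq @ J @ seq)

      mutants = []
      for _ in range(40):                           # random single, double and triple mutants
          seq = np.zeros(L)
          seq[rng.choice(L, size=rng.integers(1, 4), replace=False)] = 1.0
          mutants.append(seq)

      pred_energy = np.array([energy(s) for s in mutants])
      measured_fitness = 1.0 - 0.25 * pred_energy + rng.normal(0, 0.1, len(mutants))  # simulated assay

      r, p = stats.pearsonr(pred_energy, measured_fitness)
      print(f"Pearson r = {r:.2f} (p = {p:.1e}) between predicted energy and measured fitness")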

  14. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering its benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models, which occupy large computational resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate precipitation predictions than by using traditional artificial neural networks that have low predictive accuracy.

  15. Strong dynamics and lattice gauge theory

    Science.gov (United States)

    Schaich, David

    In this dissertation I use lattice gauge theory to study models of electroweak symmetry breaking that involve new strong dynamics. Electroweak symmetry breaking (EWSB) is the process by which elementary particles acquire mass. First proposed in the 1960s, this process has been clearly established by experiments, and can now be considered a law of nature. However, the physics underlying EWSB is still unknown, and understanding it remains a central challenge in particle physics today. A natural possibility is that EWSB is driven by the dynamics of some new, strongly-interacting force. Strong interactions invalidate the standard analytical approach of perturbation theory, making these models difficult to study. Lattice gauge theory is the premier method for obtaining quantitatively-reliable, nonperturbative predictions from strongly-interacting theories. In this approach, we replace spacetime by a regular, finite grid of discrete sites connected by links. The fields and interactions described by the theory are likewise discretized, and defined on the lattice so that we recover the original theory in continuous spacetime on an infinitely large lattice with sites infinitesimally close together. The finite number of degrees of freedom in the discretized system lets us simulate the lattice theory using high-performance computing. Lattice gauge theory has long been applied to quantum chromodynamics, the theory of strong nuclear interactions. Using lattice gauge theory to study dynamical EWSB, as I do in this dissertation, is a new and exciting application of these methods. Of particular interest is non-perturbative lattice calculation of the electroweak S parameter. Experimentally S ≈ -0.15(10), which tightly constrains dynamical EWSB. On the lattice, I extract S from the momentum-dependence of vector and axial-vector current correlators. I created and applied computer programs to calculate these correlators and analyze them to determine S. I also calculated the masses

  16. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  17. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  18. Classical and quantum models of strong cosmic censorship

    International Nuclear Information System (INIS)

    Moncrief, V.E.

    1983-01-01

    The cosmic censorship conjecture states that naked singularities should not evolve from regular initial conditions in general relativity. In its strong form the conjecture asserts that space-times with Cauchy horizons must always be unstable and thus that the generic solution of Einstein's equations must be inextendible beyond its maximal Cauchy development. In this paper it is shown that one can construct an infinite-dimensional family of extendible cosmological solutions similar to Taub-NUT space-time; however, each of these solutions is unstable in precisely the way demanded by strong cosmic censorship. Finally it is shown that quantum fluctuations in the metric always provide (though in an unexpectedly subtle way) the ''generic perturbations'' which destroy the Cauchy horizons in these models. (author)

  19. Classical and quantum models of strong cosmic censorship

    Energy Technology Data Exchange (ETDEWEB)

    Moncrief, V.E. (Yale Univ., New Haven, CT (USA). Dept. of Physics)

    1983-04-01

    The cosmic censorship conjecture states that naked singularities should not evolve from regular initial conditions in general relativity. In its strong form the conjecture asserts that space-times with Cauchy horizons must always be unstable and thus that the generic solution of Einstein's equations must be inextendible beyond its maximal Cauchy development. In this paper it is shown that one can construct an infinite-dimensional family of extendible cosmological solutions similar to Taub-NUT space-time; however, each of these solutions is unstable in precisely the way demanded by strong cosmic censorship. Finally it is shown that quantum fluctuations in the metric always provide (though in an unexpectedly subtle way) the ''generic perturbations'' which destroy the Cauchy horizons in these models.

  20. Caviton dynamics in strong Langmuir turbulence

    International Nuclear Information System (INIS)

    DuBois, D.; Rose, H.A.; Russell, D.

    1989-01-01

    Recent studies based on long time computer simulations of Langmuir turbulence as described by Zakharov's model will be reviewed. These show that for strong to moderate ion sound damping the turbulent energy is dominantly in nonlinear ''caviton'' excitations which are localized in space and time. A local caviton model will be presented which accounts for the nucleation-collapse-burnout cycles of individual cavitons as well as their space-time correlations. This model is in detailed agreement with many features of the electron density fluctuation spectra in the ionosphere modified by powerful hf waves as measured by incoherent scatter radar. Recently such observations have verified a prediction of the theory that ''free'' Langmuir waves are emitted in the caviton collapse process. These observations and theoretical considerations also strongly imply that cavitons in the heated ionosphere, under certain conditions, evolve to states in which they are ordered in space and time. The sensitivity of the high frequency Langmuir field dynamics to the low frequency ion density fluctuations and the related caviton nucleation process will be discussed. 40 refs., 19 figs

  1. Caviton dynamics in strong Langmuir turbulence

    Science.gov (United States)

    DuBois, Don; Rose, Harvey A.; Russell, David

    1990-01-01

    Recent studies based on long time computer simulations of Langmuir turbulence as described by Zakharov's model will be reviewed. These show that for strong to moderate ion sound damping the turbulent energy is dominantly in non-linear "caviton" excitations which are localized in space and time. A local caviton model will be presented which accounts for the nucleation-collapse-burnout cycles of individual cavitons as well as their space-time correlations. This model is in detailed agreement with many features of the electron density fluctuation spectra in the ionosphere modified by powerful HF waves as measured by incoherent scatter radar. Recently such observations have verified a prediction of the theory that "free" Langmuir waves are emitted in the caviton collapse process. These observations and theoretical considerations also strongly imply that cavitons in the heated ionosphere, under certain conditions, evolve to states in which they are ordered in space and time. The sensitivity of the high frequency Langmuir field dynamics to the low frequency ion density fluctuations and the related caviton nucleation process will be discussed.

  2. Caviton dynamics in strong Langmuir turbulence

    International Nuclear Information System (INIS)

    DuBois, D.; Rose, H.A.; Russell, D.

    1990-01-01

    Recent studies based on long time computer simulations of Langmuir turbulence as described by Zakharov's model will be reviewed. These show that for strong to moderate ion sound damping the turbulent energy is dominantly in non-linear ''caviton'' excitations which are localized in space and time. A local caviton model will be presented which accounts for the nucleation-collapse-burnout cycles of individual cavitons as well as their space-time correlations. This model is in detailed agreement with many features of the electron density fluctuation spectra in the ionosphere modified by powerful HF waves as measured by incoherent scatter radar. Recently such observations have verified a prediction of the theory that ''free'' Langmuir waves are emitted in the caviton collapse process. These observations and theoretical considerations also strongly imply that cavitons in the heated ionosphere, under certain conditions, evolve to states in which they are ordered in space and time. The sensitivity of the high frequency Langmuir field dynamics to the low frequency ion density fluctuations and the related caviton nucleation process will be discussed. (orig.)

  3. Enhancement of a Turbulence Sub-Model for More Accurate Predictions of Vertical Stratifications in 3D Coastal and Estuarine Modeling

    Directory of Open Access Journals (Sweden)

    Wenrui Huang

    2010-03-01

    Full Text Available This paper presents an improvement of Mellor and Yamada's 2nd-order turbulence model in the Princeton Ocean Model (POM) for better predictions of vertical stratifications of salinity in estuaries. The model was evaluated in the strongly stratified Apalachicola River estuary, Florida, USA. The three-dimensional hydrodynamic model was applied to study the stratified flow and salinity intrusion in the estuary in response to tide, wind, and buoyancy forces. Model tests indicate that model predictions overestimate the stratification when using the default turbulent parameters. Analytic studies of density-induced and wind-induced flows indicate that accurate estimation of vertical eddy viscosity plays an important role in describing vertical profiles. Initial model revision experiments show that the traditional approach of modifying empirical constants in the turbulence model leads to numerical instability. In order to improve the performance of the turbulence model while maintaining numerical stability, a stratification factor was introduced to allow adjustment of the vertical turbulent eddy viscosity and diffusivity. Sensitivity studies indicate that the stratification factor, ranging from 1.0 to 1.2, does not cause numerical instability in the Apalachicola River. Model simulations show that increasing the turbulent eddy viscosity by a stratification factor of 1.12 results in an optimal agreement between model predictions and observations in the case study presented here. Using the proposed stratification factor provides a useful way for coastal modelers to improve turbulence model performance in predicting vertical turbulent mixing in stratified estuaries and coastal waters.
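
    The stratification-factor idea can be illustrated with a one-dimensional vertical diffusion step in which the turbulence-model eddy viscosity is simply scaled before use. This Python sketch is schematic: the grid, coefficients and the name strat_factor are illustrative and do not reproduce the POM implementation.

      import numpy as np

      def diffuse_step(u, K_z, dz, dt, strat_factor=1.12):
          """One explicit step of vertical diffusion du/dt = d/dz(K du/dz),
          with the eddy viscosity scaled by a stratification factor."""
          K_eff = strat_factor * K_z                    # adjusted eddy viscosity at cell faces
          flux = K_eff * np.diff(u) / dz                # diffusive fluxes at the faces
          du = np.zeros_like(u)
          du[1:-1] = (flux[1:] - flux[:-1]) / dz        # divergence of the flux
          return u + dt * du                            # boundary values held fixed here

      z = np.linspace(0.0, 10.0, 21)                    # depth levels (m)
      u = np.tanh(z - 5.0)                              # initial velocity profile (m/s)
      K_z = np.full(len(z) - 1, 1.0e-3)                 # eddy viscosity at faces (m^2/s)

      for _ in range(100):
          u = diffuse_step(u, K_z, dz=z[1] - z[0], dt=10.0)
      print(u.round(3))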

  4. Stability in a fiber bundle model: Existence of strong links and the effect of disorder

    Science.gov (United States)

    Roy, Subhadeep

    2018-05-01

    The present paper deals with a fiber bundle model which consists of a fraction α of infinitely strong fibers. The inclusion of such an unbreakable fraction has been proven to affect the failure process in early studies, especially around a critical value αc. The present work has a twofold purpose: (i) a study of failure abruptness, mainly the brittle to quasibrittle transition point with varying α and (ii) variation of αc as we change the strength of disorder introduced in the model. The brittle to quasibrittle transition is confirmed from the failure abruptness. On the other hand, the αc is obtained from the knowledge of failure abruptness as well as the statistics of avalanches. It is observed that the brittle to quasibrittle transition point scales to lower values, suggesting more quasi-brittle-like continuous failure when α is increased. At the same time, the bundle becomes stronger as there are larger numbers of strong links to support the external stress. High α in a highly disordered bundle leads to an ideal situation where the bundle strength, as well as the predictability in failure process is very high. Also, the critical fraction αc, required to make the model deviate from the conventional results, increases with decreasing strength of disorder. The analytical expression for αc shows good agreement with the numerical results. Finally, the findings in the paper are compared with previous results and real-life applications of composite materials.
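
    The role of the unbreakable fraction can be illustrated with a short Python sketch of an equal-load-sharing fiber bundle; the threshold distribution, system size and the quantity returned below are illustrative assumptions rather than the exact setup of the paper.

      import numpy as np

      def weak_phase_strength(n=20000, alpha=0.1, disorder=0.5, seed=1):
          """Equal-load-sharing bundle with a fraction alpha of unbreakable fibers.
          Breakable thresholds are uniform in [1 - disorder, 1 + disorder].
          Returns the largest load per fiber the breakable part can sustain;
          above it, all breakable fibers fail and only the strong fraction remains."""
          rng = np.random.default_rng(seed)
          thresholds = rng.uniform(1 - disorder, 1 + disorder, size=n)
          thresholds[rng.random(n) < alpha] = np.inf    # unbreakable fibers
          thresholds.sort()
          k = np.arange(n)                              # number of fibers already broken
          supportable = thresholds * (n - k)            # load at which the next fiber breaks, given k broken
          finite = np.isfinite(supportable)
          return supportable[finite].max() / n

      for a in (0.0, 0.1, 0.3):
          print(f"alpha = {a:.1f}  strength of breakable phase = {weak_phase_strength(alpha=a):.3f}")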

  5. Strong moduli stabilization and phenomenology

    CERN Document Server

    Dudas, Emilian; Mambrini, Yann; Mustafayev, Azar; Olive, Keith A

    2013-01-01

    We describe the resulting phenomenology of string theory/supergravity models with strong moduli stabilization. The KL model with F-term uplifting is one such example. Models of this type predict universal scalar masses equal to the gravitino mass. In contrast, A-terms receive highly suppressed gravity-mediated contributions. Under certain conditions, the same conclusion is valid for gaugino masses, which, like A-terms, are then determined by anomalies. In such models, we are forced to relatively large gravitino masses (30-1000 TeV). We compute the low energy spectrum as a function of m_{3/2}. We see that the Higgs mass naturally takes values between 125 and 130 GeV. The lower limit is obtained from the requirement of chargino masses greater than 104 GeV, while the upper limit is determined by the relic density of dark matter (wino-like).

  6. Seasonal prediction of the Leeuwin Current using the POAMA dynamical seasonal forecast model

    Energy Technology Data Exchange (ETDEWEB)

    Hendon, Harry H.; Wang, Guomin [Centre for Australian Weather and Climate Research, Bureau of Meteorology, PO Box 1289, Melbourne (Australia)

    2010-06-15

    The potential for predicting interannual variations of the Leeuwin Current along the west coast of Australia is addressed. The Leeuwin Current flows poleward against the prevailing winds and transports warm-fresh tropical water southward along the coast, which has a great impact on local climate and ecosystems. Variations of the current are tightly tied to El Nino/La Nina (weak during El Nino and strong during La Nina). Skilful seasonal prediction of the Leeuwin Current to 9-month lead time is achieved by empirical downscaling of dynamical coupled model forecasts of El Nino and the associated upper ocean heat content anomalies off the north west coast of Australia from the Australian Bureau of Meteorology Predictive Ocean Atmosphere Model for Australia (POAMA) seasonal forecast system. Prediction of the Leeuwin Current is possible because the heat content fluctuations off the north west coast are the primary driver of interannual annual variations of the current and these heat content variations are tightly tied to the occurrence of El Nino/La Nina. POAMA can skilfully predict both the occurrence of El Nino/La Nina and the subsequent transmission of the heat content anomalies from the Pacific onto the north west coast. (orig.)

  7. Solution of the strong CP problem by color exchange

    International Nuclear Information System (INIS)

    Barr, S.M.; Zee, A.

    1985-08-01

    We present a new way to solve the strong CP problem in models with a spontaneously broken CP invariance. It is simpler than existing non-Peccei-Quinn approaches. It predicts the existence of light (i.e. weak scale) colored Higgs bosons which could be seen in colliders. 25 refs., 3 figs

  8. Strongly coupled gauge theories: What can lattice calculations teach us?

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Electroweak symmetry breaking and the dynamical origin of the Higgs boson are central questions today. Strongly coupled systems predicting the Higgs boson as a bound state of a new gauge-fermion interaction are candidates to describe beyond Standard Model physics. The phenomenologically viable models are strongly coupled, near the conformal boundary, requiring non-perturbative studies to reveal their properties. Lattice studies show that many of the beyond-Standard Model candidates have a relatively light isosinglet scalar state that is well separated from the rest of the spectrum. When the scale is set via the vev of electroweak symmetry breaking, a 2 TeV vector resonance appears to be a general feature of many of these models with several other resonances that are not much heavier.

  9. Performance of a Predictive Model for Calculating Ascent Time to a Target Temperature

    Directory of Open Access Journals (Sweden)

    Jin Woo Moon

    2016-12-01

    Full Text Available The aim of this study was to develop an artificial neural network (ANN) prediction model for controlling building heating systems. This model was used to calculate the ascent time of indoor temperature from the setback period (when a building was not occupied) to a target setpoint temperature (when a building was occupied). The calculated ascent time was applied to determine the proper moment to start increasing the temperature from the setback temperature so as to reach the target temperature at an appropriate time. Three major steps were conducted: (1) model development; (2) model optimization; and (3) performance evaluation. Two software programs—Matrix Laboratory (MATLAB) and Transient Systems Simulation (TRNSYS)—were used for model development, performance tests, and numerical simulation methods. Correlation analysis between input variables and the output variable of the ANN model revealed that two input variables (current indoor air temperature and temperature difference from the target setpoint temperature) presented relatively strong relationships with the ascent time to the target setpoint temperature. These two variables were used as input neurons. Analyzing the difference between the simulated and predicted values from the ANN model provided the optimal number of hidden neurons (9), hidden layers (3), momentum (0.9), and learning rate (0.9). At the study's conclusion, the optimized model proved its prediction accuracy with acceptable errors.
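
    A minimal Python sketch of such an ANN regression on the two input variables named above is given below. The study used MATLAB and TRNSYS, so the scikit-learn model, the synthetic training data and the solver settings here are stand-ins for illustration; only the layer sizes echo the optimized configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 2000
      indoor_temp = rng.uniform(10.0, 20.0, n)          # current indoor air temperature (deg C)
      temp_diff = rng.uniform(1.0, 12.0, n)             # difference to the target setpoint (deg C)
      # Hypothetical "true" ascent time in minutes: longer for larger gaps and colder rooms
      ascent_time = 8.0 * temp_diff - 0.5 * indoor_temp + rng.normal(0.0, 2.0, n)

      X = np.column_stack([indoor_temp, temp_diff])
      X_tr, X_te, y_tr, y_te = train_test_split(X, ascent_time, random_state=0)

      model = MLPRegressor(hidden_layer_sizes=(9, 9, 9),  # three hidden layers of nine neurons
                           solver="lbfgs", max_iter=5000, random_state=0)
      model.fit(X_tr, y_tr)
      print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))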

  10. Multivariable model predictive control design of reactive distillation column for Dimethyl Ether production

    Science.gov (United States)

    Wahid, A.; Putra, I. G. E. P.

    2018-03-01

    Dimethyl ether (DME) as an alternative clean energy has attracted growing attention in recent years. DME production via reactive distillation has potential for capital cost and energy requirement savings. However, the combination of reaction and distillation in a single column makes the reactive distillation process a very complex multivariable system with highly non-linear behaviour and strong interaction between process variables. This study investigates a multivariable model predictive control (MPC) scheme based on a two-point temperature control strategy for the DME reactive distillation column to maintain the purities of both product streams. The process model is estimated by a first order plus dead time model. The DME and water purities are maintained by controlling a stage temperature in the rectifying and stripping section, respectively. The results show that the model predictive controller gave faster responses than the conventional PI controller, as shown by the smaller ISE values. In addition, the MPC controller is able to handle the loop interactions well.
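
    The first-order-plus-dead-time (FOPDT) prediction model and its use for step-response-based output prediction can be sketched in Python as follows; the gain, time constant, dead time and control moves are placeholder values, not the identified column dynamics.

      import numpy as np

      def fopdt_step_response(K, tau, theta, dt, horizon):
          """Sampled step-response coefficients of y(s)/u(s) = K exp(-theta s) / (tau s + 1)."""
          t = np.arange(1, horizon + 1) * dt
          resp = K * (1.0 - np.exp(-(t - theta) / tau))
          resp[t < theta] = 0.0                         # no response during the dead time
          return resp

      # Hypothetical model relating reboiler duty to a stripping-stage temperature
      S = fopdt_step_response(K=0.8, tau=12.0, theta=3.0, dt=1.0, horizon=30)

      # Predicted output over the horizon for a sequence of future control moves du,
      # obtained by superposition of step responses (the core of step-response MPC)
      du = np.zeros(30)
      du[0], du[5] = 1.0, -0.5
      y_pred = np.array([np.dot(S[:k + 1][::-1], du[:k + 1]) for k in range(30)])
      print(y_pred.round(3))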

  11. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead times. Consistent with previous studies, the MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.

  12. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty.

  13. External validation of models predicting the individual risk of metachronous peritoneal carcinomatosis from colon and rectal cancer.

    Science.gov (United States)

    Segelman, J; Akre, O; Gustafsson, U O; Bottai, M; Martling, A

    2016-04-01

    To externally validate previously published predictive models of the risk of developing metachronous peritoneal carcinomatosis (PC) after resection of nonmetastatic colon or rectal cancer and to update the predictive model for colon cancer by adding new prognostic predictors. Data from all patients with Stage I-III colorectal cancer identified from a population-based database in Stockholm between 2008 and 2010 were used. We assessed the concordance between the predicted and observed probabilities of PC and utilized proportional-hazard regression to update the predictive model for colon cancer. When applied to the new validation dataset (n = 2011), the colon and rectal cancer risk-score models predicted metachronous PC with a concordance index of 79% and 67%, respectively. After adding the subclasses of pT3 and pT4 stage and mucinous tumour to the colon cancer model, the concordance index increased to 82%. In validation of external and recent cohorts, the predictive accuracy was strong in colon cancer and moderate in rectal cancer patients. The model can be used to identify high-risk patients for planned second-look laparoscopy/laparotomy for possible subsequent cytoreductive surgery and hyperthermic intraperitoneal chemotherapy. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.

  14. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  15. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  16. Ensemble-based Kalman Filters in Strongly Nonlinear Dynamics

    Institute of Scientific and Technical Information of China (English)

    Zhaoxia PU; Joshua HACKER

    2009-01-01

    This study examines the effectiveness of ensemble Kalman filters in data assimilation with the strongly nonlinear dynamics of the Lorenz-63 model, and in particular their use in predicting the regime transition that occurs when the model jumps from one basin of attraction to the other. Four configurations of ensemble-based Kalman filtering data assimilation techniques, including the ensemble Kalman filter, ensemble adjustment Kalman filter, ensemble square root filter and ensemble transform Kalman filter, are evaluated in terms of their ability to predict the regime transition (also called phase transition) and are compared in terms of their sensitivity to both observational and sampling errors. The sensitivity of each ensemble-based filter to the size of the ensemble is also examined.
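
    One of the evaluated configurations, a stochastic (perturbed-observation) ensemble Kalman filter, can be sketched in Python for the Lorenz-63 model as follows; the ensemble size, observation error and assimilation interval are illustrative choices rather than the settings of the study.

      import numpy as np

      def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          return np.array([sigma * (x[1] - x[0]),
                           x[0] * (rho - x[2]) - x[1],
                           x[0] * x[1] - beta * x[2]])

      def rk4(x, dt):
          k1 = lorenz63(x)
          k2 = lorenz63(x + 0.5 * dt * k1)
          k3 = lorenz63(x + 0.5 * dt * k2)
          k4 = lorenz63(x + dt * k3)
          return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      rng = np.random.default_rng(0)
      dt, steps_per_obs, n_cycles, n_ens = 0.01, 25, 200, 20
      obs_err = 2.0                                     # observation error std, all variables observed
      R = obs_err ** 2 * np.eye(3)

      truth = np.array([1.0, 1.0, 1.0])
      ens = truth + rng.normal(0.0, 2.0, size=(n_ens, 3))

      rmse = []
      for _ in range(n_cycles):
          for _ in range(steps_per_obs):                # forecast step for truth and ensemble
              truth = rk4(truth, dt)
              ens = np.array([rk4(m, dt) for m in ens])
          obs = truth + rng.normal(0.0, obs_err, 3)

          # Analysis: perturbed-observation EnKF update with H = I
          anomalies = ens - ens.mean(axis=0)
          P = anomalies.T @ anomalies / (n_ens - 1)     # sample forecast covariance
          K = P @ np.linalg.inv(P + R)                  # Kalman gain
          perturbed = obs + rng.normal(0.0, obs_err, size=(n_ens, 3))
          ens = ens + (perturbed - ens) @ K.T
          rmse.append(np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2)))

      print("mean analysis RMSE:", round(float(np.mean(rmse)), 3))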

  17. Can phenological models predict tree phenology accurately under climate change conditions?

    Science.gov (United States)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

    The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
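
    A one-phase (thermal-time) model of the kind described above can be sketched in a few lines of Python: budburst is predicted once forcing units accumulated from a fixed start date exceed a critical sum. The start date, base temperature and critical sum below are illustrative placeholders, not fitted values.

      import numpy as np

      def predict_budburst(daily_mean_temp, start_doy=1, base_temp=5.0, crit_sum=150.0):
          """Day of year at which growing degree days accumulated from start_doy
          (above base_temp) first reach crit_sum; daily_mean_temp[d] is day d+1."""
          forcing = np.maximum(daily_mean_temp[start_doy - 1:] - base_temp, 0.0)
          cum = np.cumsum(forcing)
          idx = int(np.argmax(cum >= crit_sum))
          if cum[idx] < crit_sum:
              return None                               # critical sum never reached that year
          return start_doy + idx

      # Synthetic seasonal cycle of daily mean temperature for one year (deg C)
      doy = np.arange(1, 366)
      temps = 10.0 - 12.0 * np.cos(2.0 * np.pi * (doy - 15) / 365.0)
      print("Predicted budburst day of year:", predict_budburst(temps))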

  18. Ruling out a strongly interacting standard Higgs model

    International Nuclear Information System (INIS)

    Riesselmann, K.; Willenbrock, S.

    1997-01-01

    Previous work has suggested that perturbation theory is unreliable for Higgs- and Goldstone-boson scattering, at energies above the Higgs-boson mass, for relatively small values of the Higgs quartic coupling λ(μ). By performing a summation of nonlogarithmic terms, we show that perturbation theory is in fact reliable up to relatively large coupling. This eliminates the possibility of a strongly interacting standard Higgs model at energies above the Higgs-boson mass, complementing earlier studies which excluded strong interactions at energies near the Higgs-boson mass. The summation can be formulated in terms of an appropriate scale in the running coupling, μ=√(s)/e∼√(s)/2.7, so it can be incorporated easily in renormalization-group-improved tree-level amplitudes as well as higher-order calculations. copyright 1996 The American Physical Society

  19. Remnants of strong tidal interactions

    International Nuclear Information System (INIS)

    Mcglynn, T.A.

    1990-01-01

    This paper examines the properties of stellar systems that have recently undergone a strong tidal shock, i.e., a shock which removes a significant fraction of the particles in the system, and where the shocked system has a much smaller mass than the producer of the tidal field. N-body calculations of King models shocked in a variety of ways are performed, and the consequences of the shocks are investigated. The results confirm the prediction of Jaffe for shocked systems. Several models are also run where the tidal forces on the system are constant, simulating a circular orbit around a primary, and the development of tidal radii under these static conditions appears to be a mild process which does not dramatically affect material that is not stripped. The tidal radii are about twice as large as classical formulas would predict. Remnant density profiles are compared with a sample of elliptical galaxies, and the implications of the results for the development of stellar populations and galaxies are considered. 38 refs

  20. Intra- and interspecies gene expression models for predicting drug response in canine osteosarcoma.

    Science.gov (United States)

    Fowles, Jared S; Brown, Kristen C; Hess, Ann M; Duval, Dawn L; Gustafson, Daniel L

    2016-02-19

    Genomics-based predictors of drug response have the potential to improve outcomes associated with cancer therapy. Osteosarcoma (OS), the most common primary bone cancer in dogs, is commonly treated with adjuvant doxorubicin or carboplatin following amputation of the affected limb. We evaluated the use of gene-expression based models built in an intra- or interspecies manner to predict chemosensitivity and treatment outcome in canine OS. Models were built and evaluated using microarray gene expression and drug sensitivity data from human and canine cancer cell lines, and canine OS tumor datasets. The "COXEN" method was utilized to filter gene signatures between human and dog datasets based on strong co-expression patterns. Models were built using linear discriminant analysis via the misclassification penalized posterior algorithm. The best doxorubicin model involved genes identified in human lines that were co-expressed and trained on canine OS tumor data, which accurately predicted clinical outcome in 73 % of dogs (p = 0.0262, binomial). The best carboplatin model utilized canine lines for gene identification and model training, with canine OS tumor data for co-expression. Dogs whose treatment matched our predictions had significantly better clinical outcomes than those that didn't (p = 0.0006, Log Rank), and this predictor significantly associated with longer disease free intervals in a Cox multivariate analysis (hazard ratio = 0.3102, p = 0.0124). Our data show that intra- and interspecies gene expression models can successfully predict response in canine OS, which may improve outcome in dogs and serve as pre-clinical validation for similar methods in human cancer research.

  1. The differences in hadronic cross-sections and the residues of secondary reggeons in the quark-gluon model for strong interactions

    International Nuclear Information System (INIS)

    Kaidalov, A.B.; Volkovitsky, P.E.

    1981-01-01

    In the framework of the quark-gluon picture for strong interactions based on the topological expansion and the string model, the relations between the differences of hadronic cross-sections are obtained. The system of equations for the contribution of secondary reggeons (ρ, ω, f, A2, φ and f' poles) to the elastic scattering amplitudes for arbitrary hadrons is derived. It is shown that this system has a factorized solution and that the secondary reggeon residues for all hadrons are expressed in terms of the universal function g(t). The model predictions are in good agreement with experimental data.

  2. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
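
    A minimal Python sketch of the kind of simple averaging baseline found to work well at 15-min granularity is shown below; the synthetic load series and column names are hypothetical, not the dataset used in the paper.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      idx = pd.date_range("2024-01-01", periods=96 * 28, freq="15min")   # four weeks of 15-min data
      slot_of_day = idx.hour * 4 + idx.minute // 15
      load = 50.0 + 20.0 * np.sin(2.0 * np.pi * slot_of_day / 96) + rng.normal(0.0, 3.0, len(idx))
      df = pd.DataFrame({"kW": load, "slot": slot_of_day, "weekday": idx.weekday}, index=idx)

      # Averaging baseline: predict each 15-min slot as the mean of the same slot on the
      # same weekday over the preceding weeks (a common utility-style baseline).
      train, test = df[: 96 * 21], df[96 * 21 :]
      baseline = train.groupby(["weekday", "slot"])["kW"].mean().rename("pred")
      test = test.merge(baseline, left_on=["weekday", "slot"], right_index=True)

      mape = (np.abs(test["pred"] - test["kW"]) / test["kW"]).mean() * 100.0
      print(f"MAPE of the averaging baseline: {mape:.1f}%")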

  3. Efem vs. XFEM: a comparative study for modeling strong discontinuity in geomechanics

    OpenAIRE

    Das, Kamal C.; Ausas, Roberto Federico; Segura Segarra, José María; Narang, Ankur; Rodrigues, Eduardo; Carol, Ignacio; Lakshmikantha, Ramasesha Mookanahallipatna; Mello, U.

    2015-01-01

    Modeling of large faults or weak planes as strong and weak discontinuities is of major importance for assessing the geomechanical behaviour of mining/civil tunnels, reservoirs, etc. For modelling fractures in geomechanics, prior art has been limited to interface elements, which suffer from numerical instability and require faults to be aligned with element edges. In this paper, we present a comparative study of finite elements for capturing strong discontinuities by means of elemental (EFEM)...

  4. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
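
    The dense-versus-sparse contrast discussed above can be illustrated with a short Python sketch comparing a ridge and a lasso predictor on synthetic genotypes; the marker counts, effect-size distributions and cross-validation settings are illustrative assumptions, not the cohort analyses of the paper.

      import numpy as np
      from sklearn.linear_model import RidgeCV, LassoCV
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n, p = 500, 1000
      X = rng.binomial(2, 0.3, size=(n, p)).astype(float)     # SNP genotypes coded 0/1/2

      # Polygenic trait with many small effects (analogous to height or BMI)
      beta_dense = rng.normal(0.0, 0.05, p)
      y_dense = X @ beta_dense + rng.normal(0.0, 1.0, n)

      # Trait with a few moderate effects (analogous to HDL)
      beta_sparse = np.zeros(p)
      beta_sparse[:20] = rng.normal(0.0, 0.8, 20)
      y_sparse = X @ beta_sparse + rng.normal(0.0, 1.0, n)

      for name, y in [("many small effects", y_dense), ("few moderate effects", y_sparse)]:
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
          ridge = RidgeCV(alphas=np.logspace(-1, 4, 20)).fit(X_tr, y_tr)
          lasso = LassoCV(cv=5, max_iter=5000, random_state=0).fit(X_tr, y_tr)
          print(f"{name}: ridge R^2 = {ridge.score(X_te, y_te):.2f}, "
                f"lasso R^2 = {lasso.score(X_te, y_te):.2f}")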

  5. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones

  6. Constraints on cosmological models from strong gravitational lensing systems

    International Nuclear Information System (INIS)

    Cao, Shuo; Pan, Yu; Zhu, Zong-Hong; Biesiada, Marek; Godlowski, Wlodzimierz

    2012-01-01

    Strong lensing has developed into an important astrophysical tool for probing both cosmology and galaxies (their structure, formation, and evolution). Using the gravitational lensing theory and cluster mass distribution model, we try to collect relatively complete observational data concerning the Hubble constant independent ratio between two angular diameter distances D_ds/D_s from various large systematic gravitational lens surveys and lensing by galaxy clusters combined with X-ray observations, and check the possibility of using it in the future as complementary to other cosmological probes. On one hand, strongly gravitationally lensed quasar-galaxy systems create such a new opportunity by combining stellar kinematics (central velocity dispersion measurements) with lensing geometry (Einstein radius determination from position of images). We apply such a method to a combined gravitational lens data set including 70 data points from Sloan Lens ACS (SLACS) and the Lens Structure and Dynamics survey (LSD). On the other hand, a new sample of 10 lensing galaxy clusters with redshifts ranging from 0.1 to 0.6, carefully selected from strong gravitational lensing systems with both X-ray satellite observations and optical giant luminous arcs, is also used to constrain three dark energy models (ΛCDM, constant w and CPL) under a flat universe assumption. For the full sample (n = 80) and the restricted sample (n = 46) including 36 two-image lenses and 10 strong lensing arcs, we obtain relatively good fitting values of basic cosmological parameters, which generally agree with the results already known in the literature. These results encourage further development of this method and its use on larger samples obtained in the future.
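
    The Hubble-constant independence of the distance ratio can be made explicit with a short Python sketch computing D_ds/D_s in a flat ΛCDM cosmology; the redshifts and matter density below are illustrative, and the constant-w and CPL models used in the paper would simply replace the expansion rate E(z).

      import numpy as np
      from scipy.integrate import quad

      def inv_E(z, omega_m):
          """1/E(z) for flat LambdaCDM."""
          return 1.0 / np.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))

      def distance_ratio(z_lens, z_source, omega_m=0.3):
          """D_ds/D_s in a flat universe; the Hubble constant cancels in the ratio."""
          chi_s, _ = quad(inv_E, 0.0, z_source, args=(omega_m,))       # comoving distance to source
          chi_ds, _ = quad(inv_E, z_lens, z_source, args=(omega_m,))   # lens-to-source comoving distance
          # D_A(z1, z2) = (chi(z2) - chi(z1)) / (1 + z2) in a flat universe, so (1 + z_s) cancels
          return chi_ds / chi_s

      print(distance_ratio(z_lens=0.3, z_source=1.0, omega_m=0.3))
      print(distance_ratio(z_lens=0.3, z_source=1.0, omega_m=1.0))     # sensitivity to Omega_m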

  7. Constraints on cosmological models from strong gravitational lensing systems

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Shuo; Pan, Yu; Zhu, Zong-Hong [Department of Astronomy, Beijing Normal University, Beijing 100875 (China); Biesiada, Marek [Department of Astrophysics and Cosmology, Institute of Physics, University of Silesia, Uniwersytecka 4, 40-007 Katowice (Poland); Godlowski, Wlodzimierz, E-mail: baodingcaoshuo@163.com, E-mail: panyu@cqupt.edu.cn, E-mail: biesiada@us.edu.pl, E-mail: godlowski@uni.opole.pl, E-mail: zhuzh@bnu.edu.cn [Institute of Physics, Opole University, Oleska 48, 45-052 Opole (Poland)

    2012-03-01

    Strong lensing has developed into an important astrophysical tool for probing both cosmology and galaxies (their structure, formation, and evolution). Using the gravitational lensing theory and cluster mass distribution model, we try to collect relatively complete observational data concerning the Hubble constant independent ratio between two angular diameter distances D_ds/D_s from various large systematic gravitational lens surveys and lensing by galaxy clusters combined with X-ray observations, and check the possibility of using it in the future as complementary to other cosmological probes. On one hand, strongly gravitationally lensed quasar-galaxy systems create such a new opportunity by combining stellar kinematics (central velocity dispersion measurements) with lensing geometry (Einstein radius determination from position of images). We apply such a method to a combined gravitational lens data set including 70 data points from Sloan Lens ACS (SLACS) and the Lens Structure and Dynamics survey (LSD). On the other hand, a new sample of 10 lensing galaxy clusters with redshifts ranging from 0.1 to 0.6, carefully selected from strong gravitational lensing systems with both X-ray satellite observations and optical giant luminous arcs, is also used to constrain three dark energy models (ΛCDM, constant w and CPL) under a flat universe assumption. For the full sample (n = 80) and the restricted sample (n = 46) including 36 two-image lenses and 10 strong lensing arcs, we obtain relatively good fitting values of basic cosmological parameters, which generally agree with the results already known in the literature. These results encourage further development of this method and its use on larger samples obtained in the future.

  8. QCD : the theory of strong interactions Conference MT17

    CERN Multimedia

    2001-01-01

    The theory of strong interactions, Quantum Chromodynamics (QCD), predicts that the strong interaction is transmitted by the exchange of particles called gluons. Unlike photons, the messengers of electromagnetism, which are electrically neutral, gluons carry a strong charge associated with the interaction they mediate. QCD predicts that the strength of the interaction between quarks and gluons becomes weaker at higher energies. LEP has measured the evolution of the strong coupling constant up to energies of 200 GeV and has confirmed this prediction.

  9. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining processes are highly complex, strongly coupled, and governed by many control factors, which makes them prone to failure. To improve the accuracy of fault detection for large mechanical equipment, research on fault trend prediction requires a machining fault trend prediction model built from fault data. Machining data are clustered using a genetic-algorithm-optimized K-means method; features reflecting the correlation dimension of faults are extracted; and the spectral characteristics of abnormal vibration during the machining of complex mechanical parts are analysed using multi-component spectral decomposition and Hilbert empirical mode decomposition. The extracted features and decomposition results form the knowledge base of an intelligent expert system which, combined with big data analysis methods, realizes fault trend prediction for the machining process. Simulation results show that the proposed method predicts machining fault trends more accurately, judges faults in the machining process reliably, and has good application value for analysis and fault diagnosis in the machining process.

  10. Evaluation of DNA bending models in their capacity to predict electrophoretic migration anomalies of satellite DNA sequences.

    Science.gov (United States)

    Matyášek, Roman; Fulneček, Jaroslav; Kovařík, Aleš

    2013-09-01

    DNA containing a sequence that generates a local curvature exhibits a pronounced retardation in electrophoretic mobility. Various theoretical models have been proposed to explain relationship between DNA structural features and migration anomaly. Here, we studied the capacity of 15 static wedge-bending models to predict electrophoretic behavior of 69 satellite monomers derived from four divergent families. All monomers exhibited retarded mobility in PAGE corresponding to retardation factors ranging 1.02-1.54. The curvature varied both within and across the groups and correlated with the number, position, and lengths of A-tracts. Two dinucleotide models provided strong correlation between gel mobility and curvature prediction; two trinucleotide models were satisfactory while remaining dinucleotide models provided intermediate results with reliable prediction for subsets of sequences only. In some cases, similarly shaped molecules exhibited relatively large differences in mobility and vice versa. Generally less accurate predictions were obtained in groups containing less homogeneous sequences possessing distinct structural features. In conclusion, relatively universal theoretical models were identified suitable for the analysis of natural sequences known to harbor relatively moderate curvature. These models could be potentially applied to genome wide studies. However, in silico predictions should be viewed in context of experimental measurement of intrinsic DNA curvature. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  12. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  13. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model.
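
    The mixture idea can be sketched in Python in a reduced form, a two-component Poisson mixture without covariates fitted by EM; the full concomitant-variable regression models of the paper additionally attach covariates to the component rates and mixing weights. The counts and starting values below are synthetic.

      import numpy as np
      from scipy.stats import poisson

      rng = np.random.default_rng(0)
      # Synthetic event counts from a low-risk and a high-risk group
      counts = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 100)])

      pi, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])   # initial mixing weights and rates
      for _ in range(200):
          # E-step: responsibility of each component for each observation
          weighted = pi * poisson.pmf(counts[:, None], lam)
          resp = weighted / weighted.sum(axis=1, keepdims=True)
          # M-step: update mixing weights and component rates
          pi = resp.mean(axis=0)
          lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

      print("mixing weights:", pi.round(2), " rates:", lam.round(2))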

  14. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a Zero Inflated Poisson Mixture Regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611

  15. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is at the core of concrete pavement maintenance and design. Highway agencies are routinely faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model is a useful tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. The paper suggests that the way forward for performance prediction models is to combine the advantages of the different models to obtain better accuracy.
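
    As a hedged illustration of the Markov chain (MC) approach mentioned above, the sketch below propagates a condition-state distribution through an assumed yearly transition matrix. The states and transition probabilities are invented for illustration and are not the paper's calibrated values.

```python
# Hypothetical Markov-chain pavement deterioration sketch (illustrative numbers only).
import numpy as np

# Condition states: 0 = good, 1 = fair, 2 = poor (assumed discretization).
P = np.array([
    [0.85, 0.13, 0.02],   # from good
    [0.00, 0.80, 0.20],   # from fair (no self-repair without maintenance)
    [0.00, 0.00, 1.00],   # poor is absorbing until rehabilitation
])

state = np.array([1.0, 0.0, 0.0])   # all sections start in "good"
for year in range(1, 11):
    state = state @ P               # one-year transition
    print(f"year {year:2d}: good={state[0]:.2f} fair={state[1]:.2f} poor={state[2]:.2f}")
```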

  16. Strongly Coupled Models with a Higgs-like Boson

    Science.gov (United States)

    Pich, Antonio; Rosell, Ignasi; José Sanz-Cillero, Juan

    2013-11-01

    Considering the one-loop calculation of the oblique S and T parameters, we have presented a study of the viability of strongly-coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation has been done using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we have demonstrated that strongly coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule. We wish to thank the organizers of LHCP 2013 for the pleasant conference. This work has been supported in part by the Spanish Government and the European Commission [FPA2010-17747, FPA2011-23778, AIC-D-2011-0818, SEV-2012-0249 (Severo Ochoa Program), CSD2007-00042 (Consolider Project CPAN)], the Generalitat Valenciana [PrometeoII/2013/007] and the Comunidad de Madrid [HEPHACOS S2009/ESP-1473].

  17. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend …

  18. The organization of irrational beliefs in posttraumatic stress symptomology: testing the predictions of REBT theory using structural equation modelling.

    Science.gov (United States)

    Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel

    2014-01-01

    This study directly tests a central prediction of rational emotive behaviour therapy (REBT) that has received little empirical attention regarding the core and intermediate beliefs in the development of posttraumatic stress symptoms. A theoretically consistent REBT model of posttraumatic stress disorder (PTSD) was examined using structural equation modelling techniques among a sample of 313 trauma-exposed military and law enforcement personnel. The REBT model of PTSD provided a good fit to the data, χ²(356) = 599.173, p < .001, … depreciation beliefs. Results were consistent with the predictions of REBT theory and provide strong empirical support that the cognitive variables described by REBT theory are critical cognitive constructs in the prediction of PTSD symptomology. © 2013 Wiley Periodicals, Inc.

  19. A minimal model of the Atlantic Multidecadal Variability: its genesis and predictability

    Energy Technology Data Exchange (ETDEWEB)

    Ou, Hsien-Wang [Lamont-Doherty Earth Observatory of Columbia University, Department of Earth and Environmental Sciences, Palisades, NY (United States)

    2012-02-15

    Through a box model of the subpolar North Atlantic, we examine the genesis and predictability of the Atlantic Multidecadal Variability (AMV), posited as a linear perturbation sustained by the stochastic atmosphere. Postulating a density-dependent thermohaline circulation (THC), the latter would strongly differentiate the thermal and saline damping, and facilitate a negative feedback between the two fields. This negative feedback preferentially suppresses the low-frequency thermal variance to render a broad multidecadal peak bounded by the thermal and saline damping times. We offer this "differential variance suppression" as an alternative paradigm of the AMV in place of the "damped oscillation" - the latter is generally not allowed by the deterministic dynamics and in any event bears no relation to the thermal peak. With the validated dynamics, we then assess the AMV predictability based on the relative entropy - a difference of the forecast and climatological probability distributions, which decays through both error growth and dynamical damping. Since the stochastic forcing is mainly in the surface heat flux, the thermal noise grows rapidly and, together with its climatological variance limited by the THC-aided thermal damping, strongly curtails the thermal predictability. The latter may be prolonged if the initial thermal and saline anomalies are of the same sign, but even rare events with less than 1% chance of occurrence yield a predictable time that is well short of a decade; we contend therefore that the AMV is in effect unpredictable. (orig.)

  20. A study of single multiplicative neuron model with nonlinear filters for hourly wind speed prediction

    International Nuclear Information System (INIS)

    Wu, Xuedong; Zhu, Zhiyu; Su, Xunliang; Fan, Shaosheng; Du, Zhaoping; Chang, Yanchao; Zeng, Qingjun

    2015-01-01

    Wind speed prediction is an important method to guarantee that wind energy is integrated into the whole power system smoothly. However, wind power has a non-schedulable nature due to the strongly stochastic and dynamically uncertain nature of wind speed. Therefore, wind speed prediction is an indispensable requirement for power system operators. Two new approaches for hourly wind speed prediction are developed in this study by integrating the single multiplicative neuron model and iterated nonlinear filters for updating the wind speed sequence accurately. In the presented methods, a nonlinear state-space model is first formed based on the single multiplicative neuron model, and then the iterated nonlinear filters are employed to perform dynamic state estimation on the wind speed sequence with stochastic uncertainty. The suggested approaches are demonstrated using three cases of wind speed data and are compared with autoregressive moving average, artificial neural network, kernel ridge regression based residual active learning, and single multiplicative neuron model methods. Three types of prediction errors, the mean absolute error improvement ratio, and running time are employed for comparing the performance of the different models. Comparison results from Tables 1–3 indicate that the presented strategies have much better performance for hourly wind speed prediction than the other technologies. - Highlights: • Developed two novel hybrid modeling methods for hourly wind speed prediction. • Uncertainty and fluctuations of wind speed can be better explained by the novel methods. • Proposed strategies have online adaptive learning ability. • Proposed approaches have shown better performance compared with existing approaches. • Comparison and analysis of two proposed novel models for three cases are provided
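
    The single multiplicative neuron model at the heart of the two approaches can be written in a few lines. The sketch below shows only the neuron's forward pass on a lagged wind-speed window; the weights and biases are random stand-ins, the data are synthetic, and the iterated nonlinear filtering step used in the paper for state estimation is not reproduced here.

```python
# Minimal single multiplicative neuron model (forward pass only);
# weights/biases are placeholders, not estimated by the paper's filters.
import numpy as np

def smn_predict(x, w, b):
    """Single multiplicative neuron: logistic of the product of (w_i*x_i + b_i)."""
    net = np.prod(w * x + b)
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(1)
wind = rng.uniform(2.0, 12.0, 200)            # synthetic hourly wind speed (m/s)
wind_scaled = (wind - wind.min()) / (wind.max() - wind.min())

lags = 4                                      # use the last 4 hours as inputs
w = rng.normal(size=lags)
b = rng.normal(size=lags)

x = wind_scaled[-lags:]                       # most recent window
next_scaled = smn_predict(x, w, b)
next_speed = next_scaled * (wind.max() - wind.min()) + wind.min()
print(f"one-step-ahead wind speed estimate: {next_speed:.2f} m/s")
```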

  1. Bayesian uncertainty assessment of flood predictions in ungauged urban basins for conceptual rainfall-runoff models

    Directory of Open Access Journals (Sweden)

    A. E. Sikorska

    2012-04-01

    Urbanization and the resulting land-use change strongly affect the water cycle and runoff processes in watersheds. Unfortunately, small urban watersheds, which are most affected by urban sprawl, are mostly ungauged. This makes it intrinsically difficult to assess the consequences of urbanization. Above all, it is unclear how to reliably assess the predictive uncertainty given the structural deficits of the applied models. In this study, we therefore investigate the uncertainty of flood predictions in ungauged urban basins from structurally uncertain rainfall-runoff models. To this end, we suggest a procedure to explicitly account for input uncertainty and model structure deficits using Bayesian statistics with a continuous-time autoregressive error model. In addition, we propose a concise procedure to derive prior parameter distributions from base data and successfully apply the methodology to an urban catchment in Warsaw, Poland. Based on our results, we are able to demonstrate that the autoregressive error model greatly helps to meet the statistical assumptions and to compute reliable prediction intervals. In our study, we found that predicted peak flows were up to 7 times higher than observations. This was reduced to 5 times with Bayesian updating, using only a few discharge measurements. In addition, our analysis suggests that imprecise rainfall information and model structure deficits contribute most to the total prediction uncertainty. In the future, flood predictions in ungauged basins will become more important due to ongoing urbanization as well as anthropogenic and climatic changes. Thus, providing reliable measures of uncertainty is crucial to support decision making.
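
    A toy version of the autoregressive error idea appears below, assuming a discrete AR(1) error on top of a deterministic runoff prediction. The model output, AR coefficient, and noise level are all synthetic and only meant to show how autocorrelated errors translate into prediction intervals; the study itself uses a continuous-time formulation within a Bayesian framework.

```python
# Hedged toy example: AR(1) error model on top of a deterministic prediction.
import numpy as np

rng = np.random.default_rng(2)
T = 100
model_output = 5.0 + 2.0 * np.sin(np.linspace(0, 4 * np.pi, T))   # fake hydrograph

phi, sigma = 0.8, 0.5        # assumed AR(1) coefficient and innovation std
n_samples = 2000

# Simulate correlated residuals and add them to the deterministic prediction.
eps = np.zeros((n_samples, T))
for t in range(1, T):
    eps[:, t] = phi * eps[:, t - 1] + rng.normal(0.0, sigma, n_samples)
predictive = model_output + eps

lower = np.percentile(predictive, 2.5, axis=0)
upper = np.percentile(predictive, 97.5, axis=0)
print("95% interval width at t=50:", round(upper[50] - lower[50], 2))
```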

  2. QCD : the theory of strong interactions Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    The theory of strong interactions, Quantum Chromodynamics (QCD), predicts that the strong interaction is transmitted by the exchange of particles called gluons. Unlike the messengers of electromagnetism - photons, which are electrically neutral - gluons carry a strong charge associated with the interaction they mediate. QCD predicts that the strength of the interaction between quarks and gluons becomes weaker at higher energies. LEP has measured the evolution of the strong coupling constant up to energies of 200 GeV and has confirmed this prediction.

  3. Modelling and prediction of crop losses from NOAA polar-orbiting operational satellites

    Directory of Open Access Journals (Sweden)

    Felix Kogan

    2016-05-01

    Weather-related crop losses have always been a concern for farmers, governments, traders, and policy-makers for the purposes of balanced food supply and demand, trade, and distribution of aid to nations in need. Among weather disasters, drought plays a major role in large-scale crop losses. This paper discusses the utility of operational satellite-based vegetation health (VH) indices for modelling cereal yield and for early warning of drought-related crop losses. The indices were tested in Saratov oblast (SO), one of the principal grain-growing regions of Russia. Correlation and regression analysis were applied to model cereal yield from VH indices during 1982–2001. A strong correlation between mean SO cereal yield and VH indices was found during the critical period of cereals, which starts two to three weeks before and ends two to three weeks after the heading stage. Several models were constructed in which VH indices served as independent variables (predictors). The models were validated independently against SO cereal yield during 1982–2012. Drought-related cereal yield losses can be predicted three months in advance of harvest and six to eight months before the official grain production statistic is released. The error of production loss prediction is 7%–10%, dropping to 3%–5% in years of intensive droughts.
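
    A hedged sketch of the correlation/regression step follows: ordinary least squares of cereal yield on a vegetation health index over the critical weeks. The yield and index values are synthetic stand-ins for the Saratov oblast series, and the single-predictor form is a simplification of the paper's models.

```python
# Illustrative only: regress cereal yield on a vegetation-health (VH) index.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
years = 20
vh_critical = rng.uniform(20, 60, years)                 # VH index near heading stage
yield_t_ha = 0.8 + 0.02 * vh_critical + rng.normal(0, 0.1, years)

model = LinearRegression().fit(vh_critical.reshape(-1, 1), yield_t_ha)
r2 = model.score(vh_critical.reshape(-1, 1), yield_t_ha)
print(f"slope={model.coef_[0]:.3f} t/ha per VH unit, R^2={r2:.2f}")

# Early-warning style use: predict yield for a drought year with a low VH index.
pred = float(model.predict([[15.0]])[0])
print(f"predicted yield at VH=15: {pred:.2f} t/ha")
```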

  4. A review of model predictive control: moving from linear to nonlinear design methods

    International Nuclear Information System (INIS)

    Nandong, J.; Samyudia, Y.; Tade, M.O.

    2006-01-01

    Linear model predictive control (LMPC) is now considered an industrial control standard in the process industry. Its extension to nonlinear cases, however, has not yet gained wide acceptance for several reasons, e.g. the excessively heavy computational load and effort, which prevent its practical implementation in real-time control. The application of nonlinear MPC (NMPC) is advantageous for processes with strong nonlinearity or when the operating points are frequently moved from one set point to another due to, for instance, changes in market demands. Much effort has been dedicated towards improving the computational efficiency of NMPC as well as its stability analysis. This paper provides a review of alternative ways of extending linear MPC to the nonlinear case. We also highlight the critical issues pertinent to the applications of NMPC and discuss possible solutions to address these issues. In addition, we outline the future research trend in the area of model predictive control by emphasizing the potential applications of multi-scale process models within NMPC.

  5. Orbifolds and Exact Solutions of Strongly-Coupled Matrix Models

    Science.gov (United States)

    Córdova, Clay; Heidenreich, Ben; Popolitov, Alexandr; Shakirov, Shamil

    2018-02-01

    We find an exact solution to strongly-coupled matrix models with a single-trace monomial potential. Our solution yields closed form expressions for the partition function as well as averages of Schur functions. The results are fully factorized into a product of terms linear in the rank of the matrix and the parameters of the model. We extend our formulas to include both logarithmic and finite-difference deformations, thereby generalizing the celebrated Selberg and Kadell integrals. We conjecture a formula for correlators of two Schur functions in these models, and explain how our results follow from a general orbifold-like procedure that can be applied to any one-matrix model with a single-trace potential.

  6. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under the current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.

  7. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  8. Copula Entropy coupled with Wavelet Neural Network Model for Hydrological Prediction

    Science.gov (United States)

    Wang, Yin; Yue, JiGuang; Liu, ShuGuang; Wang, Li

    2018-02-01

    Artificial neural networks (ANN) have been widely used in hydrological forecasting. In this paper an attempt has been made to find an alternative method for hydrological prediction by combining copula entropy (CE) with a wavelet neural network (WNN). CE theory permits the calculation of mutual information (MI) to select input variables, which avoids the limitations of traditional linear correlation (LCC) analysis. Wavelet analysis can provide the exact locality of any changes in the dynamical patterns of the sequence and, coupled with the strong nonlinear fitting ability of an ANN, the WNN model was able to provide a good fit to the hydrological data. Finally, the hybrid model (CE+WNN) has been applied to daily water levels of the Taihu Lake Basin and compared with CE ANN, LCC WNN and LCC ANN. Results showed that the hybrid model produced better results in estimating the hydrograph properties than the latter models.
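
    The input-selection idea above (ranking candidate predictors by mutual information rather than linear correlation) can be sketched as follows. The candidate lag features are invented, and scikit-learn's mutual information estimator is used here as a stand-in for the copula-entropy calculation described in the paper.

```python
# Hedged sketch: rank candidate inputs by mutual information vs. linear correlation.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(4)
n = 1000
rain_lag1 = rng.gamma(2.0, 2.0, n)
rain_lag2 = rng.gamma(2.0, 2.0, n)
level_lag1 = rng.normal(3.0, 0.5, n)

# Water level with a nonlinear dependence on rain_lag1 (weak linear correlation).
water_level = 0.5 * level_lag1 + np.sqrt(rain_lag1) + rng.normal(0, 0.1, n)

X = np.column_stack([rain_lag1, rain_lag2, level_lag1])
names = ["rain_lag1", "rain_lag2", "level_lag1"]

mi = mutual_info_regression(X, water_level, random_state=0)
corr = [abs(np.corrcoef(X[:, j], water_level)[0, 1]) for j in range(X.shape[1])]
for name, m, c in zip(names, mi, corr):
    print(f"{name:11s}  MI={m:.3f}  |linear r|={c:.3f}")
```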

  9. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing …

  10. Multiple Model Predictive Hybrid Feedforward Control of Fuel Cell Power Generation System

    Directory of Open Access Journals (Sweden)

    Long Wu

    2018-02-01

    The solid oxide fuel cell (SOFC) is widely considered an alternative solution within the family of sustainable distributed generation. Its load flexibility enables it to adjust the power output to meet the requirements of power grid balance. Although promising, its control is challenging when faced with load changes, during which the output voltage must be maintained constant and the fuel utilization rate kept within a safe range. Moreover, the multivariable coupling and strong nonlinearity across wide-range operating conditions make the control even more intractable. To this end, this paper develops a multiple model predictive control strategy for reliable SOFC operation. The resistance load is regarded as a measurable disturbance, which is fed to the model predictive controller as feedforward compensation. The coupling is accommodated by the receding horizon optimization. The nonlinearity is mitigated by multiple linear models, the weighted sum of which serves as the final control action. The merits of the proposed control structure are demonstrated by the simulation results.

  11. Explicit Nonlinear Model Predictive Control for a Saucer-Shaped Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Zhihui Xing

    2013-01-01

    A lifting body unmanned aerial vehicle (UAV) generates lift with its body and shows many significant advantages due to its particular shape, such as a huge loading space, small wetted area, high-strength fuselage structure, and large lifting area. However, designing the control law for a lifting body UAV is quite challenging because it has strong nonlinearity and coupling, and usually lacks rudders. In this paper, an explicit nonlinear model predictive control (ENMPC) strategy is employed to design a control law for a saucer-shaped UAV which can be adequately modeled with a rigid 6-degrees-of-freedom (DOF) representation. In ENMPC, the control signal is calculated by approximating the tracking error over the receding horizon by its Taylor-series expansion to any specified order. It retains the advantages of nonlinear model predictive control while eliminating the time-consuming online optimization. The simulation results show that ENMPC is an appropriate strategy for controlling lifting body UAVs and can compensate for the insufficient control surface area.

  12. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
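
    The derived criteria lend themselves to a simple rule-based scorer. The function below encodes the major and minor criteria exactly as described in the abstract; the returned labels are only a plain-language reading of the reported risk levels, not the authors' published scoring code.

```python
# Rule-based reading of the reported fingerprint-verification criteria.
# Thresholds follow the abstract; the label wording is ours, not the authors'.
def verification_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    if dystrophy_area_pct >= 25:          # major criterion
        return "very high risk of verification failure"
    minors = int(long_horizontal_lines) + int(long_vertical_lines)
    if minors == 2:
        return "high risk of verification failure"
    if minors == 1:
        return "low risk of verification failure"
    return "verification almost always passes"

print(verification_risk(30, False, False))
print(verification_risk(10, True, True))
print(verification_risk(5, False, False))
```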

  13. Machine Learning Phases of Strongly Correlated Fermions

    Directory of Open Access Journals (Sweden)

    Kelvin Ch’ng

    2017-08-01

    Machine learning offers an unprecedented perspective for the problem of classifying phases in condensed matter physics. We employ neural-network machine learning techniques to distinguish finite-temperature phases of the strongly correlated fermions on cubic lattices. We show that a three-dimensional convolutional network trained on auxiliary field configurations produced by quantum Monte Carlo simulations of the Hubbard model can correctly predict the magnetic phase diagram of the model at the average density of one (half filling). We then use the network, trained at half filling, to explore the trend in the transition temperature as the system is doped away from half filling. This transfer learning approach predicts that the instability to the magnetic phase extends to at least 5% doping in this region. Our results pave the way for other machine learning applications in correlated quantum many-body systems.
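
    A hedged, minimal PyTorch sketch of the kind of three-dimensional convolutional classifier described above is given below. The lattice size, channel counts, and random labels are placeholders; no quantum Monte Carlo data are involved, and the architecture is not the one used in the paper.

```python
# Toy 3D convolutional phase classifier; architecture and data are placeholders.
import torch
import torch.nn as nn

class PhaseNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, 2)   # e.g. magnetic vs. paramagnetic

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Fake auxiliary-field configurations on a 4x4x4 cubic lattice.
configs = torch.randn(32, 1, 4, 4, 4)
labels = torch.randint(0, 2, (32,))

net = PhaseNet3D()
loss = nn.CrossEntropyLoss()(net(configs), labels)
loss.backward()
print("toy training loss:", float(loss))
```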

  14. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k_H:k_D = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R² = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.

  15. Nonperturbative Dynamics of Strong Interactions from Gauge/Gravity Duality

    Energy Technology Data Exchange (ETDEWEB)

    Grigoryan, Hovhannes [Louisiana State Univ., Baton Rouge, LA (United States)

    2008-08-01

    This thesis studies important dynamical observables of strong interactions such as form factors. It is known that Quantum Chromodynamics (QCD) is a theory which describes strong interactions. For large energies, one can apply perturbative techniques to solve some of the QCD problems. However, for low energies QCD enters into the nonperturbative regime, where different analytical or numerical tools have to be applied to solve problems of strong interactions. The holographic dual model of QCD is such an analytical tool that allows one to solve some nonperturbative QCD problems by translating them into a dual five-dimensional theory defined on some warped Anti de Sitter (AdS) background. Working within the framework of the holographic dual model of QCD, we develop a formalism to calculate form factors and wave functions of vector mesons and pions. As a result, we provide predictions of the electric radius, the magnetic and quadrupole moments which can be directly verified in lattice calculations or even experimentally. To find the anomalous pion form factor, we propose an extension of the holographic model by including the Chern-Simons term required to reproduce the chiral anomaly of QCD. This allows us to find the slope of the form factor with one real and one slightly off-shell photon which appeared to be close to the experimental findings. We also analyze the limit of large virtualities (when the photon is far off-shell) and establish that predictions of the holographic model analytically coincide with those of perturbative QCD with asymptotic pion distribution amplitude. We also study the effects of higher dimensional terms in the AdS/QCD model and show that these terms improve the holographic description towards a more realistic scenario. We show this by calculating corrections to the vector meson form factors and corrections to the observables such as electric radii, magnetic and quadrupole moments.

  16. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    … pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2. Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3. Develop optimization algorithms … Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices … that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected …

  17. The neural correlates of problem states: testing FMRI predictions of a computational model of multitasking.

    Directory of Open Access Journals (Sweden)

    Jelmer P Borst

    BACKGROUND: It has been shown that people can only maintain one problem state, or intermediate mental representation, at a time. When more than one problem state is required, for example in multitasking, performance decreases considerably. This effect has been explained in terms of a problem state bottleneck. METHODOLOGY: In the current study we use the complementary methodologies of computational cognitive modeling and neuroimaging to investigate the neural correlates of this problem state bottleneck. In particular, an existing computational cognitive model was used to generate a priori fMRI predictions for a multitasking experiment in which the problem state bottleneck plays a major role. Hemodynamic responses were predicted for five brain regions, corresponding to five cognitive resources in the model. Most importantly, we predicted the intraparietal sulcus to show a strong effect of the problem state manipulations. CONCLUSIONS: Some of the predictions were confirmed by a subsequent fMRI experiment, while others were not matched by the data. The experiment supported the hypothesis that the problem state bottleneck is a plausible cause of the interference in the experiment and that it could be located in the intraparietal sulcus.

  18. Exchange and spin-fluctuation superconducting pairing in the strong correlation limit of the Hubbard model

    International Nuclear Information System (INIS)

    Plakida, N. M.; Anton, L.; Adam, S. [Department of Theoretical Physics, Horia Hulubei National Institute for Physics and Nuclear Engineering, PO Box MG-6, RO-76900 Bucharest-Magurele (Romania)]; Adam, Gh. [Department of Theoretical Physics, Horia Hulubei National Institute for Physics and Nuclear Engineering, PO Box MG-6, RO-76900 Bucharest-Magurele (Romania)]

    2001-01-01

    A microscopical theory of superconductivity in the two-band singlet-hole Hubbard model, in the strong coupling limit in a paramagnetic state, is developed. The model Hamiltonian is obtained by projecting the p-d model onto an asymmetric Hubbard model with the lower Hubbard subband occupied by one-hole Cu d-like states and the upper Hubbard subband occupied by two-hole p-d singlet states. The model requires only two microscopical parameters, the p-d hybridization parameter t and the charge-transfer gap Δ. It was previously shown to secure an appropriate description of the normal state properties of the high-Tc cuprates. To treat the strong correlations rigorously, the Hubbard operator technique within the projection method for the Green function is used. The Dyson equation is derived. In the molecular field approximation, d-wave superconducting pairing of conventional hole (electron) pairs in one Hubbard subband is found, which is mediated by the exchange interaction given by the interband hopping, J_ij = 4 (t_ij)^2 / Δ. The normal and anomalous components of the self-energy matrix are calculated in the self-consistent Born approximation for the electron-spin-fluctuation scattering mediated by the kinematic interaction of the second order of the intraband hopping. The derived numerical and analytical solutions predict the occurrence of singlet d_{x^2-y^2}-wave pairing both in the d-hole and singlet Hubbard subbands. The gap functions and T_c are calculated for different hole concentrations. The exchange interaction is shown to be the most important pairing interaction in the Hubbard model in the strong correlation limit, while the spin-fluctuation coupling results only in a moderate enhancement of T_c. The smaller weight of the latter comes from two specific features: its vanishing inside the Brillouin zone (BZ) along the lines |k_x| + |k_y| = π pointing towards the hot spots, and the existence of a small energy shell within which the pairing is effective. …

  19. Heat Transfer Characteristics and Prediction Model of Supercritical Carbon Dioxide (SC-CO2) in a Vertical Tube

    Directory of Open Access Journals (Sweden)

    Can Cai

    2017-11-01

    Due to its distinct capability to improve the efficiency of shale gas production, supercritical carbon dioxide (SC-CO2) fracturing has attracted increased attention in recent years. Heat transfer occurs in the transportation and fracturing processes. To better predict and understand the heat transfer of SC-CO2 near the critical region, numerical simulations focusing on a vertical flow pipe were performed. Various turbulence models and turbulent Prandtl numbers (Prt) were evaluated to capture the heat transfer deterioration (HTD). The simulations show that the turbulent Prandtl number model (TWL model) combined with the Shear Stress Transport (SST) k-ω turbulence model accurately predicts the HTD in the critical region. It was found that Prt has a strong effect on the heat transfer prediction. HTD occurred under larger heat flux density conditions, and an acceleration process was observed. Gravity also affects the HTD through buoyancy; HTD did not occur under zero-gravity conditions.

  20. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe floods in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models. The models considered are those of the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centers for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, however with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend to under-predict. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.

  1. Western Validation of a Novel Gastric Cancer Prognosis Prediction Model in US Gastric Cancer Patients.

    Science.gov (United States)

    Woo, Yanghee; Goldner, Bryan; Son, Taeil; Song, Kijun; Noh, Sung Hoon; Fong, Yuman; Hyung, Woo Jin

    2018-03-01

    A novel prediction model for accurate determination of 5-year overall survival of gastric cancer patients was developed by an international collaborative group (G6+). This prediction model was created using a single institution's database of 11,851 Korean patients and included readily available and clinically relevant factors. Already validated using external East Asian cohorts, its applicability in the American population was yet to be determined. Using the Surveillance, Epidemiology, and End Results (SEER) dataset, 2014 release, all patients diagnosed with gastric adenocarcinoma who underwent surgical resection between 2002 and 2012 were selected. Characteristics for analysis included: age, sex, depth of tumor invasion, number of positive lymph nodes, total lymph nodes retrieved, presence of distant metastasis, extent of resection, and histology. The concordance index (C-statistic) was assessed using the novel prediction model and compared with the prognostic index of the seventh edition of the TNM staging system. Of the 26,019 gastric cancer patients identified from the SEER database, 15,483 had complete datasets. Validation of the novel prediction tool revealed a C-statistic of 0.762 (95% CI 0.754 to 0.769) compared with the seventh TNM staging model, C-statistic 0.683 (95% CI 0.677 to 0.689) (p < 0.001), validating the novel prediction model for gastric cancer in the American patient population. Its superior prediction of the 5-year survival of gastric cancer patients in a large Western cohort strongly supports its global applicability. Importantly, this model allows for accurate prognosis for an increasing number of gastric cancer patients worldwide, including those who received inadequate lymphadenectomy or underwent a noncurative resection. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  2. Predicting climate-induced range shifts: model differences and model reliability.

    Science.gov (United States)

    Joshua J. Lawler; Denis White; Ronald P. Neilson; Andrew R. Blaustein

    2006-01-01

    Predicted changes in the global climate are likely to cause large shifts in the geographic ranges of many plant and animal species. To date, predictions of future range shifts have relied on a variety of modeling approaches with different levels of model accuracy. Using a common data set, we investigated the potential implications of alternative modeling approaches for...

  3. Energy Decay Laws in Strongly Anisotropic Magnetohydrodynamic Turbulence

    International Nuclear Information System (INIS)

    Bigot, Barbara; Galtier, Sebastien; Politano, Helene

    2008-01-01

    We investigate the influence of a uniform magnetic field B0 = B0 e_∥ on energy decay laws in incompressible magnetohydrodynamic (MHD) turbulence. The nonlinear transfer reduction along B0 is included in a model that distinguishes parallel and perpendicular directions, following a phenomenology of Kraichnan. We predict a slowing down of the energy decay due to anisotropy in the limit of strong B0, with distinct power laws for the energy decay of shear- and pseudo-Alfvén waves. Numerical results from the kinetic equations of Alfvén wave turbulence recover these predictions, and MHD numerical results clearly tend to follow them in the lowest perpendicular planes.

  4. Solution of the strong CP problem in models with scalars

    International Nuclear Information System (INIS)

    Dimopoulos, S.

    1978-01-01

    A possible solution to the strong CP problem is pointed out within the context of a Weinberg--Salam model with two Higgs fields coupled in a Peccei--Quinn symmetric fashion. This is done by extending the colour group to a bigger simple group which is broken at some very high energy. The model contains a heavy axion. No old or new U(1) problem re-emerges. 31 references

  5. Predictive Modeling of a Paradigm Mechanical Cooling Tower Model: II. Optimal Best-Estimate Results with Reduced Predicted Uncertainties

    Directory of Open Access Journals (Sweden)

    Ruixian Fang

    2016-09-01

    This work uses the adjoint sensitivity model of the counter-flow cooling tower derived in the accompanying PART I to obtain the expressions and relative numerical rankings of the sensitivities, to all model parameters, of the following model responses: (i) outlet air temperature; (ii) outlet water temperature; (iii) outlet water mass flow rate; and (iv) air outlet relative humidity. These sensitivities are subsequently used within the “predictive modeling for coupled multi-physics systems” (PM_CMPS) methodology to obtain explicit formulas for the predicted optimal nominal values for the model responses and parameters, along with reduced predicted standard deviations for the predicted model parameters and responses. These explicit formulas embody the assimilation of experimental data and the “calibration” of the model’s parameters. The results presented in this work demonstrate that the PM_CMPS methodology reduces the predicted standard deviations to values that are smaller than either the computed or the experimentally measured ones, even for responses (e.g., the outlet water flow rate) for which no measurements are available. These improvements stem from the global characteristics of the PM_CMPS methodology, which combines all of the available information simultaneously in phase-space, as opposed to combining it sequentially, as in current data assimilation procedures.

  6. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  7. Evolutionary Modeling Predicts a Decrease in Postcopulatory Sperm Viability as a Response to Increasing Levels of Sperm Competition

    NARCIS (Netherlands)

    Engqvist, Leif

    Sperm competition has been found to have a strong influence on the evolution of many male and female reproductive traits. Theoretical models have shown that, with increasing levels of sperm competition, males are predicted to increase ejaculate investment, and there is ample empirical evidence

  8. Strong interactions between learned helplessness and risky decision-making in a rat gambling model.

    Science.gov (United States)

    Nobrega, José N; Hedayatmofidi, Parisa S; Lobo, Daniela S

    2016-11-18

    Risky decision-making is characteristic of depression and of addictive disorders, including pathological gambling. However it is not clear whether a propensity to risky choices predisposes to depressive symptoms or whether the converse is the case. Here we tested the hypothesis that rats showing risky decision-making in a rat gambling task (rGT) would be more prone to depressive-like behaviour in the learned helplessness (LH) model. Results showed that baseline rGT choice behaviour did not predict escape deficits in the LH protocol. In contrast, exposure to the LH protocol resulted in a significant increase in risky rGT choices on retest. Unexpectedly, control rats subjected only to escapable stress in the LH protocol showed a subsequent decrease in riskier rGT choices. Further analyses indicated that the LH protocol affected primarily rats with high baseline levels of risky choices and that among these it had opposite effects in rats exposed to LH-inducing stress compared to rats exposed only to the escape trials. Together these findings suggest that while baseline risky decision making may not predict LH behaviour it interacts strongly with LH conditions in modulating subsequent decision-making behaviour. The suggested possibility that stress controllability may be a key factor should be further investigated.

  9. 1D energy transport in a strongly scattering laboratory model

    International Nuclear Information System (INIS)

    Wijk, Kasper van; Scales, John A.; Haney, Matthew

    2004-01-01

    Radiative transfer (RT) theory is often invoked to describe energy propagation in strongly scattering media. Fitting RT to measured wave field intensities is rather different at late times, when the transport is diffusive, than at intermediate times (around one extinction mean free time), when ballistic and diffusive behavior coexist. While there are many examples of late-time RT fits, we describe ultrasonic multiple scattering measurements with RT over the entire range of times--from ballistic to diffusive. In addition to allowing us to retrieve the scattering and absorption mean free paths independently, our results also support theoretical predictions in 1D that suggest an intermediate regime of diffusive (nonlocalized) behavior

  10. Model predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ an explicit model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
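
    A compact sketch of the LQR/Riccati step described above follows: solve the discrete algebraic Riccati equation for an assumed linear model and form the state-feedback gain. The double-integrator dynamics and weights here are stand-ins, not the P2AT robot's actual model.

```python
# LQR gain via the discrete algebraic Riccati equation (illustrative dynamics only).
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # toy double integrator (position, velocity)
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                # state weights
R = np.array([[0.1]])                   # input weight

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain, u = -K x

x = np.array([1.0, 0.0])                # 1 m tracking error, at rest
for _ in range(50):
    u = -K @ x
    x = A @ x + (B @ u).ravel()
print("state after 5 s of closed-loop control:", np.round(x, 4))
```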

  11. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  12. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approaches for developing and validating a risk prediction model. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  13. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  14. Prediction of Thermal Properties of Sweet Sorghum Bagasse as a Function of Moisture Content Using Artificial Neural Networks and Regression Models

    Directory of Open Access Journals (Sweden)

    Gosukonda Ramana

    2017-06-01

    Artificial neural networks (ANN) and traditional regression models were developed for prediction of thermal properties of sweet sorghum bagasse as a function of moisture content and room temperature. Predictions were made for three thermal properties: (1) thermal conductivity, (2) volumetric specific heat, and (3) thermal diffusivity. Each thermal property had five levels of moisture content (8.52%, 12.93%, 18.94%, 24.63%, and 28.62%, w.b.) and room temperature as inputs. Data were sub-partitioned for training, testing, and validation of models. Backpropagation (BP) and Kalman Filter (KF) learning algorithms were employed to develop nonparametric models between input and output data sets. Statistical indices including the correlation coefficient (R) between actual and predicted outputs were produced for selecting the suitable models. Prediction plots for thermal properties indicated that the ANN models had better accuracy on unseen patterns compared to the regression models. In general, ANN models were able to strongly generalize and interpolate unseen patterns within the domain of training.
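
    A hedged sketch of the ANN side of such a comparison: a small multilayer perceptron regressing thermal conductivity on moisture content and temperature. The training points are fabricated around plausible magnitudes and are not the measured bagasse data, and the network size is arbitrary.

```python
# Illustrative ANN regression of thermal conductivity on moisture and temperature.
# Data are synthetic stand-ins, not the sweet sorghum bagasse measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
moisture = rng.uniform(8, 29, 200)        # % wet basis
temp_c = rng.uniform(20, 25, 200)         # room temperature, deg C
k = 0.04 + 0.004 * moisture + 0.001 * temp_c + rng.normal(0, 0.003, 200)  # W/m.K

X = np.column_stack([moisture, temp_c])
X_scaled = StandardScaler().fit_transform(X)

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X_scaled, k)
print("R^2 on training data:", round(ann.score(X_scaled, k), 3))
```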

  15. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
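
    Degree-day accumulation itself is a simple calculation. The sketch below uses the standard averaging method with an assumed base temperature; the 10 °C threshold and the temperature series are placeholders, not the calibrated cranberry fruitworm parameters.

```python
# Degree-day accumulation (averaging method); base temperature is an assumption.
import numpy as np

def degree_days(tmax, tmin, base=10.0):
    """Daily degree-days: mean temperature above the base, floored at zero."""
    return np.maximum((np.asarray(tmax) + np.asarray(tmin)) / 2.0 - base, 0.0)

tmax = [18, 22, 25, 27, 24, 20, 23]   # synthetic daily highs (deg C)
tmin = [ 8, 10, 13, 15, 12,  9, 11]   # synthetic daily lows (deg C)

dd = degree_days(tmax, tmin)
print("daily degree-days:", dd)
print("accumulated:", dd.cumsum())
```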

  16. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  17. Predicting water main failures using Bayesian model averaging and survival modelling approach

    International Nuclear Information System (INIS)

    Kabir, Golam; Tesfamariam, Solomon; Sadiq, Rehan

    2015-01-01

    To develop an effective preventive or proactive repair and replacement action plan, water utilities often rely on water main failure prediction models. However, in predicting the failure of water mains, uncertainty is inherent regardless of the quality and quantity of data used in the model. To improve the understanding of water main failure, a Bayesian framework is developed for predicting the failure of water mains considering uncertainties. In this study, the Bayesian model averaging method (BMA) is presented to identify the influential pipe-dependent and time-dependent covariates considering model uncertainties, whereas the Bayesian Weibull Proportional Hazard Model (BWPHM) is applied to develop the survival curves and to predict the failure rates of water mains. To accredit the proposed framework, it is implemented to predict the failure of cast iron (CI) and ductile iron (DI) pipes of the water distribution network of the City of Calgary, Alberta, Canada. Results indicate that the predicted 95% uncertainty bounds of the proposed BWPHMs effectively capture the observed breaks for both CI and DI water mains. Moreover, the proposed BWPHMs perform better than the Cox Proportional Hazard Model (Cox-PHM), because they consider a Weibull distribution for the baseline hazard function and account for model uncertainties. - Highlights: • Prioritize rehabilitation and replacements (R/R) strategies of water mains. • Consider the uncertainties for the failure prediction. • Improve the prediction capability of the water mains failure models. • Identify the influential and appropriate covariates for different models. • Determine the effects of the covariates on failure
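
    A hedged sketch of the Weibull survival component follows: fitting a Weibull model to right-censored pipe lifetimes with the lifelines package. The simulated durations and censoring are placeholders for a utility's break records, and the proportional-hazards covariates and Bayesian model averaging used in the paper are omitted.

```python
# Illustrative Weibull survival fit for pipe lifetimes (synthetic, right-censored data).
import numpy as np
from lifelines import WeibullFitter

rng = np.random.default_rng(6)
true_lifetimes = rng.weibull(2.0, 300) * 80.0        # years to failure
observation_window = 60.0                            # years of records available

durations = np.minimum(true_lifetimes, observation_window)
observed = (true_lifetimes <= observation_window).astype(int)   # 1 = break recorded

wf = WeibullFitter().fit(durations, event_observed=observed)
print("Weibull shape (rho):", round(wf.rho_, 2), " scale (lambda):", round(wf.lambda_, 1))
print("estimated median pipe lifetime:", round(wf.median_survival_time_, 1), "years")
```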

  18. Trait-based representation of biological nitrification: Model development, testing, and predicted community composition

    Directory of Open Access Journals (Sweden)

    Nick eBouskill

    2012-10-01

    Trait-based microbial models show clear promise as tools to represent the diversity and activity of microorganisms across ecosystem gradients. These models parameterize specific traits that determine the relative fitness of an ‘organism’ in a given environment, and represent the complexity of biological systems across temporal and spatial scales. In this study we introduce a microbial community trait-based modeling framework (MicroTrait) focused on nitrification (MicroTrait-N) that represents the ammonia-oxidizing bacteria (AOB), ammonia-oxidizing archaea (AOA) and nitrite-oxidizing bacteria (NOB) using traits related to enzyme kinetics and physiological properties. We used this model to predict nitrifier diversity, ammonia (NH3) oxidation rates and nitrous oxide (N2O) production across pH, temperature and substrate gradients. Predicted nitrifier diversity was predominantly determined by temperature and substrate availability; the latter was strongly influenced by pH. The model predicted that transient N2O production rates are maximized by a decoupling of the AOB and NOB communities, resulting in an accumulation and detoxification of nitrite to N2O by AOB. However, cumulative N2O production (over six-month simulations) is maximized in a system where the relationship between AOB and NOB is maintained. When the reactions uncouple, the AOB become unstable and biomass declines rapidly, resulting in decreased NH3 oxidation and N2O production. We evaluated this model against site-level chemical datasets from the interior of Alaska and accurately simulated NH3 oxidation rates and the relative ratio of AOA:AOB biomass. The predicted community structure and activity indicate that (a) parameterization of a small number of traits may be sufficient to broadly characterize nitrifying community structure and (b) changing decadal trends in climate and edaphic conditions could impact nitrification rates in ways that are not captured by extant biogeochemical models.

  19. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool
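
    For reference, the smooth liquid drop contribution shared by the first two models is the Bethe-Weizsäcker binding energy; the coefficients are fitted to data, so their values differ between parameterizations.

```latex
B(A,Z) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}}
         - a_A \frac{(A-2Z)^2}{A} + \delta(A,Z)
```

    Here δ(A,Z) is the pairing term; the "plus empirical shell corrections" variant adds fitted shell terms on top of this smooth part.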

  20. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  1. How Sensitive Are Transdermal Transport Predictions by Microscopic Stratum Corneum Models to Geometric and Transport Parameter Input?

    Science.gov (United States)

    Wen, Jessica; Koo, Soh Myoung; Lape, Nancy

    2018-02-01

    While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  2. Improving a two-equation eddy-viscosity turbulence model to predict the aerodynamic performance of thick wind turbine airfoils

    Science.gov (United States)

    Bangga, Galih; Kusumadewi, Tri; Hutomo, Go; Sabila, Ahmad; Syawitri, Taurista; Setiadi, Herlambang; Faisal, Muhamad; Wiranegara, Raditya; Hendranata, Yongki; Lastomo, Dwi; Putra, Louis; Kristiadi, Stefanus

    2018-03-01

    Numerical simulations for relatively thick airfoils are carried out in the present studies. An attempt to improve the accuracy of the numerical predictions is made by adjusting the turbulent viscosity of the eddy-viscosity Menter Shear-Stress-Transport (SST) model. The modification involves the addition of a damping factor on the wall-bounded flows incorporating the ratio of the turbulent kinetic energy to its specific dissipation rate for separation detection. The results are compared with available experimental data and CFD simulations using the original Menter SST model. The present model improves the lift polar prediction even though the stall angle is still overestimated. The improvement is caused by the better prediction of separated flow under a strong adverse pressure gradient. The results show that the Reynolds stresses are damped near the wall, causing a variation of the logarithmic velocity profiles.
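
    For context, the baseline Menter SST model already limits the eddy viscosity as below; the modification described above can be read as multiplying this by an additional near-wall damping function of k/ω, whose exact form is not given in the abstract, so f_d is only a placeholder.

```latex
\nu_t \;=\; \frac{a_1 k}{\max\!\big(a_1 \omega,\; S F_2\big)}\;
f_d\!\left(\frac{k}{\omega}\right),
\qquad a_1 = 0.31
```

    Here k is the turbulent kinetic energy, ω its specific dissipation rate, S the strain-rate magnitude and F_2 the SST blending function; without the damping factor the expression reduces to the original SST limiter.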

  3. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences follow a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
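
    The NGM(1,1,k,c) model generalizes the classical GM(1,1) grey model to nonhomogeneous index sequences; a minimal GM(1,1) fit-and-forecast sketch (illustrative names, fictitious settlement data, NumPy assumed) is shown below.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=3):
    """Fit a classical GM(1,1) grey model to a short positive series x0 and
    forecast n_ahead further values (illustrative sketch, not the NGM(1,1,k,c) model)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])          # background (adjacent-mean) sequence
    # Least-squares estimate of the development coefficient a and grey input b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time-response function: x1_hat(k) = (x0(0) - b/a) * exp(-a*k) + b/a
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # Inverse AGO recovers the fitted/forecast values of the original series
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

# Fictitious settlement-time series (mm); the last two outputs are forecasts
print(gm11_forecast([2.9, 3.4, 3.9, 4.5, 5.1], n_ahead=2))
```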

  4. Predicting Environmental Suitability for a Rare and Threatened Species (Lao Newt, Laotriton laoensis) Using Validated Species Distribution Models

    Science.gov (United States)

    Chunco, Amanda J.; Phimmachak, Somphouthone; Sivongxay, Niane; Stuart, Bryan L.

    2013-01-01

    The Lao newt (Laotriton laoensis) is a recently described species currently known only from northern Laos. Little is known about the species, but it is threatened as a result of overharvesting. We integrated field survey results with climate and altitude data to predict the geographic distribution of this species using the niche modeling program Maxent, and we validated these predictions by using interviews with local residents to confirm model predictions of presence and absence. The results of the validated Maxent models were then used to characterize the environmental conditions of areas predicted suitable for L. laoensis. Finally, we overlaid the resulting model with a map of current national protected areas in Laos to determine whether or not any land predicted to be suitable for this species is coincident with a national protected area. We found that both area under the curve (AUC) values and interview data provided strong support for the predictive power of these models, and we suggest that interview data could be used more widely in species distribution niche modeling. Our results further indicated that this species is most likely geographically restricted to high altitude regions (i.e., over 1,000 m elevation) in northern Laos and that only a minute fraction of suitable habitat is currently protected. This work thus emphasizes that increased protection efforts, including listing this species as endangered and the establishment of protected areas in the region predicted to be suitable for L. laoensis, are urgently needed. PMID:23555808

  5. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These models open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  6. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  7. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  8. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually...... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.......A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision...

  9. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  10. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  11. A multifluid model extended for strong temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-08

    We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.

  12. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining the grey model and the Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. Work was done to improve the grey model, and an optimized unbiased grey model was obtained. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model, which combines the optimized unbiased grey model and the Markov model, is better, and that the use of the rolling operation method may improve the prediction precision further. (authors)

  13. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
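
    Since MMA is simply the average of the two model outputs, the combination and the RMSD comparison reduce to a few lines; the arrays below are fictitious stand-ins for the SCENARIO and HSDA predictions and the observations.

```python
import numpy as np

def rmsd(pred, obs):
    """Root mean square deviation between predicted and observed sweat losses."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return np.sqrt(np.mean((pred - obs) ** 2))

# Fictitious sweat-loss values (g) standing in for the two single-model outputs
scenario_pred = np.array([410.0, 530.0, 620.0])
hsda_pred = np.array([450.0, 480.0, 700.0])
observed = np.array([430.0, 515.0, 655.0])

mma_pred = 0.5 * (scenario_pred + hsda_pred)   # multi-model average (MMA)

for name, pred in [("SCENARIO", scenario_pred), ("HSDA", hsda_pred), ("MMA", mma_pred)]:
    print(f"{name:8s} RMSD = {rmsd(pred, observed):.1f} g")
```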

  14. Describing a Strongly Correlated Model System with Density Functional Theory.

    Science.gov (United States)

    Kong, Jing; Proynov, Emil; Yu, Jianguo; Pachter, Ruth

    2017-07-06

    The linear chain of hydrogen atoms, a basic prototype for the transition from a metal to Mott insulator, is studied with a recent density functional theory model functional for nondynamic and strong correlation. The computed cohesive energy curve for the transition agrees well with accurate literature results. The variation of the electronic structure in this transition is characterized with a density functional descriptor that yields the atomic population of effectively localized electrons. These new methods are also applied to the study of the Peierls dimerization of the stretched even-spaced Mott insulator to a chain of H 2 molecules, a different insulator. The transitions among the two insulating states and the metallic state of the hydrogen chain system are depicted in a semiquantitative phase diagram. Overall, we demonstrate the capability of studying strongly correlated materials with a mean-field model at the fundamental level, in contrast to the general pessimistic view on such a feasibility.

  15. Gas and grain chemical composition in cold cores as predicted by the Nautilus three-phase model

    Science.gov (United States)

    Ruaud, Maxime; Wakelam, Valentine; Hersant, Franck

    2016-07-01

    We present an extended version of the two-phase gas-grain code NAUTILUS for the three-phase modelling of gas and grain chemistry of cold cores. In this model, both the mantle and the surface are considered as chemically active. We also take into account the competition among reaction, diffusion and evaporation. The model predictions are confronted with ice observations in the envelopes of low-mass and massive young stellar objects as well as towards background stars. Modelled gas-phase abundances are compared to species observed towards the TMC-1 (CP) and L134N dark clouds. We find that our model successfully reproduces the observed ice species. It is found that the reaction-diffusion competition strongly enhances reactions with barriers and more specifically reactions with H2, which is abundant on grains. This finding highlights the importance of having a good approach to determine the abundance of H2 on grains. Consequently, it is found that the major N-bearing species on grains go from NH3 to N2 and HCN when the reaction-diffusion competition is taken into account. In the gas phase and before a few 10^5 yr, we find that the three-phase model does not have a strong impact on the observed species compared to the two-phase model. After this time, the computed abundances dramatically decrease due to the strong accretion on dust, which is not counterbalanced by desorption, which is less efficient than in the two-phase model. This strongly constrains the chemical age of cold cores to be of the order of a few 10^5 yr.

  16. Continuum Lowering and Fermi-Surface Rising in Strongly Coupled and Degenerate Plasmas

    International Nuclear Information System (INIS)

    Hu, S. X.

    2017-01-01

    Here, continuum lowering is a well-known and important physics concept that describes the ionization potential depression (IPD) in plasmas caused by thermal-/pressure-induced ionization of outer-shell electrons. The existing IPD models are often used to characterize plasma conditions and to gauge opacity calculations. Recent precision measurements have revealed deficits in our understanding of continuum lowering in dense hot plasmas. However, these investigations have so far been limited to IPD in strongly coupled but nondegenerate plasmas. Here, we report a first-principles study of the K-edge shift in both strongly coupled and fully degenerate carbon plasmas, with quantum molecular dynamics (QMD) calculations based on all-electron density functional theory (DFT). The resulting K-edge shift versus plasma density, as a probe of continuum lowering and Fermi-surface rising, is found to be significantly different from the predictions of existing IPD models. In contrast, a simple model of "single atom in box" (SAIB), developed in this work, accurately predicts K-edge locations in agreement with the ab initio calculations.

  17. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    Abstract We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  18. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  19. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
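
    As a rough sketch of the deployment side, a scoring service of this kind reduces to an endpoint that accepts patient data and returns a prediction score. The Flask example below is only an illustration: it omits the actual FHIR resource structures and OMOP CDM access, and the model file, route and feature names are hypothetical.

```python
from flask import Flask, request, jsonify
import pickle

app = Flask(__name__)

# A predictive model serialized during model development (hypothetical artifact)
with open("risk_model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/score", methods=["POST"])
def score_patient():
    """Accept patient features as JSON and respond with a prediction score.
    A real deployment would parse FHIR resources rather than a flat JSON dict."""
    payload = request.get_json()
    features = [[payload["age"], payload["heart_rate"], payload["creatinine"]]]
    risk = float(model.predict_proba(features)[0, 1])
    return jsonify({"prediction_score": risk})

if __name__ == "__main__":
    app.run(port=8080)
```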

  20. Progress in Space Weather Modeling and Observations Needed to Improve the Operational NAIRAS Model Aircraft Radiation Exposure Predictions

    Science.gov (United States)

    Mertens, C. J.; Kress, B. T.; Wiltberger, M. J.; Tobiska, W.; Xu, X.

    2011-12-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a prototype operational model for predicting commercial aircraft radiation exposure from galactic and solar cosmic rays. NAIRAS predictions are currently streaming live from the project's public website, and the exposure rate nowcast is also available on the SpaceWx smartphone app for iPhone, iPad, and Android. Cosmic rays are the primary source of human exposure to high linear energy transfer radiation at aircraft altitudes, which increases the risk of cancer and other adverse health effects. Thus, the NAIRAS model addresses an important national need with broad societal, public health and economic benefits. The processes responsible for the variability in the solar wind, interplanetary magnetic field, solar energetic particle spectrum, and the dynamical response of the magnetosphere to these space environment inputs, strongly influence the composition and energy distribution of the atmospheric ionizing radiation field. During the development of the NAIRAS model, new science questions were identified that must be addressed in order to obtain a more reliable and robust operational model of atmospheric radiation exposure. Addressing these science questions requires improvements in both space weather modeling and observations. The focus of this talk is to present these science questions, the proposed methodologies for addressing these science questions, and the anticipated improvements to the operational predictions of atmospheric radiation exposure. The overarching goal of this work is to provide a decision support tool for the aviation industry that will enable an optimal balance to be achieved between minimizing health risks to passengers and aircrew while simultaneously minimizing costs to the airline companies.

  1. Models of the Strongly Lensed Quasar DES J0408-5354

    Energy Technology Data Exchange (ETDEWEB)

    Agnello, A.; et al.

    2017-02-01

    We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch $grizY$ images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A,B,D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a `grey' dimming of $\approx 0.8$ mag. In order to understand the lens configuration, we fit different models to the relative positions of A,B,D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy ($R_{\rm E}\approx 0.2''$) in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and grey dimming. All models predict a main deflector with Einstein radius between $1.7''$ and $2.0''$, velocity dispersion $267-280$ km/s and enclosed mass $\approx 6\times10^{11}M_{\odot}$, even though higher resolution imaging data are needed to break residual degeneracies in model parameters. The longest time-delay (B-A) is estimated as $\approx 85$ (resp. $\approx 125$) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time-delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.

  2. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  3. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    International Nuclear Information System (INIS)

    Lu, Yu; Wechsler, Risa H.; Somerville, Rachel S.; Croton, Darren; Porter, Lauren; Primack, Joel; Moody, Chris; Behroozi, Peter S.; Ferguson, Henry C.; Koo, David C.; Guo, Yicheng; Safarzadeh, Mohammadtaher; White, Catherine E.; Finlator, Kristian; Castellano, Marco; Sommariva, Veronica

    2014-01-01

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  4. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yu; Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, Physics Department, and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Somerville, Rachel S. [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Croton, Darren [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, VIC 3122 (Australia); Porter, Lauren; Primack, Joel; Moody, Chris [Department of Physics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Behroozi, Peter S.; Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Koo, David C.; Guo, Yicheng [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Safarzadeh, Mohammadtaher; White, Catherine E. [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Finlator, Kristian [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen (Denmark); Castellano, Marco; Sommariva, Veronica, E-mail: luyu@stanford.edu, E-mail: rwechsler@stanford.edu [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monteporzio (Italy)

    2014-11-10

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  5. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. The aim of this study was to investigate whether there are differences in the performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to be different for different stages. By evaluating the models separately on different stages we found that the performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Full Text Available Abstract Background Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.

  7. The strong interaction in e+e- annihilation and deep inelastic scattering

    International Nuclear Information System (INIS)

    Samuelsson, J.

    1996-01-01

    Various aspects of strong interactions are considered. Correlation effects in the hadronization process in a string model are studied. A discrete approximation scheme to the perturbative QCD cascade in e+e- annihilation is formulated. The model, Discrete QCD, predicts a rather low phase space density of 'effective gluons'. This is related to the properties of the running coupling constant. It provides us with a simple tool for studies of the strong interaction. It is shown that it reproduces well-known properties of parton cascades. A new formalism for the Deep Inelastic Scattering (DIS) process is developed. The model, which is called the Linked Dipole Chain Model, provides an interpolation between regions of high Q^2 (DGLAP) and low x, moderate Q^2 (BFKL). It gives a unified treatment of the different interaction channels in a DIS process. 17 figs

  8. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) using models that were developed for catchments, which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, nor groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modeller's assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  9. A multivariate model for predicting segmental body composition.

    Science.gov (United States)

    Tian, Simiao; Mioche, Laurence; Denis, Jean-Baptiste; Morio, Béatrice

    2013-12-01

    The aims of the present study were to propose a multivariate model for simultaneously predicting body, trunk and appendicular fat and lean masses from easily measured variables and to compare its predictive capacity with that of the available univariate models that predict body fat percentage (BF%). The dual-energy X-ray absorptiometry (DXA) dataset (52% men and 48% women) with White, Black and Hispanic ethnicities (1999-2004, National Health and Nutrition Examination Survey) was randomly divided into three sub-datasets: a training dataset (TRD), a test dataset (TED) and a validation dataset (VAD), comprising 3835, 1917 and 1917 subjects, respectively. For each sex, several multivariate prediction models were fitted from the TRD using age, weight, height and possibly waist circumference. The most accurate model was selected from the TED and then applied to the VAD and a French DXA dataset (French DB) (526 men and 529 women) to assess the prediction accuracy in comparison with that of five published univariate models, for which adjusted formulas were re-estimated using the TRD. Waist circumference was found to improve the prediction accuracy, especially in men. For BF%, the standard error of prediction (SEP) values were 3.26 (3.75)% for men and 3.47 (3.95)% for women in the VAD (French DB), as good as those of the adjusted univariate models. Moreover, the SEP values for the prediction of body and appendicular lean masses ranged from 1.39 to 2.75 kg for both sexes. The prediction accuracy was best for age < 65 years, BMI < 30 kg/m2 and the Hispanic ethnicity. The application of our multivariate model to large populations could be useful to address various public health issues.
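
    A stripped-down analogue of such a multivariate predictor, regressing several DXA outcomes jointly on the same easily measured inputs, could look like the sketch below (scikit-learn assumed; all numbers and column names are fictitious).

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Fictitious predictors: age (yr), weight (kg), height (cm), waist circumference (cm)
X = np.array([[45, 82, 176, 94],
              [60, 70, 163, 88],
              [30, 95, 181, 102],
              [52, 64, 158, 79]], dtype=float)
# Fictitious targets: body fat %, trunk lean mass (kg), appendicular lean mass (kg)
Y = np.array([[27.0, 24.5, 22.8],
              [33.0, 20.1, 17.5],
              [29.5, 27.9, 26.4],
              [31.0, 19.0, 16.2]])

# One multivariate (multi-output) model fitted to all targets jointly
model = LinearRegression().fit(X, Y)

new_subject = np.array([[40, 78, 170, 90]], dtype=float)
bf_pct, trunk_lean, append_lean = model.predict(new_subject)[0]
print(f"BF% {bf_pct:.1f}, trunk lean {trunk_lean:.1f} kg, appendicular lean {append_lean:.1f} kg")
```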

  10. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
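
    After the instantaneous linearization, the receding-horizon problem solved at each sampling instant takes the familiar constrained quadratic form; this is the generic statement, not the exact cost used in the paper.

```latex
\min_{\Delta u_k,\,\dots,\,\Delta u_{k+N_u-1}}
\;\sum_{i=1}^{N_p} \big\| \hat{y}_{k+i} - r_{k+i} \big\|_Q^2
\;+\; \sum_{i=0}^{N_u-1} \big\| \Delta u_{k+i} \big\|_R^2
\qquad \text{s.t.} \quad u_{\min} \le u_{k+i} \le u_{\max}
```

    Here the predicted outputs come from the linearized neural model, N_p and N_u are the prediction and control horizons, and Q and R are weighting matrices.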

  11. Study of thermodynamic and transport properties of strongly interacting matter in a color string percolation model at RHIC

    International Nuclear Information System (INIS)

    Sahoo, Pragati; Tiwari, Swatantra Kumar; De, Sudipan; Sahoo, Raghunath

    2017-01-01

    The main objectives of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory are to study the properties of strongly interacting matter and to explore the conjectured Quantum Chromodynamics (QCD) phase diagram. Lattice QCD (lQCD) predicts a smooth crossover at vanishing baryon chemical potential (μ_B), while other QCD-based theoretical models predict a first-order phase transition at large μ_B. Searching for the critical point in the QCD phase diagram, finding evidence for the nature of the phase transition, and studying the properties of the matter formed in nuclear collisions as a function of √s_NN are the main goals of RHIC. To investigate the nature of the matter produced in heavy-ion collisions, thermodynamic and transport quantities such as the energy density and shear viscosity are studied. It is expected that the ratio of shear viscosity (η) to entropy density (s) would exhibit a minimum value near the QCD critical point

  12. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
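
    To make the SVR-plus-genetic-algorithm idea concrete, the sketch below tunes (C, epsilon, gamma) with a deliberately tiny evolutionary loop on synthetic data; scikit-learn is assumed, and the population size, parameter ranges and mutation scale are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic panel data: financial + patent indicators -> next-period performance
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=120)

def fitness(params):
    """Cross-validated R^2 of an RBF SVR for one (C, epsilon, gamma) candidate."""
    C, eps, gamma = params
    model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

def random_params():
    # Log-uniform sampling over plausible ranges (illustrative bounds)
    return np.array([10 ** rng.uniform(-1, 3),    # C
                     10 ** rng.uniform(-3, 0),    # epsilon
                     10 ** rng.uniform(-3, 1)])   # gamma

# A very small genetic algorithm: truncation selection plus log-space mutation
population = [random_params() for _ in range(12)]
for generation in range(10):
    parents = sorted(population, key=fitness, reverse=True)[:4]
    children = [p * 10 ** rng.normal(0, 0.1, size=3) for p in parents for _ in range(2)]
    population = parents + children

best = max(population, key=fitness)
print("best (C, epsilon, gamma):", best, " CV R^2:", round(fitness(best), 3))
```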

  13. Predictive factors of early moderate/severe ovarian hyperstimulation syndrome in non-polycystic ovarian syndrome patients: a statistical model.

    Science.gov (United States)

    Ashrafi, Mahnaz; Bahmanabadi, Akram; Akhond, Mohammad Reza; Arabipoor, Arezoo

    2015-11-01

    To evaluate demographic, medical history and clinical cycle characteristics of infertile non-polycystic ovary syndrome (NPCOS) women with the purpose of investigating their associations with the prevalence of moderate-to-severe OHSS. In this retrospective study, among 7073 in vitro fertilization and/or intracytoplasmic sperm injection (IVF/ICSI) cycles, 86 NPCOS patients who developed moderate-to-severe OHSS while being treated with IVF/ICSI cycles were analyzed during the period of January 2008 to December 2010 at Royan Institute. To review the OHSS risk factors, 172 NPCOS patients who did not develop OHSS, treated during the same period, were randomly selected by computer as the control group. We used multiple logistic regression in a backward manner to build a prediction model. The regression analysis revealed that the variables age [odds ratio (OR) 0.9, confidence interval (CI) 0.81-0.99], antral follicle count (OR 4.3, CI 2.7-6.9), infertility cause (tubal factor, OR 11.5, CI 1.1-51.3), hypothyroidism (OR 3.8, CI 1.5-9.4) and positive history of ovarian surgery (OR 0.2, CI 0.05-0.9) were the most important predictors of OHSS. The regression model had an area under the curve of 0.94, an acceptable discriminative performance equal to that of the two strong predictive variables: the number of follicles and the serum estradiol level on the day of human chorionic gonadotropin administration. The predictive regression model based on primary characteristics of NPCOS patients had specificity equal to that of the two aforementioned strong predictive variables. Therefore, it may be beneficial to apply this model before the beginning of the ovarian stimulation protocol.
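
    At prediction time, a backward logistic regression of this kind reduces to exponentiated coefficients read as odds ratios; the sketch below uses synthetic (random, therefore uninformative) data and illustrative feature names only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic predictors: age (yr), antral follicle count, tubal-factor infertility (0/1),
# hypothyroidism (0/1), prior ovarian surgery (0/1)
X = np.column_stack([
    rng.normal(32, 5, 300),
    rng.poisson(12, 300),
    rng.integers(0, 2, 300),
    rng.integers(0, 2, 300),
    rng.integers(0, 2, 300),
]).astype(float)
y = rng.integers(0, 2, 300)          # 1 = moderate/severe OHSS (synthetic labels)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Odds ratios are the exponentiated coefficients of the fitted logistic model
names = ["age", "AFC", "tubal factor", "hypothyroidism", "ovarian surgery"]
for name, coef in zip(names, model.coef_[0]):
    print(f"{name:16s} OR = {np.exp(coef):.2f}")
```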

  14. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control, rather than exclusively through classical feedback control, which acts on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees of freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate kinematic output close to the experimental data.

  15. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  16. Prediction of residential radon exposure of the whole Swiss population: comparison of model-based predictions with measurement-based predictions.

    Science.gov (United States)

    Hauri, D D; Huss, A; Zimmermann, F; Kuehni, C E; Röösli, M

    2013-10-01

    Radon plays an important role for human exposure to natural sources of ionizing radiation. The aim of this article is to compare two approaches to estimate mean radon exposure in the Swiss population: model-based predictions at individual level and measurement-based predictions based on measurements aggregated at municipality level. A nationwide model was used to predict radon levels in each household and for each individual based on the corresponding tectonic unit, building age, building type, soil texture, degree of urbanization, and floor. Measurement-based predictions were carried out within a health impact assessment on residential radon and lung cancer. Mean measured radon levels were corrected for the average floor distribution and weighted with population size of each municipality. Model-based predictions yielded a mean radon exposure of the Swiss population of 84.1 Bq/m³. Measurement-based predictions yielded an average exposure of 78 Bq/m³. This study demonstrates that the model- and the measurement-based predictions provided similar results. The advantage of the measurement-based approach is its simplicity, which is sufficient for assessing exposure distribution in a population. The model-based approach allows predicting radon levels at specific sites, which is needed in an epidemiological study, and the results do not depend on how the measurement sites have been selected. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources, both derived from image analysis techniques. The first comes from individual analysis and characterization of real char types using an automated program. The second comes from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and burnout data from re-firing the chars in a drop tube furnace operating at 1300°C, 5% oxygen across several residence times. The improved agreement between the ChB model and the DTF experimental data showed that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  18. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.

  19. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
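
    A simplified stand-in for the combined jackknife/bootstrap idea (not the paper's full cross-validation scheme): leave-one-location-out replicates of a regional total, plus bootstrap resampling for confidence bounds, computed on synthetic site estimates.

      import numpy as np

      rng = np.random.default_rng(1)
      site_estimates = rng.lognormal(mean=1.0, sigma=0.8, size=200)   # stand-in predicted volumes

      total = site_estimates.sum()
      # Jackknife: leave-one-location-out replicates of the regional total,
      # usable for per-site estimation-error diagnostics
      jackknife_replicates = total - site_estimates

      # Bootstrap resampling to place confidence bounds on the regional total
      n = site_estimates.size
      boot_totals = np.array([rng.choice(site_estimates, size=n, replace=True).sum()
                              for _ in range(2000)])
      lo, hi = np.percentile(boot_totals, [2.5, 97.5])
      print(f"regional total = {total:.1f}; 95% bounds ~ ({lo:.1f}, {hi:.1f})")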

  20. Long-wave model for strongly anisotropic growth of a crystal step.

    Science.gov (United States)

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.

  1. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  2. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Full Text Available Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. To detect this potential, a company can utilize a bankruptcy prediction model. Such a model can be built using machine learning methods, but the choice of method must be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing the performance of several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.
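
    A rough sketch of such a comparative study using scikit-learn stand-ins (plain k-NN, SVM and MLP under 5-fold cross-validation); the fuzzy k-NN, bagging and hybrid models from the paper are not reproduced, and the dataset is synthetic.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # stand-in data
      models = {
          "k-NN": KNeighborsClassifier(n_neighbors=5),
          "SVM": SVC(kernel="rbf"),
          "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
      }
      for name, clf in models.items():
          acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
          print(f"{name}: mean accuracy = {acc:.3f}")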

  3. A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to improve the predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to the initial value can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
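
    For orientation, the sketch below implements only the classical GM(1,1) grey model on which NGM(1,1, k)-type models build; the nonhomogeneous term and the self-memory coupling proposed in the paper are not included, and the input series is illustrative.

      # Classical GM(1,1) grey model: accumulate, fit the whitened equation, forecast.
      import numpy as np

      def gm11_forecast(x0, steps=3):
          """Fit GM(1,1) to a short positive series x0 and forecast `steps` ahead."""
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                               # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])                    # background (mean) sequence
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0] # development / grey input coefficients
          k = np.arange(1, len(x0) + steps)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a        # solution of the whitened equation
          return np.diff(np.concatenate([[x0[0]], x1_hat]))        # inverse accumulation

      energy = [4.3, 4.6, 5.0, 5.4, 5.9, 6.5]   # illustrative consumption series
      print(gm11_forecast(energy, steps=3).round(2))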

  4. A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model

    Science.gov (United States)

    Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian

    2018-05-01

    Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.

  5. Risk predictive modelling for diabetes and cardiovascular disease.

    Science.gov (United States)

    Kengne, Andre Pascal; Masconi, Katya; Mbanya, Vivian Nchanchou; Lekoubou, Alain; Echouffo-Tcheugui, Justin Basile; Matsha, Tandi E

    2014-02-01

    Absolute risk models or clinical prediction models have been incorporated in guidelines, and are increasingly advocated as tools to assist risk stratification and guide prevention and treatment decisions relating to common health conditions such as cardiovascular disease (CVD) and diabetes mellitus. We have reviewed the historical development and principles of prediction research, including their statistical underpinning, as well as implications for routine practice, with a focus on predictive modelling for CVD and diabetes. Predictive modelling for CVD risk, which has developed over the last five decades, has been largely influenced by the Framingham Heart Study investigators, while it is only ∼20 years ago that similar efforts were started in the field of diabetes. Identification of predictive factors is an important preliminary step which provides the knowledge base on potential predictors to be tested for inclusion during the statistical derivation of the final model. The derived models must then be tested both on the development sample (internal validation) and on other populations in different settings (external validation). Updating procedures (e.g. recalibration) should be used to improve the performance of models that fail the tests of external validation. Ultimately, the effect of introducing validated models in routine practice on the process and outcomes of care as well as its cost-effectiveness should be tested in impact studies before wide dissemination of models beyond the research context. Several prediction models have been developed for CVD or diabetes, but very few have been externally validated or tested in impact studies, and their comparative performance has yet to be fully assessed. A shift of focus from developing new CVD or diabetes prediction models to validating the existing ones will improve their adoption in routine practice.
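
    As a small illustration of the updating step mentioned above, the sketch below performs a simple logistic recalibration: the outcome in a new population is regressed on the linear predictor of an existing (here invented) model to update its intercept and slope. Data and coefficients are synthetic.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 1000
      x = rng.normal(size=(n, 2))
      lp_published = -1.0 + 0.8 * x[:, 0] + 0.5 * x[:, 1]        # linear predictor of an existing model
      # New population where the published model is miscalibrated
      y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.7 * lp_published))))

      # Recalibration: regress the observed outcome on the published linear predictor
      recal = sm.Logit(y, sm.add_constant(lp_published)).fit(disp=0)
      print("recalibrated intercept and slope:", recal.params.round(2))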

  6. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  7. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aims to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). The Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC (AUROC) and the PI cutoff value were identified. Multiple Cox regression analysis of the prediction model group showed that the neutrophil to lymphocyte ratio, histological grade, microvascular invasion, positive resection margin, number of tumors, and postoperative transcatheter arterial chemoembolization treatment were the independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × transcatheter arterial chemoembolization (TACE). In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38. The sensitivity, specificity, and accuracy were 78.0%, 80%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B-related HCC with high sensitivity, specificity, and accuracy.
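
    Because the index is given explicitly, it can be evaluated directly; the sketch below does so and applies the reported cutoff of 3.38. The covariate coding (e.g., how histological grade or the number of tumors is scored) is assumed here and would need to be checked against the original article before any use.

      # Direct computation of the published predictive index (PI) with the reported cutoff.
      def hcc_predictive_index(nlr, hg, prm, mvi, nt, tace):
          """PI = 0.377*NLR + 0.554*HG + 0.927*PRM + 0.778*MVI + 0.740*NT - 0.831*TACE."""
          return 0.377 * nlr + 0.554 * hg + 0.927 * prm + 0.778 * mvi + 0.740 * nt - 0.831 * tace

      # Covariate values below are illustrative; their coding is an assumption.
      pi = hcc_predictive_index(nlr=3.2, hg=2, prm=0, mvi=1, nt=1, tace=1)
      print(f"PI = {pi:.2f} -> {'high' if pi > 3.38 else 'lower'} mortality risk")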

  8. Predictive modeling of pedestal structure in KSTAR using EPED model

    Energy Technology Data Exchange (ETDEWEB)

    Han, Hyunsun; Kim, J. Y. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Kwon, Ohjin [Department of Physics, Daegu University, Gyeongbuk 712-714 (Korea, Republic of)

    2013-10-15

    A predictive calculation is given for the structure of edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. Particularly, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height have a strong dependence on plasma current, rapidly increasing with it, while the pedestal width is almost independent of it. The plasma density or collisionality gives initially a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns to a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. Implication of these edge results, particularly in relation to the global plasma performance, is discussed.

  9. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for 25 years. There are scientific principles for designing and applying prediction models that are also used to predict the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  10. Analog quantum simulation of the Rabi model in the ultra-strong coupling regime.

    Science.gov (United States)

    Braumüller, Jochen; Marthaler, Michael; Schneider, Andre; Stehli, Alexander; Rotzinger, Hannes; Weides, Martin; Ustinov, Alexey V

    2017-10-03

    The quantum Rabi model describes the fundamental mechanism of light-matter interaction. It consists of a two-level atom or qubit coupled to a quantized harmonic mode via a transversal interaction. In the weak coupling regime, it reduces to the well-known Jaynes-Cummings model by applying a rotating wave approximation. The rotating wave approximation breaks down in the ultra-strong coupling regime, where the effective coupling strength g is comparable to the energy ω of the bosonic mode, and remarkable features in the system dynamics are revealed. Here we demonstrate an analog quantum simulation of an effective quantum Rabi model in the ultra-strong coupling regime, achieving a relative coupling ratio of g/ω ~ 0.6. The quantum hardware of the simulator is a superconducting circuit embedded in a cQED setup. We observe fast and periodic quantum state collapses and revivals of the initial qubit state, being the most distinct signature of the synthesized model. An analog quantum simulation scheme has been explored with a quantum hardware based on a superconducting circuit. Here the authors investigate the time evolution of the quantum Rabi model at ultra-strong coupling conditions, which is synthesized by slowing down the system dynamics in an effective frame.
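
    For reference, the textbook form of the quantum Rabi Hamiltonian and its rotating-wave (Jaynes-Cummings) limit, in standard notation (ω: mode frequency, ω_q: qubit splitting, g: coupling strength); this is the generic model, not a formula quoted from the record above.

      H_{\mathrm{Rabi}} = \hbar\omega\, a^{\dagger}a + \tfrac{\hbar\omega_{q}}{2}\,\sigma_{z} + \hbar g\,\sigma_{x}\,(a + a^{\dagger})
      H_{\mathrm{JC}}   = \hbar\omega\, a^{\dagger}a + \tfrac{\hbar\omega_{q}}{2}\,\sigma_{z} + \hbar g\,(\sigma_{+}a + \sigma_{-}a^{\dagger})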

  11. A solution of the strong CP problem in models with scalars

    International Nuclear Information System (INIS)

    Dimopoulos, S.

    1979-01-01

    A possible solution to the strong CP problem within the context of a Weinberg-Salam model with two Higgs fields coupled in a Peccei-Quinn symmetric fashion is pointed out. This is done by extending the colour group to a bigger simple group which is broken at some very high energy. The model contains a heavy axion. No old or new U(1) problem re-emerges. (Auth.)

  12. Hidden Semi-Markov Models for Predictive Maintenance

    Directory of Open Access Journals (Sweden)

    Francesco Cartella

    2015-01-01

    Full Text Available Realistic predictive maintenance approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state duration density function and (ii) applicability to continuous or discrete observations. To deal with such a type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection has been made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of methodology validation. In all performed experiments, the model is able to correctly estimate the current state and to effectively predict the time to a predefined event with a low overall average absolute error. As a consequence, its applicability to real world settings can be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine is calculated in real time.
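
    The Akaike Information Criterion step mentioned above reduces to comparing 2k - 2 ln L across candidate models; the sketch below shows that comparison with purely illustrative log-likelihoods and parameter counts rather than fitted HSMMs.

      # AIC-based model selection: lower AIC wins. Values are illustrative placeholders.
      def aic(log_likelihood, n_params):
          return 2 * n_params - 2 * log_likelihood

      candidates = {                      # hypothetical models with different numbers of states
          "3 states": {"loglik": -1250.4, "n_params": 27},
          "4 states": {"loglik": -1231.9, "n_params": 44},
          "5 states": {"loglik": -1228.7, "n_params": 65},
      }
      scores = {name: aic(c["loglik"], c["n_params"]) for name, c in candidates.items()}
      for name, s in scores.items():
          print(f"{name}: AIC = {s:.1f}")
      print("selected:", min(scores, key=scores.get))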

  13. Predicting cyberbullying perpetration in emerging adults: A theoretical test of the Barlett Gentile Cyberbullying Model.

    Science.gov (United States)

    Barlett, Christopher; Chamberlin, Kristina; Witkower, Zachary

    2017-04-01

    The Barlett and Gentile Cyberbullying Model (BGCM) is a learning-based theory that posits the importance of positive cyberbullying attitudes predicting subsequent cyberbullying perpetration. Furthermore, the tenets of the BGCM state that cyberbullying attitudes are likely to form when the online aggressor believes that the online environment allows individuals of all physical sizes to harm others and they are perceived as anonymous. Past work has tested parts of the BGCM; no study has used longitudinal methods to examine this model fully. The current study (N = 161) employed a three-wave longitudinal design to test the BGCM. Participants (age range: 18-24) completed measures of the belief that physical strength is irrelevant online and anonymity perceptions at Wave 1, cyberbullying attitudes at Wave 2, and cyberbullying perpetration at Wave 3. Results showed strong support for the BGCM: anonymity perceptions and the belief that physical attributes are irrelevant online at Wave 1 predicted Wave 2 cyberbullying attitudes, which predicted subsequent Wave 3 cyberbullying perpetration. These results support the BGCM and are the first to show empirical support for this model. Aggr. Behav. 43:147-154, 2017. © 2016 Wiley Periodicals, Inc.

  14. Quantitative accuracy of the simplified strong ion equation to predict serum pH in dogs.

    Science.gov (United States)

    Cave, N J; Koo, S T

    2015-01-01

    The electrochemical approach to the assessment of acid-base states should provide a better mechanistic explanation of the metabolic component than methods that consider only pH and carbon dioxide. The hypothesis was that the simplified strong ion equation (SSIE), using published dog-specific values, would predict the measured serum pH of diseased dogs. Ten dogs, hospitalized for various reasons, were studied prospectively as a convenience sample of a consecutive series of dogs admitted to the Massey University Veterinary Teaching Hospital (MUVTH), in which serum biochemistry and blood gas analyses were performed at the same time. Serum pH was calculated (Hcal+) using the SSIE and published values for the concentration and dissociation constant of the nonvolatile weak acids (Atot and Ka), and subsequently Hcal+ was compared with the dog's actual pH (Hmeasured+). To determine the source of discordance between Hcal+ and Hmeasured+, the calculations were repeated using a series of substituted values for Atot and Ka. The Hcal+ did not approximate the Hmeasured+ for any dog (P = 0.499, r² = 0.068), and was consistently more basic. Substituted values of Atot and Ka did not significantly improve the accuracy (r² = 0.169 to <0.001). Substituting the effective SID (Atot-[HCO3-]) produced a strong association between Hcal+ and Hmeasured+ (r² = 0.977). Using the simplified strong ion equation and the published values for Atot and Ka does not appear to provide a quantitative explanation for the acid-base status of dogs. The efficacy of substituting the effective SID in the simplified strong ion equation suggests the error lies in calculating the SID. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  15. Monte Carlo and analytical model predictions of leakage neutron exposures from passively scattered proton therapy

    International Nuclear Information System (INIS)

    Pérez-Andújar, Angélica; Zhang, Rui; Newhauser, Wayne

    2013-01-01

    Purpose: Stray neutron radiation is of concern after radiation therapy, especially in children, because of the high risk it might carry for secondary cancers. Several previous studies predicted the stray neutron exposure from proton therapy, mostly using Monte Carlo simulations. Promising attempts to develop analytical models have also been reported, but these were limited to only a few proton beam energies. The purpose of this study was to develop an analytical model to predict leakage neutron equivalent dose from passively scattered proton beams in the 100-250-MeV interval. Methods: To develop and validate the analytical model, the authors used values of equivalent dose per therapeutic absorbed dose (H/D) predicted with Monte Carlo simulations. The authors also characterized the behavior of the mean neutron radiation-weighting factor, w_R, as a function of depth in a water phantom and distance from the beam central axis. Results: The simulated and analytical predictions agreed well. On average, the percentage difference between the analytical model and the Monte Carlo simulations was 10% for the energies and positions studied. The authors found that w_R was highest at the shallowest depth and decreased with depth until around 10 cm, where it started to increase slowly with depth. This was consistent among all energies. Conclusion: Simple analytical methods are promising alternatives to complex and slow Monte Carlo simulations to predict H/D values. The authors' results also provide improved understanding of the behavior of w_R, which strongly depends on depth, but is nearly independent of lateral distance from the beam central axis

  16. Modeling and Control of CSTR using Model based Neural Network Predictive Control

    OpenAIRE

    Shrivastava, Piyush

    2012-01-01

    This paper presents a predictive control strategy, based on a neural network model of the plant, applied to a Continuous Stirred Tank Reactor (CSTR). This system is a highly nonlinear process; therefore, a nonlinear predictive method, e.g., neural network predictive control, can be a better match to govern the system dynamics. In the paper, the NN model and the way in which it can be used to predict the behavior of the CSTR process over a certain prediction horizon are described, and some commen...

  17. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte

  18. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principle are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborated...... principles as, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed...... on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models....

  19. The moduli and gravitino (non)-problems in models with strongly stabilized moduli

    International Nuclear Information System (INIS)

    Evans, Jason L.; Olive, Keith A.; Garcia, Marcos A.G.

    2014-01-01

    In gravity mediated models and in particular in models with strongly stabilized moduli, there is a natural hierarchy between gaugino masses, the gravitino mass and moduli masses: m_{1/2} << m_{3/2} << m_φ. Given this hierarchy, we show that 1) moduli problems associated with excess entropy production from moduli decay and 2) problems associated with moduli/gravitino decays to neutralinos are non-existent. Placed in an inflationary context, we show that the amplitude of moduli oscillations is severely limited by strong stabilization. Moduli oscillations may then never come to dominate the energy density of the Universe. As a consequence, moduli decay to gravitinos and their subsequent decay to neutralinos need not overpopulate the cold dark matter density

  20. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  1. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and in rare cases to serious and sometimes life threatening side-effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited/insufficient knowledge on the immune mechanisms underlying immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize aspects of immunogenicity that these models predict and explore the merits and the limitations of each of the models.

  2. A Grey NGM(1,1, k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Science.gov (United States)

    Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling

    2014-01-01

    Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often emerge in the energy system, a novel grey NGM(1,1, k) self-memory coupling prediction model is put forward in order to improve the predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1, k) model. The traditional grey model's weakness of being sensitive to the initial value can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China is adopted for demonstration using the proposed coupling prediction technique. The results show the superiority of the NGM(1,1, k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span. PMID:25054174

  3. Relativistic strings and dual models of strong interactions

    International Nuclear Information System (INIS)

    Marinov, M.S.

    1977-01-01

    The theory of strong interactions, based on the model depicting a hadron as a one-dimensional elastic relativistic system (''string''), is considered. The relationship between this model and the concepts of quarks and partons is discussed. Presented are the principal results relating to the Veneziano dual theory, which may be considered a consequence of the string model, and to its modifications. The classical string theory is described in detail. Attention is focused on questions of importance to the construction of the quantum theory - Hamiltonian mechanics and conformal symmetry. Quantization is described, and it is shown that it is consistent only in 26-dimensional space and with a special requirement imposed on the spectrum of states. The theory of a string with a distributed spin is considered. The spin is introduced with the aid of the Grassmann algebra formalism. In this case quantization is possible only in 10-dimensional space. The strings interact by breaking and joining. A method for calculating the interaction amplitudes is indicated.

  4. Non-linear Model Predictive Control for cooling strings of superconducting magnets using superfluid helium

    CERN Document Server

    AUTHOR|(SzGeCERN)673023; Blanco Viñuela, Enrique

    In each of eight arcs of the 27 km circumference Large Hadron Collider (LHC), 2.5 km long strings of superconducting magnets are cooled with superfluid Helium II at 1.9 K. The temperature stabilisation is a challenging control problem due to the complex non-linear dynamics of the magnet temperatures and the presence of multiple operational constraints. Strong nonlinearities and variable dead-times of the dynamics originate from the strongly heat-flux-dependent effective heat conductivity of the superfluid, which varies by three orders of magnitude over the range of possible operational conditions. In order to improve the temperature stabilisation, a proof of concept on-line economic output-feedback Non-linear Model Predictive Controller (NMPC) is presented in this thesis. The controller is based on a novel complex first-principles distributed-parameter numerical model of the temperature dynamics over a 214 m long sub-sector of the LHC that is characterized by the very low computational cost of simulation needed in real-time optimizat...

  5. Microscopic modeling of photoluminescence of strongly disordered semiconductors

    International Nuclear Information System (INIS)

    Bozsoki, P.; Kira, M.; Hoyer, W.; Meier, T.; Varga, I.; Thomas, P.; Koch, S.W.

    2007-01-01

    A microscopic theory for the luminescence of ordered semiconductors is modified to describe photoluminescence of strongly disordered semiconductors. The approach includes both diagonal disorder and the many-body Coulomb interaction. As a case study, the light emission of a correlated plasma is investigated numerically for a one-dimensional two-band tight-binding model. The band structure of the underlying ordered system is assumed to correspond to either a direct or an indirect semiconductor. In particular, luminescence and absorption spectra are computed for various levels of disorder and sample temperature to determine thermodynamic relations, the Stokes shift, and the radiative lifetime distribution

  6. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines’ locations in a wind farm. More specifically, instead of using...... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh which only has a single model scale parameter has been proposed. Also closed-form posterior...... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
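
    One standard conjugate treatment of the single Rayleigh scale parameter (an inverse-gamma prior on σ²) yields a closed-form posterior; the sketch below illustrates that idea with synthetic wind speeds and assumed prior values, and is not necessarily the exact formulation used in the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      v = rng.rayleigh(scale=7.0, size=200)          # synthetic wind-speed sample (m/s)

      a0, b0 = 2.0, 10.0                             # inverse-gamma prior on theta = sigma^2 (assumed)
      a_post = a0 + len(v)                           # closed-form conjugate update
      b_post = b0 + 0.5 * np.sum(v ** 2)

      # Posterior predictive: draw theta from the posterior, then wind speeds from Rayleigh(sqrt(theta))
      theta = 1.0 / rng.gamma(shape=a_post, scale=1.0 / b_post, size=5000)
      v_pred = rng.rayleigh(scale=np.sqrt(theta))
      print("posterior mean of sigma^2:", round(b_post / (a_post - 1), 2))
      print("predictive 5-95% wind speed (m/s):", np.percentile(v_pred, [5, 95]).round(2))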

  7. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...

  8. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of model on current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework started with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying pattern, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. Hence a procedure for pattern identification is developed to identify the proper pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to others
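
    A compact sketch of the multi-model idea: segment the series, cluster the subsequences, fit one simple predictor per cluster, and route a new window to the model of its matching pattern. The window length, cluster count and linear predictors are illustrative stand-ins for the paper's choices.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)  # stand-in radiation series

      w = 24                                                      # subsequence length
      segments = np.array([series[i:i + w] for i in range(0, len(series) - w, w)])
      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(segments)

      models = {}
      for c in range(3):                                          # one predictor per cluster
          seg = segments[km.labels_ == c]
          models[c] = LinearRegression().fit(seg[:, :-1], seg[:, -1])  # predict last point from the rest

      # Prediction: identify the pattern of the current window, then apply that cluster's model
      current = segments[-1]
      c = km.predict(current.reshape(1, -1))[0]
      pred = models[c].predict(current[:-1].reshape(1, -1))[0]
      print("cluster:", c, "| predicted:", round(pred, 3), "| actual:", round(current[-1], 3))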

  9. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient $K$ and the dispersion

  10. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only contain patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. So it is necessary for us to develop a model for prediction of CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracies of the model. Finally we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: the decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability of CIN development and thereby provides preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
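
    A minimal sketch of the modelling step described above: a random-forest classifier evaluated with 5-fold cross-validated AUC on data with roughly the reported 13% event rate; the 13 features here are synthetic placeholders, not the study's selected preprocedural variables.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in: ~13% event rate, 13 features, development-set-sized sample
      X, y = make_classification(n_samples=7040, n_features=13, weights=[0.87, 0.13],
                                 random_state=0)
      clf = RandomForestClassifier(n_estimators=300, random_state=0)
      auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
      print("5-fold AUC:", auc.round(3), "| mean:", auc.mean().round(3))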

  11. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models considering each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build the no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.

  12. A Stochastic mesoscopic model for predicting the globular grain structure and solute redistribution in cast alloys at low superheat

    International Nuclear Information System (INIS)

    Nastac, Laurentiu; El Kaddah, Nagy

    2012-01-01

    It is well known that casting at low superheat has a strong influence on the solidification morphology and macro- and microstructures of the cast alloy. This paper describes a stochastic mesoscopic solidification model for predicting the grain structure and segregation in cast alloy at low superheat. This model was applied to predict the globular solidification morphology and size as well as solute redistribution of Al in cast Mg AZ31B alloy at superheat of 5°C produced by the Magnetic Suspension Melting (MSM) process, which is an integrated containerless induction melting and casting process. The castings produced at this low superheat have fine globular grain structure, with an average grain size of 80 μm, which is about 3 times smaller than that obtained by conventional casting techniques. The stochastic model was found to reasonably predict the observed grain structure and Al microsegregation. This makes the model a useful tool for controlling the structure of cast magnesium alloys.

  13. Screening important inputs in models with strong interaction properties

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Campolongo, Francesca; Cariboni, Jessica

    2009-01-01

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.
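
    For concreteness, the sketch below applies the commonly cited Sobol' first-order and Jansen total-effect estimators, written in the usual A/B/A_B^(i) sampling design, to a small toy model with an interaction; it is meant only to illustrate the estimators referred to above, not the authors' screening design.

      import numpy as np

      def g(x):                              # toy model with an interaction term
          return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

      rng = np.random.default_rng(3)
      N, k = 10000, 3
      A = rng.uniform(-1, 1, (N, k))
      B = rng.uniform(-1, 1, (N, k))
      yA, yB = g(A), g(B)
      var_y = np.var(np.concatenate([yA, yB]))

      for i in range(k):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                # replace column i of A with column i of B
          yABi = g(ABi)
          S_first = np.mean(yB * (yABi - yA)) / var_y         # Sobol'-type first-order index
          S_total = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # Jansen total-effect index
          print(f"x{i + 1}: S_i ~ {S_first:.2f}, S_Ti ~ {S_total:.2f}")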

  14. Screening important inputs in models with strong interaction properties

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy); Campolongo, Francesca [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)], E-mail: francesca.campolongo@jrc.it; Cariboni, Jessica [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)

    2009-07-15

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.

  15. On autostability of almost prime models relative to strong constructivizations

    International Nuclear Information System (INIS)

    Goncharov, Sergey S

    2011-01-01

    Questions of autostability and algorithmic dimension of models go back to papers by A.I. Malcev and by A. Froehlich and J.C. Shepherdson in which the effect of the existence of computable presentations which are non-equivalent from the viewpoint of their algorithmic properties was first discovered. Today there are many papers by various authors devoted to investigations of such questions. The present paper deals with the question of inheritance of the properties of autostability and non-autostability relative to strong constructivizations under elementary extensions for almost prime models. Bibliography: 37 titles.

  16. Strong expectations cancel locality effects: evidence from Hindi.

    Directory of Open Access Journals (Sweden)

    Samar Husain

    Full Text Available Expectation-driven facilitation (Hale, 2001; Levy, 2008) and locality-driven retrieval difficulty (Gibson, 1998, 2000; Lewis & Vasishth, 2005) are widely recognized to be two critical factors in incremental sentence processing; there is accumulating evidence that both can influence processing difficulty. However, it is unclear whether and how expectations and memory interact. We first confirm a key prediction of the expectation account: a Hindi self-paced reading study shows that when an expectation for an upcoming part of speech is dashed, building a rarer structure consumes more processing time than building a less rare structure. This is a strong validation of the expectation-based account. In a second study, we show that when expectation is strong, i.e., when a particular verb is predicted, strong facilitation effects are seen when the appearance of the verb is delayed; however, when expectation is weak, i.e., when only the part of speech "verb" is predicted but a particular verb is not predicted, the facilitation disappears and a tendency towards a locality effect is seen. The interaction seen between expectation strength and distance shows that strong expectations cancel locality effects, and that weak expectations allow locality effects to emerge.

  17. Strong expectations cancel locality effects: evidence from Hindi.

    Science.gov (United States)

    Husain, Samar; Vasishth, Shravan; Srinivasan, Narayanan

    2014-01-01

    Expectation-driven facilitation (Hale, 2001; Levy, 2008) and locality-driven retrieval difficulty (Gibson, 1998, 2000; Lewis & Vasishth, 2005) are widely recognized to be two critical factors in incremental sentence processing; there is accumulating evidence that both can influence processing difficulty. However, it is unclear whether and how expectations and memory interact. We first confirm a key prediction of the expectation account: a Hindi self-paced reading study shows that when an expectation for an upcoming part of speech is dashed, building a rarer structure consumes more processing time than building a less rare structure. This is a strong validation of the expectation-based account. In a second study, we show that when expectation is strong, i.e., when a particular verb is predicted, strong facilitation effects are seen when the appearance of the verb is delayed; however, when expectation is weak, i.e., when only the part of speech "verb" is predicted but a particular verb is not predicted, the facilitation disappears and a tendency towards a locality effect is seen. The interaction seen between expectation strength and distance shows that strong expectations cancel locality effects, and that weak expectations allow locality effects to emerge.

  18. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  19. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  20. A theoretical model of strong and moderate El Niño regimes

    Science.gov (United States)

    Takahashi, Ken; Karamperidou, Christina; Dewitte, Boris

    2018-02-01

    The existence of two regimes for El Niño (EN) events, moderate and strong, has been previously shown in the GFDL CM2.1 climate model and also suggested in observations. The two regimes have been proposed to originate from the nonlinearity in the Bjerknes feedback, associated with a threshold in sea surface temperature (T_c ) that needs to be exceeded for deep atmospheric convection to occur in the eastern Pacific. However, although the recent 2015-16 EN event provides a new data point consistent with the sparse strong EN regime, it is not enough to statistically reject the null hypothesis of a unimodal distribution based on observations alone. Nevertheless, we consider the possibility suggestive enough to explore it with a simple theoretical model based on the nonlinear Bjerknes feedback. In this study, we implemented this nonlinear mechanism in the recharge-discharge (RD) ENSO model and show that it is sufficient to produce the two EN regimes, i.e. a bimodal distribution in peak surface temperature (T) during EN events. The only modification introduced to the original RD model is that the net damping is suppressed when T exceeds T_c , resulting in a weak nonlinearity in the system. Due to the damping, the model is globally stable and it requires stochastic forcing to maintain the variability. The sustained low-frequency component of the stochastic forcing plays a key role for the onset of strong EN events (i.e. for T>T_c ), at least as important as the precursor positive heat content anomaly (h). High-frequency forcing helps some EN events to exceed T_c , increasing the number of strong events, but the rectification effect is small and the overall number of EN events is little affected by this forcing. Using the Fokker-Planck equation, we show how the bimodal probability distribution of EN events arises from the nonlinear Bjerknes feedback and also propose that the increase in the net feedback with increasing T is a necessary condition for bimodality in the RD

  1. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  2. Strongly first-order electroweak phase transition and classical scale invariance

    Science.gov (United States)

    Farzinnia, Arsham; Ren, Jing

    2014-10-01

    In this work, we examine the possibility of realizing a strongly first-order electroweak phase transition within the minimal classically scale-invariant extension of the standard model (SM), previously proposed and analyzed as a potential solution to the hierarchy problem. By introducing one complex gauge-singlet scalar and three (weak scale) right-handed Majorana neutrinos, the scenario was successfully rendered capable of achieving a radiative breaking of the electroweak symmetry (by means of the Coleman-Weinberg mechanism), inducing nonzero masses for the SM neutrinos (via the seesaw mechanism), presenting a pseudoscalar dark matter candidate (protected by the CP symmetry of the potential), and predicting the existence of a second CP-even boson (with suppressed couplings to the SM content) in addition to the 125 GeV scalar. In the present treatment, we construct the full finite-temperature one-loop effective potential of the model, including the resummed thermal daisy loops, and demonstrate that finite-temperature effects induce a first-order electroweak phase transition. Requiring the thermally driven first-order phase transition to be sufficiently strong at the onset of the bubble nucleation (corresponding to nucleation temperatures TN˜100-200 GeV) further constrains the model's parameter space; in particular, an O(0.01) fraction of the dark matter in the Universe may be simultaneously accommodated with a strongly first-order electroweak phase transition. Moreover, such a phase transition disfavors right-handed Majorana neutrino masses above several hundreds of GeV, confines the pseudoscalar dark matter masses to ˜1-2 TeV, predicts the mass of the second CP-even scalar to be ˜100-300 GeV, and requires the mixing angle between the CP-even components of the SM doublet and the complex singlet to lie within the range 0.2≲sinω ≲0.4. The obtained results are displayed in comprehensive exclusion plots, identifying the viable regions of the parameter space

  3. Models based on ultraviolet spectroscopy, polyphenols, oligosaccharides and polysaccharides for prediction of wine astringency.

    Science.gov (United States)

    Boulet, Jean-Claude; Trarieux, Corinne; Souquet, Jean-Marc; Ducasse, Maris-Agnés; Caillé, Soline; Samson, Alain; Williams, Pascale; Doco, Thierry; Cheynier, Véronique

    2016-01-01

    Astringency elicited by tannins is usually assessed by tasting. Alternative methods involving tannin precipitation have been proposed, but they remain time-consuming. Our goal was to propose a faster method and investigate the links between wine composition and astringency. Red wines covering a wide range of astringency intensities, assessed by sensory analysis, were selected. Prediction models based on multiple linear regression (MLR) were built using UV spectrophotometry (190-400 nm) and chemical analysis (enological analysis, polyphenols, oligosaccharides and polysaccharides). Astringency intensity was strongly correlated (R(2) = 0.825) with tannin precipitation by bovine serum albumin (BSA). Wine absorbances at 230 nm (A230) proved more suitable for astringency prediction (R(2) = 0.705) than A280 (R(2) = 0.56) or tannin concentration estimated by phloroglucinolysis (R(2) = 0.59). Three variable models built with A230, oligosaccharides and polysaccharides presented high R(2) and low errors of cross-validation. These models confirmed that polysaccharides decrease astringency perception and indicated a positive relationship between oligosaccharides and astringency. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Prediction and analysis of near-road concentrations using a reduced-form emission/dispersion model

    Directory of Open Access Journals (Sweden)

    Kononowech Robert

    2010-06-01

    speeds and high emissions (e.g., weekday rush hour). The spatial and temporal variation among predicted concentrations was significant, and resulted in unusual distributional and correlation characteristics, including strong negative correlation for receptors on opposite sides of a road and the highest short-term concentrations on the "upwind" side of the road. Conclusions: The case study findings can likely be generalized to many other locations, and they have important implications for epidemiological and other studies. The reduced-form model is intended for exposure assessment, risk assessment, epidemiological, geographical information systems, and other applications.

  5. Nonlinear dynamical modeling and prediction of the terrestrial magnetospheric activity

    International Nuclear Information System (INIS)

    Vassiliadis, D.

    1992-01-01

    The irregular activity of the magnetosphere results from its complex internal dynamics as well as the external influence of the solar wind. The dominating self-organization of the magnetospheric plasma gives rise to repetitive, large-scale coherent behavior manifested in phenomena such as the magnetic substorm. Based on the nonlinearity of the global dynamics this dissertation examines the magnetosphere as a nonlinear dynamical system using time series analysis techniques. Initially the magnetospheric activity is modeled in terms of an autonomous system. A dimension study shows that its observed time series is self-similar, but the correlation dimension is high. The implication of a large number of degrees of freedom is confirmed by other state space techniques such as Poincare sections and search for unstable periodic orbits. At the same time a stability study of the time series in terms of Lyapunov exponents suggests that the series is not chaotic. The absence of deterministic chaos is supported by the low predictive capability of the autonomous model. Rather than chaos, it is an external input which is largely responsible for the irregularity of the magnetospheric activity. In fact, the external driving is so strong that the above state space techniques give results for magnetospheric and solar wind time series that are at least qualitatively similar. Therefore the solar wind input has to be included in a low-dimensional nonautonomous model. Indeed it is shown that such a model can reproduce the observed magnetospheric behavior up to 80-90 percent. The characteristic coefficients of the model show little variation depending on the external disturbance. The impulse response is consistent with earlier results of linear prediction filters. The model can be easily extended to contain nonlinear features of the magnetospheric activity and in particular the loading-unloading behavior of substorms

  6. Development of the statistical ARIMA model: an application for predicting the upcoming of MJO index

    Science.gov (United States)

    Hermawan, Eddy; Nurani Ruchjana, Budi; Setiawan Abdullah, Atje; Gede Nyoman Mindra Jaya, I.; Berliana Sipayung, Sinta; Rustiana, Shailla

    2017-10-01

    This study concerns the development of a prediction model for one of the most important equatorial atmospheric phenomena, the Madden-Julian Oscillation (MJO), which has strong impacts on extreme rainfall anomalies over the Indonesian Maritime Continent (IMC). We focus on the big floods over Jakarta and the surrounding area that are suspected to have been caused by the MJO. We develop the MJO index using the statistical Box-Jenkins (ARIMA) modelling approach, applied to the RMM (Real-time Multivariate MJO) indices RMM1 and RMM2. The modelling proceeds in several steps, from data identification through estimation and model selection, before the model is finally applied to investigate the big floods that occurred at Jakarta in 1996, 2002, and 2007, respectively. We found the best estimated model for predicting RMM1 and RMM2 to be ARIMA (2,1,2). Detailed steps for building the model and applying it to predict rainfall anomalies over Jakarta 3 to 6 months ahead are discussed in this paper.
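
    As a rough illustration of the kind of model the abstract reports, the sketch below fits an ARIMA(2,1,2) model to a synthetic stand-in for the RMM1 index using statsmodels; the data, forecast horizon, and library choice are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch (assumed setup): fit ARIMA(2,1,2) to a stand-in RMM1 series and forecast ahead.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    rmm1 = np.cumsum(rng.normal(size=240))      # synthetic placeholder for the daily RMM1 index

    fit = ARIMA(rmm1, order=(2, 1, 2)).fit()    # the order reported as best in the abstract
    forecast = fit.forecast(steps=90)           # roughly a three-month outlook
    print(fit.aic, forecast[:5])
    ```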

  7. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
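
    A minimal sketch of the bootstrap idea described above, under assumed toy models: the outcome model is refit on resampled patient data and the plan-specific prediction is recomputed for each refit, yielding a histogram of alternative predictions. The linear dose-response stand-in and all names are illustrative, not the authors' salivary-function model; the residual-noise term mentioned in the abstract could be added to each collected prediction but is omitted here.

    ```python
    import numpy as np

    def bootstrap_predictions(x, y, fit_fn, predict_fn, x_new, n_boot=1000, seed=0):
        """Refit the outcome model on resampled (x, y) pairs and collect plan-specific predictions."""
        rng = np.random.default_rng(seed)
        n, preds = len(y), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)                       # bootstrap resample of patients
            preds.append(predict_fn(fit_fn(x[idx], y[idx]), x_new))
        return np.asarray(preds)                                    # histogram of alternative outcomes

    # toy linear dose-response fit standing in for the salivary-function model
    fit_fn = lambda x, y: np.polyfit(x, y, 1)
    predict_fn = lambda p, x_new: np.polyval(p, x_new)

    rng = np.random.default_rng(1)
    dose = np.linspace(0, 60, 40)
    outcome = 1.0 - 0.01 * dose + rng.normal(0, 0.05, 40)
    print(bootstrap_predictions(dose, outcome, fit_fn, predict_fn, x_new=35.0).std())
    ```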

  8. Strongly coupled radiation from moving mirrors and holography in the Karch-Randall model

    International Nuclear Information System (INIS)

    Pujolas, Oriol

    2008-01-01

    Motivated by the puzzles in understanding how Black Holes evaporate into a strongly coupled Conformal Field Theory, we study particle creation by an accelerating mirror. We model the mirror as a gravitating Domain Wall and consider a CFT coupled to it through gravity, in asymptotically Anti de Sitter space. This problem (backreaction included) can be solved exactly at one loop. At strong coupling, this is dual to a Domain Wall localized on the brane in the Karch-Randall model, which can be fully solved as well. Hence, in this case one can see how the particle production is affected by A) strong coupling and B) its own backreaction. We find that A) the amount of CFT radiation at strong coupling is not suppressed relative to the weak coupling result; and B) once the boundary conditions in the AdS 5 bulk are appropriately mapped to the conditions for the CFT on the boundary of AdS 4 , the Karch-Randall model and the CFT side agree to leading order in the backreaction. This agreement holds even for a new class of self-consistent solutions (the 'Bootstrap' Domain Wall spacetimes) that have no classical limit. This provides a quite precise check of the holographic interpretation of the Karch-Randall model. We also comment on the massive gravity interpretation. As a byproduct, we show that relativistic Cosmic Strings (pure tension codimension 2 branes) in Anti de Sitter are repulsive and generate long-range tidal forces even at classical level. This is the phenomenon dual to particle production by Domain Walls.

  9. Joint statistics of strongly correlated neurons via dimensionality reduction

    International Nuclear Information System (INIS)

    Deniz, Taşkın; Rotter, Stefan

    2017-01-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input. (paper)

  10. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  11. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    Directory of Open Access Journals (Sweden)

    Feng Zhong-xiang

    2014-01-01

    Full Text Available In order to build a combined model which can meet the variation rule of death toll data for road traffic accidents and can reflect the influence of multiple factors on traffic accidents and improve prediction accuracy for accidents, the Verhulst model was built based on the number of death tolls for road traffic accidents in China from 2002 to 2011; and car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors to build the death toll multivariate linear regression model. Then the two models were combined to be a combined prediction model which has weight coefficient. Shapley value method was applied to calculate the weight coefficient by assessing contributions. Finally, the combined model was used to recalculate the number of death tolls from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data characteristics but also quantify the degree of influence to the death toll by each influencing factor and had high accuracy as well as strong practicability.

  12. Combined prediction model of death toll for road traffic accidents based on independent and dependent variables.

    Science.gov (United States)

    Feng, Zhong-xiang; Lu, Shi-sheng; Zhang, Wei-hua; Zhang, Nan-nan

    2014-01-01

    In order to build a combined model which can meet the variation rule of death toll data for road traffic accidents and can reflect the influence of multiple factors on traffic accidents and improve prediction accuracy for accidents, the Verhulst model was built based on the number of death tolls for road traffic accidents in China from 2002 to 2011; and car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors to build the death toll multivariate linear regression model. Then the two models were combined to be a combined prediction model which has weight coefficient. Shapley value method was applied to calculate the weight coefficient by assessing contributions. Finally, the combined model was used to recalculate the number of death tolls from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data characteristics but also quantify the degree of influence to the death toll by each influencing factor and had high accuracy as well as strong practicability.
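
    The following sketch illustrates the general idea of weighting two candidate models and superposing their forecasts. The inverse-error weighting used here is only a simple stand-in for the Shapley-value contribution scheme described in the abstract, and all numbers are hypothetical.

    ```python
    import numpy as np

    def combine_predictions(pred_a, pred_b, actual):
        """Weight two candidate models and return (w_a, w_b, combined forecast)."""
        mse_a = np.mean((pred_a - actual) ** 2)
        mse_b = np.mean((pred_b - actual) ** 2)
        # inverse-error weighting: a simple stand-in for the Shapley-value allocation
        w_a = (1.0 / mse_a) / (1.0 / mse_a + 1.0 / mse_b)
        w_b = 1.0 - w_a
        return w_a, w_b, w_a * pred_a + w_b * pred_b

    deaths = np.array([109.4, 104.4, 107.1, 98.7, 89.5])       # hypothetical death tolls (thousands)
    verhulst = np.array([110.0, 105.5, 100.9, 96.4, 92.1])     # hypothetical Verhulst fit
    regression = np.array([108.1, 103.0, 106.0, 99.5, 90.2])   # hypothetical regression fit
    print(combine_predictions(verhulst, regression, deaths))
    ```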

  13. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  14. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  15. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  16. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
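
    A small sketch of the Pareto-optimality idea the abstract relies on, under assumed criteria: a model from the collection is kept if no other model is at least as good on every criterion and strictly better on one. The scoring criteria and numbers are hypothetical.

    ```python
    import numpy as np

    def pareto_front(scores):
        """Indices of models that are not dominated on any criterion (higher is better)."""
        scores = np.asarray(scores, float)
        front = []
        for i, s in enumerate(scores):
            dominated = any(np.all(t >= s) and np.any(t > s)
                            for j, t in enumerate(scores) if j != i)
            if not dominated:
                front.append(i)
        return front

    # hypothetical model collection scored on (applicability to the query compound, past accuracy)
    scores = [(0.90, 0.60), (0.70, 0.80), (0.50, 0.50), (0.95, 0.55)]
    print(pareto_front(scores))   # -> [0, 1, 3]: the non-dominated candidates
    ```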

  17. Ages and transit times as important diagnostics of model performance for predicting carbon dynamics in terrestrial vegetation models

    Science.gov (United States)

    Ceballos-Núñez, Verónika; Richardson, Andrew D.; Sierra, Carlos A.

    2018-03-01

    The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. These dynamics, as well as processes such as the mixing of old and newly fixed carbon, have been studied using ecosystem models, but different assumptions regarding the carbon allocation strategies and other model structures may result in highly divergent model predictions. We assessed the influence of three different carbon allocation schemes on the C cycling in vegetation. First, we described each model with a set of ordinary differential equations. Second, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find suitable parameters for the different model structures. And third, we calculated C stocks, release fluxes, radiocarbon values (based on the bomb spike), ages, and transit times. We obtained model simulations in accordance with the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed, and reduce model equifinality. Although the simulated C stocks in ecosystem compartments were similar, the different model structures resulted in very different predictions of age and transit time distributions. In particular, the inclusion of two storage compartments resulted in the prediction of a system mean age that was 12-20 years older than in the models with one or no storage compartments. The age of carbon in the wood compartment of this model was also distributed towards older ages, whereas fast cycling compartments had an age distribution that did not exceed 5 years. As expected, models with C distributed towards older ages also had longer transit times. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure
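
    To make the ordinary-differential-equation framing concrete, the sketch below sets up a toy linear compartmental model dx/dt = Bx + u and computes the steady-state stocks and the mean transit time (total stock divided by throughput at steady state). The two-pool structure and parameter values are illustrative assumptions, not the paper's calibrated schemes.

    ```python
    import numpy as np

    # Toy two-pool vegetation model (foliage -> wood) standing in for the allocation schemes
    B = np.array([[-1.00, 0.00],    # foliage cycles roughly once per year
                  [ 0.30, -0.05]])  # 30% of the foliage flux enters wood; wood turnover 5%/yr
    u = np.array([10.0, 0.0])       # photosynthetic input (arbitrary carbon units per year)

    x_star = -np.linalg.solve(B, u)               # steady-state carbon stocks per pool
    mean_transit_time = x_star.sum() / u.sum()    # total stock over throughput, in years
    print(x_star, mean_transit_time)
    ```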

  18. Generating linear regression model to predict motor functions by use of laser range finder during TUG.

    Science.gov (United States)

    Adachi, Daiki; Nishiguchi, Shu; Fukutani, Naoto; Hotta, Takayuki; Tashiro, Yuto; Morino, Saori; Shirooka, Hidehiko; Nozaki, Yuma; Hirata, Hinako; Yamaguchi, Moe; Yorozu, Ayanori; Takahashi, Masaki; Aoyama, Tomoki

    2017-05-01

    The purpose of this study was to investigate which spatial and temporal parameters of the Timed Up and Go (TUG) test are associated with motor function in elderly individuals. This study included 99 community-dwelling women aged 72.9 ± 6.3 years. Step length, step width, single support time, variability of the aforementioned parameters, gait velocity, cadence, reaction time from starting signal to first step, and minimum distance between the foot and a marker placed 3 m in front of the chair were measured using our analysis system. The 10-m walk test, five times sit-to-stand (FTSTS) test, and one-leg standing (OLS) test were used to assess motor function. Stepwise multivariate linear regression analysis was used to determine which TUG test parameters were associated with each motor function test. Finally, we calculated a predictive model for each motor function test using each regression coefficient. In stepwise linear regression analysis, step length and cadence were significantly associated with the 10-m walk test, FTSTS and OLS test. Reaction time was associated with the FTSTS test, and step width was associated with the OLS test. Each predictive model showed a strong correlation with the 10-m walk test and OLS test (P motor function test. Moreover, the TUG test time regarded as the lower extremity function and mobility has strong predictive ability in each motor function test. Copyright © 2017 The Japanese Orthopaedic Association. Published by Elsevier B.V. All rights reserved.

  19. Physical and JIT Model Based Hybrid Modeling Approach for Building Thermal Load Prediction

    Science.gov (United States)

    Iino, Yutaka; Murai, Masahiko; Murayama, Dai; Motoyama, Ichiro

    Energy conservation in building fields is one of the key issues from an environmental point of view as well as from that of the industrial, transportation and residential fields. Half of the total energy consumption in a building is due to HVAC (Heating, Ventilating and Air Conditioning) systems. In order to realize energy conservation of HVAC systems, a thermal load prediction model for the building is required. This paper proposes a hybrid modeling approach with physical and Just-in-Time (JIT) models for building thermal load prediction. The proposed method has the following features and benefits: (1) it is applicable to cases in which past operation data for load prediction model learning are poor; (2) it has a self-checking function, which always supervises whether the data-driven load prediction and the physics-based one are consistent, so it can detect if something is wrong in the load prediction procedure; (3) it has the ability to adjust the load prediction in real time against sudden changes of model parameters and environmental conditions. The proposed method is evaluated with real operation data of an existing building, and the improvement of load prediction performance is illustrated.

  20. A predictive thermal dynamic model for parameter generation in the laser assisted direct write process

    International Nuclear Information System (INIS)

    Shang Shuo; Fearon, Eamonn; Wellburn, Dan; Sato, Taku; Edwardson, Stuart; Dearden, G; Watkins, K G

    2011-01-01

    The laser assisted direct write (LADW) method can be used to generate electrical circuitry on a substrate by depositing metallic ink and curing the ink thermally by a laser. Laser curing has emerged over recent years as a novel yet efficient alternative to oven curing. This method can be used in situ, over complicated 3D contours of large parts (e.g. aircraft wings) and selectively cure over heat sensitive substrates, with little or no thermal damage. In previous studies, empirical methods have been used to generate processing windows for this technique, relating to the several interdependent processing parameters on which the curing quality and efficiency strongly depend. Incorrect parameters can result in a track that is cured in some areas and uncured in others, or in damaged substrates. This paper addresses the strong need for a quantitative model which can systematically output the processing conditions for a given combination of ink, substrate and laser source; transforming the LADW technique from a purely empirical approach, to a simple, repeatable, mathematically sound, efficient and predictable process. The method comprises a novel and generic finite element model (FEM) that for the first time predicts the evolution of the thermal profile of the ink track during laser curing and thus generates a parametric map which indicates the most suitable combination of parameters for process optimization. Experimental data are compared with simulation results to verify the accuracy of the model.

  1. Spectral nudging in regional climate modelling: How strongly should we nudge?

    OpenAIRE

    Omrani , Hiba; Drobinski , Philippe; Dubos , Thomas

    2012-01-01

    International audience; Spectral nudging is a technique consisting in driving regional climate models (RCMs) on selected spatial scales corresponding to those produced by the driving global circulation model (GCM). This technique prevents large and unrealistic departures between the GCM driving fields and the RCM fields at the GCM spatial scales. Theoretically, the relaxation of the RCM towards the GCM should be infinitely strong provided there are perfect large-scale fields. In practice, the ...

  2. Prediction of multiple resonance characteristics by an extended resistor-inductor-capacitor circuit model for plasmonic metamaterials absorbers in infrared.

    Science.gov (United States)

    Xu, Xiaolun; Li, Yongqian; Wang, Binbin; Zhou, Zili

    2015-10-01

    The resonance characteristics of plasmonic metamaterials absorbers (PMAs) are strongly dependent on geometric parameters. A resistor-inductor-capacitor (RLC) circuit model has been extended to predict the resonance wavelengths and the bandwidths of multiple magnetic polaritons modes in PMAs. For a typical metallic-dielectric-metallic structure absorber working in the infrared region, the developed model describes the correlation between the resonance characteristics and the dimensional sizes. In particular, the RLC model is suitable for not only the fundamental resonance mode, but also for the second- and third-order resonance modes. The prediction of the resonance characteristics agrees fairly well with those calculated by the finite-difference time-domain simulation and the experimental results. The developed RLC model enables the facilitation of designing multi-band PMAs for infrared radiation detectors and thermal emitters.
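
    For orientation only, the snippet below evaluates the textbook series-LC resonance condition, which underlies circuit models of this kind; it is not the paper's extended multi-mode RLC model, and the effective inductance and capacitance values are hypothetical.

    ```python
    import numpy as np

    c = 3.0e8                          # speed of light, m/s
    L_eff, C_eff = 3.0e-12, 1.5e-18    # hypothetical effective inductance (H) and capacitance (F)

    # Fundamental resonance of a series LC circuit: omega = 1/sqrt(LC), so lambda = 2*pi*c*sqrt(LC)
    wavelength = 2 * np.pi * c * np.sqrt(L_eff * C_eff)
    print(wavelength * 1e6, "um")      # ~4 um, i.e. a mid-infrared resonance
    ```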

  3. Strong water absorption in the dayside emission spectrum of the planet HD 189733b.

    Science.gov (United States)

    Grillmair, Carl J; Burrows, Adam; Charbonneau, David; Armus, Lee; Stauffer, John; Meadows, Victoria; van Cleve, Jeffrey; von Braun, Kaspar; Levine, Deborah

    2008-12-11

    Recent observations of the extrasolar planet HD 189733b did not reveal the presence of water in the emission spectrum of the planet. Yet models of such 'hot-Jupiter' planets predict an abundance of atmospheric water vapour. Validating and constraining these models is crucial to understanding the physics and chemistry of planetary atmospheres in extreme environments. Indications of the presence of water in the atmosphere of HD 189733b have recently been found in transmission spectra, where the planet's atmosphere selectively absorbs the light of the parent star, and in broadband photometry. Here we report the detection of strong water absorption in a high-signal-to-noise, mid-infrared emission spectrum of the planet itself. We find both a strong downturn in the flux ratio below 10 microm and discrete spectral features that are characteristic of strong absorption by water vapour. The differences between these and previous observations are significant and admit the possibility that predicted planetary-scale dynamical weather structures may alter the emission spectrum over time. Models that match the observed spectrum and the broadband photometry suggest that heat redistribution from the dayside to the nightside is weak. Reconciling this with the high nightside temperature will require a better understanding of atmospheric circulation or possible additional energy sources.

  4. Logistic regression modelling: procedures and pitfalls in developing and interpreting prediction models

    Directory of Open Access Journals (Sweden)

    Nataša Šarlija

    2017-01-01

    Full Text Available This study sheds light on the most common issues related to applying logistic regression in prediction models for company growth. The purpose of the paper is 1) to provide a detailed demonstration of the steps in developing a growth prediction model based on logistic regression analysis, 2) to discuss common pitfalls and methodological errors in developing a model, and 3) to provide solutions and possible ways of overcoming these issues. Special attention is devoted to the question of satisfying logistic regression assumptions, selecting and defining dependent and independent variables, using classification tables and ROC curves for reporting model strength, interpreting odds ratios as effect measures and evaluating performance of the prediction model. Development of a logistic regression model in this paper focuses on a prediction model of company growth. The analysis is based on predominantly financial data from a sample of 1471 small and medium-sized Croatian companies active between 2009 and 2014. The financial data is presented in the form of financial ratios divided into nine main groups depicting the following areas of business: liquidity, leverage, activity, profitability, research and development, investing and export. The growth prediction model indicates aspects of a business critical for achieving high growth. In that respect, the contribution of this paper is twofold. First, methodological, in terms of pointing out pitfalls and potential solutions in logistic regression modelling, and secondly, theoretical, in terms of identifying factors responsible for high growth of small and medium-sized companies.
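
    A compact sketch of the workflow the abstract outlines (fit a logistic regression, read odds ratios as effect measures, summarize discrimination with the area under the ROC curve), using synthetic stand-in data rather than the Croatian company sample.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                                  # stand-in financial ratios
    y = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 1, 500) > 0).astype(int)  # 1 = high growth

    clf = LogisticRegression().fit(X, y)
    odds_ratios = np.exp(clf.coef_[0])                              # effect measure per ratio
    auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])              # in practice, use a hold-out set
    print(odds_ratios, auc)
    ```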

  5. Statistical Models for Predicting Threat Detection From Human Behavior

    Science.gov (United States)

    Kelley, Timothy; Amon, Mary J.; Bertenthal, Bennett I.

    2018-01-01

    attacks. Participant accuracy in identifying spoof and non-spoof websites was best captured using a model that included real-time indicators of decision-making behavior, as compared to two-factor and survey-based models. Findings validate three widely applicable measures of user behavior derived from mouse tracking recordings, which can be utilized in cyber security and user intervention research. Survey data alone are not as strong at predicting risky Internet behavior as models that incorporate real-time measures of user behavior, such as mouse tracking. PMID:29713296

  6. Statistical Models for Predicting Threat Detection From Human Behavior

    Directory of Open Access Journals (Sweden)

    Timothy Kelley

    2018-04-01

    during phishing attacks. Participant accuracy in identifying spoof and non-spoof websites was best captured using a model that included real-time indicators of decision-making behavior, as compared to two-factor and survey-based models. Findings validate three widely applicable measures of user behavior derived from mouse tracking recordings, which can be utilized in cyber security and user intervention research. Survey data alone are not as strong at predicting risky Internet behavior as models that incorporate real-time measures of user behavior, such as mouse tracking.

  7. Statistical Models for Predicting Threat Detection From Human Behavior.

    Science.gov (United States)

    Kelley, Timothy; Amon, Mary J; Bertenthal, Bennett I

    2018-01-01

    . Participant accuracy in identifying spoof and non-spoof websites was best captured using a model that included real-time indicators of decision-making behavior, as compared to two-factor and survey-based models. Findings validate three widely applicable measures of user behavior derived from mouse tracking recordings, which can be utilized in cyber security and user intervention research. Survey data alone are not as strong at predicting risky Internet behavior as models that incorporate real-time measures of user behavior, such as mouse tracking.

  8. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A; Giebel, G; Landberg, L [Risoe National Lab., Roskilde (Denmark); Madsen, H; Nielsen, H A [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
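
    To illustrate the adaptive estimation idea, here is a minimal recursive-least-squares sketch with a forgetting factor, mapping an NWP wind-speed forecast to observed farm power. The regressor choice, forgetting factor, and data are assumptions for illustration; they are not the paper's estimators or measurements.

    ```python
    import numpy as np

    class RecursiveLeastSquares:
        """Adaptive MOS-style correction: observed power ~ theta . [1, NWP wind speed]."""
        def __init__(self, n_params, lam=0.99):
            self.theta = np.zeros(n_params)
            self.P = np.eye(n_params) * 1e3
            self.lam = lam                      # forgetting factor tracks slow NWP/site drift

        def update(self, phi, y):
            phi = np.asarray(phi, float)
            k = self.P @ phi / (self.lam + phi @ self.P @ phi)
            self.theta = self.theta + k * (y - phi @ self.theta)
            self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
            return self.theta

    rls = RecursiveLeastSquares(2)
    for nwp_wind, observed_power in [(5.0, 0.8), (7.5, 1.9), (6.0, 1.2)]:   # hypothetical pairs
        rls.update([1.0, nwp_wind], observed_power)
    print(rls.theta)
    ```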

  9. Attenuation relations of strong motion in Japan using site classification based on predominant period

    International Nuclear Information System (INIS)

    Toshimasa Takahashi; Akihiro Asano; Hidenobu Okada; Kojiro Irikura; Zhao, J.X.; Zhang Jian; Thio, H.K.; Somerville, P.G.; Yasuhiro Fukushima; Yoshimitsu Fukushima

    2005-01-01

    A spectral acceleration attenuation model for Japan is presented. The data set includes a very large number of strong ground motion records up to the end of 2003. Site class terms, instead of individual site correction terms, are used based on a recent study on site classification for strong motion recording stations in Japan. By using site class terms, tectonic source type effects are identified and accounted in the present model. Effects of faulting mechanism for crustal earthquakes are also accounted for. For crustal and interface earthquakes, a simple form of attenuation model is able to capture the main strong motion characteristics and achieves unbiased estimates. For subduction slab events, a simple distance modification factor is employed to achieve plausible and unbiased prediction. Effects of source depth, tectonic source type, and faulting mechanism for crustal earthquakes are significant. (authors)

  10. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory providing a calculus, which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  11. Short-Term Wind Speed Prediction Using EEMD-LSSVM Model

    Directory of Open Access Journals (Sweden)

    Aiqing Kang

    2017-01-01

    Full Text Available Hybrid Ensemble Empirical Mode Decomposition (EEMD) and Least Square Support Vector Machine (LSSVM) is proposed to improve short-term wind speed forecasting precision. The EEMD is firstly utilized to decompose the original wind speed time series into a set of subseries. Then the LSSVM models are established to forecast these subseries. Partial autocorrelation function is adopted to analyze the inner relationships between the historical wind speed series in order to determine input variables of LSSVM models for prediction of every subseries. Finally, the superposition principle is employed to sum the predicted values of every subseries as the final wind speed prediction. The performance of hybrid model is evaluated based on six metrics. Compared with LSSVM, Back Propagation Neural Networks (BP), Auto-Regressive Integrated Moving Average (ARIMA), combination of Empirical Mode Decomposition (EMD) with LSSVM, and hybrid EEMD with ARIMA models, the wind speed forecasting results show that the proposed hybrid model outperforms these models in terms of six metrics. Furthermore, the scatter diagrams of predicted versus actual wind speed and histograms of prediction errors are presented to verify the superiority of the hybrid model in short-term wind speed prediction.
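
    The decompose-forecast-superpose pipeline can be sketched as below. To keep the example self-contained, a crude trend/residual split stands in for the EEMD subseries and scikit-learn's SVR stands in for LSSVM; the lag count, data, and substitutions are assumptions, not the authors' method.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def lagged_matrix(series, n_lags):
        """Build (lag-vector, next-value) pairs for one subseries."""
        X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return X, series[n_lags:]

    rng = np.random.default_rng(0)
    wind = 8 + np.sin(np.arange(300) / 10) + rng.normal(0, 0.5, 300)   # synthetic wind speed

    # crude trend/residual split standing in for the EEMD subseries
    trend = np.convolve(wind, np.ones(12) / 12, mode="same")
    subseries = [trend, wind - trend]

    prediction = 0.0
    for s in subseries:                      # forecast each subseries, then superpose
        X, y = lagged_matrix(s, n_lags=4)    # lags chosen ad hoc; the paper selects them via PACF
        model = SVR().fit(X[:-1], y[:-1])    # SVR standing in for LSSVM
        prediction += model.predict(X[-1:])[0]
    print(prediction)
    ```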

  12. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    HOD

    their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a.

  13. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  14. WHY WE CANNOT PREDICT STRONG EARTHQUAKES IN THE EARTH’S CRUST

    Directory of Open Access Journals (Sweden)

    Iosif L. Gufeld

    2011-01-01

    Full Text Available In the past decade, earthquake disasters caused multiple fatalities and significant economic losses and challenged the modern civilization. The well-known achievements and growing power of civilization are backstrapped when facing the Nature. The question arises, what hinders solving a problem of earthquake prediction, while long-term and continuous seismic monitoring systems are in place in many regions of the world. For instance, there was no forecast of the Japan Great Earthquake of March 11, 2011, despite the fact that monitoring conditions for its prediction were unique. Its focal zone was 100–200 km away from the monitoring network installed in the area of permanent seismic hazard, which is subject to nonstop and long-term seismic monitoring. Lesson should be learned from our common fiasco in forecasting, taking into account research results obtained during the past 50–60 years. It is now evident that we failed to identify precursors of the earthquakes. Prior to the earthquake occurrence, the observed local anomalies of various fields reflected other processes that were mistakenly viewed as processes of preparation for large-scale faulting. For many years, geotectonic situations were analyzed on the basis of the physics of destruction of laboratory specimens, which was applied to the lithospheric conditions. Many researchers realize that such an approach is inaccurate. Nonetheless, persistent attempts are being undertaken with application of modern computation to detect anomalies of various fields, which may be interpreted as earthquake precursors. In our opinion, such illusory intentions were smashed by the Great Japan Earthquake (Figure 6). It is also obvious that sufficient attention has not been given yet to fundamental studies of seismic processes. This review presents the authors’ opinion concerning the origin of the seismic process and strong earthquakes, being part of the process. The authors realize that a wide discussion is

  15. Influence of Different Yield Loci on Failure Prediction with Damage Models

    Science.gov (United States)

    Heibel, S.; Nester, W.; Clausmeyer, T.; Tekkaya, A. E.

    2017-09-01

    Advanced high strength steels are widely used in the automotive industry to simultaneously improve crash performance and reduce the car body weight. A drawback of these multiphase steels is their sensitivity to damage effects and thus the reduction of ductility. For that reason the Forming Limit Curve is only partially suitable for this class of steels. An improvement in failure prediction can be obtained by using damage mechanics. The objective of this paper is to comparatively review the phenomenological damage model GISSMO and the Enhanced Lemaitre Damage Model. GISSMO is combined with three different yield loci, namely von Mises, Hill48 and Barlat2000 to investigate the influence of the choice of the plasticity description on damage modelling. The Enhanced Lemaitre Model is used with Hill48. An inverse parameter identification strategy for a DP1000 based on stress-strain curves and optical strain measurements of shear, uniaxial, notch and (equi-)biaxial tension tests is applied to calibrate the models. A strong dependency of fracture strains on the choice of yield locus can be observed. The identified models are validated on a cross-die cup showing ductile fracture with slight necking.

  16. Modeling the prediction of business intelligence system effectiveness.

    Science.gov (United States)

    Weng, Sung-Shun; Yang, Ming-Hsien; Koo, Tian-Lih; Hsiao, Pei-I

    2016-01-01

    Although business intelligence (BI) technologies are continually evolving, the capability to apply BI technologies has become an indispensable resource for enterprises running in today's complex, uncertain and dynamic business environment. This study performed pioneering work by constructing models and rules for the prediction of business intelligence system effectiveness (BISE) in relation to the implementation of BI solutions. For enterprises, effectively managing critical attributes that determine BISE to develop prediction models with a set of rules for self-evaluation of the effectiveness of BI solutions is necessary to improve BI implementation and ensure its success. The main study findings identified the critical prediction indicators of BISE that are important to forecasting BI performance and highlighted five classification and prediction rules of BISE derived from decision tree structures, as well as a refined regression prediction model with four critical prediction indicators constructed by logistic regression analysis that can enable enterprises to improve BISE while effectively managing BI solution implementation and catering to academics to whom theory is important.

  17. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    To predict the incidence of local malaria of Hubei Province by applying the Autoregressive Integrated Moving Average (ARIMA) model. SPSS 13.0 software was applied to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1,1,1)(1,1,0)12 model was identified as the relative optimum, with an AIC of 76.085 and SBC of 84.395. All the actual incidence data were within the 95% CI of the values predicted by the model. The prediction effect of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria of Hubei Province.

  18. Subaru Weak Lensing Measurements of Four Strong Lensing Clusters: Are Lensing Clusters Over-Concentrated?

    Energy Technology Data Exchange (ETDEWEB)

    Oguri, Masamune; Hennawi, Joseph F.; Gladders, Michael D.; Dahle, Haakon; Natarajan, Priyamvada; Dalal, Neal; Koester, Benjamin P.; Sharon, Keren; Bayliss, Matthew

    2009-01-29

    We derive radial mass profiles of four strong lensing selected clusters which show prominent giant arcs (Abell 1703, SDSS J1446+3032, SDSS J1531+3414, and SDSS J2111-0115), by combining detailed strong lens modeling with weak lensing shear measured from deep Subaru Suprime-cam images. Weak lensing signals are detected at high significance for all four clusters, whose redshifts range from z = 0.28 to 0.64. We demonstrate that adding strong lensing information with known arc redshifts significantly improves constraints on the mass density profile, compared to those obtained from weak lensing alone. While the mass profiles are well fitted by the universal form predicted in N-body simulations of the Λ-dominated cold dark matter model, all four clusters appear to be slightly more centrally concentrated (the concentration parameters c_vir ≈ 8) than theoretical predictions, even after accounting for the bias toward higher concentrations inherent in lensing selected samples. Our results are consistent with previous studies which similarly detected a concentration excess, and increases the total number of clusters studied with the combined strong and weak lensing technique to ten. Combining our sample with previous work, we find that clusters with larger Einstein radii are more anomalously concentrated. We also present a detailed model of the lensing cluster Abell 1703 with constraints from multiple image families, and find the dark matter inner density profile to be cuspy with the slope consistent with -1, in agreement with expectations.

  19. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants from the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, competing models have a great homogeneity to make predictions, either for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
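
    As a small illustration of fitting one member of this model family, the sketch below estimates a GARCH(1,1) with Student-t errors on synthetic returns; it assumes the third-party `arch` package and does not reproduce the Model Confidence Set comparison or the Bovespa/Dow Jones data.

    ```python
    import numpy as np
    from arch import arch_model          # assumes the third-party `arch` package is installed

    rng = np.random.default_rng(0)
    returns = rng.standard_t(df=5, size=1500) * 0.8    # synthetic stand-in for daily log-returns (%)

    res = arch_model(returns, vol="Garch", p=1, q=1, dist="t").fit(disp="off")
    next_var = res.forecast(horizon=1).variance.iloc[-1, 0]   # one-step-ahead variance forecast
    print(res.params)
    print(next_var)
    ```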

  20. Prediction of lithium-ion battery capacity with metabolic grey model

    International Nuclear Information System (INIS)

    Chen, Lin; Lin, Weilong; Li, Junzi; Tian, Binbin; Pan, Haihong

    2016-01-01

    Given the popularity of lithium-ion batteries in EVs (electric vehicles), predicting the capacity quickly and accurately throughout a battery's full lifetime is still a challenging issue for ensuring the reliability of EVs. This paper proposes an approach to predicting the capacity variation over discharge cycles based on metabolic grey theory and considers the issues from two perspectives: 1) three metabolic grey models are presented, including the MGM (metabolic grey model), MREGM (metabolic residual-error grey model), and MMREGM (metabolic Markov-residual-error grey model); 2) the universality of these models is explored under different conditions (such as various discharge rates and temperatures). The findings demonstrate good prediction performance for the three models, although the precision of the MREGM model is inferior to the others. We therefore conclude that the MGM and MMREGM models perform excellently in predicting the capacity under a variety of load conditions, even when only a few data points are used for modeling. The universality of the metabolic grey prediction theory is also verified by predicting the capacity of batteries under different discharge rates and temperatures. - Highlights: • The metabolic mechanism is introduced in a grey system for capacity prediction. • Three metabolic grey models are presented and studied. • The universality of these models under different conditions is assessed. • Only a few data points are required for predicting the capacity with these models.
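
    The grey models in this record are variants built on the basic grey model; as a rough, assumption-laden sketch of the underlying mechanics (a plain GM(1,1), not the authors' metabolic or Markov variants), the following Python code fits GM(1,1) to a short capacity sequence and predicts the next values. In a metabolic scheme the oldest point would be dropped and the newest prediction appended before refitting.

```python
# Minimal sketch of a plain GM(1,1) grey model (not the metabolic/Markov
# variants of the paper). Capacity values below are made-up placeholders.
import numpy as np

def gm11_predict(x0, steps=1):
    """Fit GM(1,1) to series x0 and return `steps` predictions ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # least-squares estimate of a, b
    k = np.arange(len(x0), len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a        # AGO-level forecast
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                      # inverse AGO gives x0 forecasts

capacity = [1.85, 1.83, 1.80, 1.78, 1.75, 1.73]  # hypothetical Ah per cycle block
print(gm11_predict(capacity, steps=3))

# Metabolic use (sketch): drop capacity[0], append the new prediction, refit.
```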

  1. Comparison of joint modeling and landmarking for dynamic prediction under an illness-death model.

    Science.gov (United States)

    Suresh, Krithika; Taylor, Jeremy M G; Spratt, Daniel E; Daignault, Stephanie; Tsodikov, Alexander

    2017-11-01

    Dynamic prediction incorporates time-dependent marker information accrued during follow-up to improve personalized survival prediction probabilities. At any follow-up, or "landmark", time, the residual time distribution for an individual, conditional on their updated marker values, can be used to produce a dynamic prediction. To satisfy a consistency condition that links dynamic predictions at different time points, the residual time distribution must follow from a prediction function that models the joint distribution of the marker process and time to failure, such as a joint model. To circumvent the assumptions and computational burden associated with a joint model, approximate methods for dynamic prediction have been proposed. One such method is landmarking, which fits a Cox model at a sequence of landmark times, and thus is not a comprehensive probability model of the marker process and the event time. Considering an illness-death model, we derive the residual time distribution and demonstrate that the structure of the Cox model baseline hazard and covariate effects under the landmarking approach do not have simple form. We suggest some extensions of the landmark Cox model that should provide a better approximation. We compare the performance of the landmark models with joint models using simulation studies and cognitive aging data from the PAQUID study. We examine the predicted probabilities produced under both methods using data from a prostate cancer study, where metastatic clinical failure is a time-dependent covariate for predicting death following radiation therapy. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
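
    As a hedged illustration of the landmarking idea discussed here (not the authors' extended landmark models or the joint model), the sketch below fits a Cox model at a single landmark time using the Python lifelines package; the data frame, column names, and landmark time are hypothetical.

```python
# Minimal sketch of a single-landmark Cox model: keep subjects still at risk at
# the landmark time, condition on their current marker value, and model the
# residual time. Column names and the landmark time s are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")   # one row per subject, marker value at time s
s = 2.0                          # landmark time (years)

# Subjects still event-free and under follow-up at the landmark time
at_risk = df[df["time"] > s].copy()
at_risk["residual_time"] = at_risk["time"] - s   # time since landmark
# "marker_at_s" is the most recent marker value observed at or before s
landmark_cox = CoxPHFitter()
landmark_cox.fit(at_risk[["residual_time", "event", "marker_at_s", "age"]],
                 duration_col="residual_time", event_col="event")
landmark_cox.print_summary()

# Dynamic prediction: residual-time survival curves for the at-risk subjects
pred = landmark_cox.predict_survival_function(at_risk[["marker_at_s", "age"]],
                                              times=[1.0, 3.0, 5.0])
```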

  2. A Grey NGM(1,1,k) Self-Memory Coupling Prediction Model for Energy Consumption Prediction

    Directory of Open Access Journals (Sweden)

    Xiaojun Guo

    2014-01-01

    Full Text Available Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences often emerging in the energy system, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward in order to improve predictive performance. It achieves an organic integration of the self-memory principle of dynamic systems and the grey NGM(1,1,k) model. The traditional grey model's weakness of being sensitive to initial values can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used for demonstration with the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.

  3. Improving Predictive Modeling in Pediatric Drug Development: Pharmacokinetics, Pharmacodynamics, and Mechanistic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Slikker, William; Young, John F.; Corley, Rick A.; Dorman, David C.; Conolly, Rory B.; Knudsen, Thomas; Erstad, Brian L.; Luecke, Richard H.; Faustman, Elaine M.; Timchalk, Chuck; Mattison, Donald R.

    2005-07-26

    A workshop was conducted on November 18–19, 2004, to address the issue of improving predictive models for drug delivery to developing humans. Although considerable progress has been made for adult humans, large gaps remain for predicting pharmacokinetic/pharmacodynamic (PK/PD) outcome in children because most adult models have not been tested during development. The goals of the meeting included a description of when, during development, infants/children become adultlike in handling drugs. The issue of incorporating the most recent advances into the predictive models was also addressed: both the use of imaging approaches and genomic information were considered. Disease state, as exemplified by obesity, was addressed as a modifier of drug pharmacokinetics and pharmacodynamics during development. Issues addressed in this workshop should be considered in the development of new predictive and mechanistic models of drug kinetics and dynamics in the developing human.

  4. Weak and strong chaos in Fermi-Pasta-Ulam models and beyond

    Science.gov (United States)

    Pettini, Marco; Casetti, Lapo; Cerruti-Sola, Monica; Franzosi, Roberto; Cohen, E. G. D.

    2005-03-01

    We briefly review some of the most relevant results that our group obtained in the past, while investigating the dynamics of the Fermi-Pasta-Ulam (FPU) models. The first result is the numerical evidence of the existence of two different kinds of transitions in the dynamics of the FPU models: (i) A stochasticity threshold (ST), characterized by a value of the energy per degree of freedom below which the overwhelming majority of the phase space trajectories are regular (vanishing Lyapunov exponents). It tends to vanish as the number N of degrees of freedom is increased. (ii) A strong stochasticity threshold (SST), characterized by a value of the energy per degree of freedom at which a crossover appears between two different power laws of the energy dependence of the largest Lyapunov exponent, which phenomenologically corresponds to the transition between weak and strong chaotic regimes. It is stable with N. The second result is the development of a Riemannian geometric theory to explain the origin of Hamiltonian chaos. The development of this theory was motivated by the inadequacy of the approach based on homoclinic intersections to explain the origin of chaos in systems of arbitrarily large N, or arbitrarily far from quasi-integrability, or displaying a transition between weak and strong chaos. Finally, the third result stems from the search for the transition between weak and strong chaos in systems other than FPU. Actually, we found that a very sharp SST appears as the dynamical counterpart of a thermodynamic phase transition, which in turn has led, in the light of the Riemannian theory of chaos, to the development of a topological theory of phase transitions.

  5. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  6. GEMINI/GMOS SPECTROSCOPY OF 26 STRONG-LENSING-SELECTED GALAXY CLUSTER CORES

    International Nuclear Information System (INIS)

    Bayliss, Matthew B.; Gladders, Michael D.; Koester, Benjamin P.; Hennawi, Joseph F.; Sharon, Keren; Dahle, Haakon; Oguri, Masamune

    2011-01-01

    We present results from a spectroscopic program targeting 26 strong-lensing cluster cores that were visually identified in the Sloan Digital Sky Survey (SDSS) and the Second Red-Sequence Cluster Survey (RCS-2). The 26 galaxy cluster lenses span a redshift range of 0.2 [...] M_Vir = 7.84 x 10^14 M_sun h^-1_0.7, which is somewhat higher than predictions for strong-lensing-selected clusters in simulations. The disagreement is not significant considering the large uncertainty in our dynamical data, systematic uncertainties in the velocity dispersion calibration, and limitations of the theoretical modeling. Nevertheless, our study represents an important first step toward characterizing large samples of clusters that are identified in a systematic way as systems exhibiting dramatic strong-lensing features.

  7. Effective model with strong Kitaev interactions for α -RuCl3

    Science.gov (United States)

    Suzuki, Takafumi; Suga, Sei-ichiro

    2018-04-01

    We use an exact numerical diagonalization method to calculate the dynamical spin structure factors of three ab initio models and one ab initio guided model for a honeycomb-lattice magnet α -RuCl3 . We also use thermal pure quantum states to calculate the temperature dependence of the heat capacity, the nearest-neighbor spin-spin correlation function, and the static spin structure factor. From the results obtained from these four effective models, we find that, even when the magnetic order is stabilized at low temperature, the intensity at the Γ point in the dynamical spin structure factors increases with increasing nearest-neighbor spin correlation. In addition, we find that the four models fail to explain heat-capacity measurements whereas two of the four models succeed in explaining inelastic-neutron-scattering experiments. In the four models, when temperature decreases, the heat capacity shows a prominent peak at a high temperature where the nearest-neighbor spin-spin correlation function increases. However, the peak temperature in heat capacity is too low in comparison with that observed experimentally. To address these discrepancies, we propose an effective model that includes strong ferromagnetic Kitaev coupling, and we show that this model quantitatively reproduces both inelastic-neutron-scattering experiments and heat-capacity measurements. To further examine the adequacy of the proposed model, we calculate the field dependence of the polarized terahertz spectra, which reproduces the experimental results: the spin-gapped excitation survives up to an onset field where the magnetic order disappears and the response in the high-field region is almost linear. Based on these numerical results, we argue that the low-energy magnetic excitation in α -RuCl3 is mainly characterized by interactions such as off-diagonal interactions and weak Heisenberg interactions between nearest-neighbor pairs, rather than by the strong Kitaev interactions.

  8. Flash Flood Prediction by Coupling KINEROS2 and HEC-RAS Models for Tropical Regions of Northern Vietnam

    Directory of Open Access Journals (Sweden)

    Hong Quang Nguyen

    2015-11-01

    Full Text Available Northern Vietnam is a region prone to heavy flash flooding events. These often have devastating effects on the environment, cause economic damage and, in the worst case scenario, cost human lives. As their frequency and severity are likely to increase in the future, procedures have to be established to cope with this threat. As the prediction of potential flash floods represents one crucial element in this circumstance, we will present an approach that combines the two models KINEROS2 and HEC-RAS in order to accurately predict their occurrence. We used a documented event on 23 June 2011 in the Nam Khat and the larger adjacent Nam Kim watershed to calibrate the coupled model approach. Afterward, we evaluated the performance of the coupled models in predicting flow velocity (FV), water levels (WL), discharge (Q), and streamflow power (P) during the 3–5 days following the event, using two different precipitation datasets from the global spectral model (GSM) and the high resolution model (HRM). Our results show that the estimated Q and WL closely matched observed data, with a Nash–Sutcliffe simulation efficiency coefficient (NSE) of around 0.93 and a coefficient of determination (R2) of above 0.96. The resulting analyses reveal strong relationships between river geometry and FV, WL and P. Although there were some minor errors in forecast results, the model-predicted Q and WL corresponded well to the gauged data.
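
    The skill scores quoted here are standard; as a small illustrative sketch (with made-up arrays rather than the study's discharge data), the Nash-Sutcliffe efficiency and coefficient of determination can be computed as follows.

```python
# Minimal sketch: Nash-Sutcliffe efficiency (NSE) and R^2 between observed and
# simulated discharge. The arrays below are illustrative placeholders.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]
    return r ** 2

observed_q = [12.0, 35.0, 160.0, 420.0, 310.0, 150.0, 60.0]   # m^3/s, hypothetical
simulated_q = [10.0, 40.0, 150.0, 400.0, 330.0, 140.0, 65.0]

print(f"NSE = {nash_sutcliffe(observed_q, simulated_q):.3f}")
print(f"R^2 = {r_squared(observed_q, simulated_q):.3f}")
```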

  9. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
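
    As an illustration of the simplest of the feature-selection routes mentioned (ranking by variable importance, not the AVI, KIAVI, Boruta, or RRF variants), the sketch below trains a random forest on point observations of hardness classes with multibeam-derived predictors and then refits on the top-ranked variables; the file and column names are hypothetical.

```python
# Minimal sketch: random forest classification of seabed hardness classes with a
# simple variable-importance-based feature selection step. The CSV file and
# column names are hypothetical placeholders for point samples + multibeam layers.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("hardness_points.csv")
predictors = [c for c in data.columns if c != "hardness_class"]
X, y = data[predictors], data["hardness_class"]

rf = RandomForestClassifier(n_estimators=500, random_state=1)
rf.fit(X, y)

# Rank predictors by impurity-based variable importance
importance = pd.Series(rf.feature_importances_, index=predictors)
importance = importance.sort_values(ascending=False)
print(importance)

# Refit on the top-ranked predictors and compare cross-validated accuracy
top = importance.index[:8]
score_full = cross_val_score(rf, X, y, cv=5).mean()
score_top = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=1),
                            X[top], y, cv=5).mean()
print(f"accuracy, all predictors: {score_full:.3f}; top predictors: {score_top:.3f}")
```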

  10. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

    While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these ''robust'' predictions and compares them with the data

  11. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the ''philosophy'' behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical

  12. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties

  13. Alignment and prediction of cis-regulatory modules based on a probabilistic model of evolution.

    Directory of Open Access Journals (Sweden)

    Xin He

    2009-03-01

    Full Text Available Cross-species comparison has emerged as a powerful paradigm for predicting cis-regulatory modules (CRMs) and understanding their evolution. The comparison requires reliable sequence alignment, which remains a challenging task for less conserved noncoding sequences. Furthermore, the existing models of DNA sequence evolution generally do not explicitly treat the special properties of CRM sequences. To address these limitations, we propose a model of CRM evolution that captures different modes of evolution of functional transcription factor binding sites (TFBSs) and the background sequences. A particularly novel aspect of our work is a probabilistic model of gains and losses of TFBSs, a process being recognized as an important part of regulatory sequence evolution. We present a computational framework that uses this model to solve the problems of CRM alignment and prediction. Our alignment method is similar to existing methods of statistical alignment but uses the conserved binding sites to improve alignment. Our CRM prediction method deals with the inherent uncertainties of binding site annotations and sequence alignment in a probabilistic framework. In simulated as well as real data, we demonstrate that our program is able to improve both alignment and prediction of CRM sequences over several state-of-the-art methods. Finally, we used alignments produced by our program to study binding site conservation in genome-wide binding data of key transcription factors in the Drosophila blastoderm, with two intriguing results: (i) the factor-bound sequences are under strong evolutionary constraints even if their neighboring genes are not expressed in the blastoderm, and (ii) binding sites in distal bound sequences (relative to transcription start sites) tend to be more conserved than those in proximal regions. Our approach is implemented as software, EMMA (Evolutionary Model-based cis-regulatory Module Analysis), ready to be applied in a broad biological context.

  14. In silico modeling to predict drug-induced phospholipidosis

    International Nuclear Information System (INIS)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G.; Sadrieh, Nakissa

    2013-01-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases ≥ 80%, leading to desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL

  15. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  16. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  17. Solid phase evolution in the Biosphere 2 hillslope experiment as predicted by modeling of hydrologic and geochemical fluxes

    Directory of Open Access Journals (Sweden)

    K. Dontsova

    2009-12-01

    Full Text Available A reactive transport geochemical modeling study was conducted to help predict the mineral transformations occurring over a ten year time-scale that are expected to impact soil hydraulic properties in the Biosphere 2 (B2) synthetic hillslope experiment. The modeling sought to predict the rate and extent of weathering of a granular basalt (selected for hillslope construction) as a function of climatic drivers, and to assess the feedback effects of such weathering processes on the hydraulic properties of the hillslope. Flow vectors were imported from HYDRUS into a reactive transport code, CrunchFlow2007, which was then used to model mineral weathering coupled to reactive solute transport. Associated particle size evolution was translated into changes in saturated hydraulic conductivity using Rosetta software. We found that flow characteristics, including velocity and saturation, strongly influenced the predicted extent of incongruent mineral weathering and neo-phase precipitation on the hillslope. Results were also highly sensitive to specific surface areas of the soil media, consistent with surface reaction controls on dissolution. Effects of fluid flow on weathering resulted in significant differences in the prediction of soil particle size distributions, which should feedback to alter hillslope hydraulic conductivities.

  18. Understanding predictability and exploration in human mobility

    DEFF Research Database (Denmark)

    Cuttone, Andrea; Jørgensen, Sune Lehmann; González, Marta C.

    2018-01-01

    Predictive models for human mobility have important applications in many fields including traffic control, ubiquitous computing, and contextual advertisement. The predictive performance of models in the literature varies quite broadly, from over 90% to under 40%. In this work we study which underlying [...] strong influence on the accuracy of prediction. Finally we reveal that the exploration of new locations is an important factor in human mobility, and we measure that on average 20-25% of transitions are to new places, and approx. 70% of locations are visited only once. We discuss how these mechanisms are important factors limiting our ability to predict human mobility.
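
    The exploration statistics quoted (share of transitions to new places, share of locations visited only once) are straightforward to compute from a location sequence; here is a small sketch under the assumption that a user's trajectory is available as an ordered list of discrete location identifiers (the example sequence is made up).

```python
# Minimal sketch: fraction of transitions that go to a never-before-seen
# location, and fraction of locations visited exactly once, for one user's
# ordered sequence of discrete location IDs (the sequence below is made up).
from collections import Counter

trajectory = ["home", "work", "cafe", "work", "home", "gym", "work", "park", "home"]

seen = {trajectory[0]}
new_transitions = 0
for loc in trajectory[1:]:
    if loc not in seen:
        new_transitions += 1
        seen.add(loc)
exploration_rate = new_transitions / (len(trajectory) - 1)

visit_counts = Counter(trajectory)
visited_once = sum(1 for c in visit_counts.values() if c == 1) / len(visit_counts)

print(f"share of transitions to new places: {exploration_rate:.2f}")
print(f"share of locations visited only once: {visited_once:.2f}")
```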

  19. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  20. Development and validation of a prediction model for loss of physical function in elderly hemodialysis patients.

    Science.gov (United States)

    Fukuma, Shingo; Shimizu, Sayaka; Shintani, Ayumi; Kamitani, Tsukasa; Akizawa, Tadao; Fukuhara, Shunichi

    2017-09-05

    Among aging hemodialysis patients, loss of physical function has become a major issue. We developed and validated a model for predicting loss of physical function among elderly hemodialysis patients. We conducted a cohort study involving maintenance hemodialysis patients ≥65 years of age from the Dialysis Outcomes and Practice Pattern Study in Japan. The derivation cohort included 593 early-phase (1996-2004) patients and the temporal validation cohort included 447 late-phase (2005-12) patients. The main outcome was the incidence of loss of physical function, defined as the 12-item Short Form Health Survey physical function score decreasing to 0 within a year. Using backward stepwise logistic regression based on Akaike's Information Criterion, six predictors (age, gender, dementia, mental health, moderate activity and ascending stairs) were selected for the final model. Points were assigned based on the regression coefficients and the total score was calculated by summing the points for each predictor. In total, 65 (11.0%) and 53 (11.9%) hemodialysis patients lost their physical function within 1 year in the derivation and validation cohorts, respectively. The model showed good predictive performance, as quantified by both discrimination and calibration. The proportion of patients losing physical function increased sequentially through the low-, middle-, and high-score categories based on the model (2.5%, 11.7% and 22.3% in the validation cohort, respectively). Loss of physical function was strongly associated with 1-year mortality [adjusted odds ratio 2.48 (95% confidence interval 1.26-4.91)]. We developed and validated a risk prediction model with good predictive performance for loss of physical function in elderly hemodialysis patients. Our simple prediction model may help physicians and patients make more informed decisions for healthy longevity. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA.
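
    The scoring approach described (assigning points from logistic-regression coefficients and summing them per patient) can be illustrated with a small hedged sketch; the coefficients, predictor names, and scaling below are invented placeholders, not the published six-predictor model.

```python
# Minimal sketch: convert logistic-regression coefficients to integer points
# (smallest coefficient = 1 point) and sum them for a patient. All numbers and
# predictor names are hypothetical; the published model is not reproduced here.
coefficients = {                      # log-odds ratios from a fitted model
    "age_75_plus": 0.60,
    "male": 0.30,
    "dementia": 0.90,
    "poor_mental_health": 0.55,
    "no_moderate_activity": 0.70,
    "difficulty_climbing_stairs": 0.85,
}
unit = min(coefficients.values())     # smallest effect defines one point
points = {k: round(v / unit) for k, v in coefficients.items()}
print(points)

patient = {"age_75_plus": 1, "male": 0, "dementia": 0, "poor_mental_health": 1,
           "no_moderate_activity": 1, "difficulty_climbing_stairs": 0}
total_score = sum(points[k] * patient[k] for k in points)
print(f"total risk score: {total_score}")
```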

  1. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.

  2. Real-Time Aircraft Cosmic Ray Radiation Exposure Predictions from the NAIRAS Model

    Science.gov (United States)

    Mertens, C. J.; Tobiska, W.; Kress, B. T.; Xu, X.

    2012-12-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a prototype operational model for predicting commercial aircraft radiation exposure from galactic and solar cosmic rays. NAIRAS predictions are currently streaming live from the project's public website, and the exposure rate nowcast is also available on the SpaceWx smartphone app for iPhone, IPad, and Android. Cosmic rays are the primary source of human exposure to high linear energy transfer radiation at aircraft altitudes, which increases the risk of cancer and other adverse health effects. Thus, the NAIRAS model addresses an important national need with broad societal, public health and economic benefits. There is also interest in extending NAIRAS to the LEO environment to address radiation hazard issues for the emerging commercial spaceflight industry. The processes responsible for the variability in the solar wind, interplanetary magnetic field, solar energetic particle spectrum, and the dynamical response of the magnetosphere to these space environment inputs, strongly influence the composition and energy distribution of the atmospheric ionizing radiation field. Real-time observations are required at a variety of locations within the geospace environment. The NAIRAS model is driven by real-time input data from ground-, atmospheric-, and space-based platforms. During the development of the NAIRAS model, new science questions and observational data gaps were identified that must be addressed in order to obtain a more reliable and robust operational model of atmospheric radiation exposure. The focus of this talk is to present the current capabilities of the NAIRAS model, discuss future developments in aviation radiation modeling and instrumentation, and propose strategies and methodologies of bridging known gaps in current modeling and observational capabilities.

  3. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  4. Predictive modeling of neuroanatomic structures for brain atrophy detection

    Science.gov (United States)

    Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming

    2010-03-01

    In this paper, we present an approach of predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI image. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from brain tissue map and Regions of Interests (ROI) on it are predicted from other reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within the abnormal region is expected to be larger than that of the vertex in normal brain region. Change of white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated by using both simulated atrophies and MRI images of Alzheimer's disease.

  5. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Science.gov (United States)

    King, Michael; Marston, Louise; Švab, Igor; Maaroos, Heidi-Ingrid; Geerlings, Mirjam I; Xavier, Miguel; Benjamin, Vicente; Torres-Gonzalez, Francisco; Bellon-Saameno, Juan Angel; Rotar, Danica; Aluoja, Anu; Saldivia, Sandra; Correa, Bernardo; Nazareth, Irwin

    2011-01-01

    Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  6. Development and validation of a risk model for prediction of hazardous alcohol consumption in general practice attendees: the predictAL study.

    Directory of Open Access Journals (Sweden)

    Michael King

    Full Text Available Little is known about the risk of progression to hazardous alcohol use in people currently drinking at safe limits. We aimed to develop a prediction model (predictAL) for the development of hazardous drinking in safe drinkers. A prospective cohort study of adult general practice attendees in six European countries and Chile followed up over 6 months. We recruited 10,045 attendees between April 2003 and February 2005. 6193 European and 2462 Chilean attendees recorded AUDIT scores below 8 in men and 5 in women at recruitment and were used in modelling risk. 38 risk factors were measured to construct a risk model for the development of hazardous drinking using stepwise logistic regression. The model was corrected for overfitting and tested in an external population. The main outcome was hazardous drinking defined by an AUDIT score ≥8 in men and ≥5 in women. 69.0% of attendees were recruited, of whom 89.5% participated again after six months. The risk factors in the final predictAL model were sex, age, country, baseline AUDIT score, panic syndrome and lifetime alcohol problem. The predictAL model's average c-index across all six European countries was 0.839 (95% CI 0.805, 0.873). The Hedge's g effect size for the difference in log odds of predicted probability between safe drinkers in Europe who subsequently developed hazardous alcohol use and those who did not was 1.38 (95% CI 1.25, 1.51). External validation of the algorithm in Chilean safe drinkers resulted in a c-index of 0.781 (95% CI 0.717, 0.846) and Hedge's g of 0.68 (95% CI 0.57, 0.78). The predictAL risk model for development of hazardous consumption in safe drinkers compares favourably with risk algorithms for disorders in other medical settings and can be a useful first step in prevention of alcohol misuse.

  7. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed good performance.
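
    A hedged sketch of the modelling route described (a gender-specific Cox proportional hazards model with Harrell's C-statistic for discrimination) is shown below using the Python lifelines package; the files and covariate names are placeholders rather than the national insurance cohort.

```python
# Minimal sketch: Cox proportional hazards model for gastric cancer incidence
# and Harrell's C-statistic on a held-out cohort. Files and column names are
# hypothetical placeholders; covariates are assumed to be numerically coded.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

covariates = ["age", "bmi", "family_history", "salt_preference",
              "alcohol", "smoking"]
train = pd.read_csv("cohort_men_develop.csv")
test = pd.read_csv("cohort_men_validate.csv")

cph = CoxPHFitter()
cph.fit(train[covariates + ["followup_years", "gastric_cancer"]],
        duration_col="followup_years", event_col="gastric_cancer")
cph.print_summary()

# Discrimination in the validation cohort (Harrell's C-statistic); higher
# partial hazard means shorter expected time to event, hence the minus sign.
risk = cph.predict_partial_hazard(test[covariates])
c_index = concordance_index(test["followup_years"], -risk, test["gastric_cancer"])
print(f"validation C-statistic: {c_index:.3f}")
```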

  8. A discriminant analysis prediction model of non-syndromic cleft lip with or without cleft palate based on risk factors.

    Science.gov (United States)

    Li, Huixia; Luo, Miyang; Luo, Jiayou; Zheng, Jianfei; Zeng, Rong; Du, Qiyun; Fang, Junqun; Ouyang, Na

    2016-11-23

    A risk prediction model of non-syndromic cleft lip with or without cleft palate (NSCL/P) was established by a discriminant analysis to predict the individual risk of NSCL/P in pregnant women. A hospital-based case-control study was conducted with 113 cases of NSCL/P and 226 controls without NSCL/P. The cases and the controls were obtained from 52 birth defects' surveillance hospitals in Hunan Province, China. A questionnaire was administered in person to collect the variables relevant to NSCL/P through face-to-face interviews. Logistic regression models were used to analyze the influencing factors of NSCL/P, and a stepwise Fisher discriminant analysis was subsequently used to construct the prediction model. In the univariate analysis, 13 influencing factors were related to NSCL/P, of which the following 8 factors were retained as predictors in the discriminant prediction model: family income, maternal occupational hazards exposure, premarital medical examination, housing renovation, milk/soymilk intake in the first trimester of pregnancy, paternal occupational hazards exposure, paternal strong tea drinking, and family history of NSCL/P. The model was statistically significant (lambda = 0.772, chi-square = 86.044, df = 8). Self-verification showed that 83.8% of the participants were correctly classified as NSCL/P cases or controls, with a sensitivity of 74.3% and a specificity of 88.5%. The area under the receiver operating characteristic curve (AUC) was 0.846. The prediction model that was established using the risk factors of NSCL/P can be useful for predicting the risk of NSCL/P. Further research is needed to improve the model, and confirm the validity and reliability of the model.
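
    The discriminant-analysis step can be sketched, under assumptions, with scikit-learn's LinearDiscriminantAnalysis on binary case-control data; the file, feature names, and coding below are hypothetical, and the stepwise Fisher selection of the original study is not reproduced.

```python
# Minimal sketch: linear discriminant analysis for NSCL/P case-control data,
# with sensitivity, specificity, and AUC computed by self-verification. All
# names are hypothetical; stepwise predictor selection is not reproduced here.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, roc_auc_score

data = pd.read_csv("nsclp_case_control.csv")
predictors = ["family_income", "maternal_occ_hazard", "premarital_exam",
              "housing_renovation", "milk_soymilk_intake",
              "paternal_occ_hazard", "paternal_strong_tea", "family_history"]
X, y = data[predictors], data["case"]          # case = 1, control = 0

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

pred = lda.predict(X)                          # self-verification on the sample
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y, lda.decision_function(X))
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, AUC={auc:.3f}")
```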

  9. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces uncertainties in predicted model responses and parameters. • PMCMPS treats very large coupled systems efficiently. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology fully takes into account the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy”-approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values of the multi-physics models' parameters and responses, along with corresponding reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially

  10. Comparison of the models of financial distress prediction

    Directory of Open Access Journals (Sweden)

    Jiří Omelka

    2013-01-01

    Full Text Available Prediction of financial distress generally addresses whether a business entity is approaching bankruptcy or at least serious financial problems. Financial distress is defined as a situation in which a company is not able to satisfy its liabilities in any form, or in which its liabilities are higher than its assets. Classification of the financial situation of business entities is a multidisciplinary scientific issue that draws not only on economic theory but also on statistical and econometric approaches. The first models of financial distress prediction originated in the 1960s. One of the best known is Altman's model, followed by a range of others constructed on broadly similar foundations. In many existing models it is possible to find common elements which could be regarded as elementary indicators of potential financial distress of a company. The objective of this article is, based on a comparison of existing financial distress prediction models, to define a set of basic indicators of company financial distress and to identify their critical aspects. The set defined in this way will serve as a background for future research focused on determining a one-dimensional model of financial distress prediction, which would subsequently become a basis for the construction of a multi-dimensional prediction model.

  11. Decision Making in Reference to Model of Marketing Predictive Analytics – Theory and Practice

    Directory of Open Access Journals (Sweden)

    Piotr Tarka

    2014-03-01

    Full Text Available Purpose: The objective of this paper is to describe concepts and assumptions of predictive marketing analytics in reference to decision making. In particular, we highlight issues pertaining to the importance of data and the modern approach to data analysis and processing with the purpose of solving real marketing problems that companies encounter in business. Methodology: In this paper the authors provide two case studies showing how, and to what extent, predictive marketing analytics can be useful in practice, e.g., in the investigation of the marketing environment. The two cases are based on organizations operating mainly in the Web domain. The first part of this article begins with an explanation of the general idea of predictive marketing analytics. The second part runs through the opportunities it creates for companies in the process of building strong competitive advantage in the market. The article ends with a brief comparison of predictive analytics versus traditional marketing-mix analysis. Findings: Analytics play an extremely important role in the current process of business management based on planning, organizing, implementing and controlling marketing activities. Predictive analytics provides an up-to-date picture of the external environment. It also explains what problems the company faces in its business activities. Analytics tailor marketing solutions to the right time and place at minimum cost. In fact, they control the efficiency and simultaneously increase the effectiveness of the firm. Practical implications: Based on the case studies comparing two enterprises carrying out business activities in different areas, one can say that predictive analytics has been embraced far more extensively than classical marketing-mix analyses. The predictive approach yields greater speed of data collection and analysis, stronger predictive accuracy, better competitor data, and more transparent models where one can

  12. Topoclimatic modeling for minimum temperature prediction at a regional scale in the Central Valley of Chile

    International Nuclear Information System (INIS)

    Santibáñez, F.; Morales, L.; Fuente, J. de la; Cellier, P.; Huete, A.

    1997-01-01

    Spring frost may strongly affect fruit production in the Central Valley of Chile. Minimum temperatures are spatially variable owing to topography and soil conditions. A methodology for forecasting minimum temperature at a regional scale in the Central Valley of Chile, integrating the spatial variability of temperature under radiative frost conditions, has been developed. It simultaneously uses a model for forecasting minimum temperatures at a reference station, based on air temperature and humidity measured at 6 pm, and topoclimatic models, based on satellite infra-red imagery (NOAA/AVHRR) and a digital elevation model, to extend the prediction to a regional scale. The methodological developments were integrated in a geographic information system for georeferencing of the meteorological station with satellite imagery and modeled output. This approach proved to be a useful tool for short range (12 h) minimum temperature prediction by generating thermal images over the Central Valley of Chile. It may also be used as a tool for frost risk assessment, in order to adapt production to local climatological conditions. (author)

  13. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

    Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients that were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, in which the tumor response relationships were more complex, the model has less information, and the predictors were more varied. The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during
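
    Locally weighted regression of the kind described can be sketched in a few lines of numpy: each prediction point gets its own weighted least-squares fit, with weights from a Gaussian kernel over the early-treatment observations. The observations and bandwidth below are illustrative placeholders, not the study's memory-based model.

```python
# Minimal sketch: locally weighted (kernel-weighted) linear regression to
# extrapolate relative tumor volume from early-fraction measurements. The
# observations and bandwidth are illustrative placeholders only.
import numpy as np

def lwr_predict(x_train, y_train, x_query, bandwidth=5.0):
    """Predict y at each query point with a Gaussian-weighted linear fit."""
    preds = []
    for xq in np.atleast_1d(x_query):
        w = np.exp(-0.5 * ((x_train - xq) / bandwidth) ** 2)   # kernel weights
        W = np.diag(w)
        A = np.column_stack([np.ones_like(x_train), x_train])  # [1, x] design
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_train) # weighted LSQ
        preds.append(beta[0] + beta[1] * xq)
    return np.array(preds)

days = np.array([1, 2, 5, 9, 11, 12], dtype=float)             # early observations
rel_volume = np.array([1.00, 0.98, 0.93, 0.88, 0.85, 0.84])    # hypothetical
print(lwr_predict(days, rel_volume, x_query=[20, 30], bandwidth=6.0))
```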

  14. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  15. The strong interaction in e⁺e⁻ annihilation and deep inelastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Samuelsson, J

    1996-01-01

    Various aspects of strong interactions are considered. Correlation effects in the hadronization process in a string model are studied. A discrete approximation scheme to the perturbative QCD cascade in e⁺e⁻ annihilation is formulated. The model, Discrete QCD, predicts a rather low phase space density of 'effective gluons'. This is related to the properties of the running coupling constant. It provides us with a simple tool for studies of the strong interaction. It is shown that it reproduces well-known properties of parton cascades. A new formalism for the Deep Inelastic Scattering (DIS) process is developed. The model, which is called the Linked Dipole Chain Model, provides an interpolation between the regions of high Q² (DGLAP) and of low x and moderate Q² (BFKL). It gives a unified treatment of the different interaction channels in a DIS process. 17 figs.

  16. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  17. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...... values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  18. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....

  19. Modeling pitting growth data and predicting degradation trend

    International Nuclear Information System (INIS)

    Viglasky, T.; Awad, R.; Zeng, Z.; Riznic, J.

    2007-01-01

    A non-statistical modeling approach to predict material degradation is presented in this paper. In this approach, the original data series is processed using Accumulated Generating Operation (AGO). With the aid of the AGO which weakens the random fluctuation embedded in the data series, an approximately exponential curve is established. The generated data series described by the exponential curve is then modeled by a differential equation. The coefficients of the differential equation can be deduced by approximate difference formula based on least-squares algorithm. By solving the differential equation and processing an inverse AGO, a predictive model can be obtained. As this approach is not established on the basis of statistics, the prediction can be performed with a limited amount of data. Implementation of this approach is demonstrated by predicting the pitting growth rate in specimens and wear trend in steam generator tubes. The analysis results indicate that this approach provides a powerful tool with reasonable precision to predict material degradation. (author)
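
    The procedure outlined above (AGO, a first-order grey differential equation fitted by least squares, then an inverse AGO) is essentially a GM(1,1)-type grey model. The snippet below is a generic sketch of that idea on synthetic pit-depth data, not the authors' implementation; all numbers are illustrative.

```python
import numpy as np

def gm11_predict(x0, n_ahead=3):
    """Fit a GM(1,1)-style grey model: AGO -> first-order grey ODE -> inverse AGO."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # Accumulated Generating Operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean) sequence
    # Least-squares estimate of a, b in x0[k] + a*z1[k] = b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # solution of the grey ODE
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # inverse AGO
    x0_hat[0] = x0[0]
    return x0_hat

# Example: monotonically growing "pit depth" measurements (synthetic)
depths = [0.12, 0.15, 0.19, 0.22, 0.27, 0.31]
print(gm11_predict(depths, n_ahead=2))
```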

  20. Risk Prediction Models for Incident Heart Failure: A Systematic Review of Methodology and Model Performance.

    Science.gov (United States)

    Sahle, Berhe W; Owen, Alice J; Chin, Ken Lee; Reid, Christopher M

    2017-09-01

    Numerous models predicting the risk of incident heart failure (HF) have been developed; however, evidence of their methodological rigor and reporting remains unclear. This study critically appraises the methods underpinning incident HF risk prediction models. EMBASE and PubMed were searched for articles published between 1990 and June 2016 that reported at least 1 multivariable model for prediction of HF. Model development information, including study design, variable coding, missing data, and predictor selection, was extracted. Nineteen studies reporting 40 risk prediction models were included. Existing models have acceptable discriminative ability (C-statistics > 0.70), although only 6 models were externally validated. Candidate variable selection was based on statistical significance from a univariate screening in 11 models, whereas it was unclear in 12 models. Continuous predictors were retained in 16 models, whereas it was unclear how continuous variables were handled in 16 models. Missing values were excluded in 19 of 23 models that reported missing data, and the number of events per variable was models. Only 2 models presented recommended regression equations. There was significant heterogeneity in discriminative ability of models with respect to age (P prediction models that had sufficient discriminative ability, although few are externally validated. Methods not recommended for the conduct and reporting of risk prediction modeling were frequently used, and resulting algorithms should be applied with caution. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Mathematical model for dissolved oxygen prediction in Cirata ...

    African Journals Online (AJOL)

    This paper presents the implementation and performance of mathematical model to predict theconcentration of dissolved oxygen in Cirata Reservoir, West Java by using Artificial Neural Network (ANN). The simulation program was created using Visual Studio 2012 C# software with ANN model implemented in it. Prediction ...

  2. The strong-weak coupling symmetry in 2D Φ4 field models

    Directory of Open Access Journals (Sweden)

    B.N.Shalaev

    2005-01-01

    Full Text Available It is found that the exact beta-function β(g) of the continuous 2D gΦ4 model possesses two types of dual symmetries, these being the Kramers-Wannier (KW) duality symmetry and the strong-weak (SW) coupling symmetry f(g), or S-duality. All these transformations are explicitly constructed. The S-duality transformation f(g) is shown to connect domains of weak and strong couplings, i.e. above and below g*. Basically it means that there is a tempting possibility to compute multiloop Feynman diagrams for the β-function using high-temperature lattice expansions. The regular scheme developed is found to be strongly unstable. Approximate values of the renormalized coupling constant g* found from duality symmetry equations are in agreement with available numerical results.

  3. Risk Prediction Model for Severe Postoperative Complication in Bariatric Surgery.

    Science.gov (United States)

    Stenberg, Erik; Cao, Yang; Szabo, Eva; Näslund, Erik; Näslund, Ingmar; Ottosson, Johan

    2018-01-12

    Factors associated with risk for adverse outcome are important considerations in the preoperative assessment of patients for bariatric surgery. As yet, prediction models based on preoperative risk factors have not been able to predict adverse outcome sufficiently. This study aimed to identify preoperative risk factors and to construct a risk prediction model based on these. Patients who underwent a bariatric surgical procedure in Sweden between 2010 and 2014 were identified from the Scandinavian Obesity Surgery Registry (SOReg). Associations between preoperative potential risk factors and severe postoperative complications were analysed using a logistic regression model. A multivariate model for risk prediction was created and validated in the SOReg for patients who underwent bariatric surgery in Sweden, 2015. Revision surgery (standardized OR 1.19, 95% confidence interval (CI) 1.14-0.24, p prediction model. Despite high specificity, the sensitivity of the model was low. Revision surgery, high age, low BMI, large waist circumference, and dyspepsia/GERD were associated with an increased risk for severe postoperative complication. The prediction model based on these factors, however, had a sensitivity that was too low to predict risk in the individual patient case.

  4. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improving service provisioning in hospital resources management. Hospital industries work with strict budget constraints while at the same time assuring quality care. To achieve quality care under budget constraints, an efficient prediction model is required. Recently, various time series based prediction models have been proposed to manage hospital resources such as ambulance monitoring, emergency care and so on. These models are not efficient as they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction models is that training suffers from local optima error, which induces overhead and affects the accuracy of prediction. To overcome the local minima error, this work presents a patient inflow prediction model adopting a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows that the proposed model reduces RMSE and MAPE over the existing backpropagation based artificial neural network. The overall outcomes show that the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
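
    Since the model is judged in terms of RMSE and MAPE, the following minimal sketch shows how those two error measures are commonly computed; the inflow numbers are made up for illustration.

```python
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Example: daily patient-inflow counts vs. model forecasts (synthetic numbers)
actual   = [120, 135, 128, 150, 160]
forecast = [118, 140, 125, 155, 152]
print(f"RMSE = {rmse(actual, forecast):.2f}, MAPE = {mape(actual, forecast):.2f}%")
```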

  5. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking compared to rating data, but overall, it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.

  6. Prediction models for successful external cephalic version: a systematic review.

    Science.gov (United States)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was their impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed using discrimination and calibration using internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.

  7. Predictive QSAR Models for the Toxicity of Disinfection Byproducts

    Directory of Open Access Journals (Sweden)

    Litang Qin

    2017-10-01

    Full Text Available Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure–activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH−, DNA+ and DNA−. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.

  8. Predictive QSAR Models for the Toxicity of Disinfection Byproducts.

    Science.gov (United States)

    Qin, Litang; Zhang, Xin; Chen, Yuhan; Mo, Lingyun; Zeng, Honghu; Liang, Yanpeng

    2017-10-09

    Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure-activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH-, DNA+ and DNA-. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.
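
    Two of the external validation criteria quoted above, Q²F1 and the concordance correlation coefficient, follow standard definitions and can be computed as in the sketch below; the toxicity values shown are synthetic and the choice of Q² variant is an assumption.

```python
import numpy as np

def q2_f1(y_ext, y_pred_ext, y_train_mean):
    """External Q^2 (F1 variant): deviations referenced to the training-set mean."""
    y_ext, y_pred_ext = np.asarray(y_ext, float), np.asarray(y_pred_ext, float)
    press = np.sum((y_ext - y_pred_ext) ** 2)
    ss_tot = np.sum((y_ext - y_train_mean) ** 2)
    return 1.0 - press / ss_tot

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mx, my = y_true.mean(), y_pred.mean()
    vx, vy = y_true.var(), y_pred.var()          # population variances
    cov = np.mean((y_true - mx) * (y_pred - my))
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Synthetic external-set toxicity values vs. QSAR predictions
y_ext  = [1.2, 2.5, 3.1, 0.8, 2.0]
y_pred = [1.1, 2.7, 2.9, 1.0, 2.1]
print(q2_f1(y_ext, y_pred, y_train_mean=2.0), ccc(y_ext, y_pred))
```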

  9. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics during the examination of the robustness of the predictive power of these factors.
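
    A minimal sketch of a logistic-regression scoring model of the kind described, with discrimination summarised by a ROC AUC; the predictors and data here are synthetic stand-ins, not the study's variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
# Illustrative predictors: a credit-risk index, asset growth rate, a macro indicator
X = rng.normal(size=(n, 3))
logit = -1.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # 1 = default

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"In-sample AUC: {auc:.3f}")
```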

  10. A predictive pilot model for STOL aircraft landing

    Science.gov (United States)

    Kleinman, D. L.; Killingsworth, W. R.

    1974-01-01

    An optimal control approach has been used to model pilot performance during STOL flare and landing. The model is used to predict pilot landing performance for three STOL configurations, each having a different level of automatic control augmentation. Model predictions are compared with flight simulator data. It is concluded that the model can be an effective design tool for studying analytically the effects of display modifications, different stability augmentation systems, and proposed changes in the landing area geometry.

  11. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    Science.gov (United States)

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons from the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
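
    The MISMO idea of splitting an H-step horizon into a few multi-output sub-models can be illustrated as below. The PSO step that adapts the block sizes is omitted, so the split used here is fixed by hand and purely illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_supervised(series, n_lags, horizon):
    """Turn a univariate series into (lag window -> next `horizon` values) pairs."""
    X, Y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        Y.append(series[t:t + horizon])
    return np.array(X), np.array(Y)

series = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.default_rng(1).normal(size=400)
H, n_lags = 6, 12
X, Y = make_supervised(series, n_lags, H)

# MISMO-style: split the 6-step horizon into blocks, one multi-output sub-model per block
blocks = [(0, 3), (3, 6)]           # fixed here; PSO-MISMO would tune these sizes
sub_models = []
for lo, hi in blocks:
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    m.fit(X, Y[:, lo:hi])
    sub_models.append(m)

x_last = series[-n_lags:].reshape(1, -1)
forecast = np.hstack([m.predict(x_last)[0] for m in sub_models])
print(forecast)
```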

  12. Comparison of pause predictions of two sequence-dependent transcription models

    International Nuclear Information System (INIS)

    Bai, Lu; Wang, Michelle D

    2010-01-01

    Two recent theoretical models, Bai et al (2004, 2007) and Tadigotla et al (2006), formulated thermodynamic explanations of sequence-dependent transcription pausing by RNA polymerase (RNAP). The two models differ in some basic assumptions and therefore make different yet overlapping predictions for pause locations, and different predictions on pause kinetics and mechanisms. Here we present a comprehensive comparison of the two models. We show that while they have comparable predictive power of pause locations at low NTP concentrations, the Bai et al model is more accurate than Tadigotla et al at higher NTP concentrations. The pausing kinetics predicted by Bai et al is also consistent with time-course transcription reactions, while Tadigotla et al is unsuited for this type of kinetic prediction. More importantly, the two models in general predict different pausing mechanisms even for the same pausing sites, and the Bai et al model provides an explanation more consistent with recent single molecule observations

  13. Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance

    Science.gov (United States)

    Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra B.

    2018-01-01

    Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is “the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations.” Landscape capability incorporates species’ life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.

  14. A new theory of plant-microbe nutrient competition resolves inconsistencies between observations and model predictions.

    Science.gov (United States)

    Zhu, Qing; Riley, William J; Tang, Jinyun

    2017-04-01

    Terrestrial plants assimilate anthropogenic CO2 through photosynthesis and synthesizing new tissues. However, sustaining these processes requires plants to compete with microbes for soil nutrients, which therefore calls for an appropriate understanding and modeling of nutrient competition mechanisms in Earth System Models (ESMs). Here, we survey existing plant-microbe competition theories and their implementations in ESMs. We found no consensus regarding the representation of nutrient competition and that observational and theoretical support for current implementations is weak. To reconcile this situation, we applied the Equilibrium Chemistry Approximation (ECA) theory to plant-microbe nitrogen competition in a detailed grassland 15N tracer study and found that competition theories in current ESMs fail to capture observed patterns, whereas the ECA prediction simplifies the complex nature of nutrient competition and quantitatively matches the 15N observations. Since plant carbon dynamics are strongly modulated by soil nutrient acquisition, we conclude that (1) predicted nutrient limitation effects on terrestrial carbon accumulation by existing ESMs may be biased and (2) our ECA-based approach may improve predictions by mechanistically representing plant-microbe nutrient competition. © 2016 by the Ecological Society of America.
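
    For orientation, the sketch below implements one common statement of an ECA-type uptake flux, in which each substrate-consumer flux is scaled down by the summed competition from all other substrates and consumers. The exact index structure and all parameter values here are assumptions for illustration, not taken from the study.

```python
import numpy as np

def eca_uptake(S, E, vmax, K):
    """ECA-style substrate uptake fluxes.

    S: substrate concentrations, shape (n_substrates,)
    E: consumer (plant/microbe) uptake capacities, shape (n_consumers,)
    vmax, K: per-pair maximum rates and affinities, shape (n_substrates, n_consumers)
    Returns F with F[i, j] = flux of substrate i to consumer j.
    """
    S, E = np.asarray(S, float), np.asarray(E, float)
    F = np.zeros_like(vmax, dtype=float)
    for i in range(len(S)):
        for j in range(len(E)):
            # competition terms: all substrates seen by consumer j, all consumers of substrate i
            denom = K[i, j] * (1.0 + np.sum(S / K[:, j]) + np.sum(E / K[i, :]))
            F[i, j] = vmax[i, j] * E[j] * S[i] / denom
    return F

# One substrate (mineral N), two consumers (plant roots, microbes) -- illustrative numbers
S = [5.0]
E = [1.0, 2.0]
vmax = np.array([[0.8, 1.2]])
K = np.array([[2.0, 0.5]])
print(eca_uptake(S, E, vmax, K))
```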

  15. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    Science.gov (United States)

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  16. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    Science.gov (United States)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  17. Qualitative and quantitative guidelines for the comparison of environmental model predictions

    International Nuclear Information System (INIS)

    Scott, M.

    1995-03-01

    The question of how to assess or compare predictions from a number of models is one of concern in the validation of models, in understanding the effects of different models and model parameterizations on model output, and ultimately in assessing model reliability. Comparison of model predictions with observed data is the basic tool of model validation while comparison of predictions amongst different models provides one measure of model credibility. The guidance provided here is intended to provide qualitative and quantitative approaches (including graphical and statistical techniques) to such comparisons for use within the BIOMOVS II project. It is hoped that others may find it useful. It contains little technical information on the actual methods but several references are provided for the interested reader. The guidelines are illustrated on data from the VAMP CB scenario. Unfortunately, these data do not permit all of the possible approaches to be demonstrated since predicted uncertainties were not provided. The questions considered are concerned with a) intercomparison of model predictions and b) comparison of model predictions with the observed data. A series of examples illustrating some of the different types of data structure and some possible analyses have been constructed. A bibliography of references on model validation is provided. It is important to note that the results of the various techniques discussed here, whether qualitative or quantitative, should not be considered in isolation. Overall model performance must also include an evaluation of model structure and formulation, i.e. conceptual model uncertainties, and results for performance measures must be interpreted in this context. Consider a number of models which are used to provide predictions of a number of quantities at a number of time points. In the case of the VAMP CB scenario, the results include predictions of total deposition of Cs-137 and time dependent concentrations in various

  18. Strong to fragile transition in a model of liquid silica

    OpenAIRE

    Barrat, Jean-Louis; Badro, James; Gillet, Philippe

    1996-01-01

    The transport properties of an ionic model for liquid silica at high temperatures and pressure are investigated using molecular dynamics simulations. With increasing pressure, a clear change from "strong" to "fragile" behaviour (according to Angell's classification of glass-forming liquids) is observed, albeit only on the small viscosity range that can be explored in MD simulations. This change is related to structural changes, from an almost perfect four-fold coordination to an imperfect fi...

  19. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
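
    The parameterized runup model referred to here is of the Stockdon et al. (2006) type, combining a setup term and a swash term derived from offshore wave height, period and beach slope. The coefficients in the sketch below are quoted from memory of that parameterization and should be checked against the original before any real use.

```python
import numpy as np

def runup_2pct(H0, T0, beta):
    """Parameterized 2% exceedance runup (Stockdon et al., 2006 form).

    H0: deep-water significant wave height (m), T0: peak period (s),
    beta: foreshore beach slope. Coefficients quoted from memory --
    verify against the original paper before use.
    """
    g = 9.81
    L0 = g * T0 ** 2 / (2.0 * np.pi)                          # deep-water wavelength
    setup = 0.35 * beta * np.sqrt(H0 * L0)                    # wave-induced setup
    swash = np.sqrt(H0 * L0 * (0.563 * beta ** 2 + 0.004))    # incident + infragravity swash
    return 1.1 * (setup + swash / 2.0)

print(f"R2% ~ {runup_2pct(H0=2.0, T0=10.0, beta=0.08):.2f} m")
```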

  20. Predictive models for PEM-electrolyzer performance using adaptive neuro-fuzzy inference systems

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Steffen [University of Tasmania, Hobart 7001, Tasmania (Australia); Karri, Vishy [Australian College of Kuwait (Kuwait)

    2010-09-15

    Predictive models were built using neural network based Adaptive Neuro-Fuzzy Inference Systems for hydrogen flow rate, electrolyzer system-efficiency and stack-efficiency respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with the hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. These models can also be used on-line for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias to either over prediction or under prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications. (author)

  1. State-space prediction model for chaotic time series

    Science.gov (United States)

    Alparslan, A. K.; Sayar, M.; Atilgan, A. R.

    1998-08-01

    A simple method for predicting the continuation of scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique in connection with the time-delayed embedding is employed so as to reconstruct the state space. A local forecasting model based upon the time evolution of the topological neighboring in the reconstructed phase space is suggested. A moving root-mean-square error is utilized in order to monitor the error along the prediction horizon. The model is tested for the convection amplitude of the Lorenz model. The results indicate that for approximately 100 cycles of the training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
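
    A minimal sketch of the state-space approach: delay-embed the series, locate the nearest past states, and forecast from how those neighbours evolved. The embedding dimension and delay are fixed by hand here instead of being chosen with the false-nearest-neighbours test, and the data are a noisy sine rather than Lorenz convection amplitudes.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series into `dim`-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def local_forecast(x, dim=3, tau=5, k=5, steps=20):
    """Iterated one-step prediction from the average evolution of k nearest neighbours."""
    x = list(np.asarray(x, float))
    for _ in range(steps):
        states = delay_embed(np.array(x), dim, tau)
        current, history = states[-1], states[:-1]
        dists = np.linalg.norm(history - current, axis=1)
        idx = np.argsort(dists)[:k]                       # k nearest past states
        nxt = [x[i + (dim - 1) * tau + 1] for i in idx]   # where each neighbour went next
        x.append(float(np.mean(nxt)))
    return x[-steps:]

# Example: noisy sine as a stand-in for a chaotic observable
t = np.linspace(0, 40, 800)
series = np.sin(t) + 0.02 * np.random.default_rng(0).normal(size=t.size)
print(local_forecast(series, steps=5))
```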

  2. Nonlinear Brillouin amplification of finite-duration seeds in the strong coupling regime

    International Nuclear Information System (INIS)

    Lehmann, G.; Spatschek, K. H.

    2013-01-01

    Parametric plasma processes received renewed interest in the context of generating ultra-intense and ultra-short laser pulses up to the exawatt-zettawatt regime. Both Raman as well as Brillouin amplifications of seed pulses were proposed. Here, we investigate Brillouin processes in the one-dimensional (1D) backscattering geometry with the help of numerical simulations. For optimal seed amplification, Brillouin scattering is considered in the so-called strong coupling (sc) regime. Special emphasis lies on the dependence of the amplification process on the finite duration of the initial seed pulses. First, the standard plane-wave instability predictions are generalized to pulse models, and the changes of initial seed pulse forms due to parametric instabilities are investigated. Three-wave-interaction results are compared to predictions by a new (kinetic) Vlasov code. The calculations are then extended to the nonlinear region with pump depletion. Generation of different seed layers is interpreted by self-similar solutions of the three-wave interaction model. Similar to Raman amplification, shadowing of the rear layers by the leading layers of the seed occurs. The shadowing is more pronounced for initially broad seed pulses. The effect is quantified for Brillouin amplification. Kinetic Vlasov simulations agree with the three-wave interaction predictions and thereby affirm the universal validity of self-similar layer formation during Brillouin seed amplification in the strong coupling regime

  3. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based study, that is the Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic (AUC 0.76)]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.

  4. A Strong Self-adaptivity Localization Algorithm Based on Gray Prediction Model for Mobile Nodes

    Institute of Scientific and Technical Information of China (English)

    单志龙; 刘兰辉; 张迎胜; 黄广雄

    2014-01-01

    Localization is a key technology in wireless sensor networks (WSNs), and localization of mobile nodes is one of its difficult problems. To deal with this issue, a strong self-adaptive localization algorithm based on a gray prediction model for mobile nodes (GPLA) is proposed. Building on the Monte Carlo localization framework, the algorithm uses a gray prediction model to predict node motion and to narrow the sampling area. In the filtering step, estimated distances are used to improve the validity of the sampled particles. Finally, restrictive linear crossover operations are used to generate new particles, which accelerates sample generation, reduces the number of sampling iterations and improves the efficiency of the algorithm. Simulation results show that the algorithm exhibits good performance and strong self-adaptivity under varying communication radius, anchor node density, sample size and other conditions.
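
    A rough sketch of the Monte Carlo flavour of the method: sample candidate positions around a predicted location and keep only particles consistent with estimated distances to anchors. The gray-prediction and crossover steps are omitted, and all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def mcl_step(predicted_pos, anchors, est_dists, n_particles=200, radius=5.0, tol=1.5):
    """Sample around the predicted position and filter by anchor-distance consistency."""
    # Sample candidate particles uniformly in a disc around the predicted position
    angles = rng.uniform(0, 2 * np.pi, n_particles)
    radii = radius * np.sqrt(rng.uniform(0, 1, n_particles))
    particles = predicted_pos + np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
    # Keep particles whose distances to all anchors match the estimates within tol
    keep = np.ones(n_particles, dtype=bool)
    for a, d in zip(anchors, est_dists):
        keep &= np.abs(np.linalg.norm(particles - a, axis=1) - d) < tol
    survivors = particles[keep]
    return survivors.mean(axis=0) if len(survivors) else predicted_pos

anchors = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])
true_pos = np.array([8.0, 6.0])
est_dists = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.3, 3)
print(mcl_step(predicted_pos=np.array([7.0, 7.0]), anchors=anchors, est_dists=est_dists))
```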

  5. Future recovery of acidified lakes in southern Norway predicted by the MAGIC model

    Directory of Open Access Journals (Sweden)

    R. F. Wright

    2003-01-01

    Full Text Available The acidification model MAGIC was used to predict recovery of small lakes in southernmost Norway to future reduction of acid deposition. A set of 60 small headwater lakes was sampled annually from either 1986 (35 lakes) or 1995 (25 lakes). Future acid deposition was assumed to follow implementation of current agreed legislation, including the Gothenburg protocol. Three scenarios of future N retention were used. Calibration of the sites to the observed time trends (1990–1999) as well as to one point in time considerably increased the robustness of the predictions. The modelled decline in SO4* concentrations in the lakes over the period 1986–2001 matched the observed decline closely. This strongly suggests that soil processes such as SO4 adsorption/desorption and S reduction/oxidation do not delay the response of runoff by more than a few years. The slope of time trends in ANC over the period of observations was less steep than that observed, perhaps because the entire soil column does not interact actively with the soilwater that emerges as runoff. The lakes showed widely differing time trends in NO3 concentrations over the period 1986–2000. The observed trends were not simulated by any of the three N scenarios. A model based on the C/N ratio in soil was insufficient to account for N retention and leaching at these sites. The large differences in modelled NO3, however, produced only minor differences in ANC between the three scenarios. In the year 2050, the difference was only about 5 μeq l-1. Future climate change entailing warming and increased precipitation could also increase NO3 loss to surface waters. SO4* concentrations in the lakes were predicted to decrease in parallel with the future decreases in S deposition. Fully 80% of the expected decline to year 2025, however, had already occurred by the year 2000. Similarly, ANC concentrations were predicted to increase in the future, but again about 67% of the expected change has already

  6. RAMAN LIGHT SCATTERING IN PSEUDOSPIN-ELECTRON MODEL AT STRONG PSEUDOSPIN-ELECTRON INTERACTION

    Directory of Open Access Journals (Sweden)

    T.S.Mysakovych

    2004-01-01

    Full Text Available Anharmonic phonon contributions to Raman scattering in locally anharmonic crystal systems in the framework of the pseudospin-electron model with tunneling splitting of levels are investigated. The case of strong pseudospin-electron coupling is considered. Pseudospin and electron contributions to scattering are taken into account. Frequency dependences of Raman scattering intensity for different values of model parameters and for different polarization of scattering and incident light are investigated.

  7. Cure modeling in real-time prediction: How much does it help?

    Science.gov (United States)

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016 BMC Biomedical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
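
    The Weibull cure-mixture survival function is compact enough to state directly: a cured fraction π never experiences the event and the remainder follows a Weibull law, so S(t) = π + (1 − π)·exp(−(t/λ)^k). A sketch with assumed (not fitted) parameter values:

```python
import numpy as np

def cure_mixture_survival(t, cure_frac, scale, shape):
    """Population survival for a Weibull cure-mixture model:
    S(t) = pi + (1 - pi) * exp(-(t / scale) ** shape)."""
    t = np.asarray(t, float)
    return cure_frac + (1.0 - cure_frac) * np.exp(-(t / scale) ** shape)

# Illustrative parameters only (not fitted to RTOG 0129 data)
times = np.array([0.5, 1, 2, 5, 10])   # years
print(cure_mixture_survival(times, cure_frac=0.4, scale=3.0, shape=1.2))
```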

  8. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.

  9. Evaluation of burst pressure prediction models for line pipes

    International Nuclear Information System (INIS)

    Zhu, Xian-Kui; Leis, Brian N.

    2012-01-01

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487–492). It is found that these models can be categorized into either a Tresca-family or a von Mises-family of solutions, except for those due to Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions including Barlow, Maximum shear stress, Turner, and the ASME boiler code provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. - Highlights: ► This paper evaluates different burst pressure prediction models for line pipes. ► The existing models are categorized into two major groups of Tresca and von Mises solutions. ► Prediction quality of each model is assessed statistically using a large full-scale burst test database. ► The Zhu-Leis solution is identified as the best predictive model.
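
    As a point of reference, the simplest members of the two solution families can be written down directly: the Barlow (Tresca-family) estimate and its von Mises-family counterpart, which exceeds it by a factor 2/√3 for a non-hardening material. Strain-hardening corrections and the Zhu-Leis averaging are omitted in this sketch, and the material values are assumed.

```python
def burst_pressure_barlow(sigma_uts, t, D):
    """Barlow (Tresca-family) burst estimate for a thin-walled pipe: P = 2*sigma_uts*t/D."""
    return 2.0 * sigma_uts * t / D

def burst_pressure_mises_nonhardening(sigma_uts, t, D):
    """Von Mises-family limit for a non-hardening material: a factor 2/sqrt(3) above Barlow."""
    return (2.0 / 3 ** 0.5) * burst_pressure_barlow(sigma_uts, t, D)

# Example: assumed ultimate strength 530 MPa, 10 mm wall, 500 mm outside diameter
print(burst_pressure_barlow(530.0, 10.0, 500.0))             # MPa
print(burst_pressure_mises_nonhardening(530.0, 10.0, 500.0)) # MPa
```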

  10. A predictive model for diagnosing stroke-related apraxia of speech.

    Science.gov (United States)

    Ballard, Kirrie J; Azizi, Lamiae; Duffy, Joseph R; McNeil, Malcolm R; Halaki, Mark; O'Dwyer, Nicholas; Layfield, Claire; Scholl, Dominique I; Vogel, Adam P; Robin, Donald A

    2016-01-29

    Diagnosis of the speech motor planning/programming disorder, apraxia of speech (AOS), has proven challenging, largely due to its common co-occurrence with the language-based impairment of aphasia. Currently, diagnosis is based on perceptually identifying and rating the severity of several speech features. It is not known whether all, or a subset of the features, are required for a positive diagnosis. The purpose of this study was to assess predictor variables for the presence of AOS after left-hemisphere stroke, with the goal of increasing diagnostic objectivity and efficiency. This population-based case-control study involved a sample of 72 cases, using the outcome measure of expert judgment on presence of AOS and including a large number of independently collected candidate predictors representing behavioral measures of linguistic, cognitive, nonspeech oral motor, and speech motor ability. We constructed a predictive model using multiple imputation to deal with missing data; the Least Absolute Shrinkage and Selection Operator (Lasso) technique for variable selection to define the most relevant predictors, and bootstrapping to check the model stability and quantify the optimism of the developed model. Two measures were sufficient to distinguish between participants with AOS plus aphasia and those with aphasia alone, (1) a measure of speech errors with words of increasing length and (2) a measure of relative vowel duration in three-syllable words with weak-strong stress pattern (e.g., banana, potato). The model has high discriminative ability to distinguish between cases with and without AOS (c-index=0.93) and good agreement between observed and predicted probabilities (calibration slope=0.94). Some caution is warranted, given the relatively small sample specific to left-hemisphere stroke, and the limitations of imputing missing data. These two speech measures are straightforward to collect and analyse, facilitating use in research and clinical settings. Copyright
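
    A minimal sketch of the selection-and-validation machinery described (L1-penalized logistic regression with a bootstrap check of discrimination); the data and the two "true" predictors are synthetic stand-ins, and the multiple-imputation step is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n, p = 72, 10                      # 72 cases, 10 candidate behavioural predictors
X = rng.normal(size=(n, p))
# Assume only two features truly carry the diagnostic signal (mirroring the article's finding)
logit = 1.5 * X[:, 0] + 1.2 * X[:, 1] - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# L1 (Lasso-type) penalty drives irrelevant coefficients to zero
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
print("selected features:", np.flatnonzero(lasso_logit.coef_[0]))

# Bootstrap check of the c-index (AUC) to gauge stability/optimism
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("bootstrap mean AUC on original sample:", round(float(np.mean(aucs)), 3))
```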

  11. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in the Pir-Panjal and Great Himalayan mountain ranges of the Indian Himalaya. The model predicts snowfall two days in advance using nine meteorological variables recorded daily over the past 20 winters (1992–2012). There are six ...

  12. Energy dependence of jet-structures and determination of the strong coupling constant αsub(s) in e+e- annihilation with the CELLO detector

    International Nuclear Information System (INIS)

    Hopp, G.

    1985-07-01

    We considered multihadronic events and we studied the energy dependence of the jet-structure of those events. We confirmed the existence of 3-jet and 4-jet events in high energy data as predicted by QCD. In parallel we checked the energy dependence of different jet-measures which is predicted by the fragmentation models. We determined the strong coupling constant αsub(s) using different methods and we found a strong model dependence of the αsub(s) determination in second order QCD. The study of the particle density between the jet-axes resulted in a slight preference for the LUND-String model as compared to models with independent jet-fragmentation. (orig.) [de

  13. Solute transport in crystalline rocks at Äspö — II: Blind predictions, inverse modelling and lessons learnt from test STT1

    Science.gov (United States)

    Jakob, Andreas; Mazurek, Martin; Heer, Walter

    2003-03-01

    Based on the results from detailed structural and petrological characterisation and on up-scaled laboratory values for sorption and diffusion, blind predictions were made for the STT1 dipole tracer test performed in the Swedish Äspö Hard Rock Laboratory. The tracers used were nonsorbing, such as uranine and tritiated water, weakly sorbing 22Na+, 85Sr2+, 47Ca2+ and more strongly sorbing 86Rb+, 133Ba2+, 137Cs+. Our model consists of two parts: (1) a flow part based on a 2D-streamtube formalism accounting for the natural background flow field and with an underlying homogeneous and isotropic transmissivity field and (2) a transport part in terms of the dual porosity medium approach which is linked to the flow part by the flow porosity. The calibration of the model was done using the data from one single uranine breakthrough (PDT3). The study clearly showed that matrix diffusion into a highly porous material, fault gouge, had to be included in our model evidenced by the characteristic shape of the breakthrough curve and in line with geological observations. After the disclosure of the measurements, it turned out that, in spite of the simplicity of our model, the prediction for the nonsorbing and weakly sorbing tracers was fairly good. The blind prediction for the more strongly sorbing tracers was in general less accurate. The reason for the good predictions is deemed to be the result of the choice of a model structure strongly based on geological observation. The breakthrough curves were inversely modelled to determine in situ values for the transport parameters and to draw consequences on the model structure applied. For good fits, only one additional fracture family in contact with cataclasite had to be taken into account, but no new transport mechanisms had to be invoked. The in situ values for the effective diffusion coefficient for fault gouge are a factor of 2-15 larger than the laboratory data. For cataclasite, both data sets have values comparable to

  14. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  15. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of exhaust flue gas in boilers is one of the most effective ways to further improve the thermal efficiency and the electrostatic precipitator efficiency, and to decrease the water consumption of the desulfurization tower; however, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers. Accurate knowledge of the acid dew point is therefore essential. By investigating the previous models for acid dew point prediction, an improved thermodynamic correlation formula between the acid dew point and its influencing factors is derived first. Then, a semi-empirical prediction model is proposed, which is validated with data from both field tests and experiments, and compared with the previous models.

  16. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    Science.gov (United States)

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) were good and comparable to those of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age
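
    The sketch below illustrates, on simulated data and under stated assumptions, the kind of model fitting described above: continuous prognostic factors entered through fractional-polynomial style transforms in a Cox proportional hazards model, here via the lifelines package. The column names, the chosen powers and the simulated follow-up are invented and do not reproduce the PREDICT v2 specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical illustration: continuous factors (age, tumour size, node count)
# entered as fractional-polynomial style terms in a Cox model. The transforms
# and the simulated survival times are assumptions, not the PREDICT equations.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(25, 80, n),
    "size_mm": rng.uniform(5, 60, n),
    "nodes": rng.integers(0, 15, n).astype(float),
})
# Fractional-polynomial style transforms (powers chosen for illustration only)
df["age_fp1"] = (df["age"] / 10) ** -2
df["size_fp1"] = np.log(df["size_mm"] / 10)
df["nodes_fp1"] = np.sqrt(df["nodes"] + 1)

# Simulated follow-up just to make the example self-contained
risk = 0.02 * df["age"] + 0.5 * df["size_fp1"] + 0.3 * df["nodes_fp1"]
df["time"] = rng.exponential(1.0 / np.exp(risk - risk.mean()))
df["event"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
cph.fit(df[["age_fp1", "size_fp1", "nodes_fp1", "time", "event"]],
        duration_col="time", event_col="event")
cph.print_summary()
```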

  17. Comparison between model-predicted tumor oxygenation dynamics and vascular-/flow-related Doppler indices.

    Science.gov (United States)

    Belfatto, Antonella; Vidal Urbinati, Ailyn M; Ciardo, Delia; Franchi, Dorella; Cattani, Federica; Lazzari, Roberta; Jereczek-Fossa, Barbara A; Orecchia, Roberto; Baroni, Guido; Cerveri, Pietro

    2017-05-01

    Mathematical modeling is a powerful and flexible method to investigate complex phenomena. It discloses the possibility of reproducing expensive as well as invasive experiments in a safe environment with limited costs. This makes it suitable for mimicking tumor evolution and response to radiotherapy, although the reliability of the results remains an issue. Complexity reduction is therefore a critical aspect in order to be able to compare model outcomes to clinical data. Among the factors affecting treatment efficacy, tumor oxygenation is known to play a key role in radiotherapy response. In this work, we aim at relating the oxygenation dynamics, predicted by a macroscale model trained on tumor volumetric data of uterine cervical cancer patients, to vascularization and blood flux indices assessed on Ultrasound Doppler images. We propose a macroscale model of tumor evolution based on three dynamics, namely active portion, necrotic portion, and oxygenation. The model parameters were assessed on the volume size of seven cervical cancer patients treated with 28 fractions of intensity-modulated radiation therapy (IMRT; 1.8 Gy/fraction). For each patient, five Doppler ultrasound tests were acquired before, during, and after the treatment. The lesion was manually contoured by an expert physician using 4D View ® (General Electric Company - Fairfield, Connecticut, United States), which automatically provided the overall tumor volume size along with three vascularization and/or blood flow indices. Volume data only were fed to the model for training purposes, while the predicted oxygenation was compared a posteriori to the measured Doppler indices. The model was able to fit the tumor volume evolution within 8% error (range: 3-8%). A strong correlation between the intrapatient longitudinal indices from Doppler measurements and oxygen predicted by the model (about 90% or above) was found in three cases. Two patients showed an average correlation value (50-70%) and the remaining

  18. A predictive model for swallowing dysfunction after curative radiotherapy in head and neck cancer

    International Nuclear Information System (INIS)

    Langendijk, Johannes A.; Doornaert, Patricia; Rietveld, Derek H.F.; Verdonck-de Leeuw, Irma M.; Rene Leemans, C.; Slotman, Ben J.

    2009-01-01

    Introduction: Recently, we found that swallowing dysfunction after curative (chemo)radiation ((CH)RT) has a strong negative impact on health-related quality of life (HRQoL), even more than xerostomia. The purpose of this study was to design a predictive model for swallowing dysfunction after curative radiotherapy or chemoradiation. Materials and methods: A prospective study was performed including 529 patients with head and neck squamous cell carcinoma (HNSCC) treated with curative (CH)RT. In all patients, acute and late radiation-induced morbidity (RTOG Acute and Late Morbidity Scoring System) was scored prospectively. To design the model, univariate and multivariate logistic regression analyses were carried out with grade 2 or higher RTOG swallowing dysfunction at 6 months (SWALL6months) as the primary endpoint. The model was validated by comparing the predicted and observed complication rates and by testing if the model also predicted acute dysphagia and late dysphagia at later time points (12, 18 and 24 months). Results: After univariate and multivariate logistic regression analyses, the following factors turned out to be independent prognostic factors for SWALL6months: T3-T4, bilateral neck irradiation, weight loss prior to radiation, oropharyngeal and nasopharyngeal tumours, accelerated radiotherapy and concomitant chemoradiation. By summation of the regression coefficients derived from the multivariate model, the Total Dysphagia Risk Score (TDRS) could be calculated. In the logistic regression model, the TDRS was significantly associated with SWALL6months. The observed risk of SWALL6months was 5%, 24% and 46% for low-, intermediate- and high-risk patients, respectively. These observed percentages were within the 95% confidence intervals of the predicted values. The TDRS risk group classification was also significantly associated with acute dysphagia (P < 0.001 at all time points) and with late swallowing dysfunction at 12, 18 and 24 months (p < 0.001 at all time points).
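
    A minimal sketch of the scoring idea described above, with hypothetical coefficients and cut-offs: the total risk score is the sum of the multivariable regression coefficients of the risk factors a patient presents, and the score is then mapped to a risk group. The numbers below are illustrative only and are not the published TDRS values.

```python
# Sketch of a "total risk score" built by summing multivariable logistic
# regression coefficients of the factors present in a patient. Coefficient
# values and cut-offs below are invented for illustration; the published TDRS
# uses the coefficients fitted on the original cohort.
COEFFS = {                                   # hypothetical log-odds coefficients
    "T3_T4": 0.9,
    "bilateral_neck_irradiation": 1.1,
    "weight_loss_prior_to_RT": 0.6,
    "oro_or_nasopharyngeal_tumour": 0.8,
    "accelerated_RT": 0.4,
    "concomitant_chemoradiation": 0.9,
}

def total_risk_score(patient: dict) -> float:
    """Sum the coefficients of the risk factors the patient presents."""
    return sum(coef for factor, coef in COEFFS.items() if patient.get(factor))

def risk_group(score: float) -> str:
    # illustrative cut-offs only
    if score < 1.0:
        return "low"
    if score < 2.5:
        return "intermediate"
    return "high"

patient = {"T3_T4": True, "bilateral_neck_irradiation": True,
           "concomitant_chemoradiation": True}
score = total_risk_score(patient)
print(score, risk_group(score))
```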

  19. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing necessary controller interventions for avoiding separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.

  20. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an (all-pole) LP model with a model order that is twice the number of sinusoids. We provide an explanation why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be only appropriate when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
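
    The sinusoid property mentioned above is easy to verify numerically: a noiseless sum of K sinusoids satisfies an exact linear recurrence of order 2K, so an all-pole LP model of that order predicts it essentially perfectly. The sketch below (covariance-method least squares; the sampling rate and tone frequencies are arbitrary choices) demonstrates this.

```python
import numpy as np

# A noiseless sum of K sinusoids obeys an exact linear recurrence of order 2K,
# so an LP model of order 2K predicts it (almost) perfectly.
fs, K = 8000, 3
t = np.arange(1024) / fs
x = sum(np.sin(2 * np.pi * f * t) for f in (440.0, 660.0, 880.0))

order = 2 * K
# Build the linear system x[n] ~ sum_i a_i * x[n-i] (covariance method)
X = np.column_stack([x[order - i: len(x) - i] for i in range(1, order + 1)])
y = x[order:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ a
print("residual RMS:", np.sqrt(np.mean((y - pred) ** 2)))  # essentially zero for noiseless tones
```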

  1. Evaluation of Thermodynamic Models for Predicting Phase Equilibria of CO2 + Impurity Binary Mixture

    Science.gov (United States)

    Shin, Byeong Soo; Rho, Won Gu; You, Seong-Sik; Kang, Jeong Won; Lee, Chul Soo

    2018-03-01

    For the design and operation of CO2 capture and storage (CCS) processes, equation of state (EoS) models are used for phase equilibrium calculations. Reliability of an EoS model plays a crucial role, and many variations of EoS models have been reported and continue to be published. The prediction of phase equilibria for CO2 mixtures containing SO2, N2, NO, H2, O2, CH4, H2S, Ar, and H2O is important for CO2 transportation because the captured gas normally contains small amounts of impurities even though it is purified in advance. For the design of pipelines in deep sea or arctic conditions, flow assurance and safety are considered priority issues, and highly reliable calculations are required. In this work, predictive Soave-Redlich-Kwong, cubic plus association, Groupe Européen de Recherches Gazières (GERG-2008), perturbed-chain statistical associating fluid theory, and non-random lattice fluids hydrogen bond EoS models were compared against collected literature data regarding their performance in calculating phase equilibria of CO2-impurity binary mixtures. No single EoS could cover the entire range of systems considered in this study. Weaknesses and strong points of each EoS model were analyzed, and recommendations are given as guidelines for safe design and operation of CCS processes.

  2. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize

  3. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  4. Numerical weather prediction (NWP) and hybrid ARMA/ANN model to predict global radiation

    International Nuclear Information System (INIS)

    Voyant, Cyril; Muselli, Marc; Paoli, Christophe; Nivet, Marie-Laure

    2012-01-01

    In this paper we propose an original technique to predict global radiation using a hybrid ARMA/ANN model and data issued from a numerical weather prediction (NWP) model. We particularly look at the multi-layer perceptron (MLP). After optimizing our architecture with NWP and endogenous data previously made stationary and using an innovative pre-input layer selection method, we combined it with an ARMA model through a rule based on the analysis of hourly data series. This model has been used to forecast the hourly global radiation for five places in the Mediterranean area. Our technique outperforms classical models for all the places. The nRMSE for our hybrid model MLP/ARMA is 14.9% compared to 26.2% for the naïve persistence predictor. Note that in the standalone ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of the forecaster outputs, a complementary study concerning the confidence interval of each prediction is proposed. -- Highlights: ► Time series forecasting with hybrid method based on the use of ALADIN numerical weather model, ANN and ARMA. ► Innovative pre-input layer selection method. ► Combination of optimized MLP and ARMA model obtained from a rule based on the analysis of hourly data series. ► Stationarity process (method and control) for the global radiation time series.
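
    As a hedged illustration of one common way to hybridize an ANN with an ARMA model (residual correction, which is not necessarily the rule-based combination used by the authors), the sketch below trains an MLP on lagged values of a synthetic hourly radiation series and corrects its forecast with an ARMA(1,1) model of the residuals. The series, the lag length and the model orders are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

# Small illustration of a hybrid ANN/ARMA forecaster: an MLP predicts the next
# hourly value from lagged inputs and an ARMA model of the MLP residuals
# corrects the forecast. The synthetic "radiation" series is invented.
rng = np.random.default_rng(1)
hours = np.arange(24 * 60)
series = np.clip(np.sin(2 * np.pi * hours / 24), 0, None) * 800 + rng.normal(0, 30, hours.size)

lags = 24
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]
X_train, X_test, y_train, y_test = X[:-24], X[-24:], y[:-24], y[-24:]

mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

resid = y_train - mlp.predict(X_train)
arma = ARIMA(resid, order=(1, 0, 1)).fit()          # ARMA(1,1) on the residuals
correction = arma.forecast(steps=24)

hybrid = mlp.predict(X_test) + correction
nrmse = np.sqrt(np.mean((hybrid - y_test) ** 2)) / (y_test.max() - y_test.min())
print(f"hybrid nRMSE over the last day: {nrmse:.3f}")
```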

  5. Thermodynamics of strongly interacting system from reparametrized Polyakov-Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Bhattacharyya, Abhijit; Ghosh, Sanjay K.; Maity, Soumitra; Raha, Sibaji; Ray, Rajarshi; Saha, Kinkar; Upadhaya, Sudipa

    2017-01-01

    The Polyakov-Nambu-Jona-Lasinio model has been quite successful in describing various qualitative features of observables for strongly interacting matter that are measurable in heavy-ion collision experiments. The question still remains of the quantitative uncertainties in the model results. Such an estimation is possible only by contrasting these results with those obtained from first principles using the lattice QCD framework. Recently a variety of lattice QCD data were reported in the realistic continuum limit. Here we make a first attempt at reparametrizing the model so as to reproduce these lattice data

  6. An intermittency model for predicting roughness induced transition

    Science.gov (United States)

    Ge, Xuan; Durbin, Paul

    2014-11-01

    An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.

  7. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role and is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the structural model updating, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  8. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Full Text Available Effective fault diagnosis and reasonable life expectancy are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and its working environment. At present, equipment life prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven approaches. Most of them need a large amount of data to address the problem. For this issue, we propose learning from the mechanism of cell division in organisms. By studying the complex multifactor correlation life model, we have established a life prediction model of moderate complexity. In this paper, we model life prediction on the basis of cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we will apply it to complex equipment life prediction.

  9. Prediction models and control algorithms for predictive applications of setback temperature in cooling systems

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Yoon, Younju; Jeon, Young-Hoon; Kim, Sooyoung

    2017-01-01

    Highlights: • Initial ANN model was developed for predicting the time to the setback temperature. • Initial model was optimized for producing accurate output. • Optimized model proved its prediction accuracy. • ANN-based algorithms were developed and their performance tested. • ANN-based algorithms presented superior thermal comfort or energy efficiency. - Abstract: In this study, a temperature control algorithm was developed to apply a setback temperature predictively for the cooling system of a residential building during periods occupied by residents. An artificial neural network (ANN) model was developed to determine the required time for increasing the current indoor temperature to the setback temperature. This study involved three phases: development of the initial ANN-based prediction model, optimization and testing of the initial model, and development and testing of three control algorithms. The development and performance testing of the model and algorithm were conducted using TRNSYS and MATLAB. Through the development and optimization process, the final ANN model employed indoor temperature and the temperature difference between the current and target setback temperature as two input neurons. The optimal number of hidden layers, number of neurons, learning rate, and momentum were determined to be 4, 9, 0.6, and 0.9, respectively. The tangent-sigmoid and pure-linear transfer functions were used in the hidden and output neurons, respectively. The ANN model used 100 training data sets with a sliding-window method for data management. The Levenberg-Marquardt training method was employed for model training. The optimized model had a root mean square error of 0.9097 when compared with the simulated results. Employing the ANN model, ANN-based algorithms maintained indoor temperatures better within target ranges. Compared to the conventional algorithm, the ANN-based algorithms reduced the duration of time, in which the indoor temperature
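
    A rough stand-in for the ANN described above is sketched below using scikit-learn: two inputs (current indoor temperature and the difference to the target setback temperature), four hidden layers of nine neurons, and a regressor that predicts the time needed to reach the setback temperature. scikit-learn does not reproduce the tangent-sigmoid/pure-linear transfer functions or the Levenberg-Marquardt training used in the study, and the training data below are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the time-to-setback predictor: two input features and
# four hidden layers of nine neurons, mirroring the architecture reported above.
rng = np.random.default_rng(0)
n = 2000
indoor = rng.uniform(22.0, 27.0, n)                 # current indoor temperature (degC)
delta = rng.uniform(0.5, 3.0, n)                    # setback minus current temperature (degC)
# invented "time to setback" relationship plus noise, just to have a target
minutes = 12.0 * delta + 0.8 * (indoor - 22.0) + rng.normal(0, 1.0, n)

X = np.column_stack([indoor, delta])
model = MLPRegressor(hidden_layer_sizes=(9, 9, 9, 9), learning_rate_init=0.01,
                     max_iter=5000, random_state=0)
model.fit(X, minutes)

print("predicted minutes to setback:", model.predict([[25.0, 1.5]]).round(1))
```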

  10. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability, inherent to the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth, too.
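
    The within/between decomposition described above can be sketched as follows on invented data: replicate ln(growth-rate) values within each temperature/water-activity condition quantify the experimental variability, and the variance of the condition means quantifies the environmental variability, analogous to the ANOVA within- and between-group terms.

```python
import numpy as np
import pandas as pd

# Invented replicate ln(growth-rate) data for a few (temperature, water activity)
# conditions, used only to show the within/between variance decomposition.
rng = np.random.default_rng(3)
conditions = [(25, 0.95), (25, 0.90), (30, 0.95), (30, 0.90)]
rows = []
for temp, aw in conditions:
    true_ln_rate = -3.0 + 0.08 * temp + 4.0 * (aw - 0.9)   # hypothetical secondary model
    for _ in range(5):                                      # 5 replicate curves per condition
        rows.append({"temp": temp, "aw": aw,
                     "ln_rate": true_ln_rate + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

grouped = df.groupby(["temp", "aw"])["ln_rate"]
within_var = grouped.var(ddof=1).mean()        # experimental variability ("within group")
between_var = grouped.mean().var(ddof=1)       # environmental variability ("between groups")
print(f"within-condition variance:  {within_var:.4f}")
print(f"between-condition variance: {between_var:.4f}")
```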

  11. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Every year, power outages affect millions of people in the United States, harming the economy and disrupting everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities quickly restore outages and limit their adverse consequences on the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. A new methodology, developed for improving the outage model performance by combining weather- and soil-related variables using three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS), will be presented in this study. First, we will present a performance evaluation of each model variable, by comparing historical weather analyses with station data or reanalysis over the entire storm data set. Then, each variable of the new outage model version is extracted from the best performing weather model for that variable, and sensitivity tests are performed to investigate the most efficient variable combination for outage prediction purposes. Although the final variable combination is extracted from different weather models, this power outage prediction ensemble, based on multiple weather forcings and multiple statistical models, outperforms the currently operational OPM version based on a single weather forcing (WRF 3.7), because each model component is the closest to the actual atmospheric state.

  12. Acute Myocardial Infarction Readmission Risk Prediction Models: A Systematic Review of Model Performance.

    Science.gov (United States)

    Smith, Lauren N; Makam, Anil N; Darden, Douglas; Mayo, Helen; Das, Sandeep R; Halm, Ethan A; Nguyen, Oanh Kieu

    2018-01-01

    Hospitals are subject to federal financial penalties for excessive 30-day hospital readmissions for acute myocardial infarction (AMI). Prospectively identifying patients hospitalized with AMI at high risk for readmission could help prevent 30-day readmissions by enabling targeted interventions. However, the performance of AMI-specific readmission risk prediction models is unknown. We systematically searched the published literature through March 2017 for studies of risk prediction models for 30-day hospital readmission among adults with AMI. We identified 11 studies of 18 unique risk prediction models across diverse settings primarily in the United States, of which 16 models were specific to AMI. The median overall observed all-cause 30-day readmission rate across studies was 16.3% (range, 10.6%-21.0%). Six models were based on administrative data; 4 on electronic health record data; 3 on clinical hospital data; and 5 on cardiac registry data. Models included 7 to 37 predictors, of which demographics, comorbidities, and utilization metrics were the most frequently included domains. Most models, including the Centers for Medicare and Medicaid Services AMI administrative model, had modest discrimination (median C statistic, 0.65; range, 0.53-0.79). Of the 16 reported AMI-specific models, only 8 models were assessed in a validation cohort, limiting generalizability. Observed risk-stratified readmission rates ranged from 3.0% among the lowest-risk individuals to 43.0% among the highest-risk individuals, suggesting good risk stratification across all models. Current AMI-specific readmission risk prediction models have modest predictive ability and uncertain generalizability given methodological limitations. No existing models provide actionable information in real time to enable early identification and risk-stratification of patients with AMI before hospital discharge, a functionality needed to optimize the potential effectiveness of readmission reduction interventions

  13. On the importance of paleoclimate modelling for improving predictions of future climate change

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2009-12-01

    Full Text Available We use an ensemble of runs from the MIROC3.2 AGCM with slab-ocean to explore the extent to which mid-Holocene simulations are relevant to predictions of future climate change. The results are compared with similar analyses for the Last Glacial Maximum (LGM) and pre-industrial control climate. We suggest that the paleoclimate epochs can provide some independent validation of the models that is also relevant for future predictions. Considering the paleoclimate epochs, we find that the stronger global forcing and hence larger climate change at the LGM makes this likely to be the more powerful one for estimating the large-scale changes that are anticipated due to anthropogenic forcing. The phenomena in the mid-Holocene simulations which are most strongly correlated with future changes (i.e., the mid to high northern latitude land temperature and monsoon precipitation) do, however, coincide with areas where the LGM results are not correlated with future changes, and these are also areas where the paleodata indicate significant climate changes have occurred. Thus, these regions and phenomena for the mid-Holocene may be useful for model improvement and validation.

  14. Models of the strongly lensed quasar DES J0408-5354

    Science.gov (United States)

    Agnello, A.; Lin, H.; Buckley-Geer, L.; Treu, T.; Bonvin, V.; Courbin, F.; Lemon, C.; Morishita, T.; Amara, A.; Auger, M. W.; Birrer, S.; Chan, J.; Collett, T.; More, A.; Fassnacht, C. D.; Frieman, J.; Marshall, P. J.; McMahon, R. G.; Meylan, G.; Suyu, S. H.; Castander, F.; Finley, D.; Howell, A.; Kochanek, C.; Makler, M.; Martini, P.; Morgan, N.; Nord, B.; Ostrovski, F.; Schechter, P.; Tucker, D.; Wechsler, R.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Dietrich, J. P.; Eifler, T. F.; Flaugher, B.; Fosalba, P.; García-Bellido, J.; Gaztanaga, E.; Gill, M. S.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Li, T. S.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.

    2017-12-01

    We present detailed modelling of the recently discovered, quadruply lensed quasar J0408-5354, with the aim of interpreting its remarkable configuration: besides three quasar images (A,B,D) around the main deflector (G1), a fourth image (C) is significantly reddened and dimmed by a perturber (G2) which is not detected in the Dark Energy Survey imaging data. From lens models incorporating (dust-corrected) flux ratios, we find a perturber Einstein radius 0.04 arcsec ≲ RE,G2 ≲ 0.2 arcsec and enclosed mass Mp(RE,G2) ≲ 1.0 × 10^10 M⊙. The main deflector has stellar mass log10(M⋆/M⊙) = 11.49 (+0.46/-0.32), a projected mass Mp(RE,G1) ≈ 6 × 10^11 M⊙ within its Einstein radius RE,G1 = (1.85 ± 0.15) arcsec and predicted velocity dispersion 267-280 km s^-1. Follow-up images from a companion monitoring campaign show additional components, including a candidate second source at a redshift between the quasar and G1. Models with free perturbers, and dust-corrected and delay-corrected flux ratios, are also explored. The predicted time-delays (ΔtAB = (135.0 ± 12.6) d, ΔtBD = (21.0 ± 3.5) d) roughly agree with those measured, but better imaging is required for proper modelling and comparison. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.

  15. Predicting Biological Information Flow in a Model Oxygen Minimum Zone

    Science.gov (United States)

    Louca, S.; Hawley, A. K.; Katsev, S.; Beltran, M. T.; Bhatia, M. P.; Michiels, C.; Capelle, D.; Lavik, G.; Doebeli, M.; Crowe, S.; Hallam, S. J.

    2016-02-01

    Microbial activity drives marine biochemical fluxes and nutrient cycling at global scales. Geochemical measurements as well as molecular techniques such as metagenomics, metatranscriptomics and metaproteomics provide great insight into microbial activity. However, an integration of molecular and geochemical data into mechanistic biogeochemical models is still lacking. Recent work suggests that microbial metabolic pathways are, at the ecosystem level, strongly shaped by stoichiometric and energetic constraints. Hence, models rooted in fluxes of matter and energy may yield a holistic understanding of biogeochemistry. Furthermore, such pathway-centric models would allow a direct consolidation with meta'omic data. Here we present a pathway-centric biogeochemical model for the seasonal oxygen minimum zone in Saanich Inlet, a fjord off the coast of Vancouver Island. The model considers key dissimilatory nitrogen and sulfur fluxes, as well as the population dynamics of the genes that mediate them. By assuming a direct translation of biocatalyzed energy fluxes to biosynthesis rates, we make predictions about the distribution and activity of the corresponding genes. A comparison of the model to molecular measurements indicates that the model explains observed DNA, RNA, protein and cell depth profiles. This suggests that microbial activity in marine ecosystems such as oxygen minimum zones is well described by DNA abundance, which, in conjunction with geochemical constraints, determines pathway expression and process rates. Our work further demonstrates how meta'omic data can be mechanistically linked to environmental redox conditions and biogeochemical processes.

  16. Mean Bias in Seasonal Forecast Model and ENSO Prediction Error.

    Science.gov (United States)

    Kim, Seon Tae; Jeong, Hye-In; Jin, Fei-Fei

    2017-07-20

    This study uses retrospective forecasts made using an APEC Climate Center seasonal forecast model to investigate the cause of errors in predicting the amplitude of El Niño Southern Oscillation (ENSO)-driven sea surface temperature variability. When utilizing Bjerknes coupled stability (BJ) index analysis, enhanced errors in ENSO amplitude with forecast lead times are found to be well represented by those in the growth rate estimated by the BJ index. ENSO amplitude forecast errors are most strongly associated with the errors in both the thermocline slope response and surface wind response to forcing over the tropical Pacific, leading to errors in thermocline feedback. This study concludes that upper ocean temperature bias in the equatorial Pacific, which becomes more intense with increasing lead times, is a possible cause of forecast errors in the thermocline feedback and thus in ENSO amplitude.

  17. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research of prediction models, it was observed that different models have different capabilities and also that no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture diverse patterns that exist in the dataset...

  18. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  19. Domestic appliances energy optimization with model predictive control

    International Nuclear Information System (INIS)

    Rodrigues, E.M.G.; Godina, R.; Pouresmaeil, E.; Ferreira, J.R.; Catalão, J.P.S.

    2017-01-01

    Highlights: • An alternative power management control for home appliances that require thermal regulation is presented. • A Model Predictive Control scheme is assessed and its performance studied and compared to the thermostat. • Problem formulation is explored through tuning weights with the aim of reducing energy consumption and cost. • A modulation scheme of a two-level Model Predictive Control signal as an interface block is presented. • The implementation costs in home appliances with thermal regulation requirements are reduced. - Abstract: A vital element in making a sustainable world is correctly managing energy in the domestic sector. This sector therefore stands as a key one to be addressed in terms of climate change goals. Increasingly, people are aware of saving electricity by turning off equipment that is not being used or by connecting electrical loads outside on-peak hours. However, these efforts alone are not enough to reduce global energy consumption, which continues to increase. Much of the reduction achieved so far is due to technological improvements; however, as the years advance, new types of control arise. Domestic appliances for heating and cooling rely on thermostatic regulation. The study in this paper focuses on an alternative power management control for home appliances that require thermal regulation. A Model Predictive Control scheme is assessed and its performance studied and compared to the thermostat, with the aim of minimizing cooling energy consumption through minimization of the energy cost while keeping the temperature within an adequate range for human comfort. In addition, the Model Predictive Control problem formulation is explored through tuning weights with the aim of reducing energy consumption and cost. For this purpose, the typical consumption over a 24 h period of a summer day was simulated and a three-level tariff scheme was used. The new
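
    As a hedged sketch of the kind of optimization an MPC for thermal regulation solves, the snippet below (using cvxpy) chooses cooling power over a 24 h horizon to minimize energy cost under a simple tariff while keeping the indoor temperature inside a comfort band. The first-order thermal model, tariff levels and comfort limits are invented for illustration and are not the formulation used in the paper.

```python
import numpy as np
import cvxpy as cp

# Toy MPC problem: pick cooling power over a horizon to minimise energy cost
# under a time-varying tariff while keeping temperature in a comfort band.
# The thermal model, tariff and limits are invented for illustration.
N = 24                       # horizon (hours)
a, b = 0.9, 2.0              # room model: T[k+1] = a*T[k] + (1-a)*T_out[k] - b*u[k]
T_out = 32 + 3 * np.sin(np.linspace(0, 2 * np.pi, N))                       # outdoor temp (degC)
price = np.where((np.arange(N) >= 18) & (np.arange(N) < 22), 0.30, 0.15)    # simple 2-level tariff

u = cp.Variable(N)           # cooling power (kW)
T = cp.Variable(N + 1)       # indoor temperature (degC)

constraints = [T[0] == 27, u >= 0, u <= 3]
for k in range(N):
    constraints += [T[k + 1] == a * T[k] + (1 - a) * T_out[k] - b * u[k]]
constraints += [T[1:] >= 22, T[1:] <= 26]    # comfort band

cost = cp.sum(cp.multiply(price, u))         # energy cost over the horizon
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("optimal cost:", round(prob.value, 3))
print("first control move (kW):", round(float(u.value[0]), 2))
```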

  20. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.

  1. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
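
    The sketch below illustrates the modelling ingredients described above on a synthetic breathing trace: k-means clustering of velocities to associate each state with typical observables, and a Gaussian hidden Markov model (via hmmlearn) over position/velocity observables to decode the state sequence. The trace, the number of states and all parameters are assumptions, not patient data or the authors' settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn.hmm import GaussianHMM

# Synthetic quasi-periodic breathing trace standing in for tracked tumour motion.
fs = 30.0                                        # samples per second
t = np.arange(0, 60, 1 / fs)
pos = 10 * np.sin(2 * np.pi * 0.25 * t) ** 2     # position (mm)
vel = np.gradient(pos, 1 / fs)                   # velocity (mm/s)
obs = np.column_stack([pos, vel])

# k-means on velocity associates each cluster with a typical motion state
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vel.reshape(-1, 1))

# Gaussian HMM over the (position, velocity) observables; 3 hidden states
# loosely corresponding to inhale, exhale and end-of-exhale
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50, random_state=0)
hmm.fit(obs)
states = hmm.predict(obs)

print("cluster centres (mm/s):", km.cluster_centers_.ravel().round(1))
print("first 20 decoded states:", states[:20])
```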

  2. SHMF: Interest Prediction Model with Social Hub Matrix Factorization

    Directory of Open Access Journals (Sweden)

    Chaoyuan Cui

    2017-01-01

    Full Text Available With the development of social networks, the microblog has become a major social communication tool. Microblogs contain a lot of valuable information, such as personal preferences, public opinion, and marketing. Consequently, research on user interest prediction in microblogs has positive practical significance. However, extracting information associated with user interest orientation from constantly updated blog posts is not easy. Existing prediction approaches based on probabilistic factor analysis use blog posts published by a user to predict that user's interest. However, these methods are not very effective for users who post less but browse more. In this paper, we propose a new prediction model, called SHMF, using social hub matrix factorization. SHMF constructs the interest prediction model by combining the information of blog posts published by both the user and the direct neighbors in the user's social hub. Our proposed model predicts user interest by integrating the user's historical behavior and temporal factors as well as the user's friendships, thus achieving accurate forecasts of the user's future interests. The experimental results on Sina Weibo show the efficiency and effectiveness of our proposed model.

  3. Development of Interpretable Predictive Models for BPH and Prostate Cancer.

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, J A

    2015-01-01

    Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician to distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. Statistical dependence with PC and BPH was found for prostate volume (P-value BPH prediction. PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without any of these pathologies. Our decision tree and logistic regression models outperform, in terms of AUC, the models in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced.
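
    A minimal sketch of the modelling set-up described above, on simulated data: PSA and prostate volume as predictors, a decision tree and a logistic regression, each validated with leave-one-out cross-validation using scikit-learn. The data-generating rule and class labels are invented and do not reflect the 150-patient cohort.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated patients: PSA and prostate volume, with an invented labelling rule
# (0 = neither condition, 1 = BPH, 2 = PC) purely for illustration.
rng = np.random.default_rng(7)
n = 150
psa = rng.lognormal(mean=1.5, sigma=0.5, size=n)           # ng/mL
volume = rng.normal(45, 15, size=n).clip(15, None)         # mL
label = np.where(volume > 50, 1, np.where(psa > 6, 2, 0))

X = np.column_stack([psa, volume])
loo = LeaveOneOut()

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

for name, model in [("decision tree", tree), ("logistic regression", logit)]:
    acc = cross_val_score(model, X, label, cv=loo).mean()
    print(f"{name}: leave-one-out accuracy = {acc:.2f}")
```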

  4. Modeling a multivariable reactor and on-line model predictive control.

    Science.gov (United States)

    Yu, D W; Yu, D L

    2005-10-01

    A nonlinear first-principles model is developed for a laboratory-scale multivariable chemical reactor rig in this paper, and on-line model predictive control (MPC) is implemented on the rig. The reactor has three variables (temperature, pH, and dissolved oxygen) with nonlinear dynamics and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables and their model parameters are estimated from the real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, where the extended Kalman filter is used to estimate unknown system states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers and the advantage of the nonlinear MPC over the PID is clearly shown.

  5. Plant water potential improves prediction of empirical stomatal models.

    Directory of Open Access Journals (Sweden)

    William R L Anderegg

    Full Text Available Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increases in predictive capability compared to current models, and with particular improvement during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
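
    For concreteness, the sketch below implements one widely used empirical stomatal conductance formulation (a Medlyn-type model, as commonly written), multiplied by an ad-hoc sigmoidal down-regulation term in leaf water potential of the kind the study argues should be included. All parameter values are placeholders, and the water-potential term is a hypothetical illustration rather than the model proposed in the paper.

```python
import numpy as np

def stomatal_conductance(A, Ca, D, psi_leaf, g0=0.01, g1=4.0, psi_50=-2.0, k=1.5):
    """Illustrative stomatal conductance (mol m-2 s-1).

    A: net assimilation (umol m-2 s-1), Ca: CO2 at the leaf surface (umol mol-1),
    D: vapour pressure deficit (kPa), psi_leaf: leaf water potential (MPa).
    Parameter values are placeholders, not fitted values.
    """
    gs = g0 + 1.6 * (1.0 + g1 / np.sqrt(D)) * A / Ca                 # empirical (Medlyn-type) term
    water_factor = 1.0 / (1.0 + np.exp(-k * (psi_leaf - psi_50)))    # hypothetical down-regulation
    return gs * water_factor

# Well-watered versus droughted leaf under the same gas-exchange conditions
print(stomatal_conductance(A=12.0, Ca=400.0, D=1.5, psi_leaf=-0.5))
print(stomatal_conductance(A=12.0, Ca=400.0, D=1.5, psi_leaf=-3.0))
```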

  6. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python for providing high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by e

  7. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrödinger equation, we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates...

  8. Real estate value prediction using multivariate regression models

    Science.gov (United States)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; it is therefore a prime field in which to apply machine learning concepts to optimize and predict prices with high accuracy. In this paper, we present various important features to use when predicting housing prices with good accuracy. We describe regression models that use various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used to obtain a better model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
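
    The sketch below illustrates the approach discussed above on invented housing data: a plain multivariate linear regression compared with a degree-2 polynomial feature expansion regularized by ridge regression, evaluated by residual sum of squares on a held-out split using scikit-learn. The feature names and the price-generating rule are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.model_selection import train_test_split

# Invented housing data with a mildly nonlinear price rule, for illustration only.
rng = np.random.default_rng(42)
n = 1000
area = rng.uniform(40, 250, n)            # square metres
rooms = rng.integers(1, 6, n)
age = rng.uniform(0, 50, n)
price = 3000 * area + 15000 * rooms - 800 * age + 0.5 * area ** 2 + rng.normal(0, 20000, n)

X = np.column_stack([area, rooms, age])
X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

plain = make_pipeline(StandardScaler(), LinearRegression())
poly_ridge = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), Ridge(alpha=1.0))

for name, model in [("linear", plain), ("degree-2 + ridge", poly_ridge)]:
    model.fit(X_tr, y_tr)
    rss = np.sum((model.predict(X_te) - y_te) ** 2)   # residual sum of squares on held-out data
    print(f"{name}: test RSS = {rss:.3e}")
```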

  9. A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    DUMITRU CIOBANU

    2013-12-01

    Full Text Available Time series prediction is an open problem and many researchers are trying to find new predictive methods and improvements for the existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Also, support vector machines have solved some of the problems faced by neural networks and have begun to be widely used for time series prediction. The main drawback of those two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. In this paper, a comparison is presented between three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines, and another based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.

  10. New tips for structure prediction by comparative modeling

    Science.gov (United States)

    Rayan, Anwar

    2009-01-01

    Comparative modelling is utilized to predict the 3-dimensional conformation of a given protein (target) based on its sequence alignment to an experimentally determined protein structure (template). The use of such a technique is already rewarding and increasingly widespread in biological research and drug development. The accuracy of the predictions, as commonly accepted, depends on the sequence identity score of the target protein to the template. To assess the relationship between sequence identity and model quality, we carried out an analysis of a set of 4753 sequence and structure alignments. Throughout this research, the model accuracy was measured by root mean square deviations of Cα atoms of the target-template structures. Surprisingly, the results show that sequence identity of the target protein to the template is not a good descriptor for predicting the accuracy of the 3-D structure model. However, in a large number of cases, comparative modelling with lower sequence identity of target to template proteins led to more accurate 3-D structure models. As a consequence of this study, we suggest new tips for improving the quality of comparative models, particularly for models whose target-template sequence identity is below 50%. PMID:19255646

  11. Complex versus simple models: ion-channel cardiac toxicity prediction.

    Science.gov (United States)

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall, the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest one. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  12. Complex versus simple models: ion-channel cardiac toxicity prediction

    Directory of Open Access Journals (Sweden)

    Hitesh B. Mistry

    2018-02-01

    Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall, the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest one. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  13. Tuning SISO offset-free Model Predictive Control based on ARX models

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2012-01-01

    In this paper, we present a tuning methodology for a simple offset-free SISO Model Predictive Controller (MPC) based on autoregressive models with exogenous inputs (ARX models). ARX models simplify system identification as they can be identified from data using convex optimization. Furthermore, the proposed controller is simple to tune as it has only one free tuning parameter. These two features are advantageous in predictive process control as they simplify industrial commissioning of MPC. Disturbance rejection and offset-free control are important in industrial process control. To achieve offset-free control in the face of unknown disturbances or model-plant mismatch, integrators must be introduced in either the estimator or the regulator. Traditionally, offset-free control is achieved using Brownian disturbance models in the estimator. In this paper we achieve offset-free control by extending the noise...
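
    As a rough illustration of the convex identification step mentioned above, the sketch below fits a first-order ARX model y(k) = a·y(k−1) + b·u(k−1) + e(k) to simulated input-output data by ordinary least squares; the model order, plant parameters, and data are invented for the example and are not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate a simple SISO plant: y(k) = 0.8*y(k-1) + 0.5*u(k-1) + noise
    N = 200
    u = rng.uniform(-1.0, 1.0, size=N)
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.05 * rng.normal()

    # ARX(1,1) identification: stack regressors [y(k-1), u(k-1)], solve by least squares
    Phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a_hat, b_hat = theta
    print(f"Estimated a = {a_hat:.3f}, b = {b_hat:.3f}  (true values 0.8 and 0.5)")
    ```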

  14. Copula based prediction models: an application to an aortic regurgitation study

    Directory of Open Access Journals (Sweden)

    Shoukri Mohamed M

    2007-06-01

    Full Text Available Abstract Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence the application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula-based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = −0.0658 + 0.8403 × (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximating marginal distributions of the pre-operative and post-operative ejection fractions (using q–q plots) are gamma distributions. The copula-based prediction model is estimated as: Post-operative ejection fraction = −0.0933 + 0...
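
    To make the simulation step concrete, here is a minimal sketch of drawing dependent pairs from a Clayton (Archimedean) copula with gamma marginals, in the spirit of the algorithm described; the copula family, dependence parameter, and marginal shapes are illustrative assumptions rather than the values fitted in the study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n, theta = 1000, 2.0                      # sample size, Clayton dependence parameter

    # Marshall-Olkin sampling for the Clayton (Archimedean) copula
    V = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)   # gamma frailty
    E = rng.exponential(size=(n, 2))
    U = (1.0 + E / V[:, None]) ** (-1.0 / theta)          # dependent uniforms

    # Push the uniforms through gamma marginals (illustrative shapes/scales)
    pre_ef  = stats.gamma.ppf(U[:, 0], a=9.0, scale=0.05)  # "pre-operative EF"
    post_ef = stats.gamma.ppf(U[:, 1], a=9.0, scale=0.06)  # "post-operative EF"

    tau, _ = stats.kendalltau(pre_ef, post_ef)
    print("Kendall's tau of simulated pairs:", round(tau, 2),
          "(Clayton theory: theta/(theta+2) =", round(theta / (theta + 2), 2), ")")
    ```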

  15. Chemical structure-based predictive model for methanogenic anaerobic biodegradation potential.

    Science.gov (United States)

    Meylan, William; Boethling, Robert; Aronson, Dallas; Howard, Philip; Tunkel, Jay

    2007-09-01

    Many screening-level models exist for predicting aerobic biodegradation potential from chemical structure, but anaerobic biodegradation generally has been ignored by modelers. We used a fragment contribution approach to develop a model for predicting biodegradation potential under methanogenic anaerobic conditions. The new model has 37 fragments (substructures) and classifies a substance as either fast or slow, relative to the potential to be biodegraded in the "serum bottle" anaerobic biodegradation screening test (Organization for Economic Cooperation and Development Guideline 311). The model correctly classified 90, 77, and 91% of the chemicals in the training set (n = 169) and two independent validation sets (n = 35 and 23), respectively. Accuracy of predictions of fast and slow degradation was equal for training-set chemicals, but fast-degradation predictions were less accurate than slow-degradation predictions for the validation sets. Analysis of the signs of the fragment coefficients for this and the other (aerobic) Biowin models suggests that in the context of simple group contribution models, the majority of positive and negative structural influences on ultimate degradation are the same for aerobic and methanogenic anaerobic biodegradation.
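
    The fragment-contribution classification described above is essentially a sparse linear model over substructure counts. The sketch below shows the idea with invented fragment names, coefficients, and intercept (the real Biowin-style model has 37 fitted fragments); a positive score is read as "fast" and a negative score as "slow".

    ```python
    # Illustrative fragment-contribution classifier for anaerobic biodegradability.
    # Fragment names, coefficients, and the intercept are made up for the example.
    FRAGMENT_COEFFS = {
        "aliphatic_OH":      +0.35,
        "carboxylic_acid":   +0.50,
        "ester":             +0.40,
        "aromatic_Cl":       -0.60,
        "quaternary_carbon": -0.45,
        "nitro_group":       -0.55,
    }
    INTERCEPT = 0.10

    def classify(fragment_counts):
        """Sum fragment contributions and threshold the score at zero."""
        score = INTERCEPT + sum(FRAGMENT_COEFFS.get(f, 0.0) * n
                                for f, n in fragment_counts.items())
        return "fast" if score >= 0 else "slow"

    # A hypothetical molecule containing one ester and two aromatic chlorines
    print(classify({"ester": 1, "aromatic_Cl": 2}))   # -> "slow"
    ```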

  16. Strongly interacting Fermi gases

    Directory of Open Access Journals (Sweden)

    Bakr W.

    2013-08-01

    Full Text Available Strongly interacting gases of ultracold fermions have become an amazingly rich test-bed for many-body theories of fermionic matter. Here we present our recent experiments on these systems. Firstly, we discuss high-precision measurements on the thermodynamics of a strongly interacting Fermi gas across the superfluid transition. The onset of superfluidity is directly observed in the compressibility, the chemical potential, the entropy, and the heat capacity. Our measurements provide benchmarks for current many-body theories on strongly interacting fermions. Secondly, we have studied the evolution of fermion pairing from three to two dimensions in these gases, relating to the physics of layered superconductors. In the presence of p-wave interactions, Fermi gases are predicted to display topological superfluidity carrying Majorana edge states. Two possible avenues in this direction are discussed: our creation and direct observation of spin-orbit coupling in Fermi gases, and the creation of fermionic molecules of 23Na40K that will feature strong dipolar interactions in their absolute ground state.

  17. Short-term wind power prediction based on LSSVM–GSA model

    International Nuclear Information System (INIS)

    Yuan, Xiaohui; Chen, Chen; Yuan, Yanbin; Huang, Yuehua; Tan, Qingxiong

    2015-01-01

    Highlights: • A hybrid model is developed for short-term wind power prediction. • The model is based on LSSVM and the gravitational search algorithm. • The gravitational search algorithm is used to optimize the parameters of the LSSVM. • The effect of different LSSVM kernel functions on wind power prediction is discussed. • Comparative studies show that the prediction accuracy of wind power is improved. - Abstract: Wind power forecasting can improve the economical and technical integration of wind energy into the existing electricity grid. Due to its intermittency and randomness, it is hard to forecast wind power accurately. For the purpose of utilizing wind power to the utmost extent, it is very important to make an accurate prediction of the output power of a wind farm under the premise of guaranteeing the security and the stability of the operation of the power system. In this paper, a hybrid model (LSSVM–GSA) based on the least squares support vector machine (LSSVM) and the gravitational search algorithm (GSA) is proposed to forecast short-term wind power. As the kernel function and the related parameters of the LSSVM have a great influence on the performance of the prediction model, LSSVM models based on different kernel functions are established for short-term wind power prediction. An optimal kernel function is then determined, and the parameters of the LSSVM model are optimized using GSA. Compared with the Back Propagation (BP) neural network and support vector machine (SVM) models, the simulation results show that the hybrid LSSVM–GSA model based on the exponential radial basis kernel function and GSA has higher accuracy for short-term wind power prediction. Therefore, the proposed LSSVM–GSA is a better model for short-term wind power prediction.
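
    As a rough stand-in for the LSSVM–GSA pipeline, the sketch below fits a regularised least-squares kernel model (scikit-learn's KernelRidge with an RBF kernel, which solves the same kind of least-squares problem as an LSSVM) and tunes its two hyperparameters with a plain grid search in place of the gravitational search algorithm; the lagged-feature construction and the synthetic wind-power data are invented for the example.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

    rng = np.random.default_rng(0)

    # Synthetic stand-in for historical wind-farm output (MW), with a daily-like cycle
    t = np.arange(600)
    power = np.clip(5 + 3 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 0.8, t.size), 0, None)
    X = np.column_stack([power[i:-(4 - i)] for i in range(4)])  # last 4 observations as features
    y = power[4:]                                               # next-step power

    # RBF "LSSVM-like" regressor; grid search stands in for the GSA optimiser
    search = GridSearchCV(
        KernelRidge(kernel="rbf"),
        param_grid={"alpha": [1e-3, 1e-2, 1e-1, 1.0], "gamma": [0.01, 0.1, 1.0]},
        cv=TimeSeriesSplit(n_splits=5),
        scoring="neg_root_mean_squared_error",
    )
    search.fit(X, y)
    print("Best parameters:", search.best_params_)
    print("CV RMSE:", round(-search.best_score_, 3))
    ```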

  18. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  19. Testing process predictions of models of risky choice: a quantitative model comparison approach

    Science.gov (United States)

    Pachur, Thorsten; Hertwig, Ralph; Gigerenzer, Gerd; Brandstätter, Eduard

    2013-01-01

    This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or non-linear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter et al., 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called “similarity.” In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies. PMID:24151472

  20. Testing Process Predictions of Models of Risky Choice: A Quantitative Model Comparison Approach

    Directory of Open Access Journals (Sweden)

    Thorsten ePachur

    2013-09-01

    Full Text Available This article presents a quantitative model comparison contrasting the process predictions of two prominent views on risky choice. One view assumes a trade-off between probabilities and outcomes (or nonlinear functions thereof) and the separate evaluation of risky options (expectation models). Another view assumes that risky choice is based on comparative evaluation, limited search, aspiration levels, and the forgoing of trade-offs (heuristic models). We derived quantitative process predictions for a generic expectation model and for a specific heuristic model, namely the priority heuristic (Brandstätter, Gigerenzer, & Hertwig, 2006), and tested them in two experiments. The focus was on two key features of the cognitive process: acquisition frequencies (i.e., how frequently individual reasons are looked up) and direction of search (i.e., gamble-wise vs. reason-wise). In Experiment 1, the priority heuristic predicted direction of search better than the expectation model (although neither model predicted the acquisition process perfectly); acquisition frequencies, however, were inconsistent with both models. Additional analyses revealed that these frequencies were primarily a function of what Rubinstein (1988) called similarity. In Experiment 2, the quantitative model comparison approach showed that people seemed to rely more on the priority heuristic in difficult problems, but to make more trade-offs in easy problems. This finding suggests that risky choice may be based on a mental toolbox of strategies.
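
    For concreteness, here is a minimal sketch of the priority heuristic's choice rule for two-outcome gain gambles: examine minimum gains first, then the probability of the minimum gain, then the maximum gain, with aspiration levels of one tenth of the maximum gain and 0.1 on the probability scale (the rounding to prominent numbers used in the original formulation is omitted). The class, variable names, and example gambles are invented.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Gamble:
        """Two-outcome gain gamble: outcome `high` with probability p, else `low` (low <= high)."""
        high: float
        low: float
        p: float

    def priority_heuristic(a, b):
        """Choose between two gain gambles: minimum gain -> probability of
        minimum gain -> maximum gain, with simple aspiration levels."""
        aspiration = 0.1 * max(a.high, a.low, b.high, b.low)   # one tenth of the maximum gain
        # Reason 1: minimum gains
        if abs(min(a.high, a.low) - min(b.high, b.low)) >= aspiration:
            return a if min(a.high, a.low) > min(b.high, b.low) else b
        # Reason 2: probabilities of the minimum gains (smaller is better)
        p_min_a, p_min_b = 1 - a.p, 1 - b.p
        if abs(p_min_a - p_min_b) >= 0.1:
            return a if p_min_a < p_min_b else b
        # Reason 3: maximum gains
        return a if max(a.high, a.low) >= max(b.high, b.low) else b

    # Example: a risky gamble versus a near-sure thing
    risky = Gamble(high=5000, low=0, p=0.1)
    safe = Gamble(high=550, low=500, p=0.5)
    print(priority_heuristic(risky, safe))   # the heuristic favours the safe option here
    ```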

  1. Individualized prediction of perineural invasion in colorectal cancer: development and validation of a radiomics prediction model.

    Science.gov (United States)

    Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi

    2018-02-01

    To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics feature extraction, a radiomics signature was constructed in a derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram integrating the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in the validation cohort gave comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.

  2. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
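
    A common way to approximate prediction uncertainty for a random forest regressor is to use the spread of the individual trees' predictions. The sketch below shows that generic approach with scikit-learn on synthetic data; it illustrates the idea rather than the specific approximation developed in this paper.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    X, y = make_regression(n_samples=500, n_features=6, noise=10.0, random_state=0)
    forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

    X_new = X[:5]
    # Predictions of every individual tree: shape (n_trees, n_points)
    per_tree = np.stack([tree.predict(X_new) for tree in forest.estimators_])

    mean = per_tree.mean(axis=0)     # ensemble prediction
    spread = per_tree.std(axis=0)    # tree-to-tree spread as a rough uncertainty proxy
    for m, s in zip(mean, spread):
        print(f"prediction = {m:8.2f}   +/- {s:.2f}")
    ```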

  3. Deep Flare Net (DeFN) Model for Solar Flare Prediction

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Ishii, M.

    2018-05-01

    We developed a solar flare prediction model using a deep neural network (DNN) named Deep Flare Net (DeFN). This model can calculate the probability of flares occurring in the following 24 hr in each active region, which is used to determine the most likely maximum classes of flares via a binary classification (e.g., ≥M class versus <M class). To statistically predict flares, the DeFN model was trained to optimize the skill score, i.e., the true skill statistic (TSS). As a result, we succeeded in predicting flares with TSS = 0.80 for ≥M-class flares and TSS = 0.63 for ≥C-class flares. Note that in usual DNN models, the prediction process is a black box. However, in the DeFN model, the features are manually selected, and it is possible to analyze which features are effective for prediction after evaluation.
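
    The skill metric used here, the true skill statistic, is straightforward to compute from a 2×2 contingency table; a minimal sketch follows, with invented counts for a ≥M-class forecast.

    ```python
    def true_skill_statistic(tp, fn, fp, tn):
        """TSS = hit rate - false alarm rate = TP/(TP+FN) - FP/(FP+TN)."""
        return tp / (tp + fn) - fp / (fp + tn)

    # Invented counts for a ">= M-class flare" binary forecast
    print(round(true_skill_statistic(tp=45, fn=10, fp=30, tn=415), 2))
    ```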

  4. Effect of dipole polarizability on positron binding by strongly polar molecules

    International Nuclear Information System (INIS)

    Gribakin, G F; Swann, A R

    2015-01-01

    A model for positron binding to polar molecules is considered by combining the dipole potential outside the molecule with a strongly repulsive core of a given radius. Using existing experimental data on binding energies leads to unphysically small core radii for all of the molecules studied. This suggests that electron–positron correlations neglected in the simple model play a large role in determining the binding energy. We account for these by including the polarization potential via perturbation theory and non-perturbatively. The perturbative model makes reliable predictions of binding energies for a range of polar organic molecules and hydrogen cyanide. The model also agrees with the linear dependence of the binding energies on the polarizability inferred from the experimental data (Danielson et al 2009 J. Phys. B: At. Mol. Opt. Phys. 42 235203). The effective core radii, however, remain unphysically small for most molecules. Treating molecular polarization non-perturbatively leads to physically meaningful core radii for all of the molecules studied and enables even more accurate predictions of binding energies to be made for nearly all of the molecules considered. (paper)

  5. Hidden markov model for the prediction of transmembrane proteins using MATLAB.

    Science.gov (United States)

    Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath

    2011-01-01

    Since membrane proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of the biological sciences. Location-based prediction of transmembrane proteins is significant for the functional annotation of protein sequences. Hidden Markov model based methods have been widely applied for transmembrane topology prediction. Here we present a revised and more readily understood model for transmembrane protein prediction than an existing one. MATLAB scripts were written and compiled for parameter estimation of the model, and the model was applied to amino acid sequences to identify transmembrane segments and their adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super-states are defined in the given dataset, which were converted to 96 states on the basis of their length in the sequence. The prediction accuracy of the model was observed to be about 74%, which is good enough in the area of transmembrane topology prediction. We therefore conclude that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is available for free at bioinfonavneet@gmail.com, vinaysingh@bhu.ac.in.
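
    A hidden Markov model of this kind is typically decoded with the Viterbi algorithm. Below is a compact, self-contained sketch using a toy two-state (membrane/loop) model over a reduced hydrophobic/polar alphabet; the states, probabilities, and encoding are invented for illustration and are far simpler than the 96-state TMHMM-style model described in the record.

    ```python
    import numpy as np

    # Toy 2-state HMM: 0 = "membrane", 1 = "loop"; emissions over {H(ydrophobic), P(olar)}
    states = ["membrane", "loop"]
    start = np.log([0.5, 0.5])
    trans = np.log([[0.9, 0.1],    # membrane -> membrane/loop
                    [0.1, 0.9]])   # loop     -> membrane/loop
    emit = np.log([[0.8, 0.2],     # membrane emits H with probability 0.8
                   [0.3, 0.7]])    # loop emits H with probability 0.3

    def viterbi(obs):
        """Most likely state path for a sequence of 0/1-encoded symbols (0=H, 1=P)."""
        n, m = len(obs), len(states)
        score = np.full((n, m), -np.inf)
        back = np.zeros((n, m), dtype=int)
        score[0] = start + emit[:, obs[0]]
        for t in range(1, n):
            for j in range(m):
                cand = score[t - 1] + trans[:, j]
                back[t, j] = np.argmax(cand)
                score[t, j] = cand[back[t, j]] + emit[j, obs[t]]
        path = [int(np.argmax(score[-1]))]
        for t in range(n - 1, 0, -1):
            path.append(back[t, path[-1]])
        return [states[s] for s in reversed(path)]

    seq = "HHHHHPPPPHHHHH"                      # hydrophobic/polar encoding of a toy sequence
    obs = [0 if c == "H" else 1 for c in seq]
    print(viterbi(obs))
    ```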

  6. Field-theoretic methods in strongly-coupled models of general gauge mediation

    International Nuclear Information System (INIS)

    Fortin, Jean-François; Stergiou, Andreas

    2013-01-01

    An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current–current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry breaking arises both from a hidden sector and dynamically. Our results suggest that strongly-coupled models of supersymmetry breaking are naturally split

  7. Nonlinear charge reduction effect in strongly coupled plasmas

    International Nuclear Information System (INIS)

    Sarmah, D; Tessarotto, M; Salimullah, M

    2006-01-01

    The charge reduction effect, produced by the nonlinear Debye screening of high-Z charges occurring in strongly coupled plasmas, is investigated. An analytic asymptotic expression is obtained for the charge reduction factor (f_c) which determines the Debye–Hückel potential generated by a charged test particle. Its relevant parametric dependencies are analysed and shown to predict a strong charge reduction effect in strongly coupled plasmas.

  8. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period, and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC, and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.

  9. Strong Stellar-driven Outflows Shape the Evolution of Galaxies at Cosmic Dawn

    Energy Technology Data Exchange (ETDEWEB)

    Fontanot, Fabio; De Lucia, Gabriella [INAF—Astronomical Observatory of Trieste, via G.B. Tiepolo 11, I-34143 Trieste (Italy); Hirschmann, Michaela [Sorbonne Universités, UPMC-CNRS, UMR7095, Institut d’Astrophysique de Paris, F-75014 Paris (France)

    2017-06-20

    We study galaxy mass assembly and cosmic star formation rate (SFR) at high redshift (z ≳ 4), by comparing data from multiwavelength surveys with predictions from the GAlaxy Evolution and Assembly (gaea) model. gaea implements a stellar feedback scheme partially based on cosmological hydrodynamical simulations, which features strong stellar-driven outflows and mass-dependent timescales for the re-accretion of ejected gas. In previous work, we have shown that this scheme is able to correctly reproduce the evolution of the galaxy stellar mass function (GSMF) up to z ∼ 3. We contrast model predictions with both rest-frame ultraviolet (UV) and optical luminosity functions (LFs), which are mostly sensitive to the SFR and stellar mass, respectively. We show that gaea is able to reproduce the shape and redshift evolution of both sets of LFs. We study the impact of dust on the predicted LFs, and we find that the required level of dust attenuation is in qualitative agreement with recent estimates based on the UV continuum slope. The consistency between data and model predictions holds for the redshift evolution of the physical quantities well beyond the redshift range considered for the calibration of the original model. In particular, we show that gaea is able to recover the evolution of the GSMF up to z ∼ 7 and the cosmic SFR density up to z ∼ 10.

  10. Strong Stellar-driven Outflows Shape the Evolution of Galaxies at Cosmic Dawn

    International Nuclear Information System (INIS)

    Fontanot, Fabio; De Lucia, Gabriella; Hirschmann, Michaela

    2017-01-01

    We study galaxy mass assembly and cosmic star formation rate (SFR) at high redshift (z ≳ 4), by comparing data from multiwavelength surveys with predictions from the GAlaxy Evolution and Assembly (gaea) model. gaea implements a stellar feedback scheme partially based on cosmological hydrodynamical simulations, which features strong stellar-driven outflows and mass-dependent timescales for the re-accretion of ejected gas. In previous work, we have shown that this scheme is able to correctly reproduce the evolution of the galaxy stellar mass function (GSMF) up to z ∼ 3. We contrast model predictions with both rest-frame ultraviolet (UV) and optical luminosity functions (LFs), which are mostly sensitive to the SFR and stellar mass, respectively. We show that gaea is able to reproduce the shape and redshift evolution of both sets of LFs. We study the impact of dust on the predicted LFs, and we find that the required level of dust attenuation is in qualitative agreement with recent estimates based on the UV continuum slope. The consistency between data and model predictions holds for the redshift evolution of the physical quantities well beyond the redshift range considered for the calibration of the original model. In particular, we show that gaea is able to recover the evolution of the GSMF up to z ∼ 7 and the cosmic SFR density up to z ∼ 10.

  11. Model predictive control of a wind turbine modelled in Simpack

    International Nuclear Information System (INIS)

    Jassmann, U; Matzke, D; Reiter, M; Abel, D; Berroth, J; Schelenz, R; Jacobs, G

    2014-01-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are increasingly extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention from the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. MPC can thereby intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model the system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about the load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code, which has been implemented as a routine...

  12. Model predictive control of a wind turbine modelled in Simpack

    Science.gov (United States)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time, large structures such as the blades and the tower become more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are increasingly extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently received more attention from the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. MPC can thereby intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, multi-body simulations such as FAST, BLADED or FLEX5 are usually used to model the system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about the load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi-body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code, which has been implemented as a routine to...

  13. The North American Multi-Model Ensemble (NMME): Phase-1 Seasonal to Interannual Prediction, Phase-2 Toward Developing Intra-Seasonal Prediction

    Science.gov (United States)

    Kirtman, Ben P.; Min, Dughong; Infanti, Johnna M.; Kinter, James L., III; Paolino, Daniel A.; Zhang, Qin; vandenDool, Huug; Saha, Suranjana; Mendez, Malaquias Pena; Becker, Emily

    2013-01-01

    The recent US National Academies report "Assessment of Intraseasonal to Interannual Climate Prediction and Predictability" was unequivocal in recommending the need for the development of a North American Multi-Model Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users. The multi-model ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation, and has proven to produce better prediction quality (on average) than any single model ensemble. This multi-model approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multi-model ensemble approach yields superior forecasts compared to any single model. Based on two NOAA Climate Test Bed (CTB) NMME workshops (February 18 and April 8, 2011) a collaborative and coordinated implementation strategy for a NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (http://origin.cpc.ncep.noaa.gov/products/people/wd51yf/NMME/index.html). Moreover, the NMME forecasts are already being used as guidance for operational forecasters. This paper describes the new NMME effort, presents an overview of the multi-model forecast quality, and the complementary skill associated with individual models.

  14. Prediction of crack propagation and arrest in X100 natural gas transmission pipelines with a strain rate dependent damage model (SRDD). Part 2: Large scale pipe models with gas depressurisation

    International Nuclear Information System (INIS)

    Oikonomidis, F.; Shterenlikht, A.; Truman, C.E.

    2014-01-01

    Part 1 of this paper described a specimen for the measurement of high strain rate flow and fracture properties of pipe material and for tuning a strain rate dependent damage model (SRDD). In Part 2 the tuned SRDD model is used for the simulation of axial crack propagation and arrest in X100 natural gas pipelines. A linear pressure drop model was adopted behind the crack tip, and an exponential gas depressurisation model was used ahead of the crack tip. The model correctly predicted the crack initiation (burst) pressure, the crack speed and the crack arrest length. Strain rates between 1000 s⁻¹ and 3000 s⁻¹ immediately ahead of the crack tip are predicted, giving a strong indication that a strain rate material model is required for the structural integrity assessment of natural gas pipelines. The models predict a stress triaxiality of about 0.65 for at least 1 m ahead of the crack tip, gradually dropping to 0.5 at distances of about 5–7 m ahead of the crack tip. Finally, the models predicted a linear drop in crack tip opening angle (CTOA) from about 11–12° at the onset of crack propagation down to 7–8° at crack arrest. Only the lower of these values agrees with those reported in the literature for quasi-static measurements. This discrepancy might indicate substantial strain rate dependence in CTOA. - Highlights: • Finite element simulations of 3 burst tests of X100 pipes are detailed. • A strain rate dependent damage model, tuned on small-scale X100 samples, was used. • The models correctly predict burst pressure, crack speed and crack arrest length. • The model predicts a crack-length-dependent critical CTOA. • The strain rate dependent damage model is verified as mesh independent.

  15. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer's Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test the 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with the objective function value, the corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability was … The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco and the Fixed C50 Hierarchy model (in which the alfentanil concentration required for 50% of the patients to achieve the targeted response is fixed) performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable, with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures with varying stimulation.

  16. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and the Deutsches Zentrum für Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
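
    As a generic illustration of multi-model ensemble averaging (not any of the specific REA, BMA, or Monte Carlo formulations evaluated in the study), the sketch below combines several models' predictions using weights inversely proportional to their past mean-squared error; the model outputs and observations are invented.

    ```python
    import numpy as np

    def inverse_mse_ensemble(past_preds, past_truth, new_preds):
        """Weight each model by 1/MSE on past data, then average new predictions.
        past_preds, new_preds: arrays of shape (n_models, n_samples)."""
        mse = np.mean((past_preds - past_truth) ** 2, axis=1)
        w = (1.0 / mse) / np.sum(1.0 / mse)       # normalised inverse-MSE weights
        return w, w @ new_preds

    # Invented example: three wake-decay models predicting circulation at two times
    past_truth = np.array([520.0, 480.0, 450.0, 400.0])
    past_preds = np.array([[530, 470, 455, 395],     # model A
                           [500, 500, 430, 420],     # model B
                           [560, 450, 480, 370]])    # model C
    w, combined = inverse_mse_ensemble(past_preds, past_truth,
                                       np.array([[380, 340], [400, 330], [360, 310]]))
    print("weights:", np.round(w, 2), " ensemble prediction:", np.round(combined, 1))
    ```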

  17. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. The models performed similarly in capturing dependencies on dose and
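
    The accuracy metric quoted here, the absolute average fold error, is easy to reproduce; a minimal sketch follows, with invented predicted and observed AUC values.

    ```python
    import numpy as np

    def aafe(predicted, observed):
        """Absolute average fold error: 10 ** mean(|log10(pred / obs)|)."""
        predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
        return 10 ** np.mean(np.abs(np.log10(predicted / observed)))

    # Invented AUC values (e.g. ng*h/mL) for a handful of simulated drug administrations
    pred = [120, 300, 45, 800, 15]
    obs = [100, 450, 60, 700, 30]
    print(round(aafe(pred, obs), 2))   # 1.0 = perfect agreement; 2.0 = two-fold error on average
    ```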

  18. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Full Text Available Since the introduction of in vitro fertilization (IVF) into clinical practice for infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround, and therefore specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the prediction of embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] for five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. As the main outcome, gene expression levels for the five genes were taken and the area under the curve (AUC) for the two prediction models was calculated. Among the tested genes, AMHR2 and LIF showed a significant expression difference between high-quality and low-quality embryos. These two genes were used for the construction of two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two prediction models yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
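
    To illustrate the kind of two-model comparison described (a binary logistic model versus a decision tree, each scored by AUC), here is a small scikit-learn sketch on synthetic two-marker expression data; the data, cross-validation settings, and tree depth are invented and stand in for the qPCR measurements used in the study.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for two informative cumulus-cell markers (e.g. "AMHR2", "LIF")
    X, y = make_classification(n_samples=58, n_features=2, n_informative=2,
                               n_redundant=0, random_state=0)

    for name, model in [("logistic regression", LogisticRegression()),
                        ("decision tree", DecisionTreeClassifier(max_depth=3, random_state=0))]:
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
    ```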

  19. Clayey landslide initiation and acceleration strongly modulated by soil swelling

    Science.gov (United States)

    Schulz, William; Smith, Joel B.; Wang, Gonghui; Jiang, Yao; Roering, Joshua J.

    2018-01-01

    Largely unknown mechanisms restrain motion of clay-rich, slow-moving landslides that are widespread worldwide and rarely accelerate catastrophically. We studied a clayey, slow-moving landslide typical of thousands in northern California, USA, to decipher hydrologic-mechanical interactions that modulate landslide dynamics. Similar to some other studies, observed pore-water pressures correlated poorly with landslide reactivation and speed. In situ and laboratory measurements strongly suggested that variable pressure along the landslide's lateral shear boundaries resulting from seasonal soil expansion and contraction modulated its reactivation and speed. Slope-stability modeling suggested that the landslide's observed behavior could be predicted by including transient swell pressure as a resistance term, whereas modeling considering only transient hydrologic conditions predicted movement 5–6 months prior to when it was observed. All clayey soils swell to some degree; hence, our findings suggest that swell pressure likely modulates motion of many landslides and should be considered to improve forecasts of clayey landslide initiation and mobility.

  20. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states, combined with a model connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...