WorldWideScience

Sample records for model predicts strong

  1. Prediction of strong earthquake motions on rock surface using evolutionary process models

    International Nuclear Information System (INIS)

    Kameda, H.; Sugito, M.

    1984-01-01

    Stochastic process models are developed for the prediction of strong earthquake motions for engineering design purposes. Earthquake motions with nonstationary frequency content are modeled using the concept of evolutionary processes. Discussion is focused on earthquake motions on bedrock, which are important for the construction of nuclear power plants in seismic regions. On this basis, two earthquake motion prediction models are developed: one (EMP-IB Model) for prediction with a given magnitude and epicentral distance, and the other (EMP-IIB Model) to account for successive fault ruptures and the site location relative to the fault of great earthquakes. (Author)
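
    A minimal sketch in Python of the evolutionary-process idea described above: nonstationary frequency content is mimicked by sweeping the dominant frequency of a noisy carrier under an amplitude envelope. The envelope shape, frequency sweep, and constants are illustrative placeholders, not the EMP-IB/EMP-IIB parameters.

    ```python
    import numpy as np

    def evolutionary_ground_motion(duration=20.0, dt=0.01, f_start=8.0, f_end=2.0, seed=0):
        """Illustrative nonstationary accelerogram: a swept-frequency carrier plus
        noise, shaped by an amplitude envelope so the frequency content evolves
        over time. All parameters are arbitrary, not fitted EMP model values."""
        rng = np.random.default_rng(seed)
        t = np.arange(0.0, duration, dt)
        env = (t / 2.0) ** 2 * np.exp(-0.6 * t)    # quick rise, slow decay
        env /= env.max()
        f0 = np.linspace(f_start, f_end, t.size)   # dominant frequency drifts down
        phase = 2.0 * np.pi * np.cumsum(f0) * dt   # integrate f0 to get phase
        carrier = np.cos(phase + rng.uniform(0.0, 2.0 * np.pi))
        noise = rng.standard_normal(t.size)
        return t, env * (0.7 * carrier + 0.3 * noise)

    t, acc = evolutionary_ground_motion()
    ```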

  2. Sequence-Specific Model for Peptide Retention Time Prediction in Strong Cation Exchange Chromatography.

    Science.gov (United States)

    Gussakovsky, Daniel; Neustaeter, Haley; Spicer, Victor; Krokhin, Oleg V

    2017-11-07

    The development of a peptide retention prediction model for strong cation exchange (SCX) separation on a Polysulfoethyl A column is reported. Off-line 2D LC-MS/MS analysis (SCX-RPLC) of S. cerevisiae whole cell lysate was used to generate a retention dataset of ∼30 000 peptides, sufficient for identifying the major sequence-specific features of peptide retention mechanisms in SCX. In contrast to RPLC/hydrophilic interaction liquid chromatography (HILIC) separation modes, where retention is driven by hydrophobic/hydrophilic contributions of all individual residues, SCX interactions depend mainly on peptide charge (number of basic residues at acidic pH) and size. An additive model (incorporating the contributions of all 20 residues into the peptide retention) combined with a peptide length correction produces a prediction accuracy of R² = 0.976, significantly higher than the additive models for either HILIC or RPLC. Position-dependent effects on peptide retention for different residues were driven by the spatial orientation of tryptic peptides upon interaction with the negatively charged surface functional groups. The positively charged N-termini serve as a primary point of interaction. For example, basic residues (Arg, His, Lys) increase peptide retention when located closer to the N-terminus. We also found that hydrophobic interactions, which could lead to a mixed-mode separation mechanism, are largely suppressed at 20-30% of acetonitrile in the eluent. The accuracy of the final Sequence-Specific Retention Calculator (SSRCalc) SCX model (R² ≈ 0.99) exceeds all previously reported predictors for peptide LC separations. This also provides a solid platform for method development in 2D LC-MS protocols in proteomics and peptide retention prediction filtering of false positive identifications.
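
    The additive-plus-length-correction structure lends itself to a few lines of Python. In this sketch the per-residue coefficients and the logarithmic form of the length correction are invented for illustration; the actual SSRCalc SCX coefficients are fitted to the ∼30 000-peptide dataset.

    ```python
    import numpy as np

    # Hypothetical retention coefficients (arbitrary units). In SCX, charge
    # dominates, so basic residues get much larger values than the rest.
    COEFF = {aa: 0.1 for aa in "ACDEFGHIKLMNPQRSTVWY"}
    COEFF.update({"K": 4.0, "R": 4.2, "H": 2.5})   # illustrative only

    def predict_scx_retention(peptide, a=1.0, b=0.05):
        """Additive model: sum residue contributions, then apply a
        peptide-length correction (the logarithmic form is an assumption)."""
        additive = sum(COEFF[aa] for aa in peptide)
        return additive * (a - b * np.log(len(peptide)))

    print(predict_scx_retention("LKDEAEKAAR"))   # tryptic-like test peptide
    ```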

  3. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented within the seismic risk topic, particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for structural specificity (conventional structure or high-risk construction), seismic hazard assessment requires: identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion the structure has to resist (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work quickly across the different tasks of seismic hazard assessment. Through expert practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions on the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motions for designing structures or verifying their stability. (author)

  4. Site-specific strong ground motion prediction using 2.5-D modelling

    Science.gov (United States)

    Narayan, J. P.

    2001-08-01

    An algorithm was developed using the 2.5-D elastodynamic wave equation, based on the displacement-stress relation. One of the most significant advantages of the 2.5-D simulation is that the 3-D radiation pattern can be generated using double-couple point shear-dislocation sources in the 2-D numerical grid. A parsimonious staggered grid scheme was adopted instead of the standard staggered grid scheme, since this is the only scheme suitable for computing the dislocation. This new 2.5-D numerical modelling avoids the extensive computational cost of 3-D modelling. The significance of this exercise is that it makes it possible to simulate strong ground motion (SGM), taking into account the energy released, the 3-D radiation pattern, path effects and local site conditions at any location around the epicentre. The slowness vector (py) was used in the supersonic region for each layer, so that all the components of the inertia coefficient are positive. The double-couple point shear-dislocation source was implemented in the numerical grid using the moment tensor components as the body-force couples. The moment per unit volume was used in both the 3-D and 2.5-D modelling. Good agreement between the 3-D and 2.5-D responses for different grid sizes was obtained when the moment per unit volume was further reduced by a factor equal to the finite-difference grid size in the case of the 2.5-D modelling. The components of the radiation pattern were computed in the xz-plane using the 3-D and 2.5-D algorithms for various focal mechanisms, and the results were in good agreement. A comparative study of the amplitude behaviour of the 3-D and 2.5-D wavefronts in a layered medium reveals the spatially and temporally damped nature of the 2.5-D elastodynamic wave equation. The 3-D and 2.5-D simulated responses at a site using a different strike direction reveal that SGM can be predicted just by rotating the strike of the fault counter-clockwise by the same amount as the azimuth of

  5. Enhanced outage prediction modeling for strong extratropical storms and hurricanes in the Northeastern United States

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Wanik, D. W.; Bhuiyan, M. A. E.; Zhang, X.; Yang, J.; Astitha, M.; Frediani, M. E.; Schwartz, C. S.; Pardakhti, M.

    2016-12-01

    The overwhelming majority of human activities need reliable electric power. Severe weather events can cause power outages, resulting in substantial economic losses and a temporary worsening of living conditions. Accurate prediction of these events and communication of the forecasted impacts to the affected utilities are necessary for efficient emergency preparedness and mitigation. The University of Connecticut Outage Prediction Model (OPM) uses regression tree models, high-resolution weather reanalysis and real-time weather forecasts (WRF and the NCAR ensemble), airport station data, vegetation and electric grid characteristics, and historical outage data to forecast the number and spatial distribution of outages in the power distribution grid located within dense vegetation. Recent OPM improvements consist of improved storm classification and the addition of new predictive weather-related variables, and are demonstrated using a leave-one-storm-out cross-validation based on 130 severe extratropical storms and two hurricanes (Sandy and Irene) in the Northeast US. We show that it is possible to predict the number of trouble spots causing outages in the electric grid with a median absolute percentage error as low as 27% for some storm types, and at most around 40%, on a scale that spans four orders of magnitude, from a few outages to tens of thousands. This outage information can be communicated to the electric utility to manage the allocation of crews and equipment and minimize the recovery time for an upcoming storm hazard.
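
    A hedged sketch of the leave-one-storm-out validation of a regression-tree outage model, using scikit-learn on synthetic data; the predictors, tree depth, and error aggregation below are assumptions, not the OPM configuration.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import LeaveOneGroupOut

    # Synthetic stand-in: rows are (grid cell, storm) samples with weather and
    # vegetation predictors; y is the number of trouble spots. Not OPM data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 5))            # e.g. gust, rain, soil moisture, LAI, ...
    y = np.exp(X[:, 0] + 0.5 * X[:, 1]) + rng.gamma(1.0, 1.0, 600)
    storm_id = rng.integers(0, 30, 600)      # 30 storms -> leave one storm out

    errors = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=storm_id):
        model = DecisionTreeRegressor(max_depth=6).fit(X[train], y[train])
        pred, actual = model.predict(X[test]).sum(), y[test].sum()
        errors.append(abs(pred - actual) / actual)   # storm-total APE
    print(f"median APE: {np.median(errors):.0%}")
    ```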

  6. Prediction of strongly-heated gas flows in a vertical tube using explicit algebraic stress/heat-flux models

    International Nuclear Information System (INIS)

    Baek, Seong Gu; Park, Seung O.

    2003-01-01

    This paper assesses the prediction performance of explicit algebraic stress and heat-flux models under conditions of mixed convective gas flow in a strongly-heated vertical tube. Two explicit algebraic stress models and four algebraic heat-flux models are selected for assessment. Eight combinations of explicit algebraic stress and heat-flux models are used in predicting the flows experimentally studied by Shehata and McEligot (IJHMT 41 (1998) p. 4333), in which property variation was significant. Among the various model combinations, the Wallin and Johansson (JFM 403 (2000) p. 89) explicit algebraic stress model combined with the Abe, Kondo and Nagano (IJHFF 17 (1996) p. 228) algebraic heat-flux model is found to perform best. We also found that the dimensionless wall distance y+ should be calculated from the local properties rather than the properties at the wall for property-variation flows. When the buoyancy or property-variation effects are so strong that the flow may relaminarize, the choice of the basic platform two-equation model is the most important factor in improving the predictions.
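
    The y+ recommendation is easy to make concrete. The helper below uses a semi-local definition (friction velocity recomputed from whichever density is supplied); the definition choice and all numbers are illustrative assumptions, not values from the paper.

    ```python
    def y_plus(y, tau_w, rho, mu):
        """y+ = rho * u_tau * y / mu with u_tau = sqrt(tau_w / rho).
        Passing local gas properties follows the paper's recommendation;
        passing wall values gives the conventional form."""
        u_tau = (tau_w / rho) ** 0.5
        return rho * u_tau * y / mu

    tau_w = 2.0   # wall shear stress in Pa (made-up)
    print(y_plus(1e-4, tau_w, rho=0.90, mu=2.6e-5))   # local gas properties
    print(y_plus(1e-4, tau_w, rho=0.45, mu=3.9e-5))   # hot-wall properties
    ```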

  7. Predicting long-term recovery of a strongly acidified stream using MAGIC and climate models (Litavka, Czech Republic)

    Directory of Open Access Journals (Sweden)

    D. W. Hardekopf

    2008-03-01

    Two branches forming the headwaters of a stream in the Czech Republic were studied. Both streams have similar catchment characteristics and historical deposition; however, one is rain-fed and strongly affected by acid atmospheric deposition, the other spring-fed and only moderately acidified. The MAGIC model was used to reconstruct past stream water and soil chemistry of the rain-fed branch, and predict future recovery up to 2050 under currently proposed emissions levels. A future increase in air temperature calculated by a regional climate model was then used to derive climate-related scenarios to test possible factors affecting chemical recovery up to 2100. Macroinvertebrates were sampled from both branches, and differences in stream chemistry were reflected in the community structures. According to the modelled forecasts, recovery of the rain-fed branch will be gradual and limited, continued high levels of sulphate release from the soils will continue to dominate stream water chemistry, and scenarios related to a predicted increase in temperature will have little impact. The likelihood of colonization by species from the spring-fed branch was evaluated considering the predicted extent of chemical recovery. The results suggest that colonization from the spring-fed branch to the rain-fed one will be limited to only the acid-tolerant stonefly, caddisfly and dipteran taxa in the modelled period.

  8. Extension of the Nambu-Jona-Lasinio model predictions at high temperatures and strong external magnetic field

    International Nuclear Information System (INIS)

    Gomes, Karina P.; Farias, R.L.S.; Pinto, M.B.; Krein, G.

    2013-01-01

    Recently, much attention has been dedicated to understanding the effects of an external magnetic field on the QCD phase diagram. There is currently a contradiction in the literature: while effective models of QCD such as the Nambu-Jona-Lasinio (NJL) model and the linear sigma model predict an increase of the critical temperature of chiral symmetry restoration as a function of the magnetic field, recent lattice results show the opposite behavior. The NJL model is nonrenormalizable; hence the high-momentum part of the model has to be regularized in a phenomenological way. The common practice is to regularize the divergent loop amplitudes with a three-dimensional momentum cutoff, which also sets the energy-momentum scale for the validity of the model. That is, the model cannot be used for studying phenomena involving momenta running in loops larger than the cutoff. In particular, the model cannot be used to study quark matter at high densities. One symptom of this problem is the prediction of vanishing superconducting gaps at high baryon densities, a feature of the model that is solely caused by the use of a regularizing momentum cutoff in the divergent vacuum integrals and also in the finite loop integrals. In a renormalizable theory, all the dependence on the cutoff can be removed in favor of running physical parameters, like the coupling constants of QED and QCD. The running is given by the renormalization group equations of the theory and is controlled by an energy scale that is adjusted to the scale of the experimental conditions under consideration. In a recent publication, Casalbuoni et al. introduced the concept of a running coupling constant for the NJL model to extend the applicability of the model to high density. Their arguments are based on making the cutoff density dependent, using an analogy with the natural cutoff of the Debye frequency of phonon oscillations in an ordinary solid. In the present work we follow such an approach, introducing a magnetic field

  9. Deep subsurface structure modeling and site amplification factor estimation in Niigata plain for broadband strong motion prediction

    International Nuclear Information System (INIS)

    Sato, Hiroaki

    2009-01-01

    This report addresses a methodology for deep subsurface structure modeling in the Niigata plain, Japan, to estimate site amplification factors in the broadband frequency range for broadband strong motion prediction. In order to investigate deep S-wave velocity structures, we conducted microtremor array measurements at nine sites in the Niigata plain, which are important for estimating both long- and short-period ground motion. The estimated depths of the top of the basement layer agree well with those of the Green tuff formation as well as with the Bouguer anomaly distribution. Dispersion characteristics derived from the observed long-period ground motion records are well explained by the theoretical dispersion curves of Love-wave group velocities calculated from the estimated subsurface structures. These results demonstrate that the deep subsurface structures from microtremor array measurements make it possible to estimate long-period ground motions in the Niigata plain. Moreover, the applicability of microtremor array exploration to inclined basement structures, such as folding structures, is shown by two-dimensional finite-difference numerical simulations. The short-period site amplification factors in the Niigata plain are estimated empirically by spectral inversion analysis of the S-wave parts of strong motion data. The resulting site amplifications are relatively large in the frequency range of about 1.5-5 Hz and decay significantly as frequency increases above about 5 Hz. However, these features cannot be explained by calculations from the deep subsurface structures alone. The estimation of site amplification factors in the frequency range of about 1.5-5 Hz is improved by introducing a detailed shallow structure down to a depth of GL-20 m at a site. We also propose considering random fluctuation in the modeling of the deep S-wave velocity structure for broadband site amplification factor estimation. Site amplification in the frequency range above about 5 Hz is filtered

  10. Right Heart End-Systolic Remodeling Index Strongly Predicts Outcomes in Pulmonary Arterial Hypertension: Comparison With Validated Models.

    Science.gov (United States)

    Amsallem, Myriam; Sweatt, Andrew J; Aymami, Marie C; Kuznetsova, Tatiana; Selej, Mona; Lu, HongQuan; Mercier, Olaf; Fadel, Elie; Schnittger, Ingela; McConnell, Michael V; Rabinovitch, Marlene; Zamanian, Roham T; Haddad, Francois

    2017-06-01

    Right ventricular (RV) end-systolic dimensions provide information on both size and function. We investigated whether an internally scaled index of end-systolic dimension is incremental to well-validated prognostic scores in pulmonary arterial hypertension. From 2005 to 2014, 228 patients with pulmonary arterial hypertension were prospectively enrolled. The RV end-systolic remodeling index (RVESRI) was defined as lateral length divided by septal height. The incremental value of RV free wall longitudinal strain and RVESRI over risk scores was determined. Mean age was 49±14 years, 78% were female, 33% had connective tissue disease, 52% were in New York Heart Association class ≥III, and mean pulmonary vascular resistance was 11.2±6.4 WU. RVESRI and right atrial area were strongly connected to the other right heart metrics. Three zones of adaptation (adapted, maladapted, and severely maladapted) were identified based on the RVESRI to RV systolic pressure relationship. During a mean follow-up of 3.9±2.4 years, the primary end point of death, transplant, or admission for heart failure was reached in 88 patients. RVESRI was incremental to risk prediction scores in pulmonary arterial hypertension, including the Registry to Evaluate Early and Long-Term PAH Disease Management score, the Pulmonary Hypertension Connection equation, and the Mayo Clinic model. Using multivariable analysis, New York Heart Association class III/IV, RVESRI, and log NT-proBNP (N-Terminal Pro-B-Type Natriuretic Peptide) were retained (χ², 62.2; P<0.001). Among right heart metrics, RVESRI demonstrated the best test-retest characteristics. RVESRI is a simple, reproducible prognostic marker in patients with pulmonary arterial hypertension. © 2017 American Heart Association, Inc.
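
    Since the abstract defines RVESRI as end-systolic lateral wall length over septal height, a trivial helper captures it; the example measurements are made up, not patient data.

    ```python
    def rvesri(lateral_length_cm: float, septal_height_cm: float) -> float:
        """RV end-systolic remodeling index = lateral length / septal height,
        per the definition quoted in the abstract."""
        return lateral_length_cm / septal_height_cm

    print(rvesri(8.4, 6.0))   # illustrative measurements: ~1.4
    ```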

  11. Strong interactions - quark models

    International Nuclear Information System (INIS)

    Goto, M.; Ferreira, P.L.

    1979-01-01

    The variational method is used to reproduce the psi and upsilon family spectra from the quark model, using several phenomenological potentials: linear, linear plus a Coulomb term, and logarithmic. (L.C.)

  12. Strong ground motion prediction using virtual earthquakes.

    Science.gov (United States)

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach thus provides a new means of predicting long-period strong ground motion.

  13. Is It Possible to Predict Strong Earthquakes?

    Science.gov (United States)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings), which reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency components of both time series. These peaks correspond to local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of recent scientific reports confirming that rodents sense imminent earthquakes, and because of the population-genetic model of Kirschvink (Bull Seismol Soc Am 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over a period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal animal behavior observations, enables one to apply the standard "input-sensor-response" approach to determine what input signals trigger specific seismic escape brain activity responses.

  14. Predictions for Boson-Jet Observables and Fragmentation Function Ratios from a Hybrid Strong/Weak Coupling Model for Jet Quenching

    CERN Document Server

    Casalderrey-Solana, Jorge; Milhano, José Guilherme; Pablos, Daniel; Rajagopal, Krishna

    2016-01-01

    We have previously introduced a hybrid strong/weak coupling model for jet quenching in heavy ion collisions that describes the production and fragmentation of jets at weak coupling, using PYTHIA, and describes the rate at which each parton in the jet shower loses energy as it propagates through the strongly coupled plasma, dE/dx, using an expression computed holographically at strong coupling. The model has a single free parameter that we fit to a single experimental measurement. We then confront our model with experimental data on many other jet observables, focusing here on boson-jet observables, finding that it provides a good description of present jet data. Next, we provide the predictions of our hybrid model for many measurements to come, including those for inclusive jet, dijet, photon-jet and Z-jet observables in heavy ion collisions with energy $\sqrt{s}=5.02$ ATeV coming soon at the LHC. As the statistical uncertainties on near-future measurements of photon-jet observables are expected to be much sm...

  15. The systematics of strong lens modeling quantified: the effects of constraint selection and redshift information on magnification, mass, and multiple image predictability

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
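
    The image-plane rms diagnostic can be written down compactly; the root-mean-square image-offset form below is the standard definition and is assumed here, since the paper may apply additional weighting.

    ```python
    import numpy as np

    def image_plane_rms(predicted, observed):
        """RMS offset (same units as the inputs, e.g. arcsec) between
        model-predicted and observed positions of multiple images."""
        d = np.asarray(predicted, float) - np.asarray(observed, float)
        return np.sqrt((d ** 2).sum(axis=1).mean())

    obs = [[1.20, 0.40], [-0.80, 2.10], [0.50, -1.70]]    # made-up positions
    pred = [[1.25, 0.38], [-0.74, 2.20], [0.46, -1.64]]
    print(image_plane_rms(pred, obs))
    ```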

  16. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  17. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence to obtain the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented within the seismic risk topic, particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for structural specificity (conventional structure or high-risk construction), seismic hazard assessment requires: identifying and locating the seismic sources (zones or faults), characterizing their activity, and evaluating the seismic motion the structure has to resist (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work quickly across the different tasks of seismic hazard assessment. Through expert practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions on the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motions for designing structures or verifying their stability. (author)

  18. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  19. Modeling and synthesis of strong ground motion

    Indian Academy of Sciences (India)

    There have been many developments in techniques for modeling and synthesizing strong ground motion, which can damage life and property in a city or region; the earthquake of 26 January 2001 is used as a case study.

  1. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in the ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  2. Prediction of the occurrence of related strong earthquakes in Italy

    International Nuclear Information System (INIS)

    Vorobieva, I.A.; Panza, G.F.

    1993-06-01

    In seismic flow it is often observed that a Strong Earthquake (SE) is followed by Related Strong Earthquakes (RSEs), which occur near the epicentre of the SE with origin times rather close to that of the SE. An algorithm for the prediction of the occurrence of an RSE has been developed, applied for the first time to the seismicity data of the California-Nevada region, and successfully tested in several regions of the world, the statistical significance of the result being 97%. So far, it has been possible to make five successful forward predictions, with no false alarms or failures to predict. The algorithm is applied here to the Italian territory, where the occurrence of RSEs is a particularly rare phenomenon. Our results show that the standard algorithm is directly applicable without any adjustment of the parameters. Eleven SEs are considered. Of them, three are followed by an RSE, as predicted by the algorithm; eight SEs are not followed by an RSE, and the algorithm predicts this behaviour for seven of them, giving rise to only one false alarm. Since, in Italy, the series of strong earthquakes are quite often relatively short, the algorithm has been extended to handle such situations. The result of this experiment indicates that it is possible to test an SE for the occurrence of an RSE soon after the occurrence of the SE itself, performing a timely 'preliminary' recognition on reduced data sets. This fact, the high confidence level of the retrospective analysis, and the first successful forward predictions made in different parts of the world indicate that, even if additional tests are desirable, the algorithm can already be considered for routine application to Civil Defence. (author). Refs, 3 figs, 7 tabs

  3. Strongly coupled models at the LHC

    International Nuclear Information System (INIS)

    Vries, Maikel de

    2014-10-01

    In this thesis, strongly coupled models in which the Higgs boson is composite are discussed. These models provide an explanation for the origin of electroweak symmetry breaking, including a solution to the hierarchy problem. Strongly coupled models provide an alternative to the weakly coupled supersymmetric extensions of the Standard Model and lead to different and interesting phenomenology at the Large Hadron Collider (LHC). This thesis discusses two particular strongly coupled models: a composite Higgs model with partial compositeness, and the Littlest Higgs model with T-parity - a composite model with collective symmetry breaking. The phenomenology relevant for the LHC is covered, and the applicability of effective operators to these types of strongly coupled models is explored. First, a composite Higgs model with partial compositeness is discussed. In this model, right-handed light quarks could be significantly composite, yet compatible with experimental searches at the LHC and precision tests of Standard Model couplings. In these scenarios, which are motivated by flavour physics, large cross sections for the production of new resonances coupling to light quarks are expected. Experimental signatures of right-handed compositeness at the LHC are studied, and constraints on the parameter space of these models are derived using recent results by ATLAS and CMS. Furthermore, dedicated searches for multi-jet signals at the LHC are proposed which could significantly improve the sensitivity to signatures of right-handed compositeness. The Littlest Higgs model with T-parity, providing an attractive solution to the fine-tuning problem, is discussed next. This solution is only natural if its intrinsic symmetry breaking scale f is relatively close to the electroweak scale. The constraints from the latest results of the 8 TeV run at the LHC are examined. The model's parameter space is progressively excluded based on a combination of electroweak precision observables, Higgs precision

  4. Model predictions of the results of interferometric observations for stars under conditions of strong gravitational scattering by black holes and wormholes

    International Nuclear Information System (INIS)

    Shatskiy, A. A.; Kovalev, Yu. Yu.; Novikov, I. D.

    2015-01-01

    The characteristic and distinctive features of the visibility amplitude of interferometric observations for compact objects like stars in the immediate vicinity of the central black hole in our Galaxy are considered. These features are associated with the specifics of strong gravitational scattering of point sources by black holes, wormholes, or black-white holes. The revealed features will help to determine the most important topological characteristics of the central object in our Galaxy: whether this object possesses the properties of only a black hole or also has characteristics unique to wormholes or black-white holes. These studies can be used to interpret the results of optical, infrared, and radio interferometric observations

  5. Model predictions of the results of interferometric observations for stars under conditions of strong gravitational scattering by black holes and wormholes

    Energy Technology Data Exchange (ETDEWEB)

    Shatskiy, A. A., E-mail: shatskiy@asc.rssi.ru; Kovalev, Yu. Yu.; Novikov, I. D. [Russian Academy of Sciences, Astro Space Center, Lebedev Physical Institute (Russian Federation)

    2015-05-15

    The characteristic and distinctive features of the visibility amplitude of interferometric observations for compact objects like stars in the immediate vicinity of the central black hole in our Galaxy are considered. These features are associated with the specifics of strong gravitational scattering of point sources by black holes, wormholes, or black-white holes. The revealed features will help to determine the most important topological characteristics of the central object in our Galaxy: whether this object possesses the properties of only a black hole or also has characteristics unique to wormholes or black-white holes. These studies can be used to interpret the results of optical, infrared, and radio interferometric observations.

  6. Prediction of strongly-heated internal gas flows

    International Nuclear Information System (INIS)

    McEligot, D.M.; Shehata, A.M.; Kunugi, Tomoaki

    1997-01-01

    The purposes of the present article are to remind practitioners why the usual textbook approaches may not be appropriate for treating gas flows heated from the surface with large heat fluxes, and to review the successes of some recent applications of turbulence models to this case. Simulations from various turbulence models have been assessed by comparison with measurements of internal mean velocity and temperature distributions by Shehata for turbulent, laminarizing and intermediate flows with significant gas property variation. Of about fifteen models considered, five were judged to provide adequate predictions.

  7. The hadronic standard model for strong and electroweak interactions

    International Nuclear Information System (INIS)

    Raczka, R.

    1993-01-01

    We propose a new model for strong and electro-weak interactions. First, we review various QCD predictions for hadron-hadron and lepton-hadron processes. We indicate that the present formulation of strong interactions in the framework of Quantum Chromodynamics encounters serious conceptual and numerical difficulties in a reliable description of hadron-hadron and lepton-hadron interactions. Next we propose to replace the strong sector of the Standard Model, based on unobserved quarks and gluons, by a strong sector based on the set of observed baryons and mesons determined by the spontaneously broken SU(6) gauge field theory model. We analyse various properties of this model such as asymptotic freedom, Reggeization of gauge bosons and fundamental fermions, baryon-baryon and meson-baryon high energy scattering, generation of Λ-polarization in inclusive processes, and others. Finally we extend this model by an electro-weak sector. We demonstrate a remarkable lepton and hadron anomaly cancellation and we analyse a series of important lepton-hadron and hadron-hadron processes such as e+ + e- → hadrons, e+ + e- → W+ + W-, e+ + e- → p + anti-p, e + p → e + p and p + anti-p → p + anti-p. We obtained a series of interesting new predictions in this model, especially for processes with polarized particles. We estimated the value of the strong coupling constant α(M_Z) and predicted the top baryon mass M_Λt ≅ 240 GeV. Since in our model the proton, neutron, Λ-particles, vector mesons like ρ, ω, φ, J/ψ etc. and leptons are elementary, most of the experimentally analysed lepton-hadron and hadron-hadron processes in the LEP1, LEP2, LEAR, HERA, HERMES, LHC and SSC experiments may be relatively easily analysed in our model. (author). 252 refs, 65 figs, 1 tab

  8. Quantitative prediction of strong motion for a potential earthquake fault

    Directory of Open Access Journals (Sweden)

    Shamita Das

    2010-02-01

    This paper describes a new method for calculating strong motion records for a given seismic region on the basis of the laws of physics, using information on the tectonics and physical properties of the earthquake fault. Our method is based on an earthquake model, called a «barrier model», which is characterized by five source parameters: fault length, width, maximum slip, rupture velocity, and barrier interval. The first three parameters may be constrained from plate tectonics, and the fourth parameter is roughly a constant. The most important parameter controlling the earthquake strong motion is the last one, the «barrier interval». There are three methods to estimate the barrier interval for a given seismic region: (1) surface measurement of slip across fault breaks, (2) model fitting with observed near- and far-field seismograms, and (3) scaling law data for small earthquakes in the region. The barrier intervals were estimated for a dozen earthquakes and four seismic regions by the above three methods. Our preliminary results for California suggest that the barrier interval may be determined if the maximum slip is given. The relation between the barrier interval and maximum slip varies from one seismic region to another. For example, the interval appears to be unusually long for Kilauea, Hawaii, which may explain why only scattered evidence of strong ground shaking was observed in the epicentral area of the Island of Hawaii earthquake of November 29, 1975. The stress drop associated with an individual fault segment estimated from the barrier interval and maximum slip lies between 100 and 1000 bars. These values are about one order of magnitude greater than those estimated earlier by the use of crack models without barriers. Thus, the barrier model can resolve, at least partially, the well known discrepancy between the stress drops measured in the laboratory and those estimated for earthquakes.
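
    For a rough feel of the quoted 100-1000 bar range, a segment stress drop can be estimated from the maximum slip and barrier interval using a circular-crack relation. The crack formula, rigidity, and the choice of radius equal to half the barrier interval are assumptions for illustration, not the authors' exact procedure.

    ```python
    import math

    def segment_stress_drop(max_slip_m, barrier_interval_m, mu=3.0e10):
        """Order-of-magnitude stress drop (Pa) for one fault segment, treated
        as a circular crack of radius r = barrier_interval / 2:
        delta_sigma = (7 * pi / 16) * mu * D / r."""
        r = barrier_interval_m / 2.0
        return (7.0 * math.pi / 16.0) * mu * max_slip_m / r

    # 1 m of maximum slip with a 10 km barrier interval, in bars (1 bar = 1e5 Pa):
    print(segment_stress_drop(1.0, 10_000.0) / 1e5)   # ~80 bars
    ```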

  9. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS is then employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches use the reasoning results to propose the best candidates efficiently until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain - the heat control unit of a spacecraft - where the proposed methods are significantly better than best-first and conflict-directed A* search methods.
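
    A toy illustration of what makes a strong-fault model diagnosable by enumeration (this is not the paper's LTMS or its CNF encoding): because each fault mode has a specific behavior, candidate mode assignments can be checked directly for consistency with an observation.

    ```python
    from itertools import product

    # Strong fault model: every mode, including the faulty one, has explicit
    # behavior. Toy system: three components in series that each double their
    # input when healthy and pass it through unchanged when stuck.
    MODES = {"ok": lambda x: 2 * x, "stuck": lambda x: x}

    def consistent_assignments(x, observed, n=3):
        """Return all mode assignments whose predicted output matches the
        observation, i.e. the diagnoses of a consistency-based method."""
        good = []
        for modes in product(MODES, repeat=n):
            v = x
            for m in modes:
                v = MODES[m](v)
            if v == observed:
                good.append(modes)
        return good

    # Input 1 should yield 8 if all components are healthy; observing 4 means
    # exactly one component is stuck -> three single-fault diagnoses.
    print(consistent_assignments(1, 4))
    ```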

  10. Diagnosing a Strong-Fault Model by Conflict and Consistency

    Directory of Open Access Journals (Sweden)

    Wenfeng Zhang

    2018-03-01

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Currently, diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS is then employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches use the reasoning results to propose the best candidates efficiently until the diagnosis results are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain - the heat control unit of a spacecraft - where the proposed methods are significantly better than best-first and conflict-directed A* search methods.

  11. The hadronic standard model for strong and electroweak interactions

    Energy Technology Data Exchange (ETDEWEB)

    Raczka, R. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)

    1993-12-31

    We propose a new model for strong and electro-weak interactions. First, we review various QCD predictions for hadron-hadron and lepton-hadron processes. We indicate that the present formulation of strong interactions in the frame work of Quantum Chromodynamics encounters serious conceptual and numerical difficulties in a reliable description of hadron-hadron and lepton-hadron interactions. Next we propose to replace the strong sector of Standard Model based on unobserved quarks and gluons by the strong sector based on the set of the observed baryons and mesons determined by the spontaneously broken SU(6) gauge field theory model. We analyse various properties of this model such as asymptotic freedom, Reggeization of gauge bosons and fundamental fermions, baryon-baryon and meson-baryon high energy scattering, generation of {Lambda}-polarization in inclusive processes and others. Finally we extend this model by electro-weak sector. We demonstrate a remarkable lepton and hadron anomaly cancellation and we analyse a series of important lepton-hadron and hadron-hadron processes such as e{sup +} + e{sup -} {yields} hadrons, e{sup +} + e{sup -} {yields} W{sup +} + W{sup -}, e{sup +} + e{sup -} {yields} p + anti-p, e + p {yields} e + p and p + anti-p {yields} p + anti-p processes. We obtained a series of interesting new predictions in this model especially for processes with polarized particles. We estimated the value of the strong coupling constant {alpha}(M{sub z}) and we predicted the top baryon mass M{sub {Lambda}{sub t}} {approx_equal} 240 GeV. Since in our model the proton, neutron, {Lambda}-particles, vector mesons like {rho}, {omega}, {phi}, J/{psi} ect. and leptons are elementary most of experimentally analysed lepton-hadron and hadron-hadron processes in LEP1, LEP2, LEAR, HERA, HERMES, LHC and SSC experiments may be relatively easily analysed in our model. (author). 252 refs, 65 figs, 1 tab.

  12. A Strong Self-adaptivity Localization Algorithm Based on Gray Prediction Model for Mobile Nodes

    Institute of Scientific and Technical Information of China (English)

    单志龙; 刘兰辉; 张迎胜; 黄广雄

    2014-01-01

    Localization is a key technology in wireless sensor networks (WSNs), and localization of mobile nodes is one of its difficult problems. To deal with this issue, a strong self-adaptive localization algorithm based on a gray prediction model for mobile nodes (GPLA) is proposed. Building on the Monte Carlo localization approach, GPLA uses the gray prediction model to predict node motion and narrow the sampling area, filters with estimated distances to improve the validity of the sampled particles, and generates new particles through a restrictive linear crossover operation, thereby accelerating sample generation, reducing the number of sampling rounds, and improving the efficiency of the algorithm. In simulations, the algorithm shows good performance and strong self-adaptivity under varying communication radius, anchor node density, sample size, and other conditions.
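
    A generic GM(1,1) gray prediction step, of the kind GPLA uses for motion prediction, can be sketched as follows; this is the textbook formulation, not necessarily the exact variant used in the paper.

    ```python
    import numpy as np

    def gm11_forecast(x0, steps=1):
        """Classical GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series
        x1 = cumsum(x0) by least squares, then difference back to x0."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)
        z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(x1_hat, prepend=0.0)
        x0_hat[0] = x0[0]
        return x0_hat[-steps:]

    # e.g. predict a mobile node's next coordinate from its recent track:
    print(gm11_forecast([10.2, 11.0, 11.9, 12.9, 14.0], steps=1))
    ```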

  13. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  14. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  15. Strong exploration of a cast iron pipe failure model

    International Nuclear Information System (INIS)

    Moglia, M.; Davis, P.; Burn, S.

    2008-01-01

    A physical probabilistic failure model for buried cast iron pipes is described, which is based on the fracture mechanics of the pipe failure process. Such a model is useful in the asset management of buried pipelines. The model is applied within a Monte-Carlo simulation framework after adding stochasticity to the input variables. Historical failure rates are calculated based on a database of 81,595 pipes and their recorded failures, and model parameters are chosen to provide the best fit between historical and predicted failure rates. This provides an estimated corrosion rate distribution, which agrees well with experimental results. The first model design was chosen in a deliberately simplistic fashion in order to allow for further strong exploration of the model assumptions. Consequently, first runs of the initial model resulted in a poor quantitative and qualitative fit to the failure rates. However, by exploring natural additional assumptions, such as those relating to stochastic loads, a set of assumptions was found that improved the model to the point where an acceptable fit was achieved. The model bridges the gap between the micro- and macro-level, and this is the novelty of the approach. In this model, data can be used both from the macro-level, in terms of failure rates, and from the micro-level, in terms of corrosion rates.
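
    A minimal Monte Carlo skeleton of the physical-probabilistic approach: sample a corrosion rate per pipe, grow a pit, and fail the pipe when the weakened section can no longer carry a stochastic load. All distributions and thresholds are illustrative, not the values fitted to the 81,595-pipe database.

    ```python
    import numpy as np

    def simulate_failure_rates(n_pipes=10_000, years=50, seed=0):
        """Failures per pipe per year from sampled corrosion rates and loads.
        Weibull corrosion rates, normal loads, and the failure criterion are
        placeholder assumptions for this sketch."""
        rng = np.random.default_rng(seed)
        wall = 10.0                                          # wall thickness, mm
        rate = rng.weibull(1.5, n_pipes) * 0.08              # corrosion rate, mm/year
        load = rng.normal(1.0, 0.2, n_pipes)                 # relative stochastic load
        tolerable_pit = wall * 0.4 / np.maximum(load, 0.1)   # failure criterion, mm
        life = tolerable_pit / rate                          # years until failure
        counts, _ = np.histogram(life, bins=np.arange(years + 1))
        return counts / n_pipes

    print(simulate_failure_rates()[:10])   # predicted rates for the first decade
    ```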

  16. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
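
    The interim simulation model could look roughly like the sketch below; the Weibull form and its parameters are assumptions standing in for the statistical distribution the report derives from Goldstone wind records.

    ```python
    import numpy as np

    def hourly_wind_samples(hours=24, k=2.0, c=7.0, seed=0):
        """Uncorrelated hourly wind speeds (m/s) drawn from a Weibull
        distribution with shape k and scale c -- placeholder parameters."""
        rng = np.random.default_rng(seed)
        return c * rng.weibull(k, hours)

    def mean_power_density(speeds, rho=1.225):
        """Mean available wind power per unit rotor area: P/A = 0.5*rho*v^3."""
        return 0.5 * rho * (speeds ** 3).mean()

    v = hourly_wind_samples()
    print(mean_power_density(v))   # W/m^2 for the sampled day
    ```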

  18. Prediction of North Pacific Height Anomalies During Strong Madden-Julian Oscillation Events

    Science.gov (United States)

    Kai-Chih, T.; Barnes, E. A.; Maloney, E. D.

    2017-12-01

    The Madden Julian Oscillation (MJO) creates strong variations in extratropical atmospheric circulations that have important implications for subseasonal-to-seasonal prediction. In particular, certain MJO phases are characterized by a consistent modulation of geopotential height in the North Pacific and adjacent regions across different MJO events. Until recently, only limited research has examined the relationship between these robust MJO tropical-extratropical teleconnections and model prediction skill. In this study, reanalysis data (MERRA and ERA-Interim) and ECMWF ensemble hindcasts are used to demonstrate that robust teleconnections in specific MJO phases and time lags are also characterized by excellent agreement in the prediction of geopotential height anomalies across model ensemble members at forecast leads of up to 3 weeks. These periods of enhanced prediction capabilities extend the possibility for skillful extratropical weather prediction beyond traditional 10-13 day limits. Furthermore, we examine the phase dependency of teleconnection robustness using a Linear Baroclinic Model (LBM); the result is consistent with the ensemble hindcasts: the anomalous heating of MJO phase 2 (phase 6) can consistently generate positive (negative) geopotential height anomalies around the extratropical Pacific with a lead of 15-20 days, while other phases are more sensitive to the variation of the mean state.

  19. A strong viscous–inviscid interaction model for rotating airfoils

    DEFF Research Database (Denmark)

    Ramos García, Néstor; Sørensen, Jens Nørkær; Shen, Wen Zhong

    2014-01-01

    Two-dimensional (2D) and quasi-three dimensional (3D), steady and unsteady, viscous–inviscid interactive codes capable of predicting the aerodynamic behavior of wind turbine airfoils are presented. The model is based on a viscous–inviscid interaction technique using strong coupling between...... a boundary-layer trip or computed using an eⁿ envelope transition method. Validation of the incompressible 2D version of the code is carried out against measurements and other numerical codes for different airfoil geometries at various Reynolds numbers, ranging from 0.9·10⁶ to 8.2·10⁶. In the quasi-3D...... version, a parametric study on rotational effects induced by the Coriolis and centrifugal forces in the boundary-layer equations shows that the effects of rotation are to decrease the growth of the boundary-layer and delay the onset of separation, hence increasing the lift coefficient slightly while...

  20. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    CR = cultural resource; CRM = cultural resource management; CRPM = Cultural Resource Predictive Modeling; DoD = Department of Defense; ESTCP = Environmental... resource management (CRM) legal obligations under NEPA and the NHPA, military installations need to demonstrate that CRM decisions are based on objective... maxim "one size does not fit all," and demonstrate that DoD installations have many different CRM needs that can and should be met through a variety...

  1. The Cornwall-Norton model in the strong coupling regime

    International Nuclear Information System (INIS)

    Natale, A.A.

    1991-01-01

    The Cornwall-Norton model is studied in the strong coupling regime. It is shown that the fermionic self-energy at large momenta behaves as Σ(p) ∼ (m²/p) ln(p/m). We verify that in the strong coupling phase the dynamically generated masses of gauge and scalar bosons are of the same order, and the essential features of the model remain intact. (author)

  2. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....

  3. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  4. Prediction of the Midlatitude Response to Strong Madden-Julian Oscillation Events on S2S Time Scales

    Science.gov (United States)

    Tseng, K.-C.; Barnes, E. A.; Maloney, E. D.

    2018-01-01

    The Madden-Julian Oscillation (MJO) forces strong variations in extratropical atmospheric circulations that have important implications for subseasonal-to-seasonal (S2S) prediction. In particular, certain MJO phases are characterized by a consistent modulation of geopotential height in the North Pacific and adjacent regions across different MJO events. Until recently, only limited research has examined the relationship between these robust MJO tropical-extratropical teleconnections and model prediction skill. In this study, reanalysis data and numerical forecast model ensemble hindcasts are used to demonstrate that robust teleconnections in specific MJO phases and time lags are also characterized by excellent agreement in the prediction of geopotential height anomalies across model ensemble members at forecast leads of up to 3 weeks. These periods of enhanced prediction capabilities extend the possibility for skillful extratropical weather prediction beyond traditional 10-13 day limits.

  5. A high and low noise model for strong motion accelerometers

    Science.gov (United States)

    Clinton, J. F.; Cauzzi, C.; Olivieri, M.

    2010-12-01

    We present reference noise models for high-quality strong motion accelerometer installations. We use continuous accelerometer data acquired by the Swiss Seismological Service (SED) since 2006, together with other international high-quality accelerometer network data, to derive very broadband (50 Hz to 100 s) high and low noise models. The proposed noise models are compared to the Peterson (1993) low and high noise models designed for broadband seismometers; the datalogger self-noise; background noise levels at existing Swiss strong motion stations; and typical earthquake signals recorded in Switzerland and worldwide. The standard strong motion station operated by the SED consists of a Kinemetrics Episensor (2g clip level; flat acceleration response from 200 Hz to DC) in an insulated sensor/datalogger system placed at a vault-quality site. At all frequencies, there is at least one order of magnitude between the accelerometer low noise model (ALNM) and the accelerometer high noise model (AHNM); at high frequencies (>1 Hz) this extends to two orders of magnitude. This study provides remarkable confirmation of the capability of modern strong motion accelerometers to record low-amplitude ground motions with seismic observation quality. In particular, an accelerometric station operating at the ALNM is capable of recording the full spectrum of near-source earthquakes, out to 100 km, down to M2. Of particular interest for the SED, this study provides acceptable noise limits for candidate sites for the ongoing Strong Motion Network modernisation.

  6. Quantum field model of strong-coupling binucleon

    International Nuclear Information System (INIS)

    Amirkhanov, I.V.; Puzynin, I.V.; Puzynina, T.P.; Strizh, T.A.; Zemlyanaya, E.V.; Lakhno, V.D.

    1996-01-01

    The quantum field binucleon model for the case of the nucleon spot interaction with the scalar and pseudoscalar meson fields is considered. It is shown that the nonrelativistic problem of the two nucleon interaction reduces to the one-particle problem. For the strong coupling limit the nonlinear equations describing two nucleons in the meson field are developed.

  7. Solution of the strong CP problem in models with scalars

    International Nuclear Information System (INIS)

    Dimopoulos, S.

    1978-01-01

    A possible solution to the strong CP problem is pointed out within the context of a Weinberg-Salam model with two Higgs fields coupled in a Peccei-Quinn symmetric fashion. This is done by extending the colour group to a bigger simple group which is broken at some very high energy. The model contains a heavy axion. No old or new U(1) problem re-emerges. 31 references

  8. 1D energy transport in a strongly scattering laboratory model

    International Nuclear Information System (INIS)

    Wijk, Kasper van; Scales, John A.; Haney, Matthew

    2004-01-01

    Radiative transfer (RT) theory is often invoked to describe energy propagation in strongly scattering media. Fitting RT to measured wave field intensities is rather different at late times, when the transport is diffusive, than at intermediate times (around one extinction mean free time), when ballistic and diffusive behavior coexist. While there are many examples of late-time RT fits, we describe ultrasonic multiple scattering measurements with RT over the entire range of times, from ballistic to diffusive. In addition to allowing us to retrieve the scattering and absorption mean free paths independently, our results also support theoretical predictions in 1D that suggest an intermediate regime of diffusive (nonlocalized) behavior.

  9. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation,...

  10. Describing a Strongly Correlated Model System with Density Functional Theory.

    Science.gov (United States)

    Kong, Jing; Proynov, Emil; Yu, Jianguo; Pachter, Ruth

    2017-07-06

    The linear chain of hydrogen atoms, a basic prototype for the transition from a metal to a Mott insulator, is studied with a recent density functional theory model functional for nondynamic and strong correlation. The computed cohesive energy curve for the transition agrees well with accurate literature results. The variation of the electronic structure in this transition is characterized with a density functional descriptor that yields the atomic population of effectively localized electrons. These new methods are also applied to the study of the Peierls dimerization of the stretched even-spaced Mott insulator to a chain of H₂ molecules, a different insulator. The transitions among the two insulating states and the metallic state of the hydrogen chain system are depicted in a semiquantitative phase diagram. Overall, we demonstrate the capability of studying strongly correlated materials with a mean-field model at the fundamental level, in contrast to the generally pessimistic view of the feasibility of such an approach.

  11. Prediction and discovery of extremely strong hydrodynamic instabilities due to a velocity jump: theory and experiments

    International Nuclear Information System (INIS)

    Fridman, A M

    2008-01-01

    The theory and the experimental discovery of extremely strong hydrodynamic instabilities are described, viz. the Kelvin-Helmholtz, centrifugal, and superreflection instabilities. We predicted the discovery of the last two instabilities and revised the theory of the Kelvin-Helmholtz instability in real systems. (reviews of topical problems)

  12. Strong to fragile transition in a model of liquid silica

    OpenAIRE

    Barrat, Jean-Louis; Badro, James; Gillet, Philippe

    1996-01-01

    The transport properties of an ionic model for liquid silica at high temperatures and pressure are investigated using molecular dynamics simulations. With increasing pressure, a clear change from "strong" to "fragile" behaviour (according to Angell's classification of glass-forming liquids) is observed, albeit only over the small viscosity range that can be explored in MD simulations. This change is related to structural changes, from an almost perfect four-fold coordination to an imperfect fi...

  13. Prediction of strong ground motion based on scaling law of earthquake

    International Nuclear Information System (INIS)

    Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.

    1991-01-01

    In order to predict strong ground motion more realistically, it is important to establish how to use a semi-empirical method when no appropriate observation records of actual small events are available as empirical Green's functions. We propose a prediction procedure that uses artificially simulated small ground motions as substitutes for the actual motions. First, we simulate the small-event motion by means of the stochastic simulation method proposed by Boore (1983), accounting empirically for path effects such as attenuation and the broadening of the waveform envelope in the target region. We then predict the strong ground motion due to a future large earthquake (M 7, Δ = 13 km) using the same summation procedure as the empirical Green's function method. We obtained the result that the characteristics of the synthetic motion based on the simulated M 5 motion are in good agreement with those obtained by the empirical Green's function method. (author)
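
    A hedged sketch of a stochastic small-event simulation in the spirit of Boore (1983): windowed Gaussian white noise whose Fourier amplitude is shaped to an omega-squared source spectrum with anelastic attenuation along the path. The corner frequency, quality factor and envelope below are illustrative assumptions, not the paper's calibrated values; in the procedure above, such a synthetic record would then be summed with appropriate delays, as in the empirical Green's function method, to build the large event.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 2048
t = np.arange(n) * dt
f = np.fft.rfftfreq(n, dt)

# Windowed Gaussian white noise (simple exponential envelope).
noise = rng.standard_normal(n) * t * np.exp(-t / 2.0)

# Target spectral shape: omega-squared source times path attenuation.
fc, Q, beta, R = 2.0, 100.0, 3.5, 13.0          # corner freq (Hz), quality factor, km/s, km
shape = f**2 / (1 + (f / fc) ** 2)              # acceleration source spectrum
shape *= np.exp(-np.pi * f * R / (Q * beta))    # anelastic attenuation

spec = np.fft.rfft(noise)
spec[1:] *= shape[1:] / np.abs(spec[1:])        # impose target amplitude, keep phase
accel = np.fft.irfft(spec, n)
print("peak simulated acceleration (arbitrary units):", np.abs(accel).max())
```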

  14. On autostability of almost prime models relative to strong constructivizations

    International Nuclear Information System (INIS)

    Goncharov, Sergey S

    2011-01-01

    Questions of autostability and algorithmic dimension of models go back to papers by A.I. Malcev and by A. Froehlich and J.C. Shepherdson in which the effect of the existence of computable presentations which are non-equivalent from the viewpoint of their algorithmic properties was first discovered. Today there are many papers by various authors devoted to investigations of such questions. The present paper deals with the question of inheritance of the properties of autostability and non-autostability relative to strong constructivizations under elementary extensions for almost prime models. Bibliography: 37 titles.

  15. Orbifolds and Exact Solutions of Strongly-Coupled Matrix Models

    Science.gov (United States)

    Córdova, Clay; Heidenreich, Ben; Popolitov, Alexandr; Shakirov, Shamil

    2018-02-01

    We find an exact solution to strongly-coupled matrix models with a single-trace monomial potential. Our solution yields closed form expressions for the partition function as well as averages of Schur functions. The results are fully factorized into a product of terms linear in the rank of the matrix and the parameters of the model. We extend our formulas to include both logarithmic and finite-difference deformations, thereby generalizing the celebrated Selberg and Kadell integrals. We conjecture a formula for correlators of two Schur functions in these models, and explain how our results follow from a general orbifold-like procedure that can be applied to any one-matrix model with a single-trace potential.

  16. PREDICTED PERCENTAGE DISSATISFIED (PPD) MODEL ...

    African Journals Online (AJOL)

    ... their low power requirements, are relatively cheap and are environment friendly. ... PREDICTED PERCENTAGE DISSATISFIED MODEL EVALUATION OF EVAPORATIVE COOLING ... The performance of direct evaporative coolers is a...

  17. Ruling out a strongly interacting standard Higgs model

    International Nuclear Information System (INIS)

    Riesselmann, K.; Willenbrock, S.

    1997-01-01

    Previous work has suggested that perturbation theory is unreliable for Higgs- and Goldstone-boson scattering, at energies above the Higgs-boson mass, for relatively small values of the Higgs quartic coupling λ(μ). By performing a summation of nonlogarithmic terms, we show that perturbation theory is in fact reliable up to relatively large coupling. This eliminates the possibility of a strongly interacting standard Higgs model at energies above the Higgs-boson mass, complementing earlier studies which excluded strong interactions at energies near the Higgs-boson mass. The summation can be formulated in terms of an appropriate scale in the running coupling, μ = √s/e ∼ √s/2.7, so it can be incorporated easily in renormalization-group-improved tree-level amplitudes as well as higher-order calculations.

  18. Classical and quantum models of strong cosmic censorship

    International Nuclear Information System (INIS)

    Moncrief, V.E.

    1983-01-01

    The cosmic censorship conjecture states that naked singularities should not evolve from regular initial conditions in general relativity. In its strong form the conjecture asserts that space-times with Cauchy horizons must always be unstable and thus that the generic solution of Einstein's equations must be inextendible beyond its maximal Cauchy development. In this paper it is shown that one can construct an infinite-dimensional family of extendible cosmological solutions similar to Taub-NUT space-time; however, each of these solutions is unstable in precisely the way demanded by strong cosmic censorship. Finally it is shown that quantum fluctuations in the metric always provide (though in an unexpectedly subtle way) the "generic perturbations" which destroy the Cauchy horizons in these models. (author)

  19. Classical and quantum models of strong cosmic censorship

    Energy Technology Data Exchange (ETDEWEB)

    Moncrief, V.E. (Yale Univ., New Haven, CT (USA). Dept. of Physics)

    1983-04-01

    The cosmic censorship conjecture states that naked singularities should not evolve from regular initial conditions in general relativity. In its strong form the conjecture asserts that space-times with Cauchy horizons must always be unstable and thus that the generic solution of Einstein's equations must be inextendible beyond its maximal Cauchy development. In this paper it is shown that one can construct an infinite-dimensional family of extendible cosmological solutions similar to Taub-NUT space-time; however, each of these solutions is unstable in precisely the way demanded by strong cosmic censorship. Finally it is shown that quantum fluctuations in the metric always provide (though in an unexpectedly subtle way) the "generic perturbations" which destroy the Cauchy horizons in these models.

  20. Procedure to predict the storey where plastic drift dominates in two-storey building under strong ground motion

    DEFF Research Database (Denmark)

    Hibino, Y.; Ichinose, T.; Costa, J.L.D.

    2009-01-01

    A procedure is presented to predict the storey where plastic drift dominates in two-storey buildings under strong ground motion. The procedure utilizes the yield strength and the mass of each storey as well as the peak ground acceleration. The procedure is based on two different assumptions: (1.... The efficiency of the procedure is verified by dynamic response analyses using an elasto-plastic model.

  1. Constraints on cosmological models from strong gravitational lensing systems

    International Nuclear Information System (INIS)

    Cao, Shuo; Pan, Yu; Zhu, Zong-Hong; Biesiada, Marek; Godlowski, Wlodzimierz

    2012-01-01

    Strong lensing has developed into an important astrophysical tool for probing both cosmology and galaxies (their structure, formation, and evolution). Using gravitational lensing theory and a cluster mass distribution model, we collect a relatively complete set of observational data on the Hubble-constant-independent ratio between two angular diameter distances, D_ds/D_s, from various large systematic gravitational lens surveys and from lensing by galaxy clusters combined with X-ray observations, and check the possibility of using it in the future as a probe complementary to other cosmological probes. On one hand, strongly gravitationally lensed quasar-galaxy systems create such a new opportunity by combining stellar kinematics (central velocity dispersion measurements) with lensing geometry (Einstein radius determination from the positions of images). We apply such a method to a combined gravitational lens data set including 70 data points from the Sloan Lens ACS (SLACS) and Lens Structure and Dynamics (LSD) surveys. On the other hand, a new sample of 10 lensing galaxy clusters with redshifts ranging from 0.1 to 0.6, carefully selected from strong gravitational lensing systems with both X-ray satellite observations and optical giant luminous arcs, is also used to constrain three dark energy models (ΛCDM, constant w and CPL) under a flat universe assumption. For the full sample (n = 80) and the restricted sample (n = 46) including 36 two-image lenses and 10 strong lensing arcs, we obtain relatively good fitting values of the basic cosmological parameters, which generally agree with results already known in the literature. These results encourage further development of this method and its use on larger samples obtained in the future.

  2. Constraints on cosmological models from strong gravitational lensing systems

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Shuo; Pan, Yu; Zhu, Zong-Hong [Department of Astronomy, Beijing Normal University, Beijing 100875 (China); Biesiada, Marek [Department of Astrophysics and Cosmology, Institute of Physics, University of Silesia, Uniwersytecka 4, 40-007 Katowice (Poland); Godlowski, Wlodzimierz, E-mail: baodingcaoshuo@163.com, E-mail: panyu@cqupt.edu.cn, E-mail: biesiada@us.edu.pl, E-mail: godlowski@uni.opole.pl, E-mail: zhuzh@bnu.edu.cn [Institute of Physics, Opole University, Oleska 48, 45-052 Opole (Poland)

    2012-03-01

    Strong lensing has developed into an important astrophysical tool for probing both cosmology and galaxies (their structure, formation, and evolution). Using gravitational lensing theory and a cluster mass distribution model, we collect a relatively complete set of observational data on the Hubble-constant-independent ratio between two angular diameter distances, D_ds/D_s, from various large systematic gravitational lens surveys and from lensing by galaxy clusters combined with X-ray observations, and check the possibility of using it in the future as a probe complementary to other cosmological probes. On one hand, strongly gravitationally lensed quasar-galaxy systems create such a new opportunity by combining stellar kinematics (central velocity dispersion measurements) with lensing geometry (Einstein radius determination from the positions of images). We apply such a method to a combined gravitational lens data set including 70 data points from the Sloan Lens ACS (SLACS) and Lens Structure and Dynamics (LSD) surveys. On the other hand, a new sample of 10 lensing galaxy clusters with redshifts ranging from 0.1 to 0.6, carefully selected from strong gravitational lensing systems with both X-ray satellite observations and optical giant luminous arcs, is also used to constrain three dark energy models (ΛCDM, constant w and CPL) under a flat universe assumption. For the full sample (n = 80) and the restricted sample (n = 46) including 36 two-image lenses and 10 strong lensing arcs, we obtain relatively good fitting values of the basic cosmological parameters, which generally agree with results already known in the literature. These results encourage further development of this method and its use on larger samples obtained in the future.
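
    A hedged sketch of the distance-ratio test described above: in a flat ΛCDM model, D_ds/D_s reduces to a ratio of comoving distances, so a single parameter (Omega_m) can be fit by chi-square minimisation over a grid. The "observed" ratios below are mock placeholders, not the SLACS/LSD data.

```python
import numpy as np
from scipy.integrate import quad

def E(z, om):                      # dimensionless Hubble rate, flat LCDM
    return np.sqrt(om * (1 + z) ** 3 + 1 - om)

def ratio(zd, zs, om):             # D_ds / D_s for a flat universe
    chi = lambda z1, z2: quad(lambda z: 1 / E(z, om), z1, z2)[0]
    return chi(zd, zs) / chi(0, zs)

# Mock lens/source redshifts and observed ratios with errors (placeholders).
zd = np.array([0.2, 0.3, 0.4]); zs = np.array([0.6, 0.9, 1.2])
obs = np.array([0.55, 0.58, 0.57]); sig = np.array([0.05, 0.05, 0.05])

grid = np.linspace(0.05, 0.95, 91)
chi2 = [np.sum(((obs - [ratio(d, s, om) for d, s in zip(zd, zs)]) / sig) ** 2)
        for om in grid]
print("best-fit Omega_m ≈", grid[int(np.argmin(chi2))])
```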

  3. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion

    Science.gov (United States)

    Hirata, N.; Sato, H.; Koketsu, K.; Umeda, Y.; Iwata, T.; Kasahara, K.

    2003-12-01

    Introduction: After the 1995 Kobe earthquake, the Japanese government increased its focus on, and funding of, earthquake hazard evaluation, studies of the integrity of man-made structures, and emergency response planning in the major urban centers. A new agency, the Ministry of Education, Science, Sports and Culture (MEXT), started a five-year program titled the Special Project for Earthquake Disaster Mitigation in Urban Areas (abbreviated to Dai-dai-toku in Japanese) in 2002. The project includes four programs: I. Regional characterization of the crust in metropolitan areas for prediction of strong ground motion. II. Significant improvement of the seismic performance of structures. III. Advanced disaster management systems. IV. Investigation of earthquake disaster mitigation research results. We will present the results from the first program, conducted in 2002 and 2003. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion: A long-term goal is to produce maps of reliable estimates of strong ground motion. This requires accurate determination of the ground motion response, which includes the source process, the effect of the propagation path, and the near-surface response. The new five-year project aims to characterize the "source" and "propagation path" in the Kanto (Tokyo) and Kinki (Osaka) regions. The 1923 Kanto Earthquake is one of the important targets addressed in the project. The proximity of the Pacific and Philippine Sea subducting plates requires study of the relationship between earthquakes and regional tectonics. This project focuses on the identification and geometry of: 1) Source faults, 2) Subducting plates and mega-thrust faults, 3) Crustal structure, 4) Seismogenic zone, 5) Sedimentary basins, 6) 3D velocity properties. We have conducted a series of seismic reflection and refraction experiments in the Kanto region. In 2002 we completed the deployment of seismic profiling lines in the Boso peninsula (112 km) and the

  4. Monitoring of the future strong Vrancea events by using the CN formal earthquake prediction algorithm

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Novikova, O.V.; Panza, G.F.; Radulian, M.

    2003-06-01

    The preparation process of the strong subcrustal events originating in the Vrancea region, Romania, is monitored using an intermediate-term medium-range earthquake prediction method, the CN algorithm (Keilis-Borok and Rotwain, 1990). We present the results of the monitoring of the preparation of future strong earthquakes for the time interval from January 1, 1994 (1994.1.1), to January 1, 2003 (2003.1.1), using the updated catalogue of the Romanian local network. The database considered for the CN monitoring of the preparation of future strong earthquakes in Vrancea covers the period from 1966.3.1 to 2003.1.1 and the geographical rectangle 44.8 deg - 48.4 deg N, 25.0 deg - 28.0 deg E. The algorithm correctly identifies, by retrospective prediction, the times of increased probability (TIPs) for all three strong earthquakes (M0 = 6.4) that occurred in Vrancea during this period. The cumulative duration of the TIPs represents 26.5% of the total period of time considered (1966.3.1-2003.1.1). The monitoring of current seismicity using the algorithm CN has been carried out since 1994. No strong earthquakes occurred from 1994.1.1 to 2003.1.1, but CN declared an extended false alarm from 1999.5.1 to 2000.11.1. No alarm is currently declared in the region (on January 1, 2003), as can be seen from the TIP diagram shown. (author)

  5. Strongly coupled models with a Higgs-like boson

    International Nuclear Information System (INIS)

    Pich, A.; Rosell, I.; Sanz-Cillero, J. J.

    2013-01-01

    Considering the one-loop calculation of the oblique S and T parameters, we present a study of the viability of strongly-coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation is done using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we demonstrate that strongly coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule. (authors)

  6. Strongly Coupled Models with a Higgs-like Boson

    Science.gov (United States)

    Pich, Antonio; Rosell, Ignasi; José Sanz-Cillero, Juan

    2013-11-01

    Considering the one-loop calculation of the oblique S and T parameters, we present a study of the viability of strongly-coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation is done using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we demonstrate that strongly coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule.

  7. Microscopic modeling of photoluminescence of strongly disordered semiconductors

    International Nuclear Information System (INIS)

    Bozsoki, P.; Kira, M.; Hoyer, W.; Meier, T.; Varga, I.; Thomas, P.; Koch, S.W.

    2007-01-01

    A microscopic theory for the luminescence of ordered semiconductors is modified to describe photoluminescence of strongly disordered semiconductors. The approach includes both diagonal disorder and the many-body Coulomb interaction. As a case study, the light emission of a correlated plasma is investigated numerically for a one-dimensional two-band tight-binding model. The band structure of the underlying ordered system is assumed to correspond to either a direct or an indirect semiconductor. In particular, luminescence and absorption spectra are computed for various levels of disorder and sample temperature to determine thermodynamic relations, the Stokes shift, and the radiative lifetime distribution

  8. Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century.

    Science.gov (United States)

    Ganusov, Vitaly V

    2016-01-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to follow logically from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, can it help us understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine why the rejected models failed to explain the data; and (4) to suggest experiments which would allow discrimination between the remaining alternative models. The use of strong inference is likely to provide better robustness of the predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the Twenty-First century.

  9. Strong inference in mathematical modeling: a method for robust science in the 21st century

    Directory of Open Access Journals (Sweden)

    Vitaly V. Ganusov

    2016-07-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to follow logically from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, can it help us understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers [1], the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt [2], I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine why the rejected models failed to explain the data; and (4) to suggest experiments which would allow discrimination between the remaining alternative models. The use of strong inference is likely to provide better robustness of the predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the 21st century.

  10. Screening important inputs in models with strong interaction properties

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Campolongo, Francesca; Cariboni, Jessica

    2009-01-01

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.

  11. Screening important inputs in models with strong interaction properties

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy); Campolongo, Francesca [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)], E-mail: francesca.campolongo@jrc.it; Cariboni, Jessica [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)

    2009-07-15

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.
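
    A hedged sketch of the variance-based estimators named in the two records above: the Sobol' formula for first-order indices and the Jansen formula for total indices, computed from the standard A/B/AB sample design. The test function here is the Ishigami function, a common sensitivity-analysis benchmark; it is an illustration, not the paper's case study.

```python
import numpy as np

def model(X):                      # Ishigami function, a standard test case
    x1, x2, x3 = X.T
    return np.sin(x1) + 7 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

rng = np.random.default_rng(2)
n, k = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))
B = rng.uniform(-np.pi, np.pi, (n, k))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy(); ABi[:, i] = B[:, i]            # A with column i taken from B
    fABi = model(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var          # Sobol' first-order estimator
    T_i = np.mean((fA - fABi) ** 2) / (2 * var)    # Jansen total-order estimator
    print(f"x{i+1}: first-order {S_i:.3f}, total {T_i:.3f}")
```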

  12. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
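
    A minimal sketch of bootstrap (bagged) prediction as described above: refit a plug-in predictor on bootstrap resamples and average the resulting predictions. The base model (a polynomial least-squares fit) and the data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 80)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 80)   # data; the fit below may be misspecified

def plugin_predict(xtr, ytr, xnew, deg=3):
    coef = np.polyfit(xtr, ytr, deg)      # plug-in prediction: polynomial least squares
    return np.polyval(coef, xnew)

xnew = np.linspace(0, 1, 5)
B = 500
preds = np.empty((B, xnew.size))
for b in range(B):
    idx = rng.integers(0, x.size, x.size)           # bootstrap resample
    preds[b] = plugin_predict(x[idx], y[idx], xnew)

print("plug-in  :", np.round(plugin_predict(x, y, xnew), 2))
print("bootstrap:", np.round(preds.mean(axis=0), 2))  # bagged (bootstrap) prediction
```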

  13. Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century

    Science.gov (United States)

    Ganusov, Vitaly V.

    2016-01-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to follow logically from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, can it help us understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine why the rejected models failed to explain the data; and (4) to suggest experiments which would allow discrimination between the remaining alternative models. The use of strong inference is likely to provide better robustness of the predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the Twenty-First century. PMID:27499750

  14. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    Science.gov (United States)

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  15. A multifluid model extended for strong temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-08

    We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.

  16. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    ... signal based on a process model, coping with constraints on inputs and ... paper, we will present an introduction to the theory and application of MPC with Matlab codes ... section 5 presents the simulation results and section 6...
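
    A minimal receding-horizon sketch of the idea the fragments above describe: computing a control signal from a process model while coping with input constraints. The linear model, horizon, weights and bounds are illustrative assumptions, and the constrained optimisation is solved with SciPy's bounded L-BFGS-B optimizer rather than a dedicated MPC toolbox.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # assumed discrete-time process model
B = np.array([[0.0], [0.1]])
H, umax = 15, 1.0                         # prediction horizon and input bound
ref = np.array([1.0, 0.0])               # setpoint

def cost(u, x0):
    x, J = x0.copy(), 0.0
    for uk in u:                          # roll the model over the horizon
        x = A @ x + B.flatten() * uk
        J += np.sum((x - ref) ** 2) + 0.01 * uk ** 2
    return J

x = np.zeros(2)
for t in range(30):                       # receding-horizon loop
    res = minimize(cost, np.zeros(H), args=(x,),
                   method="L-BFGS-B", bounds=[(-umax, umax)] * H)
    u0 = res.x[0]                         # apply only the first optimal input
    x = A @ x + B.flatten() * u0
print("final state:", np.round(x, 3))
```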

  17. Strong Local-Nonlocal Coupling for Integrated Fracture Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silling, Stewart A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, John A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Seleson, Pablo D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parks, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burnett, Damon J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-09-01

    Peridynamics, a nonlocal extension of continuum mechanics, is unique in its ability to capture pervasive material failure. Its use in the majority of system-level analyses carried out at Sandia, however, is severely limited, due in large part to computational expense and the challenge posed by the imposition of nonlocal boundary conditions. Combined analyses in which peridynamics is employed only in regions susceptible to material failure are therefore highly desirable, yet available coupling strategies have remained severely limited. This report is a summary of the Laboratory Directed Research and Development (LDRD) project "Strong Local-Nonlocal Coupling for Integrated Fracture Modeling," completed within the Computing and Information Sciences (CIS) Investment Area at Sandia National Laboratories. A number of challenges inherent to coupling local and nonlocal models are addressed. A primary result is the extension of peridynamics to facilitate a variable nonlocal length scale. This approach, termed the peridynamic partial stress, can greatly reduce the mathematical incompatibility between local and nonlocal equations through reduction of the peridynamic horizon in the vicinity of a model interface. A second result is the formulation of a blending-based coupling approach that may be applied either as the primary coupling strategy, or in combination with the peridynamic partial stress. This blending-based approach is distinct from general blending methods, such as the Arlequin approach, in that it is specific to the coupling of peridynamics and classical continuum mechanics. Facilitating the coupling of peridynamics and classical continuum mechanics has also required innovations aimed directly at peridynamic models. Specifically, the properties of peridynamic constitutive models near domain boundaries and shortcomings in available discretization strategies have been addressed. The results are a class of position-aware peridynamic constitutive laws for

  18. Relativistic strings and dual models of strong interactions

    International Nuclear Information System (INIS)

    Marinov, M.S.

    1977-01-01

    The theory of strong interactions, based on the model depicting a hadron as a one-dimensional elastic relativistic system ("string"), is considered. The relationship between this model and the concepts of quarks and partons is discussed. The principal results relating to the Veneziano dual theory, which may be considered a consequence of the string model, and to its modifications are presented. The classical string theory is described in detail. Attention is focused on questions of importance to the construction of the quantum theory: the Hamiltonian mechanics and conformal symmetry. Quantization is described, and it is shown that it is consistent only in 26-dimensional space and with a special requirement imposed on the spectrum of states. The theory of a string with a distributed spin is considered. The spin is introduced with the aid of the Grassmann algebra formalism. In this case quantization is possible only in 10-dimensional space. The strings interact through their ruptures and gluings. A method for calculating the interaction amplitudes is indicated.

  19. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling on behavioral changes to decrease risk.

  20. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of the numerous studies on forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend toward machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural networks, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability under specific conditions. Furthermore, these models will be re-estimated in line with new trends, by calculating the influence of the elimination of selected variables on their overall prediction ability.
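
    A hedged sketch of the kind of model comparison described above: a classical logistic regression against a random forest on imbalanced, bankruptcy-style data, scored by test AUC. The synthetic dataset stands in for the Slovak company data, which is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: ~10% "bankrupt" class, 12 financial-ratio-like features.
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           weights=[0.9], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=300,
                                                           random_state=0))]:
    clf.fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```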

  1. Driving forces behind the increasing cardiovascular treatment intensity. A dynamic epidemiologic model of trends in Danish cardiovascular drug utilization

    DEFF Research Database (Denmark)

    Kildemoes, Helle Wallach; Andersen, Morten

    Objectives: To investigate the driving forces behind the increasing treatment prevalence of cardiovascular drugs, in particular statins, by means of a dynamic epidemiologic drug utilization model. Methods: Material: All Danish residents older than 20 years by January 1, 1996 (4.0 million inhabitants), were...

  2. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
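
    A minimal sketch of a degree-day phenology model as described above: accumulate daily heat units above a lower developmental threshold and flag the day a stage-specific total is reached. The threshold, target total, and synthetic temperature series below are illustrative assumptions, not published cranberry fruitworm parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic daily min/max temperatures (deg C) over a 180-day season.
tmin = 5 + 10 * np.sin(np.linspace(0, np.pi, 180)) + rng.normal(0, 2, 180)
tmax = tmin + 10

base, target = 10.0, 400.0            # assumed lower threshold and stage total (deg C * days)
dd = np.clip((tmin + tmax) / 2 - base, 0, None)   # simple averaging method
cum = np.cumsum(dd)
day = int(np.argmax(cum >= target))   # first day the cumulative total crosses the target
print(f"target of {target} degree-days reached on day {day}")
```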

  3. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  4. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events and are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a structure simplified by eliminating some of the predictors.
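
    A hedged sketch of the winning approach described above: a LASSO regression on training-load predictors augmented with quadratic terms, evaluated by leave-one-out cross-validation. The data are synthetic placeholders for the race-walking training logs, and the regularisation strength is an assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (122, 4))       # 122 training plans, 4 training-load variables
y = (12 + X @ np.array([1.0, -0.5, 0.8, 0.2])
     + 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.1, 122))   # synthetic 3 km results

# Quadratic feature expansion + LASSO; sparsity drops some predictors, as in the paper.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      StandardScaler(), Lasso(alpha=0.01, max_iter=50_000))
scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                         scoring="neg_mean_absolute_error")
print(f"LOO mean absolute error: {-scores.mean():.3f}")
```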

  5. CN earthquake prediction algorithm and the monitoring of the future strong Vrancea events

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Radulian, M.; Novikova, O.V.; Panza, G.F.

    2002-01-01

    The strong earthquakes originating at intermediate depth in the Vrancea region (located in the SE corner of the highly bent Carpathian arc) represent one of the most important natural disasters able to induce heavy effects (a high toll of casualties and extensive damage) in Romanian territory. The occurrence of these earthquakes is irregular, but not infrequent. Their effects are felt over a large territory, from Central Europe to Moscow and from Greece to Scandinavia. The largest cultural and economic center exposed to the seismic risk of Vrancea earthquakes is Bucharest. This metropolitan area (230 km² in extent) is characterized by the presence of 2.5 million inhabitants (10% of the country's population) and by a considerable number of high-risk structures and infrastructures. The best way to face strong earthquakes is to mitigate the seismic risk by using the two possible complementary approaches represented by (a) the antiseismic design of structures and infrastructures (able to withstand strong earthquakes without significant damage), and (b) strong earthquake prediction (in terms of alarms declared for long-, intermediate- or short-term space and time windows). Intermediate-term medium-range earthquake prediction represents the most realistic target to be reached at the present state of knowledge. The alarm declared in this case extends over a time window of about one year or more, and a space window of a few hundred kilometers. In the case of Vrancea events the spatial uncertainty is much smaller, about 100 km. The main measures for the mitigation of seismic risk allowed by intermediate-term medium-range prediction are: (a) verification of the stability of buildings and infrastructures, with reinforcement measures where required, (b) elaboration of emergency plans of action, and (c) scheduling of the main actions required to restore normal social and economic life after the earthquake. The paper presents the

  6. Detailed modelling of strong ground motion in Trieste

    International Nuclear Information System (INIS)

    Vaccari, F.; Romanelli, F.; Panza, G.

    2005-05-01

    Trieste has been included in category IV by the new Italian seismic code. This corresponds to a horizontal acceleration of 0.05g for the anchoring of the elastic response spectrum. A detailed modelling of the ground motion in Trieste has been done for some scenario earthquakes compatible with the seismotectonic regime of the region. Three-component synthetic seismograms (displacements, velocities and accelerations) have been analyzed to obtain significant parameters of engineering interest. The definition of the seismic input, derived from a comprehensive set of seismograms analyzed in the time and frequency domains, represents a powerful and convenient tool for seismic microzoning. In the specific case of Palazzo Carciotti, depending on the azimuth of the incoming wavefield, an increase of one degree in intensity may be expected due to different amplification patterns, while good stability can be seen in the periods corresponding to the peak values, with amplifications around 1 and 2 Hz. For Palazzo Carciotti, the most dangerous scenario considered, an event of M = 6.5 at an epicentral distance of 21 km modelled taking into account source finiteness and directivity, leads to a peak ground acceleration value of 0.2 g. The seismic code, being based on a probabilistic approach, can be considered representative of the average seismic shaking for the province of Trieste, and can slightly underestimate the seismic input due to the seismogenic potential (obtained from the historical seismicity and seismotectonics). Furthermore, relevant local site effects are mostly neglected. Both modelling and observations show that site conditions in the centre of Trieste can amplify the ground motion at the bedrock by a factor of five, in the frequency range of engineering interest. We may therefore expect macroseismic intensities as high as IX (MCS), corresponding to VIII (MSK). Spectral amplifications obtained for the considered scenario earthquakes are strongly event

  7. Strong convective storm nowcasting using a hybrid approach of convolutional neural network and hidden Markov model

    Science.gov (United States)

    Zhang, Wei; Jiang, Ling; Han, Lei

    2018-04-01

    Convective storm nowcasting refers to the prediction of convective weather initiation, development, and decay on very short time scales (typically 0-2 h). Despite marked progress over the past years, severe convective storm nowcasting still remains a challenge. With the boom of machine learning, techniques such as the convolutional neural network (CNN) have been applied successfully in various fields. In this paper, we build a severe convective weather nowcasting system based on a CNN and a hidden Markov model (HMM) using reanalysis meteorological data. The goal of convective storm nowcasting here is to predict whether a convective storm will occur within 30 min. We compress the VDRAS reanalysis data into low-dimensional features with the CNN, use these as the observation vectors of the HMM, and thus obtain the development trend of strong convective weather in the form of a time series. The results show that our method can extract robust features without any manual feature selection and can capture the development trend of strong convective storms.
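
    As a concrete illustration of the pipeline described above, the sketch below (not the authors' code; the architecture, grid size, and the use of PyTorch and hmmlearn are illustrative assumptions) compresses each gridded analysis frame to a low-dimensional feature vector with a small CNN and fits a Gaussian HMM on the resulting observation sequence:

```python
# Minimal sketch: CNN encoder -> low-dim features -> HMM state sequence.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, n_features)

    def forward(self, x):                   # x: (time, 1, H, W)
        h = self.conv(x).flatten(1)         # (time, 32)
        return self.fc(h)                   # (time, n_features)

frames = torch.randn(120, 1, 64, 64)        # 120 time steps of gridded data (placeholder)
feats = Encoder()(frames).detach().numpy()  # one observation vector per time step

# Fit a hidden Markov model on the feature sequence (hmmlearn assumed installed).
from hmmlearn.hmm import GaussianHMM
hmm = GaussianHMM(n_components=3, covariance_type="diag").fit(feats)
states = hmm.predict(feats)                 # development trend as a state sequence
```

    The decoded state sequence plays the role of the development trend; in practice the encoder would be trained on labelled VDRAS frames rather than used untrained as here.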

  8. Quantitative accuracy of the simplified strong ion equation to predict serum pH in dogs.

    Science.gov (United States)

    Cave, N J; Koo, S T

    2015-01-01

    An electrochemical approach to the assessment of acid-base status should provide a better mechanistic explanation of the metabolic component than methods that consider only pH and carbon dioxide. The hypothesis was that the simplified strong ion equation (SSIE), using published dog-specific values, would predict the measured serum pH of diseased dogs. Ten dogs, hospitalized for various reasons, were studied prospectively as a convenience sample of a consecutive series of dogs admitted to the Massey University Veterinary Teaching Hospital (MUVTH), from which serum biochemistry and blood gas analyses were performed at the same time. Serum pH was calculated (Hcal+) using the SSIE and published values for the concentration and dissociation constant of the nonvolatile weak acids (Atot and Ka), and Hcal+ was then compared with the dog's actual pH (Hmeasured+). To determine the source of discordance between Hcal+ and Hmeasured+, the calculations were repeated using a series of substituted values for Atot and Ka. Hcal+ did not approximate Hmeasured+ for any dog (P = 0.499, r² = 0.068) and was consistently more basic. Substituted values of Atot and Ka did not significantly improve the accuracy (r² = 0.169 to <0.001). Substituting the effective SID (Atot − [HCO3−]) produced a strong association between Hcal+ and Hmeasured+ (r² = 0.977). Using the simplified strong ion equation with the published values for Atot and Ka does not appear to provide a quantitative explanation of the acid-base status of dogs. The efficacy of substituting the effective SID in the simplified strong ion equation suggests that the error lies in calculating the SID. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
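
    For readers unfamiliar with the strong ion formalism, the sketch below illustrates the general approach: solve the electroneutrality condition for [H+] given the strong ion difference (SID), total nonvolatile weak acid (Atot) and pCO2. This is a generic Stewart-style balance, not the paper's exact SSIE, and all constants are placeholders rather than the published dog-specific values:

```python
# Illustrative strong-ion pH calculation (generic constants, assumed values).
from scipy.optimize import brentq
import math

Kw  = 4.4e-14      # water dissociation constant (assumed)
Ka  = 8.0e-7       # nonvolatile weak acid dissociation constant (assumed)
K1S = 2.46e-11     # lumped CO2 solubility x first dissociation (assumed units)

def electroneutrality(h, sid, atot, pco2):
    hco3 = K1S * pco2 / h           # bicarbonate from pCO2
    a    = atot * Ka / (Ka + h)     # dissociated nonvolatile weak acid
    oh   = Kw / h
    return sid + h - hco3 - a - oh  # = 0 at the physical root

def calc_ph(sid=0.040, atot=0.018, pco2=40.0):  # mol/L, mol/L, mmHg
    h = brentq(electroneutrality, 1e-9, 1e-5, args=(sid, atot, pco2))
    return -math.log10(h)

print(calc_ph())   # calculated pH, to be compared against the measured pH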

  9. Numerical prediction of local transitional features of turbulent forced gas flows in circular tubes with strong heating

    International Nuclear Information System (INIS)

    Ezato, Koichiro; Kunugi, Tomoaki; Shehata, A.M.; McEligot, D.M.

    1997-03-01

    Previous numerical simulations of the laminarization of turbulent pipe flow due to heating could be assessed only by comparison with macroscopic characteristics such as the heat transfer coefficient and the pressure drop, since no experimental data on the local distributions of velocity and temperature were available for such flows. Recently, Shehata and McEligot reported the first measurements of local distributions of velocity and temperature for turbulent forced air flow in a vertical circular tube with strong heating. They carried out experiments in three situations, ranging from turbulent to laminarizing flow according to the heating rate. In the present study, we numerically analyzed the local transitional features of turbulent flow laminarizing due to strong heating in their experiments, using an advanced low-Re two-equation turbulence model. As a result, we successfully predicted the local distributions of velocity and temperature, as well as the macroscopic characteristics, in the three turbulent flow conditions. Through the present study, a numerical procedure has been established to predict with sufficient accuracy the local characteristics, such as the velocity distribution, of turbulent flow with large thermal-property variation and of laminarizing flow due to strong heating. (author). 60 refs

  10. Strong ground motion prediction applying dynamic rupture simulations for Beppu-Haneyama Active Fault Zone, southwestern Japan

    Science.gov (United States)

    Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.

    2017-12-01

    We conducted strong ground motion prediction for the active Beppu-Haneyama Fault zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through Oita and Beppu cities, strong ground motion as well as fault displacement may severely affect the cities. We constructed a 3-dimensional velocity structure of a sedimentary basin, the Beppu bay basin, through which the fault zone runs and where Oita and Beppu cities are located. The minimum shear wave velocity of the 3-d model is 500 m/s. An additional 1-d structure is modeled for sites with softer sediment in the Holocene plain area. We observed, collected, and compiled data obtained from microtremor surveys, ground motion observations, boreholes, etc., including phase velocities and H/V ratios. A finer structure of the Oita Plain is modeled as a 250 m mesh model, using an empirical relation among N-value, lithology, depth and Vs derived from borehole data, and then validated with the phase velocity data obtained by the dense microtremor array observation (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (for the HF waves), a 3D finite difference method (LF waves) and a 1D amplification calculation. The fault geometry has been determined based on reflection surveys and the active fault map. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU 2017). Fault parameters such as the average stress drop, the size of the asperities, etc. are determined based on an empirical relation proposed by Irikura and Miyake (2001). As a result, strong ground motion stronger than 100 cm/s is predicted on the hanging wall side of the Oita plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
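
    The hybrid broadband combination mentioned above can be illustrated with a short sketch (generic and assumed, not the authors' code): long-period motion from the 3D finite-difference computation and short-period motion from the stochastic Green's function method are merged with matched low-/high-pass filters at a crossover frequency. The signals and the 1 Hz crossover below are placeholders:

```python
# Hybrid broadband synthesis sketch: LF + HF merged at a crossover frequency.
import numpy as np
from scipy.signal import butter, filtfilt

dt, n = 0.01, 4096
t = np.arange(n) * dt
lf = np.sin(2 * np.pi * 0.3 * t)            # stand-in for the 3D FD waveform
hf = np.random.randn(n) * np.exp(-t / 10)   # stand-in for the stochastic GF waveform

fc, fs = 1.0, 1.0 / dt                      # crossover and sampling frequencies
b_lo, a_lo = butter(4, fc / (fs / 2), btype="low")
b_hi, a_hi = butter(4, fc / (fs / 2), btype="high")

broadband = filtfilt(b_lo, a_lo, lf) + filtfilt(b_hi, a_hi, hf)
```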

  11. Partial widths of boson resonances in the quark-gluon model of strong interactions

    International Nuclear Information System (INIS)

    Kaidalov, A.B.; Volkovitsky, P.E.

    1981-01-01

    The quark-gluon model of strong interactions, based on the topological expansion and the string model, is used to calculate the partial widths of boson resonances in channels with two pseudoscalar mesons. The partial widths of mesons with arbitrary spins lying on the vector and tensor Regge trajectories are expressed in terms of the rho-meson width alone. The violation of SU(3) symmetry increases with the spin of the resonance. The theoretical predictions are in good agreement with experimental data.

  12. Nonelectrolyte NRTL-NRF model to study thermodynamics of strong and weak electrolyte solutions

    Energy Technology Data Exchange (ETDEWEB)

    Haghtalab, Ali, E-mail: haghtala@modares.ac.i [Department of Chemical Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of); Shojaeian, Abolfazl; Mazloumi, Seyed Hossein [Department of Chemical Engineering, Tarbiat Modares University, P.O. Box 14115-143, Tehran (Iran, Islamic Republic of)

    2011-03-15

    An electrolyte activity coefficient model is proposed by combining the non-electrolyte NRTL-NRF local composition model and the Pitzer-Debye-Hückel equation as short-range and long-range contributions, respectively. With two adjustable parameters per electrolyte, the present model is applied to the correlation of the mean activity coefficients of more than 150 strong aqueous electrolyte solutions at 298.15 K. The results of the present model are compared with other local composition models such as the electrolyte-NRTL, electrolyte-NRTL-NRF and electrolyte-Wilson-NRF models. Moreover, the present model is used to predict the osmotic coefficients of several aqueous binary electrolyte systems at 298.15 K. The activity coefficient model is also adopted to represent the nonideality of acid gases, as weak gas electrolytes, soluble in alkanolamine solutions. The model is applied to the calculation of the solubility and heat of absorption (enthalpy of solution) of acid gas in the two {(H₂O + MDEA + CO₂) and (H₂O + MDEA + H₂S)} systems at different conditions. The results demonstrate that the present model can be successfully applied to study the thermodynamic properties of both strong and weak electrolyte solutions.
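
    As a hedged illustration of the long-range electrostatic contribution only, the sketch below uses the simpler extended Debye-Hückel form rather than the Pitzer-Debye-Hückel expression of the paper; the parameter values are generic textbook constants, not fitted ones:

```python
# Long-range (Debye-Hueckel type) contribution to the mean activity coefficient.
import numpy as np

A, B, a = 0.511, 0.329, 4.0   # 25 C water constants and ion-size parameter (assumed)

def ln_gamma_longrange(I, z_plus=1, z_minus=-1):
    """Natural log of the mean ionic activity coefficient, long-range part."""
    sqrtI = np.sqrt(I)
    return -A * abs(z_plus * z_minus) * sqrtI / (1.0 + B * a * sqrtI) * np.log(10)

# The paper's model adds a short-range NRTL-NRF term, with two adjustable
# parameters per electrolyte, on top of such a long-range contribution.
print(np.exp(ln_gamma_longrange(0.1)))   # ~0.77 for a 1:1 salt at I = 0.1 mol/kg
```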

  13. Prediction and design of first super-strong liquid-crystalline polymers

    International Nuclear Information System (INIS)

    Dowell, F.

    1989-01-01

    This paper presents the details of the theoretical prediction and design (atom by atom, bond by bond) of the molecular chemical structures of the first candidate super-strong liquid-crystalline polymers (SS LCPs). These LCPs are the first designed to have good compressive strengths, as well as tensile strengths and tensile moduli significantly larger than those of existing strong LCPs (such as Kevlar). The key feature of this new class of LCPs is that the exceptional strength is three-dimensional on a microscopic, molecular level (and thus on a macroscopic level), in contrast to present LCPs (such as Kevlar) with their one-dimensional exceptional strength. These SS LCPs also have some solubility and processing advantages over existing strong LCPs. They are specially designed combined LCPs in which the side chains of a molecule interdigitate with the side chains of other molecules. This paper also presents other essential general and specific features required for SS LCPs. Considerations in the design of SS LCPs include the spacing distance between side chains along the backbone, the need for rigid sections in the backbone and side chains, the degree of polymerization, the length of the side chains, the regularity of spacing of the side chains along the backbone, the interdigitation of side chains in submolecular strips, the packing of the side chains on one or two sides of the backbone, the symmetry of the side chains, the points of attachment of the side chains to the backbone, the flexibility and size of the chemical group connecting each side chain to the backbone, the effect of semiflexible sections in the backbone and side chains, and the choice of types of dipolar and/or hydrogen bonding forces in the backbones and side chains for easy alignment

  14. Empirical equations for the prediction of PGA and pseudo spectral accelerations using Iranian strong-motion data

    Science.gov (United States)

    Zafarani, H.; Luzi, Lucia; Lanzano, Giovanni; Soghrat, M. R.

    2018-01-01

    A recently compiled, comprehensive, and good-quality strong-motion database of Iranian earthquakes has been used to develop local empirical equations for the prediction of peak ground acceleration (PGA) and 5%-damped pseudo-spectral accelerations (PSA) up to 4.0 s. The equations account for style of faulting and four site classes and use the horizontal distance from the surface projection of the rupture plane as the distance measure. The model predicts the geometric mean of the horizontal components and the vertical-to-horizontal ratio. A total of 1551 free-field acceleration time histories recorded at distances of up to 200 km from 200 shallow earthquakes (depth < 30 km) with moment magnitudes ranging from Mw 4.0 to 7.3 are used to perform regression analysis with the random effects algorithm of Abrahamson and Youngs (Bull Seism Soc Am 82:505-510, 1992), which considers between-event as well as within-event errors. Due to the limited data used in the development of previous Iranian ground motion prediction equations (GMPEs) and strong trade-offs between different terms of the GMPEs, the previously determined models are likely to have less precise coefficients than the current study. The richer database of the current study allows improving on prior works by considering additional variables that could not previously be adequately constrained. Here, a functional form used by Boore and Atkinson (Earthquake Spect 24:99-138, 2008) and Bindi et al. (Bull Seism Soc Am 9:1899-1920, 2011) has been adopted that allows accounting for the saturation of ground motions at close distances. A regression has also been performed for the V/H ratio in order to retrieve vertical components by scaling horizontal spectra. In order to take epistemic uncertainty into account, the new model can be used along with other appropriate GMPEs through a logic tree framework for seismic hazard assessment in Iran and the Middle East region.
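
    A generic sketch of how such a GMPE is evaluated is given below; the functional form is in the spirit of Boore and Atkinson (2008), and all coefficients are hypothetical placeholders, not the values regressed in this study:

```python
# Generic GMPE evaluation: magnitude terms plus distance term with
# near-field saturation via a pseudo-depth h.
import numpy as np

def log10_pga(mw, rjb, coeffs=None):
    c = coeffs or dict(e1=-1.0, b1=0.5, b2=-0.06, c1=-1.3, c2=0.14,
                       h=6.0, mref=5.5, rref=1.0)   # hypothetical coefficients
    r = np.sqrt(rjb ** 2 + c["h"] ** 2)             # saturates motion at short Rjb
    dm = mw - c["mref"]
    return (c["e1"] + c["b1"] * dm + c["b2"] * dm ** 2
            + (c["c1"] + c["c2"] * dm) * np.log10(r / c["rref"]))

print(10 ** log10_pga(6.5, 10.0))   # illustrative PGA for Mw 6.5 at Rjb = 10 km
```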

  15. Two-dimensional QCD as a model for strong interaction

    International Nuclear Information System (INIS)

    Ellis, J.

    1977-01-01

    After an introduction to the formalism of two-dimensional QCD, its applications to various strong interaction processes are reviewed. Among the topics discussed are spectroscopy, deep inelastic cross-sections, ''hard'' processes involving hadrons, ''Regge'' behaviour, the existence of the Pomeron, and inclusive hadron cross-sections. Attempts are made to abstract features useful for four-dimensional QCD phenomenology. (author)

  16. Modelling decreased food chain accumulation of HOCs due to strong sorption to carbonaceous materials and metabolic transformation

    NARCIS (Netherlands)

    Moermond, C.T.A.; Traas, T.P.; Roessink, I.; Veltman, K.; Hendriks, A.J.; Koelmans, A.A.

    2007-01-01

    The predictive power of bioaccumulation models may be limited when they do not account for strong sorption of organic contaminants to carbonaceous materials (CM) such as black carbon, and when they do not include metabolic transformation. We tested a food web accumulation model, including sorption

  17. Long-term predictability of regions and dates of strong earthquakes

    Science.gov (United States)

    Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey

    2016-04-01

    Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M>5.5 could be determined several months in advance of the event. The magnitude and the region of an approaching earthquake could be specified within a time frame of a month before the event. Determination of the number of M6+ earthquakes expected to occur during the analyzed year is performed using a special sequence diagram of seismic activity for the century time frame. This date analysis could be performed 15-20 years in advance. The data are verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods with different prediction horizons. Determination of the days of potential earthquakes with M5.5+ is performed using astronomical data. Earthquakes occur on days of oppositions of Solar System planets (arranged in a single line). The strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed by practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of the minimal daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (RAMES method). The time difference between the predicted and actual date is no more than one day. This indicator is registered 104 days before the earthquake, so it was called Harmonic 104 or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period, which gives insight into the correlation between the anomalies of Earth orientation

  18. Prediction of strong acceleration motion depending on focal mechanism; Shingen mechanism wo koryoshita jishindo yosoku ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Kaneda, Y; Ejiri, J [Obayashi Corp., Tokyo (Japan)

    1996-10-01

    This paper describes simulation results of strong acceleration motion with varying uncertain fault parameters, mainly for a fault model of the Hyogo-ken Nanbu earthquake. Based on the fault parameters, the strong acceleration motion was simulated using the radiation patterns and the rupture-time differences of composite faults as parameters. A statistical waveform composition method was used for the simulation. For the theoretical radiation patterns, directivity depending on the strike of the faults was emphasized, and the maximum acceleration was more than 220 gal. For the homogeneous radiation patterns, by contrast, the maximum accelerations were isotropically distributed around the fault. For the variations in maximum acceleration and predominant frequency due to the rupture-time differences of three faults, the ratio of maximum to minimum response spectral values was about 1.7. From the viewpoint of seismic disaster prevention, subsurface structures, including potential faults and irregular features, can be assessed using this simulation. The significance of predicting strong acceleration motion with uncertain factors, such as the rupture times of composite faults, treated as parameters was also demonstrated through this simulation. 4 refs., 4 figs., 1 tab.

  19. State resolved vibrational relaxation modeling for strongly nonequilibrium flows

    Science.gov (United States)

    Boyd, Iain D.; Josyula, Eswar

    2011-05-01

    Vibrational relaxation is an important physical process in hypersonic flows. Activation of the vibrational mode affects the fundamental thermodynamic properties, and finite rate relaxation can reduce the degree of dissociation of a gas. Low-fidelity models of vibrational activation employ a relaxation time to capture the process at a macroscopic level. High-fidelity, state-resolved models have been developed for use in continuum gas dynamics simulations based on computational fluid dynamics (CFD). By comparison, such models are less common for the direct simulation Monte Carlo (DSMC) method. In this study, a high-fidelity, state-resolved vibrational relaxation model is developed for the DSMC technique. The model is based on the forced harmonic oscillator approach, in which multi-quantum transitions may become dominant at high temperature. Results obtained for integrated rate coefficients from the DSMC model are consistent with the corresponding CFD model. Relaxation results obtained with the high-fidelity DSMC model show significantly less excitation of the upper vibrational levels than the standard, lower-fidelity DSMC vibrational relaxation model. Application of the new DSMC model to a Mach 7 normal shock wave in carbon monoxide provides better agreement with experimental measurements than the standard DSMC relaxation model.
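
    The macroscopic, relaxation-time picture referred to above can be summarized in a few lines (a Landau-Teller-type relaxation law; the time constant and energies are arbitrary illustrative values, and the state-resolved models discussed in the paper track level populations instead):

```python
# Landau-Teller relaxation of vibrational energy toward equilibrium.
import numpy as np

def landau_teller(e_v0, e_eq, tau, t):
    """E_v(t) relaxing toward E_v^eq with time constant tau."""
    return e_eq + (e_v0 - e_eq) * np.exp(-t / tau)

t = np.linspace(0.0, 5e-4, 100)                              # s
e_v = landau_teller(e_v0=0.0, e_eq=1.2e5, tau=1e-4, t=t)     # J/kg (assumed values)
```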

  20. Pairing from strong repulsion in triangular lattice Hubbard model

    Science.gov (United States)

    Zhang, Shang-Shun; Zhu, Wei; Batista, Cristian D.

    2018-04-01

    We propose a pairing mechanism between holes in the dilute limit of doped frustrated Mott insulators. Hole pairing arises from a hole-hole-magnon three-body bound state. This pairing mechanism has its roots in single-hole kinetic energy frustration, which favors antiferromagnetic (AFM) correlations around the hole. We demonstrate that the AFM polaron (hole-magnon bound state) produced by a single hole propagating on a field-induced polarized background is strong enough to bind a second hole. The effective interaction between these three-body bound states is repulsive, implying that this pairing mechanism is relevant for superconductivity.

  1. Strongly lensed neutral hydrogen emission: detection predictions with current and future radio interferometers

    Science.gov (United States)

    Deane, R. P.; Obreschkow, D.; Heywood, I.

    2015-09-01

    Strong gravitational lensing provides some of the deepest views of the Universe, enabling studies of high-redshift galaxies that, without the lensing phenomenon, would only be possible with next-generation facilities. To date, 21-cm radio emission from neutral hydrogen has only been detected directly out to z ˜ 0.2, limited by the sensitivity and instantaneous bandwidth of current radio telescopes. We discuss how current and future radio interferometers such as the Square Kilometre Array (SKA) will detect lensed H I emission in individual galaxies at high redshift. Our calculations rely on a semi-analytic galaxy simulation with realistic H I discs (by size, density profile and rotation), in a cosmological context, combined with general relativistic ray tracing. Wide-field, blind H I surveys with the SKA are predicted to be efficient at discovering lensed H I systems, increasingly so at z ≳ 2. This will be enabled by the combination of the magnification boosts, the steepness of the H I luminosity function at the high-mass end, and the fact that the H I spectral line is relatively isolated in frequency. These surveys will simultaneously provide a new technique for foreground lens selection and yield the highest-redshift H I emission detections. More near-term (and existing) cm-wave facilities will push the high-redshift H I envelope through targeted surveys of known lenses.

  2. A nonlinear efficient layerwise finite element model for smart piezolaminated composites under strong applied electric field

    International Nuclear Information System (INIS)

    Kapuria, S; Yaqoob Yasin, M

    2013-01-01

    In this work, we present an electromechanically coupled efficient layerwise finite element model for the static response of piezoelectric laminated composite and sandwich plates, considering the nonlinear behavior of piezoelectric materials under strong electric field. The nonlinear model is developed consistently using a variational principle, considering a rotationally invariant second order nonlinear constitutive relationship, and full electromechanical coupling. In the piezoelectric layer, the electric potential is approximated to have a quadratic variation across the thickness, as observed from exact three dimensional solutions, and the equipotential condition of electroded piezoelectric surfaces is modeled using the novel concept of an electric node. The results predicted by the nonlinear model compare very well with the experimental data available in the literature. The effect of the piezoelectric nonlinearity on the static response and deflection/stress control is studied for piezoelectric bimorph as well as hybrid laminated plates with isotropic, angle-ply composite and sandwich substrates. For high electric fields, the difference between the nonlinear and linear predictions is large, and cannot be neglected. The error in the prediction of the smeared counterpart of the present theory with the same number of primary displacement unknowns is also examined. (paper)

  3. WHY WE CANNOT PREDICT STRONG EARTHQUAKES IN THE EARTH’S CRUST

    Directory of Open Access Journals (Sweden)

    Iosif L. Gufeld

    2011-01-01

    In the past decade, earthquake disasters have caused multiple fatalities and significant economic losses and have challenged modern civilization. The well-known achievements and growing power of civilization fall short when facing Nature. The question arises of what hinders the solution of the earthquake prediction problem, while long-term and continuous seismic monitoring systems are in place in many regions of the world. For instance, there was no forecast of the Great Japan Earthquake of March 11, 2011, despite the fact that monitoring conditions for its prediction were unique. Its focal zone was 100-200 km away from the monitoring network installed in the area of permanent seismic hazard, which is subject to nonstop and long-term seismic monitoring. A lesson should be learned from our common fiasco in forecasting, taking into account research results obtained during the past 50-60 years. It is now evident that we have failed to identify precursors of earthquakes. Prior to earthquake occurrence, the observed local anomalies of various fields reflected other processes that were mistakenly viewed as processes of preparation for large-scale faulting. For many years, geotectonic situations were analyzed on the basis of the physics of destruction of laboratory specimens, applied to lithospheric conditions. Many researchers realize that such an approach is inaccurate. Nonetheless, persistent attempts are being undertaken, with application of modern computation, to detect anomalies of various fields that may be interpreted as earthquake precursors. In our opinion, such illusory intentions were smashed by the Great Japan Earthquake. It is also obvious that sufficient attention has not yet been given to fundamental studies of seismic processes. This review presents the authors' opinion concerning the origin of the seismic process and of strong earthquakes as part of that process. The authors realize that a wide discussion is

  4. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons of 1985-2005, with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models in dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
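
    The "simple linear prediction model" benchmark mentioned above amounts to ordinary least squares on the predicted Niño3.4 index; a sketch with synthetic placeholder numbers (not the 1985-2005 hindcast data) follows:

```python
# Linear Nino3.4 benchmark for seasonal rainfall prediction (toy data).
import numpy as np

rng = np.random.default_rng(1)
nino34 = rng.normal(0.0, 1.0, 21)                  # predicted Nino3.4 index per year
kiremt = -0.6 * nino34 + rng.normal(0.0, 0.5, 21)  # standardized rainfall (toy link)

A = np.column_stack([np.ones_like(nino34), nino34])
coef, *_ = np.linalg.lstsq(A, kiremt, rcond=None)
forecast = A @ coef
print(np.corrcoef(forecast, kiremt)[0, 1])         # skill of the linear benchmark
```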

  5. Flipped version of the supersymmetric strongly coupled preon model

    Energy Technology Data Exchange (ETDEWEB)

    Fajfer, S. (Institut za Fiziku, University of Sarajevo, Sarajevo, (Yugoslavia)); Milekovic, M.; Tadic, D. (Zavod za Teorijsku Fiziku, Prirodoslovno-Matematicki Fakultet, University of Zagreb, Croatia, (Yugoslavia))

    1989-12-01

    In the supersymmetric SU(5) (SUSY SU(5)) composite model (which was described in an earlier paper) the fermion mass terms can be easily constructed. The SUSY SU(5)⊗U(1), i.e., flipped, composite model possesses a completely analogous composite-particle spectrum. However, in that model one cannot construct a renormalizable superpotential which would generate fermion mass terms. This contrasts with the standard noncomposite grand unified theories (GUTs), in which both the Georgi-Glashow electrical charge embedding and its flipped counterpart lead to renormalizable theories.

  6. Strongly coupled semiclassical plasma: interaction model and some properties

    International Nuclear Information System (INIS)

    Baimbetov, N.F.; Bekenov, N.A.

    1999-01-01

    In the report a fully ionized, strongly coupled hydrogen plasma is considered. The number density is within the range n = n_e = n_i ≅ 10²¹ - 2·10²⁵ cm⁻³, and the temperature domain is T ≅ 5·10⁴ - 10⁶ K. The coupling parameter Γ is defined by Γ = e²/(α k_B T), where k_B is the Boltzmann constant, e is the electrical charge, and α = (3/(4πn))^(1/3) is the average distance between the particles (Wigner-Seitz radius). The dimensionless density parameter r_s = α/α_B is given in terms of the Bohr radius α_B = ℏ²/(m e²) ≈ 0.529·10⁻⁸ cm. The degeneracy parameter for the electrons is defined by the ratio between the thermal energy k_B T and the Fermi energy E_F: Θ = k_B T/E_F ≈ 0.54·r_s/Γ. The intermediate temperature-density region, where Γ ≥ 1, Θ ≅ 1, T > 13.6 eV, is examined. A semiclassical effective potential accounts for the short-range, quantum diffraction and symmetry effects of charge carrier screening

  7. An Efficient Algorithm for Modelling Duration in Hidden Markov Models, with a Dramatic Application

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sloth, Jakob

    2008-01-01

    For many years, the hidden Markov model (HMM) has been one of the most popular tools for analysing sequential data. One frequently used special case is the left-right model, in which the order of the hidden states is known. Even if knowledge of the duration of a state is available, it is not possible to represent it explicitly with an HMM. Methods for modelling duration with HMMs do exist (Rabiner in Proc. IEEE 77(2):257-286, [1989]), but they come at the price of increased computational complexity. Here we present an efficient and robust algorithm for modelling duration in HMMs, and this algorithm
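
    One classic workaround, alluded to above, is to chain r copies of a state so that the duration distribution becomes negative-binomial rather than geometric, at the cost of a larger state space; a small illustrative sketch (values arbitrary, not the paper's algorithm) follows:

```python
# Duration modelling by state duplication in a left-right HMM.
import numpy as np

def expand_state(p_stay: float, r: int) -> np.ndarray:
    """Transition matrix of one left-right state split into r sub-states."""
    T = np.zeros((r + 1, r + 1))
    for i in range(r):
        T[i, i] = p_stay           # remain in the current sub-state
        T[i, i + 1] = 1 - p_stay   # advance toward the next real state
    T[r, r] = 1.0                  # absorbing stand-in for the following state
    return T

# Expected duration grows as r/(1 - p_stay); the variance shrinks relative
# to a single geometric state, at the price of a larger state space.
print(expand_state(0.8, 3))
```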

  8. Strong Sector in non-minimal SUSY model

    Directory of Open Access Journals (Sweden)

    Costantini Antonio

    2016-01-01

    We investigate the squark sector of a supersymmetric theory with an extended Higgs sector. We give the mass matrices of the stop and sbottom, comparing the Minimal Supersymmetric Standard Model (MSSM) case and the non-minimal case. We discuss the impact of the extra superfields on the decay channels of the stop searched for at the LHC.

  9. Variational Boussinesq model for strongly nonlinear dispersive waves

    NARCIS (Netherlands)

    Lawrence, C.; Adytia, D.; van Groesen, E.

    2018-01-01

    For wave tank, coastal and oceanic applications, a fully nonlinear Variational Boussinesq model with optimized dispersion is derived and a simple Finite Element implementation is described. Improving on a previous weakly nonlinear version, high waves over flat and varying bottom are shown to be

  10. From strong to weak coupling in holographic models of thermalization

    Energy Technology Data Exchange (ETDEWEB)

    Grozdanov, Sašo; Kaplis, Nikolaos [Instituut-Lorentz for Theoretical Physics, Leiden University,Niels Bohrweg 2, Leiden 2333 CA (Netherlands); Starinets, Andrei O. [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom)

    2016-07-29

    We investigate the analytic structure of thermal energy-momentum tensor correlators at large but finite coupling in quantum field theories with gravity duals. We compute corrections to the quasinormal spectra of black branes due to the presence of higher derivative R² and R⁴ terms in the action, focusing on the dual to N=4 SYM theory and Gauss-Bonnet gravity. We observe the appearance of new poles in the complex frequency plane at finite coupling. The new poles interfere with hydrodynamic poles of the correlators, leading to the breakdown of the hydrodynamic description at a coupling-dependent critical value of the wave-vector. The dependence of the critical wave vector on the coupling implies that the range of validity of the hydrodynamic description increases monotonically with the coupling. The behavior of the quasinormal spectrum at large but finite coupling may be contrasted with the known properties of the hierarchy of relaxation times determined by the spectrum of a linearized kinetic operator at weak coupling. We find that the ratio of a transport coefficient such as viscosity to the relaxation time determined by the fundamental non-hydrodynamic quasinormal frequency changes rapidly in the vicinity of infinite coupling but flattens out for weaker coupling, suggesting an extrapolation from strong coupling to the kinetic theory result. We note that the behavior of the quasinormal spectrum is qualitatively different depending on whether the ratio of shear viscosity to entropy density is greater or less than the universal, infinite-coupling value of ℏ/4πk_B. In the former case, the density of poles increases, indicating a formation of branch cuts in the weak coupling limit, and the spectral function shows the appearance of narrow peaks. We also discuss the relation of the viscosity-entropy ratio to conjectured bounds on relaxation time in quantum systems.

  11. Modeling strong-field above-threshold ionization

    International Nuclear Information System (INIS)

    Sundaram, B.; Armstrong, L. Jr.

    1990-01-01

    Above-threshold ionization (ATI) by intense, short-pulse lasers is studied numerically, using the stretched hydrogen atom Hamiltonian. Within our model system, we isolate several mechanisms that contribute to the ATI process. These mechanisms, which involve both excited bound states and continuum states, all invoke intermediate, off-energy-shell transitions. In particular, the importance of excited bound states and off-energy-shell bound-free processes to the ionization mechanism is shown to relate to a simple physical criterion. These processes point to important differences between the interpretation of ionization characteristics for short pulses and that for longer pulses. Our analysis concludes that although components of ATI admit of simple, few-state modeling, the ultimate synthesis points to a highly complex mechanism

  12. Modelling of strongly coupled particle growth and aggregation

    International Nuclear Information System (INIS)

    Gruy, F; Touboul, E

    2013-01-01

    The mathematical modelling of the dynamics of particle suspensions is based on the population balance equation (PBE). The PBE is an integro-differential equation for the population density, which is a function of time t, space coordinates and internal parameters. Usually, the particle is characterized by a single parameter, e.g. the matter volume v. The PBE consists of several terms, for instance the growth rate and the aggregation rate; the growth rate is thus a function of v and t. In classical modelling, growth and aggregation are considered independently, i.e. they are not coupled. However, applications occur where growth and aggregation are coupled, i.e. the change of the particle volume with time depends on its initial value v₀, which in turn is set by an aggregation event. As a consequence, the dynamics of the suspension does not obey the classical von Smoluchowski equation. This paper revisits this problem by proposing a new model using a bivariate PBE (with two internal variables: v and v₀) and by solving the PBE by means of a numerical method and Monte Carlo simulations. This is applied to a physicochemical system with a simple growth law and a constant aggregation kernel.
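
    A toy Monte Carlo version of the coupled process can make the coupling concrete; the constant aggregation kernel and the growth law depending on v₀ below are illustrative assumptions, not the paper's exact system:

```python
# Toy event-driven Monte Carlo: growth between aggregation events, with the
# reference volume v0 reset at each aggregation event (coupling the two).
import random

def simulate(n0=200, kernel=1e-3, growth=0.05, t_end=5.0, seed=1):
    rng = random.Random(seed)
    parts = [{"v": 1.0, "v0": 1.0} for _ in range(n0)]
    t = 0.0
    while t < t_end and len(parts) > 1:
        n = len(parts)
        total_rate = kernel * n * (n - 1) / 2    # constant-kernel aggregation
        dt = rng.expovariate(total_rate)         # time to the next event
        for p in parts:                          # deterministic growth, rate ~ v0
            p["v"] += growth * p["v0"] * dt
        i, j = rng.sample(range(n), 2)           # random pair aggregates
        v_new = parts[i]["v"] + parts[j]["v"]
        parts[i] = {"v": v_new, "v0": v_new}     # v0 reset by the aggregation event
        parts.pop(j)
        t += dt
    return parts

print(len(simulate()))
```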

  13. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  14. The strong interactions beyond the standard model of particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Bergner, Georg [Muenster Univ. (Germany). Inst. for Theoretical Physics

    2016-11-01

    SuperMUC is one of the most convenient high performance machines for our project, since it offers high performance and flexibility for different applications. This is of particular importance for investigations of new theories, where on the one hand the parameters and systematic uncertainties have to be estimated in smaller simulations, and on the other hand a large computational performance is needed for estimating the scale at zero temperature. Our project is only a first investigation of the new physics beyond the standard model of particle physics, and we hope to proceed with our studies towards more involved Technicolour candidates, supersymmetric QCD, and extended supersymmetry.

  15. Note on the hydrodynamic description of thin nematic films: Strong anchoring model

    KAUST Repository

    Lin, Te-Sheng; Cummings, Linda J.; Archer, Andrew J.; Kondic, Lou; Thiele, Uwe

    2013-01-01

    We discuss the long-wave hydrodynamic model for a thin film of nematic liquid crystal in the limit of strong anchoring at the free surface and at the substrate. We rigorously clarify how the elastic energy enters the evolution equation for the film thickness in order to provide a solid basis for further investigation: several conflicting models exist in the literature that predict qualitatively different behaviour. We consolidate the various approaches and show that the long-wave model derived through an asymptotic expansion of the full nemato-hydrodynamic equations with consistent boundary conditions agrees with the model one obtains by employing a thermodynamically motivated gradient dynamics formulation based on an underlying free energy functional. As a result, we find that in the case of strong anchoring the elastic distortion energy is always stabilising. To support the discussion in the main part of the paper, an appendix gives the full derivation of the evolution equation for the film thickness via asymptotic expansion. © 2013 AIP Publishing LLC.

  16. Self-consistent field model for strong electrostatic correlations and inhomogeneous dielectric media.

    Science.gov (United States)

    Ma, Manman; Xu, Zhenli

    2014-12-28

    Electrostatic correlations and variable permittivity of electrolytes are essential for exploring many chemical and physical properties of interfaces in aqueous solutions. We propose a continuum electrostatic model for the treatment of these effects in the framework of self-consistent field theory. The model incorporates a space- or field-dependent dielectric permittivity and an excluded ion-size effect for the correlation energy. This results in a self-energy-modified Poisson-Nernst-Planck or Poisson-Boltzmann equation together with state equations for the self energy and the dielectric function. We show that the ionic size is of significant importance in predicting a finite self energy for an ion in an inhomogeneous medium. An asymptotic approximation is proposed for the solution of a generalized Debye-Hückel equation, which has been shown to capture the ionic correlation and dielectric self energy. Through simulations of the ionic distribution surrounding a macroion, the modified self-consistent field model is shown to agree with particle-based Monte Carlo simulations. Numerical results for symmetric and asymmetric electrolytes demonstrate that the model is able to predict the charge inversion in the high-correlation regime in the presence of multivalent interfacial ions, which is beyond mean-field theory, and also show a strong effect on the double-layer structure due to the space- or field-dependent dielectric permittivity.
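
    As a point of reference for the corrections discussed above, the classical mean-field baseline can be sketched in a few lines: the planar Poisson-Boltzmann equation for a 1:1 salt in scaled units, solved by Newton iteration (grid, wall potential and boundary conditions are illustrative; the paper's model adds self-energy and dielectric-variation terms on top of this):

```python
# Classical planar Poisson-Boltzmann, psi'' = sinh(psi), Newton iteration.
import numpy as np

n, L, psi0 = 200, 10.0, 4.0      # grid points, domain (Debye lengths), wall potential
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
psi = psi0 * np.exp(-x)          # initial guess: linearized (Debye-Hueckel) decay

for _ in range(50):
    F = np.zeros(n)
    J = np.zeros((n, n))
    F[0], J[0, 0] = psi[0] - psi0, 1.0     # Dirichlet condition at the wall
    F[-1], J[-1, -1] = psi[-1], 1.0        # potential vanishes in the bulk
    for i in range(1, n - 1):
        F[i] = (psi[i-1] - 2*psi[i] + psi[i+1]) / h**2 - np.sinh(psi[i])
        J[i, i-1] = J[i, i+1] = 1.0 / h**2
        J[i, i] = -2.0 / h**2 - np.cosh(psi[i])
    psi -= np.linalg.solve(J, F)           # Newton update
    if np.abs(F).max() < 1e-10:
        break
```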

  18. Stability in a fiber bundle model: Existence of strong links and the effect of disorder

    Science.gov (United States)

    Roy, Subhadeep

    2018-05-01

    The present paper deals with a fiber bundle model which contains a fraction α of infinitely strong fibers. The inclusion of such an unbreakable fraction has been shown in earlier studies to affect the failure process, especially around a critical value αc. The present work has a twofold purpose: (i) a study of failure abruptness, mainly the brittle to quasi-brittle transition point with varying α, and (ii) the variation of αc as the strength of the disorder introduced in the model is changed. The brittle to quasi-brittle transition is confirmed from the failure abruptness. The value of αc, on the other hand, is obtained from the knowledge of failure abruptness as well as from the statistics of avalanches. It is observed that the brittle to quasi-brittle transition point scales to lower values, suggesting more quasi-brittle-like continuous failure, when α is increased. At the same time, the bundle becomes stronger, as there is a larger number of strong links to support the external stress. High α in a highly disordered bundle leads to an ideal situation in which both the bundle strength and the predictability of the failure process are very high. Also, the critical fraction αc required to make the model deviate from the conventional results increases with decreasing strength of disorder. The analytical expression for αc shows good agreement with the numerical results. Finally, the findings of the paper are compared with previous results and real-life applications of composite materials.
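
    The basic setting is easy to reproduce numerically. Below is a minimal equal-load-sharing sketch with a fraction α of unbreakable fibers (uniform threshold disorder and all parameters are illustrative; the paper's model and analysis are more general):

```python
# Equal-load-sharing fiber bundle with a fraction alpha of unbreakable fibers.
import numpy as np

def load_curve(n=100_000, alpha=0.1, x=np.linspace(0, 2, 400),
               rng=np.random.default_rng(0)):
    th = rng.uniform(0.0, 1.0, n)           # breaking thresholds (uniform disorder)
    th[rng.random(n) < alpha] = np.inf      # a fraction alpha never breaks
    # force per fiber at stretch x equals x times the surviving fraction
    surviving = (th[None, :] > x[:, None]).mean(axis=1)
    return x * surviving

f = load_curve()
print(f.max())   # with alpha = 0 this peaks at 1/4, the classic ELS result
```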

  19. Evaluating Predictive Models of Software Quality

    Science.gov (United States)

    Ciaschini, V.; Canaparo, M.; Ronchieri, E.; Salomoni, D.

    2014-06-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, and to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.

  1. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort of applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
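
    A minimal sketch of the modelling approach follows (scikit-learn assumed; the mix-proportion data below are random placeholders with a toy strength law, not the literature datasets used by the authors):

```python
# Backpropagation network predicting compressive strength from mix features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# inputs: cement, water, fine agg., coarse agg. (kg/m3), MAS (mm), slump (mm)
X = rng.uniform([250, 140, 600, 900, 10, 25],
                [500, 220, 900, 1200, 40, 200], size=(300, 6))
y = 12 * X[:, 0] / X[:, 1] + rng.normal(0, 2, 300)   # toy strength ~ cement/water

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))
```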

  2. Modelling and prediction of non-stationary optical turbulence behaviour

    NARCIS (Netherlands)

    Doelman, N.J.; Osborn, J.

    2016-01-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument

  3. The ordered network structure of M ≥ 6 strong earthquakes and its prediction in the Jiangsu-South Yellow Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Men, Ke-Pei [Nanjing Univ. of Information Science and Technology (China). College of Mathematics and Statistics; Cui, Lei [California Univ., Santa Barbara, CA (United States). Applied Probability and Statistics Dept.

    2013-05-15

    The Jiangsu-South Yellow Sea region is one of the key seismic monitoring defence areas in the eastern part of China. Since 1846, M ≥ 6 strong earthquakes have shown an obvious commensurability and orderliness in this region. The main orderly values are 74-75 a, 57-58 a, 11-12 a, and 5-6 a, with 74-75 a and 57-58 a playing an outstanding predictive role. According to the information prediction theory of Wen-Bo Weng, we conceived the M ≥ 6 strong earthquake ordered network structure in the South Yellow Sea and the whole region. Based on this, we analyzed and discussed the variation of seismicity in detail and also made a trend prediction of M ≥ 6 strong earthquakes in the future. The results showed that a new quiet episode began in 1998, which may continue until about 2042; the first M ≥ 6 strong earthquake of the next active episode will probably occur around 2053, likely located in the sea area of the South Yellow Sea; and the second and third ones, or a strong earthquake swarm, will probably occur around 2058 and 2070, respectively. (orig.)

  4. Departures from predicted type II behavior in dirty strong-coupling superconductors

    International Nuclear Information System (INIS)

    Park, J.C.; Neighbor, J.E.; Shiffman, C.A.

    1976-01-01

    Calorimetric measurements of the Ginsburg-Landau parameters for Pb-Sn and Pb-Bi alloys show good agreement with the calculations of Rainer and Bergmann for κ₁(t)/κ₁(1). However, the calculations of Rainer and Usadel for κ₂(t)/κ₂(1) substantially underestimate the enhancements due to strong coupling. (Auth.)

  5. Modeling loss and backscattering in a photonic-bandgap fiber using strong perturbation

    Science.gov (United States)

    Zamani Aghaie, Kiarash; Digonnet, Michel J. F.; Fan, Shanhui

    2013-02-01

    We use coupled-mode theory with strong perturbation to model the loss and backscattering coefficients of a commercial hollow-core fiber (NKT Photonics' HC-1550-02 fiber) induced by the frozen-in longitudinal perturbations of the fiber cross section. Strong perturbation is used, for the first time to the best of our knowledge, because the large difference between the refractive indices of the two fiber materials (silica and air) makes the conventional weak-perturbation approach less accurate. We first study the loss and backscattering using the mathematical description of conventional surface-capillary waves (SCWs). This model implicitly assumes that the mechanical waves on the core wall of a PBF have the same power spectral density (PSD) as the waves that develop on an infinitely thick cylindrical tube with the same diameter as the PBF core. The loss and backscattering coefficients predicted with this thick-wall SCW roughness are 0.5 dB/km and 1.1×10⁻¹⁰ mm⁻¹, respectively. These values are more than one order of magnitude smaller than the measured values (20-30 dB/km and ~1.5×10⁻⁹ mm⁻¹, respectively). This result suggests that the thick-wall SCW PSD is not representative of the roughness of our fiber. We found that this discrepancy occurs at least in part because the effect of the finite thickness of the silica membranes (only ~120 nm) is neglected. We present a new expression for the PSD that takes this finite thickness into account and demonstrates that the finite thickness substantially increases the roughness. The loss and backscattering coefficients predicted with this thin-film SCW PSD are 30 dB/km and 1.3×10⁻⁹ mm⁻¹, both close to the measured values. We also show that the thin-film SCW PSD accurately predicts the roughness PSD measured by others in a solid-core photonic-crystal fiber.

  6. Model Prediction Control For Water Management Using Adaptive Prediction Accuracy

    NARCIS (Netherlands)

    Tian, X.; Negenborn, R.R.; Van Overloop, P.J.A.T.M.; Mostert, E.

    2014-01-01

    In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delay and uncertainties into account, can be designed for multi-objective management problems and for

  7. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  8. Exchange and spin-fluctuation superconducting pairing in the strong correlation limit of the Hubbard model

    International Nuclear Information System (INIS)

    Plakida, N. M.; Anton, L.; Adam, S. . Department of Theoretical Physics, Horia Hulubei National Institute for Physics and Nuclear Engineering, PO Box MG-6, RO-76900 Bucharest - Magurele; RO); Adam, Gh. . Department of Theoretical Physics, Horia Hulubei National Institute for Physics and Nuclear Engineering, PO Box MG-6, RO-76900 Bucharest - Magurele; RO)

    2001-01-01

    A microscopic theory of superconductivity in the two-band singlet-hole Hubbard model, in the strong coupling limit in a paramagnetic state, is developed. The model Hamiltonian is obtained by projecting the p-d model onto an asymmetric Hubbard model with the lower Hubbard subband occupied by one-hole Cu d-like states and the upper Hubbard subband occupied by two-hole p-d singlet states. The model requires only two microscopic parameters, the p-d hybridization parameter t and the charge-transfer gap Δ. It was previously shown to secure an appropriate description of the normal state properties of the high-T_c cuprates. To treat the strong correlations rigorously, the Hubbard operator technique within the projection method for the Green function is used. The Dyson equation is derived. In the molecular field approximation, d-wave superconducting pairing of conventional hole (electron) pairs in one Hubbard subband is found, which is mediated by the exchange interaction given by the interband hopping, J_ij = 4(t_ij)²/Δ. The normal and anomalous components of the self-energy matrix are calculated in the self-consistent Born approximation for the electron-spin-fluctuation scattering mediated by the kinematic interaction of second order in the intraband hopping. The derived numerical and analytical solutions predict the occurrence of singlet d_{x²-y²}-wave pairing in both the d-hole and singlet Hubbard subbands. The gap functions and T_c are calculated for different hole concentrations. The exchange interaction is shown to be the most important pairing interaction in the Hubbard model in the strong correlation limit, while the spin-fluctuation coupling results only in a moderate enhancement of T_c. The smaller weight of the latter comes from two specific features: its vanishing inside the Brillouin zone (BZ) along the lines |k_x| + |k_y| = π pointing towards the hot spots, and the existence of a small energy shell within which the pairing is effective. By

  9. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  10. STRAP Is a Strong Predictive Marker of Adjuvant Chemotherapy Benefit in Colorectal Cancer

    OpenAIRE

    Buess, Martin; Terracciano, Luigi; Reuter, Jurgen; Ballabeni, Pierluigi; Boulay, Jean-Louis; Laffer, Urban; Metzger, Urs; Herrmann, Richard; Rochlitz, Christoph

    2004-01-01

    BACKGROUND: Molecular predictors for the effectiveness of adjuvant chemotherapy in colorectal cancer are of considerable clinical interest. To this aim, we analyzed the serine threonine receptor-associated protein (STRAP), an inhibitor of TGF-β signaling, with regard to prognosis and prediction of adjuvant 5-FU chemotherapy benefit. The gene copy status of STRAP was determined using quantitative real-time polymerase chain reaction in 166 colorectal tumor biopsies, which had been collected fro...

  11. Strong interactions between learned helplessness and risky decision-making in a rat gambling model.

    Science.gov (United States)

    Nobrega, José N; Hedayatmofidi, Parisa S; Lobo, Daniela S

    2016-11-18

    Risky decision-making is characteristic of depression and of addictive disorders, including pathological gambling. However, it is not clear whether a propensity to risky choices predisposes to depressive symptoms or whether the converse is the case. Here we tested the hypothesis that rats showing risky decision-making in a rat gambling task (rGT) would be more prone to depressive-like behaviour in the learned helplessness (LH) model. Results showed that baseline rGT choice behaviour did not predict escape deficits in the LH protocol. In contrast, exposure to the LH protocol resulted in a significant increase in risky rGT choices on retest. Unexpectedly, control rats subjected only to escapable stress in the LH protocol showed a subsequent decrease in riskier rGT choices. Further analyses indicated that the LH protocol affected primarily rats with high baseline levels of risky choices, and that among these it had opposite effects in rats exposed to LH-inducing stress compared to rats exposed only to the escape trials. Together, these findings suggest that while baseline risky decision-making may not predict LH behaviour, it interacts strongly with LH conditions in modulating subsequent decision-making behaviour. The suggested possibility that stress controllability may be a key factor should be further investigated.

  12. South African seasonal rainfall prediction performance by a coupled ocean-atmosphere model

    CSIR Research Space (South Africa)

    Landman, WA

    2010-12-01

    Full Text Available Evidence is presented that coupled ocean-atmosphere models can already outscore computationally less expensive atmospheric models. However, if the atmospheric models are forced with highly skillful SST predictions, they may still be a very strong...

  13. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
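
    The local-model forecasting idea described above (delay embedding of the observable, a search for dynamical neighbors, and an analog forecast from their futures) can be sketched in a few lines. The embedding dimension, delay, and neighbor count below are illustrative placeholders, not the paper's optimized settings:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct a phase space from a scalar series by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_model_forecast(x, dim=3, tau=2, k=5, steps=1):
    """Forecast by averaging the futures of the k nearest dynamical
    neighbors of the current state in the reconstructed phase space."""
    emb = delay_embed(x, dim, tau)
    current, history = emb[-1], emb[:-steps]      # keep states that still have a future
    dists = np.linalg.norm(history - current, axis=1)
    neighbors = np.argsort(dists)[:k]
    # row i of the embedding ends at series index i + (dim - 1) * tau
    futures = [x[i + (dim - 1) * tau + steps] for i in neighbors]
    return float(np.mean(futures))

# toy usage: a noisy oscillation standing in for water-level observations
t = np.linspace(0, 20 * np.pi, 2000)
series = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(local_model_forecast(series, dim=3, tau=5, k=10, steps=1))
```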

  14. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  15. Predictive user modeling with actionable attributes

    NARCIS (Netherlands)

    Zliobaite, I.; Pechenizkiy, M.

    2013-01-01

    Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target

  16. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain, which...

  17. Testing strong factorial invariance using three-level structural equation modeling

    NARCIS (Netherlands)

    Jak, Suzanne

    Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias) across groups. Although this approach is

  18. Robust predictions of the interacting boson model

    International Nuclear Information System (INIS)

    Casten, R.F.; Koeln Univ.

    1994-01-01

    While most recognized for its symmetries and algebraic structure, the IBA model has other less-well-known but equally intrinsic properties which give unavoidable, parameter-free predictions. These predictions concern central aspects of low-energy nuclear collective structure. This paper outlines these "robust" predictions and compares them with the data.

  19. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  20. Acute post-cessation smoking: a strong predictive factor for metabolic syndrome among adult Saudis

    International Nuclear Information System (INIS)

    AlDaghri, Nasser M.

    2009-01-01

    To determine the influence of tobacco exposure in the development of metabolic syndrome (MS) in the adult Saudi population. Six hundred and sixty-four adults (305 males and 359 females) aged 25-70 years were included in this cross-sectional study conducted at the King Abdul Aziz University Hospital, between June 2006 and May 2007. We classified the participants into non-smokers, smokers, and ex-smokers (defined as complete cessation for 1-2 years). All subjects were screened for the presence of MS using the modified American Heart Association/National Heart, Lung and Blood Institute (AHA/NHLBI), International Diabetes Federation (IDF) and World Health Organization (WHO) definitions. The prevalence of metabolic syndrome was highest among ex-smokers regardless of the definition used. The relative risk of harboring MS was more than doubled for ex-smokers (RR 2.23; 95% CI 1.06-4.73) as compared to non-smokers (RR 2.78; 95% CI 1.57-4.92) (p=0.009). Acute post-cessation smoking is a strong predictor for MS among male and female Arabs. Smoking cessation programs should include a disciplined lifestyle and dietary intervention to counteract the MS-augmenting side-effect of smoking cessation. (author)

  1. The influence of fragmentation models on the determination of the strong coupling constant in e+e- annihilation into hadrons

    International Nuclear Information System (INIS)

    Behrend, H.J.; Chen, C.; Fenner, H.; Schachter, M.J.; Schroeder, V.; Sindt, H.; D'Agostini, G.; Apel, W.D.; Banerjee, S.; Bodenkamp, J.; Chrobaczek, D.; Engler, J.; Fluegge, G.; Fries, D.C.; Fues, W.; Gamerdinger, K.; Hopp, G.; Kuester, H.; Mueller, H.; Randoll, H.; Schmidt, G.; Schneider, H.; Boer, W. de; Buschhorn, G.; Grindhammer, G.; Grosse-Wiesmann, P.; Gunderson, B.; Kiesling, C.; Kotthaus, R.; Kruse, U.; Lierl, H.; Lueers, D.; Oberlack, H.; Schacht, P.; Colas, P.; Cordier, A.; Davier, M.; Fournier, D.; Grivaz, J.F.; Haissinski, J.; Journe, V.; Klarsfeld, A.; Laplanche, F.; Le Diberder, F.; Mallik, U.; Veillet, J.J.; Field, J.H.; George, R.; Goldberg, M.; Grossetete, B.; Hamon, O.; Kapusta, F.; Kovacs, F.; London, G.; Poggioli, L.; Rivoal, M.; Aleksan, R.; Bouchez, J.; Carnesecchi, G.; Cozzika, G.; Ducros, Y.; Gaidot, A.; Jadach, S.; Lavagne, Y.; Pamela, J.; Pansart, J.P.; Pierre, F.

    1983-01-01

    Hadronic events obtained with the CELLO detector at PETRA were compared with first-order QCD predictions using two different models for the fragmentation of quarks and gluons, the Hoyer model and the Lund model. Both models are in reasonable agreement with the data, although they do not completely reproduce the details of many distributions. Several methods have been applied to determine the strong coupling constant α_s. Although within one model the value of α_s varies by 20% among the different methods, the values determined using the Lund model are 30% or more larger (depending on the method used) than the values determined with the Hoyer model. Our results using the Hoyer model are in agreement with previous results based on this approach. (orig.)

  2. Connection between strong and weak coupling in the mean spherical model in 1 + 1 dimensions

    International Nuclear Information System (INIS)

    Banks, J.L.

    1980-01-01

    I extend the strong-coupling expansion obtained by Srednicki for the β-function of the mean spherical model in 1 + 1 dimensions in the Hamiltonian formulation. I use ordinary and two-point Padé approximants to extrapolate this result to weak coupling. I find a reasonably smooth connection between strong and weak coupling, and good numerical agreement with the exact solution. (orig.)
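
    Padé extrapolation of a truncated series, the technique used above, is easy to reproduce for a generic expansion. The sketch below builds an [L/M] approximant from series coefficients by a linear solve; the exp(x) series is a stand-in, not the β-function of the paper:

```python
import numpy as np
from math import factorial

def pade(coeffs, L, M):
    """Coefficients (p, q) of the [L/M] Pade approximant to the truncated
    series sum_n coeffs[n] * x**n; needs at least L + M + 1 coefficients."""
    c = np.asarray(coeffs, dtype=float)
    # denominator q (with q0 = 1) from c_{L+j} + sum_k q_k c_{L+j-k} = 0, j = 1..M
    A = np.array([[c[L + j - k] if L + j - k >= 0 else 0.0
                   for k in range(1, M + 1)] for j in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1 : L + M + 1])))
    # numerator by truncated convolution of q with the series
    p = np.array([sum(q[k] * c[n - k] for k in range(min(n, M) + 1))
                  for n in range(L + 1)])
    return p, q

# stand-in series: exp(x); the [2/2] approximant extrapolates well past x ~ 1
c = [1.0 / factorial(n) for n in range(5)]
p, q = pade(c, 2, 2)
x = 1.5
print(np.polyval(p[::-1], x) / np.polyval(q[::-1], x), np.exp(x))
```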

  3. Catalytic cracking models developed for predictive control purposes

    Directory of Open Access Journals (Sweden)

    Dag Ljungqvist

    1993-04-01

    The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.

  4. Extracting falsifiable predictions from sloppy models.

    Science.gov (United States)

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  5. The prediction of epidemics through mathematical modeling.

    Science.gov (United States)

    Schaus, Catherine

    2014-01-01

    Mathematical models may be resorted to in an endeavor to predict the development of epidemics. The SIR model is one such application. Still too approximate, the use of statistics awaits more data in order to come closer to reality.
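
    For reference, the SIR model mentioned above is a three-compartment ODE system that integrates in a few lines; the rates and initial conditions here are illustrative, not fitted to any real epidemic:

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    """SIR compartments: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I,
    dR/dt = gamma*I."""
    s, i, r = y
    n = s + i + r
    return [-beta * s * i / n, beta * s * i / n - gamma * i, gamma * i]

t = np.linspace(0.0, 120.0, 400)                       # days
sol = odeint(sir, [9990.0, 10.0, 0.0], t, args=(0.3, 0.1))
print("peak number of infectious individuals:", sol[:, 1].max())
```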

  6. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  7. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    Science.gov (United States)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; hydraulic head observations alone cannot constrain the uncertainty of HETT, however an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model

  8. Testing strong factorial invariance using three-level structural equation modeling

    Directory of Open Access Journals (Sweden)

    Suzanne eJak

    2014-07-01

    Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias) across groups. Although this approach is possible in principle, it is hardly practical when the number of groups is large or when the group size is relatively small. Jak, Oort and Dolan (2013) showed how strong factorial invariance across large numbers of groups can be tested in a multilevel structural equation modeling framework, by treating group as a random instead of a fixed variable. In the present study, this model is extended for use with three-level data. The proposed method is illustrated with an investigation of strong factorial invariance across 156 school classes and 50 schools in a Dutch dyscalculia test, using three-level structural equation modeling.

  9. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing

  10. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One-Mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ), using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 ml·kg⁻¹·min⁻¹ to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones.
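
    The validation statistics reported above (multiple correlation, RMSE, and criterion-referenced agreement into Healthy Fitness Zones) are straightforward to compute. A minimal sketch on synthetic data follows; the VO2 Peak cutoff is a made-up placeholder, not FITNESSGRAM's actual standard:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def validation_metrics(measured, predicted, hfz_cutoff):
    """Correlation, RMSE, and criterion-referenced agreement into a
    Healthy Fitness Zone defined by an assumed VO2 Peak cutoff."""
    r = np.corrcoef(measured, predicted)[0, 1]
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    in_zone_true = measured >= hfz_cutoff
    in_zone_pred = predicted >= hfz_cutoff
    kappa = cohen_kappa_score(in_zone_true, in_zone_pred)
    agreement = np.mean(in_zone_true == in_zone_pred)
    return r, rmse, kappa, agreement

rng = np.random.default_rng(3)
measured = rng.normal(42.0, 7.0, 90)              # ml/kg/min, synthetic
predicted = measured + rng.normal(0.0, 6.0, 90)   # synthetic prediction error
print(validation_metrics(measured, predicted, hfz_cutoff=40.0))
```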

  11. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models and the actual and potential clinical impact of this body of literature is poorly understood. © 2015 American Heart Association, Inc.
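
    Since the c-statistic is the discrimination measure most of the catalogued CPMs report, a compact way to compute it is as the probability that a randomly chosen event case is scored higher than a randomly chosen non-event case (the Mann-Whitney formulation); the data below are synthetic:

```python
import numpy as np

def c_statistic(y, score):
    """Concordance: P(score of a random event case > score of a random
    non-event case), counting ties as one half."""
    pos = score[y == 1][:, None]
    neg = score[y == 0][None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

rng = np.random.default_rng(5)
y = rng.integers(0, 2, 1000)                      # synthetic outcomes
score = 0.6 * y + rng.normal(0.0, 0.5, 1000)      # synthetic risk scores
print(c_statistic(y, score))
```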

  12. EFEM vs. XFEM: a comparative study for modeling strong discontinuity in geomechanics

    OpenAIRE

    Das, Kamal C.; Ausas, Roberto Federico; Segura Segarra, José María; Narang, Ankur; Rodrigues, Eduardo; Carol, Ignacio; Lakshmikantha, Ramasesha Mookanahallipatna; Mello,, U.

    2015-01-01

    Modelling of large faults or weak planes as strong or weak discontinuities is of major importance for assessing the geomechanical behaviour of mining/civil tunnels, reservoirs, etc. For modelling fractures in geomechanics, prior art has been limited to interface elements, which suffer from numerical instability and require faults to be aligned with element edges. In this paper, we consider a comparative study of finite elements for capturing strong discontinuities by means of elemental (EFEM)...

  13. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  14. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    pumps, heat tanks, electrical vehicle battery charging/discharging, wind farms, power plants). 2. Embed forecasting methodologies for the weather (e.g. temperature, solar radiation), the electricity consumption, and the electricity price in a predictive control system. 3. Develop optimization algorithms... Chapter 3 introduces Model Predictive Control (MPC) including state estimation, filtering and prediction for linear models. Chapter 4 simulates the models from Chapter 2 with the certainty equivalent MPC from Chapter 3. An economic MPC minimizes the costs of consumption based on real electricity prices... that determined the flexibility of the units. A predictive control system easily handles constraints, e.g. limitations in power consumption, and predicts the future behavior of a unit by integrating predictions of electricity prices, consumption, and weather variables. The simulations demonstrate the expected...

  15. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm has typically been used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the defect that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method is proposed to implement the fusion of multiple models, using the strong tracking filter to correct the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are deduced to update the probabilities of the multiple tracking models. From the simulation results we can see that the proposed tracking algorithm MM-GGIW-CPHD can effectively deal with the combination/spawning of groups, and the tracking error of group targets in the maneuvering stage is decreased.

  16. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  17. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
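
    The classical receding-horizon formulation the book starts from can be illustrated with a toy linear-quadratic problem: minimize a stage cost over a horizon subject to the model dynamics, a hard input constraint, and a terminal constraint for closed-loop stability, then apply only the first move. The double-integrator matrices below are illustrative, and the sketch assumes the cvxpy package:

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # illustrative double integrator
B = np.array([[0.5], [1.0]])
Q, R, N = np.eye(2), 0.1 * np.eye(1), 10

def mpc_step(x0):
    x, u = cp.Variable((2, N + 1)), cp.Variable((1, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(u[:, k]) <= 1.0]          # hard input constraint
    cons += [x[:, N] == 0]                        # terminal constraint for stability
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u[:, 0].value

state = np.array([3.0, 0.0])
for _ in range(5):
    state = A @ state + B @ mpc_step(state)       # apply only the first move
    print(state)
```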

  18. Classical trajectory perspective of atomic ionization in strong laser fields. Semiclassical modeling

    International Nuclear Information System (INIS)

    Liu, Jie

    2014-01-01

    Dealing with timely and interesting issues in strong-laser physics, this book illustrates complex strong-field atomic ionization with the simple semiclassical model of the classical trajectory perspective for the first time, and provides a theoretical model that can be used to account for recent experiments. The ionization of atoms and molecules in strong laser fields is an active field in modern physics and has versatile applications in areas such as attosecond physics, X-ray generation, inertial confinement fusion (ICF), and medical science. Classical Trajectory Perspective of Atomic Ionization in Strong Laser Fields covers the basic concepts in this field and discusses many interesting topics using the semiclassical model of classical trajectory ensemble simulation, which is one of the most successful ionization models and has the advantages of a clear picture, feasible computing and accounting for many exquisite experiments quantitatively. The book also presents many applications of the model to topics such as single ionization, double ionization, neutral atom acceleration and other timely issues in strong-field physics, delivering useful messages to readers through the classical trajectory perspective on strong-field atomic ionization. The book is intended for graduate students and researchers in the field of laser physics, atomic and molecular physics and theoretical physics. Dr. Jie Liu is a professor at the Institute of Applied Physics and Computational Mathematics, China, and at Peking University.

  19. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  20. Deep Predictive Models in Interactive Music

    OpenAIRE

    Martin, Charles P.; Ellefsen, Kai Olav; Torresen, Jim

    2018-01-01

    Automatic music generation is a compelling task where much recent progress has been made with deep learning models. In this paper, we ask how these models can be integrated into interactive music systems; how can they encourage or enhance the music making of human users? Musical performance requires prediction to operate instruments, and perform in groups. We argue that predictive models could help interactive systems to understand their temporal context, and ensemble behaviour. Deep learning...

  1. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optimal...... steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  2. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

    predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines' locations in a wind farm. More specifically, instead of using... a wind speed distribution whose parameters are known or estimated, the parameters are considered as random, with variations governed by probability distributions. A Bayesian predictive model for a Rayleigh distribution, which has only a single scale parameter, has been proposed. Also, closed-form posterior... and predictive inferences under different reasonable choices of prior distribution in sensitivity analysis have been presented....
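
    The single-scale-parameter Rayleigh setup described above admits a conjugate Bayesian treatment: with an inverse-gamma prior on σ², the posterior is again inverse-gamma, and the posterior predictive for a new wind speed can be simulated directly. The prior hyperparameters and data below are synthetic assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.rayleigh(scale=8.0, size=200)     # synthetic wind speeds at one site

# Rayleigh likelihood with scale parameter sigma^2 is conjugate to an
# inverse-gamma prior: posterior is IG(a0 + n, b0 + sum(v_i^2) / 2)
a0, b0 = 2.0, 50.0                        # assumed prior hyperparameters
a_n = a0 + v.size
b_n = b0 + 0.5 * np.sum(v**2)

# posterior predictive by simulation: draw sigma^2, then a new wind speed
sigma2 = b_n / rng.gamma(a_n, 1.0, size=10_000)   # inverse-gamma draws
v_new = rng.rayleigh(scale=np.sqrt(sigma2))
print(v_new.mean(), np.quantile(v_new, [0.05, 0.95]))
```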

  3. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  4. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis-associated fingerprint changes are a significant problem and affect fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis was conducted. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and a model validation group. The predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts that verification will almost always fail, while the presence of both minor criteria and the presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting the risk of fingerprint verification failure in patients with hand dermatitis. © 2014 The International Society of Dermatology.
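
    The derived decision rule is simple enough to state as code. The sketch below transcribes the major/minor criteria exactly as given in the abstract; the risk labels mirror its wording:

```python
def verification_risk(dystrophy_area_pct, long_horizontal_lines, long_vertical_lines):
    """Transcription of the published criteria: one major criterion
    (dystrophy area >= 25%) and two minor criteria."""
    if dystrophy_area_pct >= 25:
        return "almost always fails verification"      # major criterion met
    minors = int(long_horizontal_lines) + int(long_vertical_lines)
    if minors == 2:
        return "high risk of verification failure"
    if minors == 1:
        return "low risk of verification failure"
    return "almost always passes verification"         # no criteria met

print(verification_risk(10, True, False))
```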

  5. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  6. Multi-model analysis in hydrological prediction

    Science.gov (United States)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. Therefore ensemble hydrological predictions thus obtained do not present the full range of possible streamflow outcomes, thereby producing ensembles which demonstrate errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These will also be combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been
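
    The PIT histogram used above as the verification tool is computed by ranking each observation within its forecast ensemble; a flat histogram indicates reliability, while a U-shape signals the under-dispersion the abstract discusses. A synthetic illustration, assuming numpy and matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

def pit_values(ensembles, observations):
    """Fraction of ensemble members at or below each observation;
    flat histogram = reliable, U-shape = under-dispersed."""
    ens, obs = np.asarray(ensembles), np.asarray(observations)
    return (ens <= obs[:, None]).mean(axis=1)

rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, 500)                           # observations
fcst_mean = truth + rng.normal(0.0, 1.0, 500)               # forecast error std 1
ens = fcst_mean[:, None] + rng.normal(0.0, 0.4, (500, 40))  # spread too small
plt.hist(pit_values(ens, truth), bins=10, edgecolor="k")
plt.xlabel("PIT value"); plt.ylabel("count"); plt.show()
```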

  7. Goldberger-Treiman relation and the nucleon's mean square radius of strong interaction in the Skyrme model

    International Nuclear Information System (INIS)

    Li Bingan

    1988-01-01

    In this letter it is shown that even in the m_π ≠ 0 case the Goldberger-Treiman relation still holds in the Skyrme model. The root mean square radius of the strong interaction of the nucleon, <r^2>_s^(1/2), is computed in the Skyrme model

  8. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrodinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates...

  9. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  12. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  13. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  14. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  15. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  16. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
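
    A minimal sketch of the SDE idea: the Euler-Maruyama scheme adds a driving Wiener increment to an ODE step, so a deterministic one-compartment elimination model becomes stochastic. The drift, diffusion, and rate constants below are invented for illustration, not the paper's PK/PD models:

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, dt, rng):
    """Simulate dX = f(X, t) dt + g(X, t) dW on [0, t_end] with step dt."""
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))     # Wiener increment
        x[k + 1] = x[k] + f(x[k], k * dt) * dt + g(x[k], k * dt) * dw
    return x

# illustrative one-compartment elimination with multiplicative state noise:
# dC = -k_e * C dt + sigma * C dW
rng = np.random.default_rng(42)
path = euler_maruyama(lambda c, t: -0.3 * c, lambda c, t: 0.1 * c,
                      x0=10.0, t_end=24.0, dt=0.01, rng=rng)
print(path[-1])
```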

  1. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  2. In silico and cell-based analyses reveal strong divergence between prediction and observation of T-cell-recognized tumor antigen T-cell epitopes.

    Science.gov (United States)

    Schmidt, Julien; Guillaume, Philippe; Dojcinovic, Danijel; Karbach, Julia; Coukos, George; Luescher, Immanuel

    2017-07-14

    Tumor exomes provide comprehensive information on mutated, overexpressed genes and aberrant splicing, which can be exploited for personalized cancer immunotherapy. Of particular interest are mutated tumor antigen T-cell epitopes, because neoepitope-specific T cells often are tumoricidal. However, identifying tumor-specific T-cell epitopes is a major challenge. A widely used strategy relies on initial prediction of human leukocyte antigen-binding peptides by in silico algorithms, but the predictive power of this approach is unclear. Here, we used the human tumor antigen NY-ESO-1 (ESO) and the human leukocyte antigen variant HLA-A*0201 (A2) as a model and predicted in silico the 41 highest-affinity, A2-binding 8-11-mer peptides and assessed their binding, kinetic complex stability, and immunogenicity in A2-transgenic mice and on peripheral blood mononuclear cells from ESO-vaccinated melanoma patients. We found that 19 of the peptides strongly bound to A2, 10 of which formed stable A2-peptide complexes and induced CD8+ T cells in A2-transgenic mice. However, only 5 of the peptides induced cognate T cells in humans; these peptides exhibited strong binding and complex stability and contained multiple large hydrophobic and aromatic amino acids. These results were not predicted by in silico algorithms and provide new clues to improving T-cell epitope identification. In conclusion, our findings indicate that only a small fraction of in silico-predicted A2-binding ESO peptides are immunogenic in humans, namely those that have high peptide-binding strength and complex stability. This observation highlights the need for improving in silico predictions of peptide immunogenicity. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  3. Spent fuel: prediction model development

    International Nuclear Information System (INIS)

    Almassy, M.Y.; Bosi, D.M.; Cantley, D.A.

    1979-07-01

    The need for spent fuel disposal performance modeling stems from a requirement to assess the risks involved with deep geologic disposal of spent fuel, and to support licensing and public acceptance of spent fuel repositories. Through the balanced program of analysis, diagnostic testing, and disposal demonstration tests, highlighted in this presentation, the goal of defining risks and of quantifying fuel performance during long-term disposal can be attained.

  4. Navy Recruit Attrition Prediction Modeling

    Science.gov (United States)

    2014-09-01

    Variables such as age, job characteristics, command climate, marital status, and behavior issues prior to recruitment have high correlation with attrition. The additive model fitted is the logistic regression glm(formula = Outcome ~ Age + Gender + Marital + AFQTCat + Pay + Ed + Dep, family = binomial, data = ltraining), with a null deviance of 105441 on 85221 degrees of freedom.

  5. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  6. Predictive Models and Computational Toxicology (II IBAMTOX)

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  7. Finding furfural hydrogenation catalysts via predictive modelling

    NARCIS (Netherlands)

    Strassberger, Z.; Mooijman, M.; Ruijter, E.; Alberts, A.H.; Maldonado, A.G.; Orru, R.V.A.; Rothenberg, G.

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol complexes

  8. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL STRESSES IN ... the transverse residual stress in the x-direction (σx) had a maximum value of 375 MPa ... the finite element method are in fair agreement with the experimental results.

  9. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico; Kryshtafovych, Andriy; Tramontano, Anna

    2009-01-01

    established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic

  10. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
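
    The core quantity in these studies, the empirical emotion-transition likelihood, is just a row-normalized count matrix over an experience-sampled state sequence. A toy sketch with three emotion categories (labels and transition rates invented for illustration):

```python
import numpy as np

def transition_matrix(seq, n_states):
    """Row-normalized empirical transition likelihoods from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

rng = np.random.default_rng(7)
true_T = np.array([[0.7, 0.2, 0.1],     # invented dynamics over 3 emotions
                   [0.3, 0.5, 0.2],
                   [0.2, 0.2, 0.6]])
seq, s = [], 0
for _ in range(5000):
    s = rng.choice(3, p=true_T[s])
    seq.append(s)
print(transition_matrix(seq, 3).round(2))
```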

  11. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...... find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor...... allocates a much lower share of wealth to stocks compared to a standard investor....

  12. Model Predictive Controller for Mobile Robot

    OpenAIRE

    Alireza Rezaee

    2017-01-01

    This paper proposes a Model Predictive Controller (MPC) for control of a P2AT mobile robot. MPC refers to a group of controllers that employ a distinctly identical model of the process to predict its future behavior over an extended prediction horizon. The design of an MPC is formulated as an optimal control problem. This problem is then cast as a linear quadratic regulator (LQR) problem and solved by making use of the Riccati equation. To show the effectiveness of the proposed method this controller is...
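
    The LQR-via-Riccati solution route sketched in the abstract can be reproduced with a standard discrete-time algebraic Riccati solve; the system matrices below are illustrative, not the P2AT robot's model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # illustrative, not the P2AT model
B = np.array([[0.0], [0.1]])
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)                    # Riccati solution
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # optimal feedback gain

x = np.array([1.0, 0.0])
for _ in range(3):
    x = A @ x + B @ (-K @ x)                          # closed-loop step
    print(x)
```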

  13. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years, the improvement in prediction methods has not been very significant, and the traditional statistical prediction methods have the defects of low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect the regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from single statistical principles to the new area of spatial and regional economics.

  14. On the model dependence of the determination of the strong coupling constant in second order QCD from e+e--annihilation into hadrons

    International Nuclear Information System (INIS)

    Achterberg, O.; D'Agostini, G.; Apel, W.D.; Engler, J.; Fluegge, G.; Forstbauer, B.; Fries, D.C.; Fues, W.; Gamerdinger, K.; Henkes, T.; Hopp, G.; Krueger, M.; Kuester, H.; Mueller, H.; Randoll, H.; Schmidt, G.; Schneider, H.; Boer, W. de; Buschhorn, G.; Grindhammer, G.; Grosse-Wiesmann, P.; Gunderson, B.; Kiesling, C.; Kotthaus, R.; Kruse, U.; Lierl, H.; Lueers, D.; Oberlack, H.; Schacht, P.; Bonneaud, G.; Colas, P.; Cordier, A.; Davier, M.; Fournier, D.; Grivaz, J.F.; Haissinski, J.; Journe, V.; Laplanche, F.; Le Diberder, F.; Mallik, U.; Ros, E.; Veillet, J.J.; Behrend, H.J.; Fenner, H.; Schachter, M.J.; Schroeder, V.; Sindt, H.

    1983-12-01

    Hadronic events obtained with the CELLO detector at PETRA are compared with second order QCD predictions using different models for the fragmentation of quarks and gluons into hadrons. We find that the model dependence in the determination of the strong coupling constant persists when going from first to second order QCD calculations. (orig.)

  16. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

    The increasing population and the expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four (4) different models, each considering a different parameter combination, are developed by the authors. Results are compared to the landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
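
    For readers unfamiliar with the weighting techniques named above, the pairwise-comparison (AHP) step reduces to extracting the principal eigenvector of a reciprocal comparison matrix. A minimal sketch with an invented 3x3 matrix (the paper's actual comparison matrices are not given):

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for three parameters
    # (e.g. slope vs. lithology vs. land use); a[i, j] = importance of i over j.
    a = np.array([[1.0,  3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    # AHP weights: normalized principal eigenvector of the comparison matrix.
    vals, vecs = np.linalg.eig(a)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    weights = principal / principal.sum()
    print("AHP weights:", np.round(weights, 3))
    ```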

  17. Towards a non-perturbative study of the strongly coupled standard model

    International Nuclear Information System (INIS)

    Dagotto, E.; Kogut, J.

    1988-01-01

    The strongly coupled standard model of Abbott and Farhi can be a good alternative to the standard model if it has a phase where chiral symmetry is not broken, the SU(2) sector confines and the scalar field is in the symmetric regime. To look for such a phase we did a numerical analysis in the context of lattice gauge theory. To simplify the model we studied a U(1) gauge theory with Higgs fields and four species of dynamical fermions. In this toy model we did not find a phase with the correct properties required by the strongly coupled standard model. We also speculate about a possible solution to this problem using a new phase of the SU(2) gauge theory with a large number of flavors. (orig.)

  18. Inert two-Higgs-doublet model strongly coupled to a non-Abelian vector resonance

    Science.gov (United States)

    Rojas-Abatte, Felipe; Mora, Maria Luisa; Urbina, Jose; Zerwekh, Alfonso R.

    2017-11-01

    We study the possibility of a dark matter candidate having its origin in an extended Higgs sector which, at least partially, is related to a new strongly interacting sector. More concretely, we consider an i2HDM (i.e., a Type-I two-Higgs-doublet model supplemented with a Z2 symmetry under which the nonstandard scalar doublet is odd) based on the gauge group SU(2)_1 × SU(2)_2 × U(1)_Y. We assume that one of the scalar doublets and the standard fermions transform nontrivially under SU(2)_1, while the second doublet transforms under SU(2)_2. Our main hypothesis is that the standard sector is weakly coupled, while the gauge interaction associated with the second group is characterized by a large coupling constant. We explore the consequences of this construction for the phenomenology of the dark matter candidate and show that the presence of the new vector resonance reduces the relic-density saturation region, compared to the usual i2HDM, in the high dark matter mass range. On the collider side, we argue that mono-Z production is the channel which offers the best chances to manifest the presence of the new vector field. We study the departures from the usual i2HDM predictions and show that the discovery of the heavy vector at the LHC is challenging even in the mono-Z channel, since the typical cross sections are of the order of 10^-2 fb.

  19. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive ability.

  20. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability, depending on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive ability.

  1. An Improved Car-Following Model Accounting for Impact of Strong Wind

    Directory of Open Access Journals (Sweden)

    Dawei Liu

    2017-01-01

    In order to investigate the effect of strong wind on the dynamic characteristics of traffic flow, an improved car-following model based on the full velocity difference model is developed in this paper. Wind force is introduced as an influence factor on car-following behavior; of the three components of wind force, lift force and side force are taken into account. A linear stability analysis is carried out and the stability condition of the newly developed model is derived. Numerical analysis explores the effect of strong wind on the spatio-temporal evolution of a small perturbation. The results show that strong wind can significantly affect the stability of traffic flow. Driving safety in strong wind is also studied by comparing the lateral force under different wind speeds with the side friction of the vehicles. Finally, the fuel consumption of vehicles in strong wind conditions is explored; the results show that fuel consumption decreases as wind speed increases.

  2. Strong coupling and quasispinor representations of the SU(3) rotor model

    International Nuclear Information System (INIS)

    Rowe, D.J.; De Guise, H.

    1992-01-01

    We define a coupling scheme, in close parallel to the coupling scheme of Elliott and Wilsdon, in which nucleonic intrinsic spins are strongly coupled to SU(3) spatial wave functions. The scheme is proposed for shell-model calculations in strongly deformed nuclei and for semimicroscopic analyses of rotations in odd-mass nuclei and other nuclei for which the spin-orbit interaction is believed to play an important role. The coupling scheme extends the domain of utility of the SU(3) model, and the symplectic model, to heavy nuclei and odd-mass nuclei. It is based on the observation that the low angular-momentum states of an SU(3) irrep have properties that mimic those of a corresponding irrep of the rotor algebra. Thus, we show that strongly coupled spin-SU(3) bands behave like strongly coupled rotor bands with properties that approach those of irreducible representations of the rigid-rotor algebra in the limit of large SU(3) quantum numbers. Moreover, we determine that the low angular-momentum states of a strongly coupled band of states of half-odd integer angular momentum behave to a high degree of accuracy as if they belonged to an SU(3) irrep. These are the quasispinor SU(3) irreps referred to in the title. (orig.)

  3. Rare Plants of Southeastern Hardwood Forests and the Role of Predictive Modeling

    International Nuclear Information System (INIS)

    Imm, D.W.; Shealy, H.E. Jr.; McLeod, K.W.; Collins, B.

    2001-01-01

    Habitat prediction models for rare plants can be useful when large areas must be surveyed or populations must be established. Investigators developed a habitat prediction model for four species of Southeastern hardwood forests. These four examples suggest that models based on resource and vegetation characteristics can accurately predict habitat, but only when plants are strongly associated with these variables and the scale of modeling coincides with habitat size

  4. Finding Furfural Hydrogenation Catalysts via Predictive Modelling.

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-09-10

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (k(H):k(D)=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R(2)=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization.
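
    The workflow described (fit a descriptor-based model on 13 catalysts, validate on the remaining 5, then predict new complexes) maps onto a standard regression pipeline. A hedged sketch with synthetic data standing in for the real 2D/3D molecular descriptors and yields:

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: 18 catalysts x 10 molecular descriptors, plus yields.
    X = rng.normal(size=(18, 10))
    y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=18)

    # Train on 13 catalysts, hold out the remaining 5 for validation.
    model = Ridge(alpha=1.0).fit(X[:13], y[:13])
    print("validation R^2:", r2_score(y[13:], model.predict(X[13:])))

    # Predict conversion/selectivity proxies for new candidate complexes.
    X_new = rng.normal(size=(4, 10))
    print("predicted values:", model.predict(X_new))
    ```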

  5. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in

  6. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The task we consider here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.
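
    One simple reading of this approach is to train a separate first-order Markov chain per structural class and classify a segment by log-likelihood. The sketch below uses toy training segments and ignores the combinatorial post-processing the authors describe:

    ```python
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"
    IDX = {a: i for i, a in enumerate(AA)}

    def train_markov(segments):
        """First-order transition matrix (add-one smoothing) for one class."""
        counts = np.ones((20, 20))
        for seg in segments:
            for a, b in zip(seg, seg[1:]):
                counts[IDX[a], IDX[b]] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def log_likelihood(seg, trans):
        return sum(np.log(trans[IDX[a], IDX[b]]) for a, b in zip(seg, seg[1:]))

    # Toy training segments per class; real data would come from
    # structurally annotated proteins.
    models = {
        "helix": train_markov(["AELLKKAEE", "KALEEKLKA"]),
        "sheet": train_markov(["VTVTVSV", "IVIVTIV"]),
        "coil":  train_markov(["GPGSGNPG", "SGPGGNGS"]),
    }

    segment = "KALKEAEELK"
    pred = max(models, key=lambda c: log_likelihood(segment, models[c]))
    print("predicted class:", pred)
    ```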

  7. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  8. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

    Early indication of bankruptcy is important for a company. If a company is aware of its potential bankruptcy, it can take preventive action. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. Such a model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. The comparison of several machine-learning-based models (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression) shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  9. Bias Correction in a Stable AD (1,1) Model: Weak versus Strong Exogeneity

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2001-01-01

    This paper compares the behaviour of a bias-corrected estimator assuming strongly exogenous regressors to the behaviour of a bias-corrected estimator assuming weakly exogenous regressors, when in fact the marginal model contains a feedback mechanism. To this end, the effects of a feedback mechanism

  10. Strain localization at the margins of strong lithospheric domains: insights from analogue models

    NARCIS (Netherlands)

    Calignano, Elisa; Sokoutis, Dimitrios; Willingshofer, Ernst; Gueydan, Frederic; Cloetingh, Sierd

    The lateral variation of the mechanical properties of continental lithosphere is an important factor controlling the localization of deformation and thus the deformation history and geometry of intra-plate mountain belts. A series of three-layer lithospheric-scale analog models, with a strong domain

  11. Engineering the Dynamics of Effective Spin-Chain Models for Strongly Interacting Atomic Gases

    DEFF Research Database (Denmark)

    Volosniev, A. G.; Petrosyan, D.; Valiente, M.

    2015-01-01

    We consider a one-dimensional gas of cold atoms with strong contact interactions and construct an effective spin-chain Hamiltonian for a two-component system. The resulting Heisenberg spin model can be engineered by manipulating the shape of the external confining potential of the atomic gas.

  12. The one loop calculation of the strong coupling β function in the Toy Model

    International Nuclear Information System (INIS)

    Bai Zhiming; Jiang Yuanfang

    1991-01-01

    The background field quantization is used to calculate the one-loop β function in the Toy Model, which has strong coupling and SU(3) symmetry. The function obtained is consistent with the Appelquist-Carazzone theorem in the low-energy limit.

  13. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)

  14. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
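
    The "simple averaging models" credited here with superior accuracy are straightforward to reproduce: forecast each 15-minute slot as the mean of the same slot on comparable past days. A minimal sketch on a synthetic consumption series (real deployments would average workdays and weekends separately, as the paper notes):

    ```python
    import numpy as np

    SLOTS = 96  # 15-minute intervals per day

    # Synthetic week of consumption: daily cycle plus noise (illustrative only).
    rng = np.random.default_rng(2)
    days = 7
    load = (np.sin(np.linspace(0, 2 * np.pi, SLOTS))[None, :] + 2.0
            + rng.normal(scale=0.1, size=(days, SLOTS)))

    # Averaging predictor: forecast tomorrow's slot t as the mean of slot t
    # over the previous days.
    forecast = load.mean(axis=0)

    actual = np.sin(np.linspace(0, 2 * np.pi, SLOTS)) + 2.0
    print("mean absolute error:", np.abs(forecast - actual).mean().round(4))
    ```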

  15. A solution of the strong CP problem in models with scalars

    International Nuclear Information System (INIS)

    Dimopoulos, S.

    1979-01-01

    A possible solution to the strong CP problem within the context of a Weinberg-Salam model with two Higgs fields coupled in a Peccei-Quinn symmetric fashion is pointed out. This is done by extending the colour group to a bigger simple group which is broken at some very high energy. The model contains a heavy axion. No old or new U(1) problem re-emerges. (Auth.)

  16. Spectral nudging in regional climate modelling: How strongly should we nudge?

    OpenAIRE

    Omrani , Hiba; Drobinski , Philippe; Dubos , Thomas

    2012-01-01

    Spectral nudging is a technique consisting in driving regional climate models (RCMs) on selected spatial scales corresponding to those produced by the driving global circulation model (GCM). This technique prevents large and unrealistic departures between the GCM driving fields and the RCM fields at the GCM spatial scales. Theoretically, the relaxation of the RCM towards the GCM should be infinitely strong provided there are perfect large-scale fields. In practice, the ...

  17. Raman light scattering in the pseudospin-electron model at strong pseudospin-electron interaction

    Directory of Open Access Journals (Sweden)

    T.S.Mysakovych

    2004-01-01

    Anharmonic phonon contributions to Raman scattering in locally anharmonic crystal systems are investigated in the framework of the pseudospin-electron model with tunneling splitting of levels. The case of strong pseudospin-electron coupling is considered. Pseudospin and electron contributions to the scattering are taken into account. Frequency dependences of the Raman scattering intensity are investigated for different values of the model parameters and for different polarizations of the scattered and incident light.

  18. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  19. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    Science.gov (United States)

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD=1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2=0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388

  20. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next-generation prediction system called Zephyr. The Zephyr system merges two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  1. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2011-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) for the Tüpraş İzmit Refinery Hydrocracker Unit reactors. The hydrocracking process, in which heavy vacuum gas oil is converted into lighter, more valuable products at high temperature and pressure, is described briefly. The controller design, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...

  2. Study of thermodynamic and transport properties of strongly interacting matter in a color string percolation model at RHIC

    International Nuclear Information System (INIS)

    Sahoo, Pragati; Tiwari, Swatantra Kumar; De, Sudipan; Sahoo, Raghunath

    2017-01-01

    The main purpose of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory is to study the properties of strongly interacting matter and to explore the conjectured Quantum Chromodynamics (QCD) phase diagram. Lattice QCD (lQCD) predicts a smooth crossover at vanishing baryon chemical potential (μ_B), while other QCD-based theoretical models predict a first-order phase transition at large μ_B. Searching for the critical point in the QCD phase diagram, finding the evidence for and nature of the phase transition, and studying the properties of the matter formed in nuclear collisions as a function of √s_NN are the main goals of RHIC. To investigate the nature of the matter produced in heavy-ion collisions, thermodynamic and transport quantities such as the energy density and shear viscosity are studied. It is expected that the ratio of shear viscosity (η) to entropy density (s) would exhibit a minimum value near the QCD critical point.

  3. Analog quantum simulation of the Rabi model in the ultra-strong coupling regime.

    Science.gov (United States)

    Braumüller, Jochen; Marthaler, Michael; Schneider, Andre; Stehli, Alexander; Rotzinger, Hannes; Weides, Martin; Ustinov, Alexey V

    2017-10-03

    The quantum Rabi model describes the fundamental mechanism of light-matter interaction. It consists of a two-level atom or qubit coupled to a quantized harmonic mode via a transversal interaction. In the weak coupling regime, it reduces to the well-known Jaynes-Cummings model by applying a rotating wave approximation. The rotating wave approximation breaks down in the ultra-strong coupling regime, where the effective coupling strength g is comparable to the energy ω of the bosonic mode, and remarkable features in the system dynamics are revealed. Here we demonstrate an analog quantum simulation of an effective quantum Rabi model in the ultra-strong coupling regime, achieving a relative coupling ratio of g/ω ~ 0.6. The quantum hardware of the simulator is a superconducting circuit embedded in a cQED setup. We observe fast and periodic quantum state collapses and revivals of the initial qubit state, the most distinct signature of the synthesized model. In this scheme, the time evolution of the quantum Rabi model under ultra-strong coupling conditions is synthesized by slowing down the system dynamics in an effective frame.

  4. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
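
    Of the methods listed, Reliability Ensemble Averaging is the simplest to illustrate: each member model is weighted by its historical skill. A toy sketch with invented errors and predictions for three hypothetical wake models:

    ```python
    import numpy as np

    # Hypothetical past absolute errors of three wake-vortex models (invented).
    past_errors = np.array([0.8, 0.5, 1.2])

    # Current predictions of, e.g., vortex descent altitude from each model.
    predictions = np.array([120.0, 115.0, 130.0])

    # Reliability-style weighting: weight each model by inverse historical error.
    weights = (1.0 / past_errors) / (1.0 / past_errors).sum()
    ensemble = weights @ predictions
    print("weights:", np.round(weights, 3), "ensemble:", round(ensemble, 1))
    ```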

  5. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  6. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and computing event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.

  7. Interpreting Disruption Prediction Models to Improve Plasma Control

    Science.gov (United States)

    Parsons, Matthew

    2017-10-01

    In order for the tokamak to be a feasible design for a fusion reactor, it is necessary to minimize damage to the machine caused by plasma disruptions. Accurately predicting disruptions is a critical capability for triggering any mitigative actions, and a modest amount of attention has been given to efforts that employ machine learning techniques to make these predictions. By monitoring diagnostic signals during a discharge, such predictive models look for signs that the plasma is about to disrupt. Typically these predictive models are interpreted simply to give a `yes' or `no' response as to whether a disruption is approaching. However, it is possible to extract further information from these models to indicate which input signals are more strongly correlated with the plasma approaching a disruption. If highly accurate predictive models can be developed, this information could be used in plasma control schemes to make better decisions about disruption avoidance. This work was supported by a Grant from the 2016-2017 Fulbright U.S. Student Program, administered by the Franco-American Fulbright Commission in France.

  8. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used are the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence shows that, in general, the competing models have great homogeneity in making predictions, whether for the stock market of a developed country or for that of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
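
    For orientation, the simplest member of the family compared here is GARCH(1,1), whose conditional variance follows sigma^2_t = omega + alpha * r^2_{t-1} + beta * sigma^2_{t-1}. The sketch below runs this recursion on synthetic returns with invented parameters; it is not an estimate from the Bovespa or Dow Jones series:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    returns = rng.normal(scale=0.01, size=500)  # synthetic log-returns

    # Invented GARCH(1,1) parameters; in practice these are fit by maximum
    # likelihood under one of the candidate error distributions.
    omega, alpha, beta = 1e-6, 0.08, 0.90

    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]

    # One-step-ahead volatility forecast used for predictive comparison.
    forecast = np.sqrt(omega + alpha * returns[-1] ** 2 + beta * sigma2[-1])
    print("next-period volatility forecast:", forecast)
    ```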

  9. Alcator C-Mod predictive modeling

    International Nuclear Information System (INIS)

    Pankin, Alexei; Bateman, Glenn; Kritz, Arnold; Greenwald, Martin; Snipes, Joseph; Fredian, Thomas

    2001-01-01

    Predictive simulations for the Alcator C-mod tokamak [I. Hutchinson et al., Phys. Plasmas 1, 1511 (1994)] are carried out using the BALDUR integrated modeling code [C. E. Singer et al., Comput. Phys. Commun. 49, 275 (1988)]. The results are obtained for temperature and density profiles using the Multi-Mode transport model [G. Bateman et al., Phys. Plasmas 5, 1793 (1998)] as well as the mixed-Bohm/gyro-Bohm transport model [M. Erba et al., Plasma Phys. Controlled Fusion 39, 261 (1997)]. The simulated discharges are characterized by very high plasma density in both low and high modes of confinement. The predicted profiles for each of the transport models match the experimental data about equally well in spite of the fact that the two models have different dimensionless scalings. Average relative rms deviations are less than 8% for the electron density profiles and 16% for the electron and ion temperature profiles

  10. The strong-weak coupling symmetry in 2D Φ4 field models

    Directory of Open Access Journals (Sweden)

    B.N.Shalaev

    2005-01-01

    It is found that the exact beta-function β(g) of the continuous 2D gΦ⁴ model possesses two types of dual symmetries: the Kramers-Wannier (KW) duality symmetry and the strong-weak (SW) coupling symmetry f(g), or S-duality. All these transformations are explicitly constructed. The S-duality transformation f(g) is shown to connect domains of weak and strong couplings, i.e. above and below g*. Basically it means that there is a tempting possibility to compute multiloop Feynman diagrams for the β-function using high-temperature lattice expansions. The regular scheme developed is found to be strongly unstable. Approximate values of the renormalized coupling constant g* found from the duality symmetry equations are in agreement with available numerical results.

  11. Thermodynamics of strongly interacting system from reparametrized Polyakov-Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Bhattacharyya, Abhijit; Ghosh, Sanjay K.; Maity, Soumitra; Raha, Sibaji; Ray, Rajarshi; Saha, Kinkar; Upadhaya, Sudipa

    2017-01-01

    The Polyakov-Nambu-Jona-Lasinio model has been quite successful in describing various qualitative features of observables for strongly interacting matter that are measurable in heavy-ion collision experiments. The question still remains of the quantitative uncertainties in the model results. Such an estimation is possible only by contrasting these results with those obtained from first principles using the lattice QCD framework. Recently a variety of lattice QCD data were reported in the realistic continuum limit. Here we make a first attempt at reparametrizing the model so as to reproduce these lattice data.

  12. Global Behavior for a Strongly Coupled Predator-Prey Model with One Resource and Two Consumers

    Directory of Open Access Journals (Sweden)

    Yujuan Jiao

    2012-01-01

    We consider a strongly coupled predator-prey model with one resource and two consumers, in which the first consumer species feeds on the resource according to the Holling II functional response, while the second consumer species feeds on the resource following the Beddington-DeAngelis functional response, and the two compete for the common resource. Using energy estimates and Gagliardo-Nirenberg-type inequalities, the existence and uniform boundedness of global solutions for the model are proved. Meanwhile, sufficient conditions for the global asymptotic stability of the positive equilibrium of this model are given by constructing a Lyapunov function.

  13. Spatial occupancy models applied to atlas data show Southern Ground Hornbills strongly depend on protected areas.

    Science.gov (United States)

    Broms, Kristin M; Johnson, Devin S; Altwegg, Res; Conquest, Loveday L

    2014-03-01

    Determining the range of a species and exploring species-habitat associations are central questions in ecology and can be answered by analyzing presence-absence data. Often, both the sampling of sites and the desired area of inference involve neighboring sites; thus, positive spatial autocorrelation between these sites is expected. Using survey data for the Southern Ground Hornbill (Bucorvus leadbeateri) from the Southern African Bird Atlas Project, we compared advantages and disadvantages of three increasingly complex models for species occupancy: an occupancy model that accounted for nondetection but assumed all sites were independent, and two spatial occupancy models that accounted for both nondetection and spatial autocorrelation. We modeled the spatial autocorrelation with an intrinsic conditional autoregressive (ICAR) model and with a restricted spatial regression (RSR) model. Both spatial models can readily be applied to any other gridded, presence-absence data set using a newly introduced R package. The RSR model provided the best inference and was able to capture small-scale variation that the other models did not. It showed that ground hornbills are strongly dependent on protected areas in the north of their South African range, but less so further south. The ICAR models did not capture any spatial autocorrelation in the data, and they took an order of magnitude longer than the RSR models to run. Thus, the RSR occupancy model appears to be an attractive choice for modeling occurrences at large spatial domains, while accounting for imperfect detection and spatial autocorrelation.

  14. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
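
    The modelling setup described (logistic regression on micro- and macroeconomic covariates, judged by goodness of fit and ROC curves) is easy to sketch. The feature names below are placeholders for the TCRI and macro variables, and the data are synthetic:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(4)
    n = 2000

    # Placeholder covariates: TCRI score, asset growth, stock index, GDP growth.
    X = rng.normal(size=(n, 4))
    logit = -2.0 + 1.2 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 3]
    y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))  # synthetic defaults

    model = LogisticRegression().fit(X, y)
    print("ROC AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
    ```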

  15. Predictive modeling of pedestal structure in KSTAR using EPED model

    Energy Technology Data Exchange (ETDEWEB)

    Han, Hyunsun; Kim, J. Y. [National Fusion Research Institute, Daejeon 305-806 (Korea, Republic of); Kwon, Ohjin [Department of Physics, Daegu University, Gyeongbuk 712-714 (Korea, Republic of)

    2013-10-15

    A predictive calculation is given for the structure of the edge pedestal in the H-mode plasma of the KSTAR (Korea Superconducting Tokamak Advanced Research) device using the EPED model. In particular, the dependence of pedestal width and height on various plasma parameters is studied in detail. The two codes, ELITE and HELENA, are utilized for the stability analysis of the peeling-ballooning and kinetic ballooning modes, respectively. Summarizing the main results, the pedestal slope and height depend strongly on plasma current, increasing rapidly with it, while the pedestal width is almost independent of it. The plasma density or collisionality initially gives a mild stabilization, increasing the pedestal slope and height, but above some threshold value its effect turns into a destabilization, reducing the pedestal width and height. Among several plasma shape parameters, the triangularity gives the most dominant effect, rapidly increasing the pedestal width and height, while the effect of elongation and squareness appears to be relatively weak. The implications of these edge results, particularly in relation to the global plasma performance, are discussed.

  16. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    The choice between a rival staging system and an existing one (i.e. old or new) is often based on factors such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
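
    The concordance index at the heart of this algorithm measures how often a model ranks pairs of observations in the same order as their outcomes; comparing two staging systems then reduces to comparing c-indices. A toy sketch that assumes fully observed ordinal outcomes (the paper's setting additionally handles survival data):

    ```python
    import itertools

    def concordance(predicted, observed):
        """Fraction of usable pairs that the predictions rank correctly."""
        concordant, usable = 0.0, 0
        for (p1, o1), (p2, o2) in itertools.combinations(
                zip(predicted, observed), 2):
            if o1 == o2:
                continue  # ties in outcome carry no ranking information
            usable += 1
            if (p1 < p2) == (o1 < o2):
                concordant += 1
            elif p1 == p2:
                concordant += 0.5  # tied predictions count as half-concordant
        return concordant / usable

    observed   = [1, 2, 2, 3, 4, 4, 5]   # e.g. observed ordinal outcomes
    system_old = [1, 1, 2, 3, 3, 4, 4]   # stage assigned by old system
    system_new = [1, 2, 2, 3, 4, 4, 5]   # stage assigned by new system

    print("old system c-index:", round(concordance(system_old, observed), 3))
    print("new system c-index:", round(concordance(system_new, observed), 3))
    ```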

  17. The moduli and gravitino (non)-problems in models with strongly stabilized moduli

    International Nuclear Information System (INIS)

    Evans, Jason L.; Olive, Keith A.; Garcia, Marcos A.G.

    2014-01-01

    In gravity mediated models, and in particular in models with strongly stabilized moduli, there is a natural hierarchy between gaugino masses, the gravitino mass and moduli masses: m_{1/2} << m_{3/2} << m_φ. Given this hierarchy, we show that 1) moduli problems associated with excess entropy production from moduli decay and 2) problems associated with moduli/gravitino decays to neutralinos are non-existent. Placed in an inflationary context, we show that the amplitude of moduli oscillations is severely limited by strong stabilization. Moduli oscillations may then never come to dominate the energy density of the Universe. As a consequence, moduli decay to gravitinos and their subsequent decay to neutralinos need not overpopulate the cold dark matter density.

  18. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care, a key tool in accountable care. When considering analytics models, healthcare providers should: make value-based care a priority and act on information from analytics models; create a road map that includes achievable steps, rather than major endeavors; set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  19. Predictive performance models and multiple task performance

    Science.gov (United States)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  20. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

    The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled ... benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.

  1. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
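
    The price-coordination idea can be condensed into a dual subgradient loop: each subsystem minimizes its local cost plus a price on the shared constraint, and the coordinator adjusts the price according to the constraint violation. A toy sketch for two subsystems sharing an input budget, with all quantities invented:

    ```python
    import numpy as np

    # Two subsystems choose inputs u1, u2 to track local targets, coupled by
    # the shared constraint u1 + u2 <= budget.
    targets = np.array([3.0, 2.0])
    budget, price, step = 4.0, 0.0, 0.2

    for _ in range(100):
        # Local problems: min (u - target)^2 + price * u  =>  u = target - price/2,
        # clipped at zero; each subsystem solves this independently.
        u = np.maximum(targets - price / 2.0, 0.0)
        # Coordinator: dual (sub)gradient ascent on the coupling constraint.
        price = max(price + step * (u.sum() - budget), 0.0)

    print("inputs:", np.round(u, 3), "price:", round(price, 3))
    ```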

  2. Strongly coupled radiation from moving mirrors and holography in the Karch-Randall model

    International Nuclear Information System (INIS)

    Pujolas, Oriol

    2008-01-01

    Motivated by the puzzles in understanding how Black Holes evaporate into a strongly coupled Conformal Field Theory, we study particle creation by an accelerating mirror. We model the mirror as a gravitating Domain Wall and consider a CFT coupled to it through gravity, in asymptotically Anti de Sitter space. This problem (backreaction included) can be solved exactly at one loop. At strong coupling, this is dual to a Domain Wall localized on the brane in the Karch-Randall model, which can be fully solved as well. Hence, in this case one can see how the particle production is affected by A) strong coupling and B) its own backreaction. We find that A) the amount of CFT radiation at strong coupling is not suppressed relative to the weak coupling result; and B) once the boundary conditions in the AdS_5 bulk are appropriately mapped to the conditions for the CFT on the boundary of AdS_4, the Karch-Randall model and the CFT side agree to leading order in the backreaction. This agreement holds even for a new class of self-consistent solutions (the 'Bootstrap' Domain Wall spacetimes) that have no classical limit. This provides a quite precise check of the holographic interpretation of the Karch-Randall model. We also comment on the massive gravity interpretation. As a byproduct, we show that relativistic Cosmic Strings (pure tension codimension 2 branes) in Anti de Sitter are repulsive and generate long-range tidal forces even at classical level. This is the phenomenon dual to particle production by Domain Walls.

  3. Generalized Models from Beta(p, 2) Densities with Strong Allee Effect: Dynamical Approach

    OpenAIRE

    Aleixo, Sandra M.; Rocha, J. Leonel

    2012-01-01

    A dynamical approach to study the behaviour of generalized populational growth models from Beta(p, 2) densities, with strong Allee effect, is presented. The dynamical analysis of the respective unimodal maps is performed using symbolic dynamics techniques. The complexity of the correspondent discrete dynamical systems is measured in terms of topological entropy. Different populational dynamics regimes are obtained when the intrinsic growth rates are modified: extinction, bistability, chaotic ...

  4. Optimal model-free prediction from multivariate time series

    Science.gov (United States)

    Runge, Jakob; Donner, Reik V.; Kurths, Jürgen

    2015-05-01

    Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of El Niño Southern Oscillation.

  5. A stepwise model to predict monthly streamflow

    Science.gov (United States)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

    In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the number of months in a year. The flow of a month t is considered a function of the antecedent month's flow (t - 1), and it is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and the root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with the conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison, based on five different statistical measures, shows that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
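
    The paper's core relation, Q_t = K * Q_{t-1} with a separate K per calendar month, admits a closed-form least-squares estimate that makes a useful baseline even without GEP/NGRGO. A sketch on synthetic flows (the paper's stepwise optimization is more elaborate):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    years = 20
    flows = rng.gamma(shape=3.0, scale=10.0, size=years * 12)  # synthetic record

    # Least-squares K for each calendar month m in the model Q_t ~ K_m * Q_{t-1}.
    K = np.zeros(12)
    for m in range(12):
        t = np.arange(years * 12)
        mask = (t % 12 == m) & (t > 0)
        q, q_prev = flows[mask], flows[np.flatnonzero(mask) - 1]
        K[m] = (q * q_prev).sum() / (q_prev**2).sum()

    print("monthly K values:", np.round(K, 2))
    ```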

  6. Detailed modelling of the susceptibility of a thermally populated, strongly driven circuit-QED system

    International Nuclear Information System (INIS)

    Kockum, Anton Frisk; Johansson, Göran; Sandberg, Martin; Vissers, Michael R; Gao, Jiansong; Pappas, David P

    2013-01-01

    We present measurements and modelling of the susceptibility of a 2D microstrip cavity coupled to a driven transmon qubit. We are able to fit the response of the cavity to a weak probe signal with high accuracy in the strong coupling, low detuning, i.e., non-dispersive, limit over a wide bandwidth. The observed spectrum is rich in multi-photon processes for the doubly dressed transmon. These features are well explained by including the higher transmon levels in the driven Jaynes–Cummings model and solving the full master equation to calculate the susceptibility of the cavity. (paper)

  7. Lattice Hamiltonian approach to the Schwinger model. Further results from the strong coupling expansion

    International Nuclear Information System (INIS)

    Szyniszewski, Marcin; Manchester Univ.; Cichy, Krzysztof; Poznan Univ.; Kujawa-Cichy, Agnieszka

    2014-10-01

    We employ exact diagonalization with strong coupling expansion to the massless and massive Schwinger model. New results are presented for the ground state energy and scalar mass gap in the massless model, which improve the precision to nearly 10^-9 %. We also investigate the chiral condensate and compare our calculations to previous results available in the literature. Oscillations of the chiral condensate which are present while increasing the expansion order are also studied and are shown to be directly linked to the presence of flux loops in the system.

  8. Effective model with strong Kitaev interactions for α -RuCl3

    Science.gov (United States)

    Suzuki, Takafumi; Suga, Sei-ichiro

    2018-04-01

    We use an exact numerical diagonalization method to calculate the dynamical spin structure factors of three ab initio models and one ab initio guided model for a honeycomb-lattice magnet α-RuCl3. We also use thermal pure quantum states to calculate the temperature dependence of the heat capacity, the nearest-neighbor spin-spin correlation function, and the static spin structure factor. From the results obtained from these four effective models, we find that, even when the magnetic order is stabilized at low temperature, the intensity at the Γ point in the dynamical spin structure factors increases with increasing nearest-neighbor spin correlation. In addition, we find that the four models fail to explain heat-capacity measurements whereas two of the four models succeed in explaining inelastic-neutron-scattering experiments. In the four models, when temperature decreases, the heat capacity shows a prominent peak at a high temperature where the nearest-neighbor spin-spin correlation function increases. However, the peak temperature in heat capacity is too low in comparison with that observed experimentally. To address these discrepancies, we propose an effective model that includes strong ferromagnetic Kitaev coupling, and we show that this model quantitatively reproduces both inelastic-neutron-scattering experiments and heat-capacity measurements. To further examine the adequacy of the proposed model, we calculate the field dependence of the polarized terahertz spectra, which reproduces the experimental results: the spin-gapped excitation survives up to an onset field where the magnetic order disappears and the response in the high-field region is almost linear. Based on these numerical results, we argue that the low-energy magnetic excitation in α-RuCl3 is mainly characterized by interactions such as off-diagonal interactions and weak Heisenberg interactions between nearest-neighbor pairs, rather than by the strong Kitaev interactions.

  9. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  10. An Intelligent Model for Stock Market Prediction

    Directory of Open Access Journals (Sweden)

    IbrahimM. Hamed

    2012-08-01

    This paper presents an intelligent model for stock market signal prediction using Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANN). Blind source separation technique, from signal processing, is integrated with the learning phase of the constructed baseline MLP ANN to overcome the problems of prediction accuracy and lack of generalization. Kullback-Leibler Divergence (KLD) is used as a learning algorithm because it converges fast and provides generalization in the learning mechanism. Both accuracy and efficiency of the proposed model were confirmed through the Microsoft stock, from the Wall Street market, and various data sets from different sectors of the Egyptian stock market. In addition, sensitivity analysis was conducted on the various parameters of the model to ensure the coverage of the generalization issue. Finally, statistical significance was examined using an ANOVA test.

  11. Predictive Models, How good are they?

    DEFF Research Database (Denmark)

    Kasch, Helge

    The WAD grading system has been used for more than 20 years by now. It has shown long-term viability, but with strengths and limitations. New bio-psychosocial assessment of the acute whiplash-injured subject may provide better prediction of long-term disability and pain. Furthermore, the emerging ...-up. It is important to obtain prospective identification of the relevant risk ... underreported disability could, if we were able to expose these hidden "risk factors" during our consultations, provide us with better predictive models. New data from large clinical studies will present exciting new genetic risk markers...

  12. The prediction of surface temperature in the new seasonal prediction system based on the MPI-ESM coupled climate model

    Science.gov (United States)

    Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.

    2015-05-01

    A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice components of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.

  13. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    SILVA R. G.

    1999-01-01

    A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
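
    A toy version of the equidistant-collocation NLP can be written down in a few lines. The sketch below, which assumes SciPy and uses trapezoidal collocation defects on a scalar plant dx/dt = -x^3 + u, is only meant to show how the discretized ODE enters the NLP as equality constraints; the paper's algorithm, model, and problem sizes differ.

```python
import numpy as np
from scipy.optimize import minimize

# Toy plant dx/dt = -x**3 + u on an equidistant grid; decision variables
# are all states and controls, and the ODE becomes defect constraints.
N, T, x0 = 20, 2.0, 1.5
dt = T / N

def f(x, u):
    return -x**3 + u

def unpack(z):
    return z[:N + 1], z[N + 1:]          # states x_0..x_N, controls u_0..u_N

def objective(z):
    x, u = unpack(z)
    return dt * np.sum(x**2 + 0.1 * u**2)

def defects(z):
    x, u = unpack(z)
    # trapezoidal collocation: x_{k+1} - x_k - dt/2 * (f_k + f_{k+1}) = 0
    return x[1:] - x[:-1] - 0.5 * dt * (f(x[:-1], u[:-1]) + f(x[1:], u[1:]))

cons = [{"type": "eq", "fun": defects},
        {"type": "eq", "fun": lambda z: unpack(z)[0][0] - x0}]

sol = minimize(objective, np.zeros(2 * (N + 1)),
               constraints=cons, method="SLSQP")
x_opt, u_opt = unpack(sol.x)
```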

  14. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
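
    The AR-pole feature itself is easy to reproduce. The sketch below is a plain numpy reconstruction under assumed conventions (Yule-Walker fitting with biased autocorrelation estimates): it fits an AR(5) model to one SEMG window and returns the mean magnitude of its poles. The regression from this feature to Rmax is from the study and is not reproduced here.

```python
import numpy as np

def ar_pole_magnitude(x, order=5):
    """Fit an AR(order) model by Yule-Walker and return the mean magnitude
    of its poles, the SEMG feature reported to track time-to-failure."""
    x = np.asarray(x, float) - np.mean(x)
    # biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:])
                  for k in range(order + 1)]) / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])       # AR coefficients a_1..a_p
    # poles are roots of z^p - a_1 z^(p-1) - ... - a_p
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.mean(np.abs(poles))
```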

  15. Long-wave model for strongly anisotropic growth of a crystal step.

    Science.gov (United States)

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.
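
    For orientation, the generic convective Cahn-Hilliard form that the slope equation resembles is written below, with u the step slope and D the strength of the convective term. This is the textbook form only; the paper's actual PDE carries anisotropy- and flux-dependent coefficients not shown here.

```latex
% Generic convective Cahn--Hilliard equation, for orientation only.
\begin{equation}
\frac{\partial u}{\partial t}
  + D\, u\, \frac{\partial u}{\partial x}
  = -\frac{\partial^{2}}{\partial x^{2}}
    \left( u - u^{3} + \frac{\partial^{2} u}{\partial x^{2}} \right)
\end{equation}
```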

  16. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
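
    The likelihood-based evaluation mentioned here is simple to state in code. The following numpy sketch scores held-out concentration measurements by their average negative log predictive density under a Gaussian predictive model; the names and the Gaussian assumption are illustrative, not taken from the paper.

```python
import numpy as np

def neg_log_predictive_density(y, mu, var):
    """Average negative log likelihood of held-out measurements y under a
    Gaussian predictive model with per-point mean mu and variance var.
    Lower is better; this is the model-quality score that becomes
    available once the predictive variance is estimated."""
    var = np.maximum(var, 1e-12)                  # numerical floor
    return np.mean(0.5 * np.log(2.0 * np.pi * var)
                   + 0.5 * (y - mu) ** 2 / var)
```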

  17. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  18. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  19. A theoretical model of strong and moderate El Niño regimes

    Science.gov (United States)

    Takahashi, Ken; Karamperidou, Christina; Dewitte, Boris

    2018-02-01

    The existence of two regimes for El Niño (EN) events, moderate and strong, has been previously shown in the GFDL CM2.1 climate model and also suggested in observations. The two regimes have been proposed to originate from the nonlinearity in the Bjerknes feedback, associated with a threshold in sea surface temperature (T_c) that needs to be exceeded for deep atmospheric convection to occur in the eastern Pacific. However, although the recent 2015-16 EN event provides a new data point consistent with the sparse strong EN regime, it is not enough to statistically reject the null hypothesis of a unimodal distribution based on observations alone. Nevertheless, we consider the possibility suggestive enough to explore it with a simple theoretical model based on the nonlinear Bjerknes feedback. In this study, we implemented this nonlinear mechanism in the recharge-discharge (RD) ENSO model and show that it is sufficient to produce the two EN regimes, i.e. a bimodal distribution in peak surface temperature (T) during EN events. The only modification introduced to the original RD model is that the net damping is suppressed when T exceeds T_c, resulting in a weak nonlinearity in the system. Due to the damping, the model is globally stable and it requires stochastic forcing to maintain the variability. The sustained low-frequency component of the stochastic forcing plays a key role in the onset of strong EN events (i.e. for T > T_c), at least as important as the precursor positive heat content anomaly (h). High-frequency forcing helps some EN events to exceed T_c, increasing the number of strong events, but the rectification effect is small and the overall number of EN events is little affected by this forcing. Using the Fokker-Planck equation, we show how the bimodal probability distribution of EN events arises from the nonlinear Bjerknes feedback and also propose that the increase in the net feedback with increasing T is a necessary condition for bimodality in the RD
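
    A toy numerical version of this mechanism is easy to set up. The Euler-Maruyama sketch below integrates a linear recharge-discharge oscillator whose damping is switched off whenever T exceeds T_c; all coefficients are invented for illustration and the paper's calibrated model will differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recharge-discharge oscillator: damped, noise-driven T-h dynamics,
# with the net damping suppressed once T > Tc (the threshold nonlinearity
# standing in for the Bjerknes-feedback convection threshold).
lam, omega, Tc, sigma = 1.0 / 6, 2 * np.pi / 48, 1.5, 0.4   # per month
dt, n_steps = 0.1, 200_000

T, h = 0.0, 0.0
trace = np.empty(n_steps)
for i in range(n_steps):
    damping = 0.0 if T > Tc else lam        # threshold nonlinearity
    dW = rng.normal(0.0, np.sqrt(dt))       # stochastic forcing
    T += (-damping * T + omega * h) * dt + sigma * dW
    h += (-omega * T) * dt
    trace[i] = T
# A histogram of the event peaks in `trace` can then be inspected
# for the bimodality discussed above.
```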

  20. Modelling alongshore flow in a semi-enclosed lagoon strongly forced by tides and waves

    Science.gov (United States)

    Taskjelle, Torbjørn; Barthel, Knut; Christensen, Kai H.; Furaca, Noca; Gammelsrød, Tor; Hoguane, António M.; Nharreluga, Bilardo

    2014-08-01

    Alongshore flow strongly driven by tides and waves is studied in the context of a one-dimensional numerical model. Observations from field surveys performed in a semi-enclosed lagoon (1.7 km × 0.2 km) outside Xai-Xai, Mozambique, are used to validate the model results. The model is able to capture most of the observed temporal variability of the current, but sea surface height tends to be overestimated at high tide, especially during high wave events. Inside the lagoon we observed a mainly uni-directional alongshore current, with speeds up to 1 m s^-1. The current varies primarily with the tide, being close to zero near low tide, generally increasing during flood and decreasing during ebb. The observations revealed a local minimum in the alongshore flow at high tide, which the model was successful in reproducing. Residence times in the lagoon were calculated to be less than one hour, with wave forcing dominating the flushing. At this beach a high number of drowning casualties have occurred, but no connection was found between them and strong current events in a simulation covering the period 2011-2012.

  1. Direct characterization of chaotic and stochastic dynamics in a population model with strong periodicity

    International Nuclear Information System (INIS)

    Tung Wenwen; Qi Yan; Gao, J.B.; Cao Yinhe; Billings, Lora

    2005-01-01

    In recent years it has been increasingly recognized that noise and determinism may have comparable but different influences on population dynamics. However, no simple analysis methods have been introduced into ecology which can readily characterize those impacts. In this paper, we study a population model with strong periodicity, both with and without noise. The noise-free model generates both quasi-periodic and chaotic dynamics for certain parameter values. Due to the strong periodicity, however, the generated chaotic dynamics have not been satisfactorily described. The dynamics become even more complicated when there is noise. Characterizing the chaotic and stochastic dynamics in this model thus represents a challenging problem. Here we show how the chaotic dynamics can be readily characterized by the direct dynamical test for deterministic chaos developed by [Gao JB, Zheng ZM. Europhys. Lett. 1994;25:485] and how the influence of noise on quasi-periodic motions can be characterized as asymmetric diffusions wandering along the quasi-periodic orbit. It is hoped that the introduced methods will be useful in studying other population models as well as population time series obtained in both field and laboratory experiments.

  2. Linked cluster expansion in the SU(2) lattice Higgs model at strong gauge coupling

    International Nuclear Information System (INIS)

    Wagner, C.E.M.

    1989-01-01

    A linked cluster expansion is developed for the β=0 limit of the SU(2) Higgs model. This method, when combined with strong gauge coupling expansions, is used to obtain the phase transition surface and the behaviour of scalar and vector masses in the lattice regularized theory. The method, in spite of the low order of truncation of the series applied, gives a reasonable agreement with Monte Carlo data for the phase transition surface and a qualitatively good picture of the behaviour of Higgs, glueball and gauge vector boson masses, in the strong coupling limit. Some limitations of the method are discussed, and an intuitive picture of the different behaviour for small and large bare self-coupling λ is given. (orig.)

  3. Field-theoretic methods in strongly-coupled models of general gauge mediation

    International Nuclear Information System (INIS)

    Fortin, Jean-François; Stergiou, Andreas

    2013-01-01

    An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current–current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry breaking arises both from a hidden sector and dynamically. Our results suggest that strongly-coupled models of supersymmetry breaking are naturally split

  4. The random transverse field Ising model in d = 2: analysis via boundary strong disorder renormalization

    International Nuclear Information System (INIS)

    Monthus, Cécile; Garel, Thomas

    2012-01-01

    To avoid the complicated topology of surviving clusters induced by standard strong disorder RG in dimension d > 1, we introduce a modified procedure called ‘boundary strong disorder RG’ where the order of decimations is chosen a priori. We apply this modified procedure numerically to the random transverse field Ising model in dimension d = 2. We find that the location of the critical point, the activated exponent ψ ≃ 0.5 of the infinite-disorder scaling, and the finite-size correlation exponent ν_FS ≃ 1.3 are compatible with the values obtained previously using standard strong disorder RG. Our conclusion is thus that strong disorder RG is very robust with respect to changes in the order of decimations. In addition, we analyze the RG flows within the two phases in more detail, to show explicitly the presence of various correlation length exponents: we measure the typical correlation exponent ν_typ ≃ 0.64 for the disordered phase (this value is very close to the correlation exponent ν_Q^pure(d = 2) ≃ 0.63 of the pure two-dimensional quantum Ising model), and the typical exponent ν_h ≃ 1 for the ordered phase. These values satisfy the relations between critical exponents imposed by the expected finite-size scaling properties at infinite-disorder critical points. We also measure, within the disordered phase, the fluctuation exponent ω ≃ 0.35 which is compatible with the directed polymer exponent ω_DP(1+1) = 1/3 in (1 + 1) dimensions. (paper)

  5. Weak and strong chaos in Fermi-Pasta-Ulam models and beyond

    Science.gov (United States)

    Pettini, Marco; Casetti, Lapo; Cerruti-Sola, Monica; Franzosi, Roberto; Cohen, E. G. D.

    2005-03-01

    We briefly review some of the most relevant results that our group obtained in the past, while investigating the dynamics of the Fermi-Pasta-Ulam (FPU) models. The first result is the numerical evidence of the existence of two different kinds of transitions in the dynamics of the FPU models: (i) A stochasticity threshold (ST), characterized by a value of the energy per degree of freedom below which the overwhelming majority of the phase space trajectories are regular (vanishing Lyapunov exponents). It tends to vanish as the number N of degrees of freedom is increased. (ii) A strong stochasticity threshold (SST), characterized by a value of the energy per degree of freedom at which a crossover appears between two different power laws of the energy dependence of the largest Lyapunov exponent, which phenomenologically corresponds to the transition between weak and strong chaotic regimes. It is stable with N. The second result is the development of a Riemannian geometric theory to explain the origin of Hamiltonian chaos. The development of this theory was motivated by the inadequacy of the approach based on homoclinic intersections to explain the origin of chaos in systems of arbitrarily large N, or arbitrarily far from quasi-integrability, or displaying a transition between weak and strong chaos. Finally, the third result stems from the search for the transition between weak and strong chaos in systems other than FPU. Actually, we found that a very sharp SST appears as the dynamical counterpart of a thermodynamic phase transition, which in turn has led, in the light of the Riemannian theory of chaos, to the development of a topological theory of phase transitions.

  6. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the data base and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include Vitotox™, GreenScreen GC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-t

  7. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation are uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated (pre- and post-PPCDAM, "Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently
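
    The contagion ingredient (3) can be illustrated with a toy grid model: each forested cell's annual clearing probability grows with the number of already-deforested neighbours. The numpy sketch below uses invented rates and omits roads, protection status, and the probabilistic parameter inference of the actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy contagious-deforestation grid: baseline clearing probability p0,
# plus a boost per already-cleared 4-neighbour.  Rates are invented.
n, years, p0, boost = 100, 20, 0.002, 0.02
forest = np.ones((n, n), bool)

for _ in range(years):
    cleared = ~forest
    neighbours = (np.roll(cleared, 1, 0).astype(int)
                  + np.roll(cleared, -1, 0)
                  + np.roll(cleared, 1, 1)
                  + np.roll(cleared, -1, 1))
    p = p0 + boost * neighbours            # local, contagious clearing rate
    forest &= rng.random((n, n)) >= p      # clear cells that fail the draw
```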

  8. Analytical modeling of light transport in scattering materials with strong absorption.

    Science.gov (United States)

    Meretska, M L; Uppu, R; Vissenberg, G; Lagendijk, A; Ijzerman, W L; Vos, W L

    2017-10-02

    We have investigated the transport of light through slabs that both scatter and strongly absorb, a situation that occurs in diverse application fields ranging from biomedical optics and powder technology to solid-state lighting. In particular, we study the transport of light in the visible wavelength range between 420 and 700 nm through silicone plates filled with YAG:Ce^3+ phosphor particles, which even re-emit absorbed light at different wavelengths. We measure the total transmission, the total reflection, and the ballistic transmission of light through these plates. We obtain average single particle properties, namely the scattering cross-section σ_s, the absorption cross-section σ_a, and the anisotropy factor µ, using an analytical approach, namely the P3 approximation to the radiative transfer equation. We verify the extracted transport parameters using Monte-Carlo simulations of the light transport. Our approach fully describes the light propagation in phosphor diffuser plates that are used in white LEDs and that reveal a strong absorption (L/l_a > 1) up to L/l_a = 4, where L is the slab thickness and l_a is the absorption mean free path. In contrast, the widely used diffusion theory fails to describe this parameter range. Our approach is a suitable analytical tool for industry, since it provides a fast yet accurate determination of key transport parameters, and since it introduces predictive power into the design process of white light emitting diodes.

  9. Validated predictive modelling of the environmental resistome.

    Science.gov (United States)

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.

  10. A Simple Model of Fields Including the Strong or Nuclear Force and a Cosmological Speculation

    Directory of Open Access Journals (Sweden)

    David L. Spencer

    2016-10-01

    Reexamining the assumptions underlying the General Theory of Relativity and calling an object's gravitational field its inertia, and acceleration simply resistance to that inertia, yields a simple field model where the potential (kinetic) energy of a particle at rest is its capacity to move itself when its inertial field becomes imbalanced. The model then attributes electromagnetic and strong forces to the effects of changes in basic particle shape. Following up on the model's assumption that the relative intensity of a particle's gravitational field is always inversely related to its perceived volume, and assuming that all black holes spin, may create the possibility of a cosmic rebound where a final spinning black hole ends with a new Big Bang.

  11. Modeling of Nonlinear Propagation in Multi-layer Biological Tissues for Strong Focused Ultrasound

    International Nuclear Information System (INIS)

    Ting-Bo, Fan; Zhen-Bo, Liu; Zhe, Zhang; Dong, Zhang; Xiu-Fen, Gong

    2009-01-01

    A theoretical model of the nonlinear propagation in multi-layered tissues for strong focused ultrasound is proposed. In this model, the spheroidal beam equation (SBE) is utilized to describe the nonlinear sound propagation in each layer tissue, and generalized oblique incidence theory is used to deal with the sound transmission between two layer tissues. Computer simulation is performed on a fat-muscle-liver tissue model under the irradiation of a 1 MHz focused transducer with a large aperture angle of 35°. The results demonstrate that the tissue layer would change the amplitude of sound pressure at the focal region and cause the increase of side petals. (fundamental areas of phenomenology (including applications))

  12. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
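
    The core receding-horizon loop that the book formalizes can be sketched in a few lines of Python (the book's own accompanying code is in MATLAB and C++). The plant, weights, and terminal penalty below are illustrative; the point is the structure: optimize over a finite horizon, apply the first control move, shift, and repeat.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal receding-horizon NMPC for a discrete-time nonlinear system
# x+ = f(x, u); the model and all weights are invented for illustration.
def f(x, u):
    return x + 0.1 * (x - x**3 + u)          # sampled-data toy plant

def horizon_cost(u_seq, x, N):
    J = 0.0
    for k in range(N):
        x = f(x, u_seq[k])
        J += x**2 + 0.1 * u_seq[k]**2        # stage cost
    return J + 10.0 * x**2                   # terminal penalty (stabilizing)

x, N = 1.2, 10
u_guess = np.zeros(N)
for t in range(50):                          # closed loop
    res = minimize(horizon_cost, u_guess, args=(x, N),
                   bounds=[(-1.0, 1.0)] * N)
    u0 = res.x[0]                            # apply only the first move
    x = f(x, u0)
    u_guess = np.r_[res.x[1:], 0.0]          # shifted warm start
```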

  13. Baryogenesis model predicting antimatter in the Universe

    International Nuclear Information System (INIS)

    Kirilova, D.

    2003-01-01

    Cosmic ray and gamma-ray data do not rule out antimatter domains in the Universe, separated at distances bigger than 10 Mpc from us. Hence, it is interesting to analyze the possible generation of vast antimatter structures during the early Universe evolution. We discuss a SUSY-condensate baryogenesis model, predicting large separated regions of matter and antimatter. The model provides generation of the small locally observed baryon asymmetry for natural initial conditions, and it predicts vast antimatter domains, separated from the matter ones by baryonically empty voids. The characteristic scale of antimatter regions and their distance from the matter ones are in accordance with observational constraints from cosmic ray, gamma-ray and cosmic microwave background anisotropy data.

  14. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user-intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies and lensing geometries. The method's performance is excellent, with accurate light, mass and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
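
    For reference, the Sersic profile that AutoLens superposes for the lens light is reproduced below as a small numpy function; the b_n approximation used (Ciotti & Bertin) is a standard choice, though not necessarily the one implemented in AutoLens.

```python
import numpy as np

def sersic(R, I_e, R_e, n):
    """Sersic surface-brightness profile I(R).

    I_e : intensity at the effective radius R_e
    n   : Sersic index (n = 1 exponential disk, n = 4 de Vaucouleurs)
    b_n uses the common Ciotti & Bertin approximation, adequate
    for roughly n ~ 0.5-10.
    """
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return I_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))
```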

  15. Finding Furfural Hydrogenation Catalysts via Predictive Modelling

    OpenAIRE

    Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi

    2010-01-01

    We combine multicomponent reactions, catalytic performance studies and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes were synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre t...

  16. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  17. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  18. Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake

    International Nuclear Information System (INIS)

    Herrmann, R.B.

    1991-01-01

    The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes which occurred during the winter of 1811-1812. Because of their size, seismic moment of 2.0 × 10^27 dyne-cm or moment magnitude M_w = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but a deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories, based on forward modeling of the wavefield, from each subelement are combined to yield a three component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault.

  19. Breast cancer risks and risk prediction models.

    Science.gov (United States)

    Engel, Christoph; Fischer, Christine

    2015-02-01

    BRCA1/2 mutation carriers have a considerably increased risk to develop breast and ovarian cancer. The personalized clinical management of carriers and other at-risk individuals depends on precise knowledge of the cancer risks. In this report, we give an overview of the present literature on empirical cancer risks, and we describe risk prediction models that are currently used for individual risk assessment in clinical practice. Cancer risks show large variability between studies. Breast cancer risks are at 40-87% for BRCA1 mutation carriers and 18-88% for BRCA2 mutation carriers. For ovarian cancer, the risk estimates are in the range of 22-65% for BRCA1 and 10-35% for BRCA2. The contralateral breast cancer risk is high (10-year risk after first cancer 27% for BRCA1 and 19% for BRCA2). Risk prediction models have been proposed to provide more individualized risk prediction, using additional knowledge on family history, mode of inheritance of major genes, and other genetic and non-genetic risk factors. User-friendly software tools have been developed that serve as basis for decision-making in family counseling units. In conclusion, further assessment of cancer risks and model validation is needed, ideally based on prospective cohort studies. To obtain such data, clinical management of carriers and other at-risk individuals should always be accompanied by standardized scientific documentation.

  20. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.

  1. Three-loop Standard Model effective potential at leading order in strong and top Yukawa couplings

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Stephen P. [Santa Barbara, KITP

    2014-01-08

    I find the three-loop contribution to the effective potential for the Standard Model Higgs field, in the approximation that the strong and top Yukawa couplings are large compared to all other couplings, using dimensional regularization with modified minimal subtraction. Checks follow from gauge invariance and renormalization group invariance. I also briefly comment on the special problems posed by Goldstone boson contributions to the effective potential, and on the numerical impact of the result on the relations between the Higgs vacuum expectation value, mass, and self-interaction coupling.

  2. Strong self-coupling expansion in the lattice-regularized standard SU(2) Higgs model

    International Nuclear Information System (INIS)

    Decker, K.; Weisz, P.; Montvay, I.

    1985-11-01

    Expectation values at an arbitrary point of the 3-dimensional coupling parameter space in the lattice-regularized SU(2) Higgs model with a doublet scalar field are expressed by a series of expectation values at infinite self-coupling (λ = ∞). Questions of convergence of this 'strong self-coupling expansion' (SSCE) are investigated. The SSCE is a potentially useful tool for the study of the λ-dependence at any value (zero or non-zero) of the bare gauge coupling. (orig.)

  3. Strong self-coupling expansion in the lattice-regularized standard SU(2) Higgs model

    International Nuclear Information System (INIS)

    Decker, K.; Weisz, P.

    1986-01-01

    Expectation values at an arbitrary point of the 3-dimensional coupling parameter space in the lattice-regularized SU(2) Higgs model with a doublet scalar field are expressed by a series of expectation values at infinite self-coupling (λ = ∞). Questions of convergence of this ''strong self-coupling expansion'' (SSCE) are investigated. The SSCE is a potentially useful tool for the study of the λ-dependence at any value (zero or non-zero) of the bare gauge coupling. (orig.)

  4. Two stage neural network modelling for robust model predictive control.

    Science.gov (United States)

    Patan, Krzysztof

    2018-01-01

    The paper proposes a novel robust model predictive control scheme realized by means of artificial neural networks. The neural networks are used twofold: to design the so-called fundamental model of a plant and to catch uncertainty associated with the plant model. In order to simplify the optimization process carried out within the framework of predictive control an instantaneous linearization is applied which renders it possible to define the optimization problem in the form of constrained quadratic programming. Stability of the proposed control system is also investigated by showing that a cost function is monotonically decreasing with respect to time. Derived robust model predictive control is tested and validated on the example of a pneumatic servomechanism working at different operating regimes. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Predicting extinction rates in stochastic epidemic models

    International Nuclear Information System (INIS)

    Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra

    2009-01-01

    We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed
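
    Extinction in such models is conveniently explored with a Gillespie simulation. The sketch below simulates the stochastic SIS model and records extinction times; the parameters and ensemble size are illustrative, and the paper's analytical entropic-barrier scaling is what such runs would be compared against.

```python
import numpy as np

rng = np.random.default_rng(2)

def sis_extinction_time(N=200, beta=1.5, gamma=1.0, I0=20, t_max=1e6):
    """Gillespie simulation of the stochastic SIS model; returns the time
    at which the infection dies out (the rare event of interest)."""
    t, I = 0.0, I0
    while 0 < I and t < t_max:
        rate_inf = beta * I * (N - I) / N        # S -> I
        rate_rec = gamma * I                     # I -> S
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)        # waiting time to next event
        I += 1 if rng.random() < rate_inf / total else -1
    return t

times = [sis_extinction_time() for _ in range(100)]
# For beta/gamma > 1 the mean extinction time grows steeply (roughly
# exponentially) with N, reflecting the entropic barrier discussed above.
```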

  6. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  7. Data Driven Economic Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Masoud Kheradmandi

    2018-04-01

    This manuscript addresses the problem of data-driven, model-based economic model predictive control (MPC) design. To this end, first, a data-driven Lyapunov-based MPC is designed, and shown to be capable of stabilizing a system at an unstable equilibrium point. The data-driven Lyapunov-based MPC utilizes a linear time invariant (LTI) model, cognizant of the fact that the training data, owing to the unstable nature of the equilibrium point, have to be obtained from closed-loop operation or experiments. Simulation results are first presented demonstrating closed-loop stability under the proposed data-driven Lyapunov-based MPC. The underlying data-driven model is then utilized as the basis to design an economic MPC. The economic improvements yielded by the proposed method are illustrated through simulations on a nonlinear chemical process system example.

  8. A MAGNIFIED GLANCE INTO THE DARK SECTOR: PROBING COSMOLOGICAL MODELS WITH STRONG LENSING IN A1689

    International Nuclear Information System (INIS)

    Magaña, Juan; Motta, V.; Cárdenas, Victor H.; Verdugo, T.; Jullo, Eric

    2015-01-01

    In this paper we constrain four alternative models for the late-time cosmic acceleration of the universe: Chevallier–Polarski–Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), and the cosmic microwave background (CMB). We found that the CPL constraints obtained from the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SN Ia, and CMB estimations. We also compute the figure of merit as a tool to quantify the goodness of fit of the data. Our results suggest that the SL method provides statistically significant constraints on the CPL parameters but is weak for those of the other models. Finally, we show that the use of SL measurements in galaxy clusters is a promising and powerful technique to constrain cosmological models. The advantage of this method is that cosmological parameters are estimated by modeling the SL features for each underlying cosmology. These estimations could be further improved by SL constraints coming from other galaxy clusters.
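
    For reference, the CPL parametrization constrained here is the standard two-parameter dark-energy equation of state:

```latex
% CPL (Chevallier--Polarski--Linder) equation of state, with a the
% scale factor and z the redshift; w_0 = -1, w_a = 0 recovers a
% cosmological constant.
\begin{equation}
w(a) = w_0 + w_a\,(1 - a) = w_0 + w_a\,\frac{z}{1+z}
\end{equation}
```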

  9. An attempt of modelling debris flows characterised by strong inertial effects through Cellular Automata

    Science.gov (United States)

    Iovine, G.; D'Ambrosio, D.

    2003-04-01

    Cellular Automata models represent a valid method for the simulation of complex phenomena when the latter can be described in "a-centric" terms - i.e. through local interactions within a discrete time-space. In particular, flow-type landslides (such as debris flows) can be viewed as a-centric dynamical systems. SCIDDICA S4b, the latest release of a family of two-dimensional hexagonal Cellular Automata models, has recently been developed for simulating debris flows characterised by strong inertial effects. It has been derived by progressively enriching an initial simplified CA model, originally devised for simulating very simple cases of slow-moving flow-type landslides. In S4b, by applying an empirical strategy, the inertial characters of the flowing mass have been translated into CA terms. In the transition function of the model, the distribution of landslide debris among the cells is computed by considering the momentum of the debris which moves among the cells of the neighbourhood, and privileging the flow direction. By properly setting the value of one of the global parameters of the model (the "inertial factor"), the mechanism of distribution of the landslide debris among the cells can be tuned in order to emphasise the inertial effects, according to the energy of the flowing mass. Moreover, the high complexity of both the model and the phenomena to be simulated (e.g. debris flows characterised by severe erosion along their path, and by strong inertial effects) suggested employing an automated calibration technique for the determination of the best set of global parameters. Accordingly, the calibration of the model has been performed through Genetic Algorithms, by considering several real cases of study; these latter were selected among the population of landslides triggered in Campania (Southern Italy) in May 1998 and December 1999. Obtained results are satisfying: errors computed by comparing the simulations with the map of the real

  10. Plant control using embedded predictive models

    International Nuclear Information System (INIS)

    Godbole, S.S.; Gabler, W.E.; Eschbach, S.L.

    1990-01-01

    B and W recently undertook the design of an advanced light water reactor control system. A concept new to nuclear steam system (NSS) control was developed. The concept, which is called the Predictor-Corrector, uses mathematical models of portions of the controlled NSS to calculate, at various levels within the system, the demand and control element position signals necessary to satisfy electrical demand. The models give the control system the ability to reduce overcooling and undercooling of the reactor coolant system during transients and upsets. Two types of mathematical models were developed for use in designing and testing the control system. One was a conventional, comprehensive NSS model that responds to control system outputs and calculates the resultant changes in plant variables, which are then used as inputs to the control system. Two other models, embedded in the control system, were less conventional, inverse models. These models accept plant variables, equipment states, and demand signals as inputs, and predict the plant operating conditions and control element states that will satisfy the demands. This paper reports preliminary results of closed-loop reactor coolant (RC) pump trip and normal load reduction testing of the advanced concept. Results of additional transient testing, and of open- and closed-loop stability analyses, will be reported as they become available.

  11. BGK-type models in strong reaction and kinetic chemical equilibrium regimes

    International Nuclear Information System (INIS)

    Monaco, R; Bianchi, M Pandolfi; Soares, A J

    2005-01-01

    A BGK-type procedure is applied to multi-component gases undergoing chemical reactions of bimolecular type. The relaxation process towards local Maxwellians, depending on the mass and number densities of each species as well as on the common velocity and temperature, is investigated in two different cases with respect to chemical regimes. These cases correspond to the strong reaction regime, characterized by slow reactions, and to the kinetic chemical equilibrium regime, where fast reactions take place. The consistency properties of both models are stated in detail. The trend to equilibrium is numerically tested, and the two regimes are compared within the hydrogen-air and carbon-oxygen reaction mechanisms. In the spatially homogeneous case, it is also shown that the thermodynamical equilibrium of the models recovers satisfactorily the asymptotic equilibrium solutions of the reactive Euler equations.
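
    For orientation, a BGK-type kinetic equation replaces the Boltzmann collision term for each species i with relaxation toward a local Maxwellian; schematically (a standard form, not quoted from the record):

```latex
\partial_t f_i + \mathbf{v}\cdot\nabla_{\mathbf{x}} f_i
  = \nu_i \left( M_i - f_i \right),
```

    where M_i is a Maxwellian sharing the common velocity and temperature of the mixture and ν_i is the relaxation frequency of species i.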

  12. Single-particle model of a strongly driven, dense, nanoscale quantum ensemble

    Science.gov (United States)

    DiLoreto, C. S.; Rangan, C.

    2018-01-01

    We study the effects of interatomic interactions on the quantum dynamics of a dense, nanoscale, atomic ensemble driven by a strong electromagnetic field. We use a self-consistent, mean-field technique based on the pseudospectral time-domain method and a full, three-directional basis to solve the coupled Maxwell-Liouville equations. We find that interatomic interactions generate decoherence in the state of the ensemble on a much faster time scale than the excited-state lifetime of the individual atoms. We present a single-particle model of the driven, dense ensemble by incorporating the interactions into a dephasing rate. This single-particle model reproduces the essential physics of the full simulation and is an efficient way of rapidly estimating the collective dynamics of a dense ensemble.
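
    The single-particle reduction described above amounts to folding the interatomic interactions into an extra dephasing channel. In Lindblad form (a standard construction, with γ_int denoting the interaction-induced rate assumed here for illustration):

```latex
\dot{\rho} = -\frac{i}{\hbar}[H,\rho]
  + \Gamma\,\mathcal{D}[\sigma_-]\rho
  + \frac{\gamma_\phi + \gamma_{\mathrm{int}}}{2}\,\mathcal{D}[\sigma_z]\rho,
\qquad
\mathcal{D}[L]\rho = L\rho L^\dagger - \tfrac{1}{2}\{L^\dagger L,\rho\},
```

    where Γ is the single-atom decay rate and γ_φ the intrinsic dephasing rate of an isolated atom.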

  13. The angular structure of jet quenching within a hybrid strong/weak coupling model

    Science.gov (United States)

    Casalderrey-Solana, Jorge; Gulhan, Doga Can; Milhano, José Guilherme; Pablos, Daniel; Rajagopal, Krishna

    2017-08-01

    Building upon the hybrid strong/weak coupling model for jet quenching, we incorporate and study the effects of transverse momentum broadening and of the medium response of the plasma to jets on a variety of observables. For inclusive jet observables, we find little sensitivity to the strength of broadening. To constrain those dynamics, we propose new observables constructed from ratios of differential jet shapes, in which particles are binned in momentum, which are sensitive to the in-medium broadening parameter. We also investigate the effect of the back-reaction of the medium on the angular structure of jets as reconstructed with different cone radii R. Finally, we provide results for the so-called "missing-pt", finding a qualitative agreement between our model calculations and data in many respects, although a quantitative agreement is beyond our simplified treatment of the hadrons originating from the hydrodynamic wake.

  14. A consistent and predictable commercial broiler chicken bacterial microbiota in antibiotic-free production displays strong correlations with performance.

    Science.gov (United States)

    Johnson, Timothy J; Youmans, Bonnie P; Noll, Sally; Cardona, Carol; Evans, Nicholas P; Karnezos, T Peter; Ngunjiri, John M; Abundo, Michael C; Lee, Chang-Won

    2018-04-06

    Defining the baseline bacterial microbiome is critical towards understanding its relationship with health and disease. In broiler chickens, the core microbiome and its possible relationships with health and disease have been difficult to define due to high variability between birds and flocks. Presented are data from a large, comprehensive microbiota-based study in commercial broilers. The primary goals of this study included understanding what constitutes the core bacterial microbiota in the broiler gastrointestinal, respiratory, and barn environments; how these core players change across age, geography, and time; and which bacterial taxa correlate with enhanced bird performance in antibiotic-free flocks. Using 2,309 samples from 37 different commercial flocks within a vertically integrated broiler system, and metadata from these and an additional 512 flocks within that system, the baseline bacterial microbiota was defined using 16S rRNA gene sequencing. The effects of age, sample type, flock, and successive flock cycles were compared, and the results indicate a consistent, predictable, age-dependent bacterial microbiota, irrespective of flock. The tracheal bacterial microbiota of broilers was comprehensively defined, with Lactobacillus the dominant bacterial taxon in the trachea. Numerous bacterial taxa were identified that were strongly correlated with broiler chicken performance across multiple tissues. While many positively correlated taxa were identified, negatively associated potential pathogens were also identified in the absence of clinical disease, indicating subclinical dynamics that impact performance. Overall, this work provides necessary baseline data for the development of effective antibiotic alternatives, such as probiotics, for sustainable poultry production. Importance: Multidrug-resistant bacterial pathogens are perhaps the greatest medical challenge we will face in the 21st century and beyond. Antibiotics are necessary in animal

  15. Models of the Strongly Lensed Quasar DES J0408-5354

    Energy Technology Data Exchange (ETDEWEB)

    Agnello, A.; et al.

    2017-02-01

    We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch $grizY$ images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A, B, D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a 'grey' dimming of $\approx 0.8$ mag. In order to understand the lens configuration, we fit different models to the relative positions of A, B, D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy ($R_{\rm E} \approx 0.2''$) in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and the grey dimming. All models predict a main deflector with Einstein radius between $1.7''$ and $2.0''$, velocity dispersion $267$-$280$ km/s, and enclosed mass $\approx 6 \times 10^{11} M_{\odot}$, even though higher resolution imaging data are needed to break residual degeneracies in the model parameters. The longest time delay (B-A) is estimated as $\approx 85$ (resp. $\approx 125$) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.

  16. Oblique S and T constraints on electroweak strongly-coupled models with a light Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Pich, A. [Departament de Física Teòrica, IFIC, Universitat de València - CSIC,Apt. Correus 22085, E-46071 València (Spain); Rosell, I. [Departament de Física Teòrica, IFIC, Universitat de València - CSIC,Apt. Correus 22085, E-46071 València (Spain); Departamento de Ciencias Físicas, Matemáticas y de la Computación,Universidad CEU Cardenal Herrera,c/ Sant Bartomeu 55, E-46115 Alfara del Patriarca, València (Spain); Sanz-Ciller, J.J. [Departamento de Física Teórica, Instituto de Física Teórica,Universidad Autónoma de Madrid - CSIC,c/ Nicolás Cabrera 13-15, E-28049 Cantoblanco, Madrid (Spain)

    2014-01-28

    Using a general effective Lagrangian implementing the chiral symmetry breaking SU(2)_L ⊗ SU(2)_R → SU(2)_{L+R}, we present a one-loop calculation of the oblique S and T parameters within electroweak strongly-coupled models with a light scalar. Imposing a proper ultraviolet behaviour, we determine S and T at next-to-leading order in terms of a few resonance parameters. The constraints from the global fit to electroweak precision data force the massive vector and axial-vector states to be heavy, with masses above the TeV scale, and suggest that the W^+W^- and ZZ couplings of the Higgs-like scalar should be close to the Standard Model value. Our findings are generic, since they only rely on soft requirements on the short-distance properties of the underlying strongly-coupled theory, which are widely satisfied in more specific scenarios.

  17. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

    Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The parameters most commonly used in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide the information needed for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
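
    The record does not give the regression form; a common functional form for such attenuation relations (assumed here for illustration, not quoted from the study) is

```latex
\ln Y = c_1 + c_2 M + c_3 \ln\!\sqrt{R^2 + h^2} + c_4 R + c_5 S + \varepsilon,
```

    where Y is PGA or SA, M the magnitude, R the source-to-site distance, h a depth term, S a site-class indicator, and ε the regression residual whose standard deviation quantifies the model's aleatory variability.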

  18. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading-edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure-side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and at developing correlations between the noise and the flow parameters that control the noise generation processes. The far-field noise is modeled using each of the four noise components' respective spectral functions, far-field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out using small-scale experimental data, and two applications are discussed, one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise with respect to design parameters, while the latter reveals its importance in relation to other airframe noise components.

  19. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products, based on structure, was developed. This effort required the collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
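
    As a minimal sketch of the kind of classifier described above (not the EPA's actual pipeline; the file names and column labels are hypothetical placeholders for a descriptor matrix X and functional-role labels y):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: rows are chemicals, columns are physicochemical
# and structural descriptors; labels are harmonized functional roles.
X = pd.read_csv("descriptors.csv", index_col="chemical_id")
y = pd.read_csv("functional_roles.csv", index_col="chemical_id")["role"]

clf = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validated accuracy, mirroring the "cross-validated manner" above.
scores = cross_val_score(clf, X, y.loc[X.index], cv=5)
print(scores.mean())

# Fit on all labeled data, then predict roles for unlabeled chemicals.
clf.fit(X, y.loc[X.index])
```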

  20. Predicting FLDs Using a Multiscale Modeling Scheme

    Science.gov (United States)

    Wu, Z.; Loy, C.; Wang, E.; Hegadekatte, V.

    2017-09-01

    The measurement of a single forming limit diagram (FLD) requires significant resources and is time-consuming. We have developed a multiscale modeling scheme to predict FLDs using a combination of limited laboratory testing, crystal plasticity (VPSC) modeling, and dual sequential-stage finite element (ABAQUS/Explicit) modeling with the Marciniak-Kuczynski (M-K) criterion to determine the limit strain. We have established a means to work around existing limitations in ABAQUS/Explicit by using an anisotropic yield locus (e.g., BBC2008) in combination with the M-K criterion. We further apply a VPSC model to reduce the number of laboratory tests required to characterize the anisotropic yield locus. In the present work, we show that the predicted FLD is in excellent agreement with the measured FLD for AA5182 in the O temper. Instead of the 13 different tests used for a traditional FLD determination within Novelis, our technique uses just four measurements: tensile properties in three orientations, plane strain tension, biaxial bulge, and the sheet crystallographic texture. The turnaround time is consequently far less than for the traditional laboratory measurement of the FLD.

  1. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was carried out in La Paz, Baja California Sur, Mexico. A complete randomized block design was used. Simple and multivariate analyses of variance were carried out, using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P ≤ 0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥ 0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant, and pod length showed significant correlations (P ≤ 0.05). In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models showed highly dependable determination coefficients.
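
    A minimal sketch of fitting such a yield-prediction regression (the data file and variable names are hypothetical, chosen to mirror the predictors named in the abstract):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical plot-level trial data with the abstract's predictors.
df = pd.read_csv("cowpea_trials.csv")
X = sm.add_constant(df[["pods_per_plant", "pods_per_cluster",
                        "clusters_per_plant", "pod_length"]])
model = sm.OLS(df["grain_yield"], X).fit()
print(model.summary())   # coefficients, R^2, significance tests
```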

  2. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  3. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases, and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests containing patient information, perform prediction against the deployed predictive model, and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast, with a total response time of one second per patient prediction.
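
    A minimal sketch of the deployment pattern described, scoring one patient through a REST call. The endpoint URL and resource layout here are hypothetical illustrations, not the paper's actual service:

```python
import requests

# Hypothetical FHIR-style scoring endpoint exposed by the deployed model.
SCORING_URL = "https://example.org/fhir/RiskAssessment/$score"

# Minimal FHIR Observation carrying one lab value for the patient.
observation = {
    "resourceType": "Observation",
    "subject": {"reference": "Patient/123"},
    "code": {"coding": [{"system": "http://loinc.org", "code": "2345-7"}]},
    "valueQuantity": {"value": 6.3, "unit": "mmol/L"},
}

resp = requests.post(SCORING_URL, json=observation,
                     headers={"Content-Type": "application/fhir+json"})
print(resp.json())   # e.g. a RiskAssessment-like resource with the score
```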

  4. A molecular prognostic model predicts esophageal squamous cell carcinoma prognosis.

    Directory of Open Access Journals (Sweden)

    Hui-Hui Cao

    Esophageal squamous cell carcinoma (ESCC) has among the highest mortality rates in China. The 5-year survival rate of ESCC remains dismal despite improvements in treatments such as surgical resection and adjuvant chemoradiation, and current clinical staging approaches are limited in their ability to effectively stratify patients for treatment options. The aim of the present study, therefore, was to develop an immunohistochemistry-based prognostic model to improve clinical risk assessment for patients with ESCC. We developed a molecular prognostic model based on the combined expression of epidermal growth factor receptor (EGFR), phosphorylated Specificity protein 1 (p-Sp1), and Fascin proteins. The presence of this prognostic model and the associated clinical outcomes were analyzed for 130 formalin-fixed, paraffin-embedded esophageal curative resection specimens (generation dataset) and validated using an independent cohort of 185 specimens (validation dataset). The expression of these three genes at the protein level was used to build a molecular prognostic model that was highly predictive of ESCC survival in both the generation and validation datasets (P = 0.001). Regression analysis showed that this molecular prognostic model was strongly and independently predictive of overall survival (hazard ratio = 2.358 [95% CI, 1.391-3.996], P = 0.001 in the generation dataset; hazard ratio = 1.990 [95% CI, 1.256-3.154], P = 0.003 in the validation dataset). Furthermore, the predictive ability of these three biomarkers in combination was more robust than that of each individual biomarker. This technically simple immunohistochemistry-based molecular model accurately predicts ESCC patient survival and thus could serve as a complement to current clinical risk stratification approaches.

  5. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. Almost all existing station blackout (SBO) analyses assume that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand extended SBOs in BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  6. Interpreting the Strongly Lensed Supernova iPTF16geu: Time Delay Predictions, Microlensing, and Lensing Rates

    Energy Technology Data Exchange (ETDEWEB)

    More, Anupreeta; Oguri, Masamune; More, Surhud [Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), University of Tokyo, Chiba 277-8583 (Japan); Suyu, Sherry H. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany); Lee, Chien-Hsiu, E-mail: anupreeta.more@ipmu.jp [Subaru Telescope, National Astronomical Observatory of Japan, 650 North Aohoku Place, Hilo, HI 96720 (United States)

    2017-02-01

    We present predictions for the time delays between the multiple images of the gravitationally lensed supernova iPTF16geu, which was recently discovered in the intermediate Palomar Transient Factory (iPTF). As the supernova is of Type Ia, for which the intrinsic luminosity is usually well known, accurately measured time delays of the multiple images could provide tight constraints on the Hubble constant. According to our lens mass models, constrained by the Hubble Space Telescope F814W image, we expect the maximum relative time delay to be less than a day, which is consistent with the maximum of 100 hr reported by Goobar et al. but places a more stringent upper limit. Furthermore, the fluxes of most of the supernova images depart from the expected values, suggesting that they are affected by microlensing. The microlensing timescales are small enough that they may pose significant problems for measuring the time delays reliably. Our lensing rate calculation indicates that the occurrence of a lensed SN in iPTF is likely. However, the observed total magnification of iPTF16geu is larger than expected given its redshift. This may be a further indication of ongoing microlensing in this system.
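
    For context, the time delay between two images in a given lens model follows the standard relation (background material, not quoted from the record):

```latex
\Delta t_{ij} = \frac{(1+z_l)\,D_l D_s}{c\,D_{ls}}
  \left[\phi(\boldsymbol{\theta}_i) - \phi(\boldsymbol{\theta}_j)\right],
\qquad
\phi(\boldsymbol{\theta}) = \frac{(\boldsymbol{\theta}-\boldsymbol{\beta})^2}{2} - \psi(\boldsymbol{\theta}),
```

    where z_l is the lens redshift, the D's are angular diameter distances, φ is the Fermat potential, and ψ the lensing potential; the distance combination is what ties measured delays to the Hubble constant.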

  7. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters, since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) is characterized by zonally as well as latitudinally propagating fluctuations at frequencies below 0.5 day^-1. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  8. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when the material passes through a die radius or drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  9. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when the material passes through a die radius or drawbead during the sheet metal forming process. This model accounts for the material's anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  10. A predictive model for the tokamak density limit

    International Nuclear Information System (INIS)

    Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; Gates, D. A.; Swerdlow, J.; White, R. B.

    2016-01-01

    We reproduce the Greenwald density limit in all tokamak experiments by using a phenomenologically correct model with parameters in the range of experiments. A simple model of equilibrium evolution and local power balance inside the island has been implemented to calculate the radiation-driven thermo-resistive tearing mode growth and to explain the density limit. Strong destabilization of the tearing mode, due to an imbalance between local Ohmic heating and radiative cooling in the island, predicts the density limit within a few percent. Furthermore, we find that the density limit is a local edge limit that is only weakly dependent on impurity densities. Our results are robust to substantial variation in the model parameters within the range of experiments.

  11. RELICS: Strong Lens Models for Five Galaxy Clusters from the Reionization Lensing Cluster Survey

    Science.gov (United States)

    Cerny, Catherine; Sharon, Keren; Andrade-Santos, Felipe; Avila, Roberto J.; Bradač, Maruša; Bradley, Larry D.; Carrasco, Daniela; Coe, Dan; Czakon, Nicole G.; Dawson, William A.; Frye, Brenda L.; Hoag, Austin; Huang, Kuang-Han; Johnson, Traci L.; Jones, Christine; Lam, Daniel; Lovisari, Lorenzo; Mainali, Ramesh; Oesch, Pascal A.; Ogaz, Sara; Past, Matthew; Paterno-Mahler, Rachel; Peterson, Avery; Riess, Adam G.; Rodney, Steven A.; Ryan, Russell E.; Salmon, Brett; Sendra-Server, Irene; Stark, Daniel P.; Strolger, Louis-Gregory; Trenti, Michele; Umetsu, Keiichi; Vulcani, Benedetta; Zitrin, Adi

    2018-06-01

    Strong gravitational lensing by galaxy clusters magnifies background galaxies, enhancing our ability to discover statistically significant samples of galaxies at z > 6, in order to constrain the high-redshift galaxy luminosity functions. Here, we present the first five lens models out of the Reionization Lensing Cluster Survey (RELICS) Hubble Treasury Program, based on new HST WFC3/IR and ACS imaging of the clusters RXC J0142.9+4438, Abell 2537, Abell 2163, RXC J2211.7–0349, and ACT-CLJ0102–49151. The derived lensing magnification is essential for estimating the intrinsic properties of high-redshift galaxy candidates, and properly accounting for the survey volume. We report on new spectroscopic redshifts of multiply imaged lensed galaxies behind these clusters, which are used as constraints, and detail our strategy to reduce systematic uncertainties due to lack of spectroscopic information. In addition, we quantify the uncertainty on the lensing magnification due to statistical and systematic errors related to the lens modeling process, and find that in all but one cluster, the magnification is constrained to better than 20% in at least 80% of the field of view, including statistical and systematic uncertainties. The five clusters presented in this paper span the range of masses and redshifts of the clusters in the RELICS program. We find that they exhibit similar strong lensing efficiencies to the clusters targeted by the Hubble Frontier Fields within the WFC3/IR field of view. Outputs of the lens models are made available to the community through the Mikulski Archive for Space Telescopes.

  12. Modeling of strongly heat-driven flow in partially saturated fractured porous media

    International Nuclear Information System (INIS)

    Pruess, K.; Tsang, Y.W.; Wang, J.S.Y.

    1985-01-01

    The authors have performed modeling studies on the simultaneous transport of heat, liquid water, vapor, and air in partially saturated fractured porous media, with particular emphasis on strongly heat-driven flow. The presence of fractures makes the transport problem very complex, both in terms of flow geometry and physics. The numerical simulator used for their flow calculations takes into account most of the physical effects which are important in multi-phase fluid and heat flow. It has provisions to handle the extreme non-linearities which arise in phase transitions, component disappearances, and capillary discontinuities at fracture faces. They model a region around an infinite linear string of nuclear waste canisters, taking into account both the discrete fractures and the porous matrix. From an analysis of the results obtained with explicit fractures, they develop equivalent continuum models which can reproduce the temperature, saturation, and pressure variation, and gas and liquid flow rates of the discrete fracture-porous matrix calculations. The equivalent continuum approach makes use of a generalized relative permeability concept to take into account the fracture effects. This results in a substantial simplification of the flow problem which makes larger scale modeling of complicated unsaturated fractured porous systems feasible. Potential applications for regional scale simulations and limitations of the continuum approach are discussed. 27 references, 13 figures, 2 tables

  13. Modeling of strongly heat-driven flow in partially saturated fractured porous media

    International Nuclear Information System (INIS)

    Pruess, K.; Tsang, Y.W.; Wang, J.S.Y.

    1984-10-01

    We have performed modeling studies on the simultaneous transport of heat, liquid water, vapor, and air in partially saturated fractured porous media, with particular emphasis on strongly heat-driven flow. The presence of fractures makes the transport problem very complex, both in terms of flow geometry and physics. The numerical simulator used for our flow calculations takes into account most of the physical effects which are important in multi-phase fluid and heat flow. It has provisions to handle the extreme non-linearities which arise in phase transitions, component disappearances, and capillary discontinuities at fracture faces. We model a region around an infinite linear string of nuclear waste canisters, taking into account both the discrete fractures and the porous matrix. From an analysis of the results obtained with explicit fractures, we develop equivalent continuum models which can reproduce the temperature, saturation, and pressure variation, and gas and liquid flow rates of the discrete fracture-porous matrix calculations. The equivalent continuum approach makes use of a generalized relative permeability concept to account for fracture effects. This results in a substantial simplification of the flow problem which makes larger scale modeling of complicated unsaturated fractured porous systems feasible. Potential applications for regional scale simulations and limitations of the continuum approach are discussed. 27 references, 13 figures, 2 tables

  14. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  15. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating source terms, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses, and the risk to human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, in practice obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, the exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches of different complexity, which can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, reinforcing the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)

  16. Models of the strongly lensed quasar DES J0408-5354

    Science.gov (United States)

    Agnello, A.; Lin, H.; Buckley-Geer, L.; Treu, T.; Bonvin, V.; Courbin, F.; Lemon, C.; Morishita, T.; Amara, A.; Auger, M. W.; Birrer, S.; Chan, J.; Collett, T.; More, A.; Fassnacht, C. D.; Frieman, J.; Marshall, P. J.; McMahon, R. G.; Meylan, G.; Suyu, S. H.; Castander, F.; Finley, D.; Howell, A.; Kochanek, C.; Makler, M.; Martini, P.; Morgan, N.; Nord, B.; Ostrovski, F.; Schechter, P.; Tucker, D.; Wechsler, R.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Dietrich, J. P.; Eifler, T. F.; Flaugher, B.; Fosalba, P.; García-Bellido, J.; Gaztanaga, E.; Gill, M. S.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Li, T. S.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Ogando, R. L. C.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Smith, R. C.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.

    2017-12-01

    We present detailed modelling of the recently discovered, quadruply lensed quasar J0408-5354, with the aim of interpreting its remarkable configuration: besides three quasar images (A, B, D) around the main deflector (G1), a fourth image (C) is significantly reddened and dimmed by a perturber (G2) which is not detected in the Dark Energy Survey imaging data. From lens models incorporating (dust-corrected) flux ratios, we find a perturber Einstein radius $0.04'' \lesssim R_{\rm E,G2} \lesssim 0.2''$ and enclosed mass $M_p(R_{\rm E,G2}) \lesssim 1.0 \times 10^{10} M_{\odot}$. The main deflector has stellar mass $\log_{10}(M_{\star}/M_{\odot}) = 11.49^{+0.46}_{-0.32}$, a projected mass $M_p(R_{\rm E,G1}) \approx 6 \times 10^{11} M_{\odot}$ within its Einstein radius $R_{\rm E,G1} = (1.85 \pm 0.15)''$, and a predicted velocity dispersion of 267-280 km s^{-1}. Follow-up images from a companion monitoring campaign show additional components, including a candidate second source at a redshift between the quasar and G1. Models with free perturbers, and with dust-corrected and delay-corrected flux ratios, are also explored. The predicted time delays ($\Delta t_{AB} = (135.0 \pm 12.6)$ d, $\Delta t_{BD} = (21.0 \pm 3.5)$ d) roughly agree with those measured, but better imaging is required for proper modelling and comparison. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.

  17. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time). Five technical and economic aspects are taken into account in scheduling tamping: (1) the degradation over time of the standard deviation of the longitudinal level; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality
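
    A minimal sketch of the kind of MIP formulation described (PuLP syntax; the degradation and cost numbers are made up, and the real model's constraints are considerably richer):

```python
import pulp

sections, periods = range(3), range(8)        # track sections, quarters
cost = 1.0                                    # cost per tamping action
sigma0, rate, limit = 0.8, 0.15, 1.5          # mm: initial, growth, threshold

m = pulp.LpProblem("tamping_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("tamp", (sections, periods), cat="Binary")

# Objective: minimize total number of (equally priced) tamping actions.
m += pulp.lpSum(cost * x[s][t] for s in sections for t in periods)

# Toy degradation constraint: each section must be tamped before its
# longitudinal-level standard deviation would exceed the threshold.
for s in sections:
    deadline = int((limit - sigma0) / rate)
    m += pulp.lpSum(x[s][t] for t in periods if t <= deadline) >= 1

m.solve(pulp.PULP_CBC_CMD(msg=False))
print([[int(x[s][t].value()) for t in periods] for s in sections])
```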

  18. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based both on the authors' experience and on their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  19. Effective modelling for predictive analytics in data science ...

    African Journals Online (AJOL)

    Effective modelling for predictive analytics in data science. ... the near-absence of empirical or factual predictive analytics in the mainstream research going on ... Keywords: Predictive Analytics, Big Data, Business Intelligence, Project Planning.

  20. Advanced Practice Nursing Committee on Process Improvement in Trauma: An Innovative Application of the Strong Model.

    Science.gov (United States)

    West, Sarah Katherine

    2016-01-01

    This article aims to summarize the successes and future implications for a nurse practitioner-driven committee on process improvement in trauma. The trauma nurse practitioner is uniquely positioned to recognize the need for clinical process improvement and enact change within the clinical setting. Application of the Strong Model of Advanced Practice proves to actively engage the trauma nurse practitioner in process improvement initiatives. Through enhancing nurse practitioner professional engagement, the committee aims to improve health care delivery to the traumatically injured patient. A retrospective review of the committee's first year reveals trauma nurse practitioner success in the domains of direct comprehensive care, support of systems, education, and leadership. The need for increased trauma nurse practitioner involvement has been identified for the domains of research and publication.

  1. Fluctuation instability of the Dirac Sea in quark models of strong interactions

    Science.gov (United States)

    Zinovjev, G. M.; Molodtsov, S. V.

    2016-03-01

    A number of exactly integrable (quark) models of quantum field theory that feature an infinite correlation length are considered. An instability of the standard vacuum quark ensemble, a Dirac sea (in spacetimes of dimension higher than three), is highlighted. It is due to a strong ground-state degeneracy, which, in turn, stems from a special character of the energy distribution. In the case where the momentum-cutoff parameter tends to infinity, this distribution becomes infinitely narrow and leads to large (unlimited) fluctuations. A comparison of the results for various vacuum ensembles, including a Dirac sea, a neutral ensemble, a color superconductor, and a Bardeen-Cooper-Schrieffer (BCS) state, was performed. In the presence of color quark interaction, a BCS state is unambiguously chosen as the ground state of the quark ensemble.

  2. Fluctuation instability of the Dirac Sea in quark models of strong interactions

    International Nuclear Information System (INIS)

    Zinovjev, G. M.; Molodtsov, S. V.

    2016-01-01

    A number of exactly integrable (quark) models of quantum field theory that feature an infinite correlation length are considered. An instability of the standard vacuum quark ensemble, a Dirac sea (in spacetimes of dimension higher than three), is highlighted. It is due to a strong ground-state degeneracy, which, in turn, stems from a special character of the energy distribution. In the case where the momentum-cutoff parameter tends to infinity, this distribution becomes infinitely narrow and leads to large (unlimited) fluctuations. A comparison of the results for various vacuum ensembles, including a Dirac sea, a neutral ensemble, a color superconductor, and a Bardeen–Cooper–Schrieffer (BCS) state, was performed. In the presence of color quark interaction, a BCS state is unambiguously chosen as the ground state of the quark ensemble.

  3. Fluctuation instability of the Dirac Sea in quark models of strong interactions

    Energy Technology Data Exchange (ETDEWEB)

    Zinovjev, G. M., E-mail: Gennady.Zinovjev@cern.ch [National Academy of Sciences of Ukraine, Bogolyubov Institute for Theoretical Physics (Ukraine); Molodtsov, S. V. [Joint Institute for Nuclear Research (Russian Federation)

    2016-03-15

    A number of exactly integrable (quark) models of quantum field theory that feature an infinite correlation length are considered. An instability of the standard vacuum quark ensemble, a Dirac sea (in spacetimes of dimension higher than three), is highlighted. It is due to a strong ground-state degeneracy, which, in turn, stems from a special character of the energy distribution. In the case where the momentum-cutoff parameter tends to infinity, this distribution becomes infinitely narrow and leads to large (unlimited) fluctuations. A comparison of the results for various vacuum ensembles, including a Dirac sea, a neutral ensemble, a color superconductor, and a Bardeen–Cooper–Schrieffer (BCS) state, was performed. In the presence of color quark interaction, a BCS state is unambiguously chosen as the ground state of the quark ensemble.

  4. Coherent beam combination of fiber lasers with a strongly confined waveguide: numerical model.

    Science.gov (United States)

    Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin

    2012-08-20

    Self-imaging properties of fiber lasers in a strongly confined waveguide (SCW) and their application in coherent beam combination (CBC) are studied theoretically. Analytical formulas are derived for the positions, amplitudes, and phases of the N images at the end of an SCW, which is important for the quantitative analysis of waveguide CBC. The formulas are verified against experimental results and against numerical simulation with a finite-difference beam propagation method (BPM). The error of our analytical formulas is less than 6%, which can be reduced to less than 1.5% when the Goos-Hänchen penetration depth is considered. Based on the theoretical model and the BPM, we studied the combination of two laser beams in an SCW. The effects of the waveguide refractive index and the Gaussian beam waist are studied. We also simulated the CBC of nine and 16 fiber lasers, and a single beam without side lobes was achieved.

  5. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between about one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudorange and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECU, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECU (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5 x 2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An over-determination of the IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
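
    The frequency dependence mentioned above follows, to first order, the standard dispersive relation (background material, not quoted from the record):

```latex
\Delta\tau_{\mathrm{group}} = +\,\frac{40.3}{c\,f^{2}}\,\mathrm{STEC},
\qquad
\Delta\tau_{\mathrm{phase}} = -\,\frac{40.3}{c\,f^{2}}\,\mathrm{STEC},
```

    where f is the carrier frequency in Hz and STEC the slant total electron content in electrons/m²: the code delay and the phase advance have equal magnitude and opposite sign, which is why dual-frequency GPS data isolate the TEC.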

  6. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize the biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference while making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates, but limited attention has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when they are used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  7. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on only two variables: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model, several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentration were made inside two rooms on the second floor of a building block. One of the rooms had a single-glazed window, whereas the other had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. This behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
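
    The two-variable model mentioned first is, presumably, the standard steady-state mass balance (a textbook form, assumed here rather than quoted from the record):

```latex
C_{\mathrm{Rn}} = \frac{S}{V\,(\lambda_v + \lambda_{\mathrm{Rn}})}
  \;\approx\; \frac{S}{V\,\lambda_v},
```

    where S is the Rn entry rate (Bq/h), V the room volume, λ_v the air exchange rate (h⁻¹), and λ_Rn ≈ 0.0076 h⁻¹ the radon decay constant, which is negligible next to typical ventilation rates; lower λ_v (e.g. tighter double glazing) then directly raises the predicted concentration.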

  8. Towards predictive models for transitionally rough surfaces

    Science.gov (United States)

    Abderrahaman-Elena, Nabil; Garcia-Mayoral, Ricardo

    2017-11-01

    We analyze and model the previously presented decomposition for flow variables in DNS of turbulence over transitionally rough surfaces. The flow is decomposed into two contributions: one produced by the overlying turbulence, which has no footprint of the surface texture, and one induced by the roughness, which is essentially the time-averaged flow around the surface obstacles, but modulated in amplitude by the first component. The roughness-induced component closely resembles the laminar steady flow around the roughness elements at the same non-dimensional roughness size. For small - yet transitionally rough - textures, the roughness-free component is essentially the same as over a smooth wall. Based on these findings, we propose predictive models for the onset of the transitionally rough regime. Project supported by the Engineering and Physical Sciences Research Council (EPSRC).

  9. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

    Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by the development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. (author)

  10. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

    The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R²). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R² value, which is equal to 0.9555. Finally, the comparison of MAE for the three models shows that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
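
    The four agreement measures used in the comparison are standard; a compact sketch of their computation for aligned arrays of measured and predicted production:

        import numpy as np

        # MSE, MAE, correlation R, and coefficient of determination R^2.
        def evaluate(y_true, y_pred):
            y_true = np.asarray(y_true, float)
            y_pred = np.asarray(y_pred, float)
            mse = np.mean((y_true - y_pred) ** 2)
            mae = np.mean(np.abs(y_true - y_pred))
            r = np.corrcoef(y_true, y_pred)[0, 1]
            ss_res = np.sum((y_true - y_pred) ** 2)
            ss_tot = np.sum((y_true - y_true.mean()) ** 2)
            r2 = 1.0 - ss_res / ss_tot
            return {"MSE": mse, "MAE": mae, "R": r, "R2": r2}

        print(evaluate([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))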

  11. Prediction of pipeline corrosion rate based on grey Markov models

    International Nuclear Information System (INIS)

    Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin

    2009-01-01

    Based on a model combining a grey model with a Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved, yielding an optimized unbiased grey model. This new model was used to predict the tendency of the corrosion rate, and the Markov model was used to predict the residual errors. In order to improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model with the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
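
    For orientation, the classical GM(1,1) grey model underlying such combined schemes can be sketched as follows (the paper's optimized unbiased variant and the Markov residual correction are not reproduced; data are hypothetical):

        import numpy as np

        # Minimal GM(1,1) grey-model forecast.
        def gm11_forecast(x0, n_ahead=1):
            x0 = np.asarray(x0, float)
            x1 = np.cumsum(x0)                          # accumulated series
            z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat)[-n_ahead:]           # back to original scale

        corrosion_rate = [0.32, 0.35, 0.33, 0.38, 0.40]  # hypothetical data
        print(gm11_forecast(corrosion_rate, n_ahead=2))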

  12. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model, and an aircraft motion model. The final analysis was compared with d...

  13. Methods for prediction of strong earthquake ground motion. Final technical report, October 1, 1976–September 30, 1977

    International Nuclear Information System (INIS)

    Trifunac, M.D.

    1977-09-01

    The purpose of this report is to summarize the results of the work on characterization of strong earthquake ground motion. The objective of this effort has been to initiate presentation of a simple yet detailed methodology for characterization of strong earthquake ground motion for use in licensing and evaluation of operating nuclear power plants. This report emphasizes the simplicity of the methodology by presenting only the end results in a format that may be useful for the development of site-specific criteria in seismic risk analysis, for work on the development of modern standards and regulatory guides, and for re-evaluation of existing power plant sites.

  14. Data driven propulsion system weight prediction model

    Science.gov (United States)

    Gerth, Richard J.

    1994-10-01

    The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust level, a model is required that allows discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level at which to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
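
    A hedged sketch of this general approach: a power-law regression of weight on thrust and one component performance parameter, fitted to existing engines and then applied to a paper engine at the same thrust class (all engine numbers below are hypothetical, not from the report):

        import numpy as np

        thrust = np.array([400, 600, 900, 1400, 2100])        # kN, hypothetical
        chamber_pressure = np.array([70, 90, 110, 180, 210])  # bar, hypothetical
        weight = np.array([3.2, 4.4, 6.1, 8.9, 12.5])         # t, hypothetical

        # Log-linear (power-law) model: log W = c0 + c1 log F + c2 log Pc
        X = np.column_stack([np.ones(len(thrust)),
                             np.log(thrust),
                             np.log(chamber_pressure)])
        coef, *_ = np.linalg.lstsq(X, np.log(weight), rcond=None)

        # Predicted weight (tonnes) of a hypothetical "paper" engine.
        paper_engine = np.array([1.0, np.log(1100.0), np.log(150.0)])
        print(np.exp(paper_engine @ coef))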

  15. Predictive modeling of emergency cesarean delivery.

    Directory of Open Access Journals (Sweden)

    Carlos Campillo-Artero

    To increase discriminatory accuracy (DA) for emergency cesarean sections (ECSs), we prospectively collected data on and studied all 6,157 births occurring in 2014 at four public hospitals located in three different autonomous communities of Spain. To identify risk factors (RFs) for ECS, we used likelihood ratios and logistic regression, fitted a classification tree (CTREE), and analyzed a random forest model (RFM). We used the areas under the receiver-operating-characteristic (ROC) curves (AUCs) to assess their DA. The magnitude of the positive likelihood ratio (LR+) for all putative individual RFs and of the ORs in the logistic regression models was low to moderate. Except for parity, all putative RFs were positively associated with ECS, including hospital fixed effects and night-shift delivery. The DA of all logistic models ranged from 0.74 to 0.81. The most relevant RFs (pH, induction, and previous C-section) in the CTREEs showed the highest ORs in the logistic models. The DA of the RFM and its most relevant interaction terms was even higher (AUC = 0.94; 95% CI: 0.93-0.95). Putative fetal, maternal, and contextual RFs alone fail to achieve reasonable DA for ECS. It is the combination of these RFs and the interactions between them at each hospital that make it possible to improve the DA for the type of delivery and to tailor interventions through prediction to improve the appropriateness of ECS indications.
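
    The logistic-versus-random-forest AUC comparison can be sketched with scikit-learn on synthetic stand-in data (the study's real features and dataset are not reproduced here); the interaction term in the simulated outcome is what the forest exploits:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(6157, 8))                       # stand-in features
        y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2]               # interaction term
             + rng.normal(size=6157) > 1).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        for model in (LogisticRegression(max_iter=1000),
                      RandomForestClassifier(random_state=0)):
            model.fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(type(model).__name__, round(auc, 3))  # RF captures the interaction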

  16. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR-based robust MPC as well as to estimate the maximum performance improvements by robust MPC.
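
    A minimal sketch of the FIR prediction such a controller is built on (hypothetical coefficients; the regularized l2 optimization with input and input-rate constraints would be solved on top of this with a QP solver):

        import numpy as np

        # Predicted outputs are a convolution of impulse-response coefficients h
        # with planned inputs, plus a constant output-disturbance estimate d_hat.
        def fir_predict(h, u_past, u_plan, d_hat=0.0):
            u = np.concatenate([u_past, u_plan])   # input history, then plan
            n = len(h)
            y = []
            for k in range(len(u_plan)):
                pos = len(u_past) + k
                # y(k) = sum_i h_i * u(k - i) + d_hat
                window = u[pos - n + 1 : pos + 1][::-1]
                y.append(h @ window + d_hat)
            return np.array(y)

        h = np.array([0.0, 0.4, 0.3, 0.2, 0.1])    # hypothetical step-test FIR
        print(fir_predict(h, u_past=np.ones(5), u_plan=np.array([1.0, 0.5, 0.5])))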

  17. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Ines, Eduardo [Institute de Mécanique Céleste et des Calcul des Éphémérides—Observatoire de Paris, 77 Avenue Denfert Rochereau, F-75014 Paris (France); Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, 91109 Pasadena, CA (United States)

    2017-04-01

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet’s orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system’s parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer’s solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of applicability are given.
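
    For reference, the first-order Heppenheimer estimates that such corrections multiply are usually quoted in the secular literature in the following form (reproduced from that literature, not from this paper; a and n are the planet's semimajor axis and mean motion, and m_b, a_b, e_b are the perturber's mass, semimajor axis, and eccentricity around a star of mass M_*):

        g \simeq \frac{3}{4}\, n \left(\frac{a}{a_b}\right)^{3} \frac{m_b}{M_*}
                 \left(1 - e_b^{2}\right)^{-3/2},
        \qquad
        e_F \simeq \frac{5}{4}\, \frac{a}{a_b}\, \frac{e_b}{1 - e_b^{2}}

    The article's corrective factors are polynomial multipliers applied to first-order estimates of this kind.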

  18. Damping at positive frequencies in the limit J_⊥ → 0 in the strongly correlated Hubbard model

    Science.gov (United States)

    Mohan, Minette M.

    1992-08-01

    I show damping in the two-dimensional strongly correlated Hubbard model within the retraceable-path approximation, using an expansion around dominant poles for the self-energy. The damping half-width ~J_z^{2/3} occurs only at positive frequencies ω > (5/2)J_z, the excitation energy of a pure ``string'' state of length one, where J_z is the Ising part of the superexchange interaction, and occurs even in the absence of spin-flip terms ~J_⊥, in contrast to other theoretical treatments. The dispersion relation for both damped and undamped peaks near the upper band edge is found and is shown to have lost the simple J_z^{2/3} dependence characteristic of the peaks near the lower band edge. The position of the first three peaks near the upper band edge agrees well with numerical simulations on the t-J model. The weight of the undamped peaks near the upper band edge is ~J_z^{4/3}, contrasting with J_z for the weight near the lower band edge.

  19. Nonequilibrium phase transitions in finite arrays of globally coupled Stratonovich models: strong coupling limit

    International Nuclear Information System (INIS)

    Senf, Fabian; Altrock, Philipp M; Behn, Ulrich

    2009-01-01

    A finite array of N globally coupled Stratonovich models exhibits a continuous nonequilibrium phase transition. In the limit of strong coupling, there is a clear separation of timescales of centre of mass and relative coordinates. The latter relax very fast to zero and the array behaves as a single entity described by the centre of mass coordinate. We compute analytically the stationary probability distribution and the moments of the centre of mass coordinate. The scaling behaviour of the moments near the critical value of the control parameter a_c(N) is determined. We identify a crossover from linear to square root scaling with increasing distance from a_c. The crossover point approaches a_c in the limit N→∞, which reproduces previous results for infinite arrays. Our results are obtained in both the Fokker-Planck and the Langevin approach and are corroborated by numerical simulations. For a general class of models we show that the transition manifold in the parameter space depends on N and is determined by the scaling behaviour near a fixed point of the stochastic flow.

  20. Pairing and superconductivity from weak to strong coupling in the attractive Hubbard model

    International Nuclear Information System (INIS)

    Toschi, A; Barone, P; Capone, M; Castellani, C

    2005-01-01

    The finite-temperature phase diagram of the attractive Hubbard model is studied by means of the dynamical mean-field theory. We first consider the normal phase of the model by explicitly frustrating the superconducting ordering. In this case, we obtain a first-order pairing transition between a metallic phase and a paired phase formed by strongly coupled incoherent pairs. The transition line ends in a finite-temperature critical point, but a crossover between two qualitatively different solutions still occurs at higher temperature. Comparing the superconducting- and the normal-phase solutions, we find that the superconducting instability always occurs before the pairing transition in the normal phase, i.e. T_c > T_pairing. Nevertheless, the high-temperature phase diagram at T > T_c is still characterized by a crossover from a metallic phase to a preformed-pair phase. We characterize this crossover by computing different observables that can be used to identify the pseudogap region, like the spin susceptibility, the specific heat and the single-particle spectral function.

  1. Two strongly correlated electron systems: the Kondo model in the strong coupling limit and a 2-D model of electrons close to an electronic topological transition

    International Nuclear Information System (INIS)

    Bouis, F.

    1999-01-01

    Two strongly correlated electron systems are considered in this work: Kondo insulators and high-T_c cuprates. Experiments and theory suggest, on one hand, that the Kondo screening occurs on a rather short length scale and, on the other hand, that the Kondo coupling is renormalized to infinity in the low-energy limit. The strong coupling limit is then the logical approach, although the real coupling is moderate. A systematic development around this limit is performed in the first part. The band structure of these materials is reproduced within this scheme. Magnetic fluctuations are also studied. The antiferromagnetic transition is examined in the case where fermionic excitations are shifted to high energy. In the second part, the Popov and Fedotov representation of spins is used to formulate the Kondo and the antiferromagnetic Heisenberg models in terms of a non-polynomial action of boson fields. In the third part, the properties of high-T_c cuprates are explained by a change of topology of the Fermi surface. This phenomenon would happen near the point of optimal doping and zero temperature. It results in the appearance of a density-wave phase in the underdoped regime. The possibility that this phase has a non-conventional symmetry is considered. The phase diagram that describes the interplay and coexistence of density-wave order and superconductivity is established in the mean-field approximation. The similarities with the experimental observations are numerous, in particular those concerning the pseudo-gap and the behavior of the resistivity near optimal doping. (author)

  2. Strong coupling electroweak symmetry breaking

    International Nuclear Information System (INIS)

    Barklow, T.L.; Burdman, G.; Chivukula, R.S.

    1997-04-01

    The authors review models of electroweak symmetry breaking due to new strong interactions at the TeV energy scale and discuss the prospects for their experimental tests. They emphasize the direct observation of the new interactions through high-energy scattering of vector bosons. They also discuss indirect probes of the new interactions and exotic particles predicted by specific theoretical models

  3. Strong coupling electroweak symmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Barklow, T.L. [Stanford Linear Accelerator Center, Menlo Park, CA (United States); Burdman, G. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Chivukula, R.S. [Boston Univ., MA (United States). Dept. of Physics

    1997-04-01

    The authors review models of electroweak symmetry breaking due to new strong interactions at the TeV energy scale and discuss the prospects for their experimental tests. They emphasize the direct observation of the new interactions through high-energy scattering of vector bosons. They also discuss indirect probes of the new interactions and exotic particles predicted by specific theoretical models.

  4. Methodology for Designing Models Predicting Success of Infertility Treatment

    OpenAIRE

    Alireza Zarinara; Mohammad Mahdi Akhondi; Hojjat Zeraati; Koorsh Kamali; Kazem Mohammad

    2016-01-01

    Abstract Background: Prediction models for infertility treatment success have been presented for over 25 years. There are scientific principles for designing and applying prediction models, and these principles also apply to predicting the success rate of infertility treatment. The purpose of this study is to provide basic principles for designing models to predict infertility treatment success. Materials and Methods: In this paper, the principles for developing predictive models are explained and...

  5. A model for the training effects in swimming demonstrates a strong relationship between parasympathetic activity, performance and index of fatigue.

    Directory of Open Access Journals (Sweden)

    Sébastien Chalencon

    Competitive swimming as a physical activity results in changes to the activity level of the autonomic nervous system (ANS). However, the precise relationship between ANS activity, fatigue and sports performance remains contentious. To address this problem and build a model to support a consistent relationship, data were gathered from national and regional swimmers during two periods of 30 consecutive weeks of training. Nocturnal ANS activity was measured weekly and quantified through wavelet transform analysis of the recorded heart rate variability. Performance was then measured through a subsequent morning 400 meters freestyle time-trial. A model was proposed in which indices of fatigue were computed using Banister's two antagonistic component model of fatigue and adaptation, applied to both the ANS activity and the performance. This demonstrated that a logarithmic relationship existed between performance and ANS activity for each subject. There was a high degree of model fit between the measured and calculated performance (R² = 0.84 ± 0.14, p < 0.01) and between the measured and calculated High Frequency (HF) power of the ANS activity (R² = 0.79 ± 0.07, p < 0.01). During the taper periods, improvements in measured performance and measured HF were strongly related. In the model, variations in performance were related to significant reductions in the level of 'Negative Influences' rather than increases in 'Positive Influences'. Furthermore, the delay needed to return to the initial performance level was highly correlated to the delay required to return to the initial HF power level (p < 0.01). The delay required to reach peak performance was highly correlated to the delay required to reach the maximal level of HF power (p = 0.02). Building the ANS/performance identity of a subject, including the time to peak HF, may help predict the maximal performance that could be obtained at a given time.
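
    Banister's two antagonistic component model referred to above has a compact standard form: performance is a baseline plus a slowly decaying "fitness" term minus a faster-decaying "fatigue" term, each summed over past training loads. A sketch with hypothetical parameters (the study fits subject-specific coefficients and applies the same structure to HF power):

        import numpy as np

        def banister(w, p0=100.0, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0):
            # w: daily training loads; tau1/tau2: fitness/fatigue time constants
            n = len(w)
            p = np.zeros(n)
            for t in range(n):
                days = t - np.arange(t)        # age of each past session, in days
                fitness = k1 * np.sum(w[:t] * np.exp(-days / tau1))
                fatigue = k2 * np.sum(w[:t] * np.exp(-days / tau2))
                p[t] = p0 + fitness - fatigue
            return p

        loads = np.r_[np.full(60, 1.0), np.full(14, 0.3)]  # training, then taper
        print(banister(loads)[-5:])  # performance rebounds as fatigue decays faster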

  6. Finite Unification: Theory, Models and Predictions

    CERN Document Server

    Heinemeyer, S; Zoupanos, G

    2011-01-01

    All-loop Finite Unified Theories (FUTs) are very interesting N=1 supersymmetric Grand Unified Theories (GUTs) realising an old field theory dream, and moreover have a remarkable predictive power due to the required reduction of couplings. The reduction of the dimensionless couplings in N=1 GUTs is achieved by searching for renormalization group invariant (RGI) relations among them holding beyond the unification scale. Finiteness results from the fact that there exist RGI relations among dimensional couplings that guarantee the vanishing of all beta-functions in certain N=1 GUTs even to all orders. Furthermore developments in the soft supersymmetry breaking sector of N=1 GUTs and FUTs lead to exact RGI relations, i.e. reduction of couplings, in this dimensionful sector of the theory, too. Based on the above theoretical framework phenomenologically consistent FUTs have been constructed. Here we review FUT models based on the SU(5) and SU(3)^3 gauge groups and their predictions. Of particular interest is the Hig...

  7. Predicting Biological Information Flow in a Model Oxygen Minimum Zone

    Science.gov (United States)

    Louca, S.; Hawley, A. K.; Katsev, S.; Beltran, M. T.; Bhatia, M. P.; Michiels, C.; Capelle, D.; Lavik, G.; Doebeli, M.; Crowe, S.; Hallam, S. J.

    2016-02-01

    Microbial activity drives marine biochemical fluxes and nutrient cycling at global scales. Geochemical measurements as well as molecular techniques such as metagenomics, metatranscriptomics and metaproteomics provide great insight into microbial activity. However, an integration of molecular and geochemical data into mechanistic biogeochemical models is still lacking. Recent work suggests that microbial metabolic pathways are, at the ecosystem level, strongly shaped by stoichiometric and energetic constraints. Hence, models rooted in fluxes of matter and energy may yield a holistic understanding of biogeochemistry. Furthermore, such pathway-centric models would allow a direct consolidation with meta'omic data. Here we present a pathway-centric biogeochemical model for the seasonal oxygen minimum zone in Saanich Inlet, a fjord off the coast of Vancouver Island. The model considers key dissimilatory nitrogen and sulfur fluxes, as well as the population dynamics of the genes that mediate them. By assuming a direct translation of biocatalyzed energy fluxes to biosynthesis rates, we make predictions about the distribution and activity of the corresponding genes. A comparison of the model to molecular measurements indicates that the model explains observed DNA, RNA, protein and cell depth profiles. This suggests that microbial activity in marine ecosystems such as oxygen minimum zones is well described by DNA abundance, which, in conjunction with geochemical constraints, determines pathway expression and process rates. Our work further demonstrates how meta'omic data can be mechanistically linked to environmental redox conditions and biogeochemical processes.

  8. Revised predictive equations for salt intrusion modelling in estuaries

    NARCIS (Netherlands)

    Gisen, J.I.A.; Savenije, H.H.G.; Nijzink, R.C.

    2015-01-01

    For one-dimensional salt intrusion models to be predictive, we need predictive equations to link model parameters to observable hydraulic and geometric variables. The one-dimensional model of Savenije (1993b) made use of predictive equations for the Van der Burgh coefficient K and the dispersion

  9. Neutrino nucleosynthesis in supernovae: Shell model predictions

    International Nuclear Information System (INIS)

    Haxton, W.C.

    1989-01-01

    Almost all of the 3 × 10^53 ergs liberated in a core collapse supernova is radiated as neutrinos by the cooling neutron star. I will argue that these neutrinos interact with nuclei in the ejected shells of the supernovae to produce new elements. It appears that this nucleosynthesis mechanism is responsible for the galactic abundances of ^7Li, ^11B, ^19F, ^138La, and ^180Ta, and contributes significantly to the abundances of about 15 other light nuclei. I discuss shell model predictions for the charged and neutral current allowed and first-forbidden responses of the parent nuclei, as well as the spallation processes that produce the new elements. 18 refs., 1 fig., 1 tab

  10. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous units. The approach is inspired by smart-grid electric power production and consumption systems, where the flexibility of a large number of power producing and/or power consuming units can be exploited in a smart-grid solution. The objective is to accommodate the load variation on the grid, arising on one hand from varying consumption, on the other hand by natural variations in power production e.g. from wind turbines. The approach presented is based on quadratic optimization and possesses the properties of low algorithmic complexity and of scalability. In particular, the proposed design methodology...

  11. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  12. Model predictive control of a wind turbine modelled in Simpack

    International Nuclear Information System (INIS)

    Jassmann, U; Matzke, D; Reiter, M; Abel, D; Berroth, J; Schelenz, R; Jacobs, G

    2014-01-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time large structures, such as the blades and the tower, get more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently been paid more attention by the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, usually multi body simulations, such as FAST, BLADED or FLEX5, are used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine

  13. Model predictive control of a wind turbine modelled in Simpack

    Science.gov (United States)

    Jassmann, U.; Berroth, J.; Matzke, D.; Schelenz, R.; Reiter, M.; Jacobs, G.; Abel, D.

    2014-06-01

    Wind turbines (WT) are steadily growing in size to increase their power production, which also causes increasing loads acting on the turbine's components. At the same time large structures, such as the blades and the tower, get more flexible. To minimize this impact, the classical control loops for keeping the power production in an optimum state are more and more extended by load alleviation strategies. These additional control loops can be unified by a multiple-input multiple-output (MIMO) controller to achieve better balancing of tuning parameters. An example of MIMO control, which has recently been paid more attention by the wind industry, is Model Predictive Control (MPC). In an MPC framework a simplified model of the WT is used to predict its controlled outputs. Based on a user-defined cost function, an online optimization calculates the optimal control sequence. Thereby MPC can intrinsically incorporate constraints, e.g. of actuators. Turbine models used for calculation within the MPC are typically simplified. For testing and verification, usually multi body simulations, such as FAST, BLADED or FLEX5, are used to model system dynamics, but they are still limited in the number of degrees of freedom (DOF). Detailed information about load distribution (e.g. inside the gearbox) cannot be provided by such models. In this paper a Model Predictive Controller is presented and tested in a co-simulation with SIMPACK, a multi body system (MBS) simulation framework used for detailed load analysis. The analyses are performed on the basis of the IME6.0 MBS WT model, described in this paper. It is based on the rotor of the NREL 5MW WT and consists of a detailed representation of the drive train. This takes into account a flexible main shaft and its main bearings with a planetary gearbox, where all components are modelled as flexible, as well as a supporting flexible main frame. The wind loads are simulated using the NREL AERODYN v13 code which has been implemented as a routine to

  14. Density functional theory and dynamical mean-field theory. A way to model strongly correlated systems

    International Nuclear Information System (INIS)

    Backes, Steffen

    2017-04-01

    -local fluctuations. It has been successfully used to study the whole range of weakly to strongly correlated lattice models, including the metal-insulator transition, since even in the relevant dimensions of d = 2 and d = 3 spatial fluctuations are often small. The extension of DMFT towards realistic systems by the use of DFT has been termed LDA+DMFT and has since allowed for a significant improvement of the understanding of strongly correlated materials. We dedicate this thesis to the LDA+DMFT method and the study of the recently discovered iron-pnictide superconductors, which are known to show effects of strong electronic correlations. Thus, in many cases these materials cannot be adequately described by a pure DFT approach alone, and they provide an ideal case for an investigation of their electronic properties within LDA+DMFT. We will first review the DFT method and point out what kind of approximations have to be made in practical calculations and what deficits they entail. Then we will give an introduction into the Green's function formalism in the real and imaginary time representation and discuss the resulting consequences like analytic continuation, to pave the way for the derivation of the DMFT equations. After that, we will discuss the combination of DFT and DMFT into the LDA+DMFT method and how to set up the effective lattice models for practical calculations. Then we will apply the LDA+DMFT method to the hole-doped iron-pnictide superconductor KFe2As2, which we find to be a rather strongly correlated material that can only be reasonably described when electronic correlations are treated on a proper level beyond the standard DFT approach. Our results show that the LDA+DMFT method is able to significantly improve the agreement of the theoretical calculation with experimental observations. Then we expand our study towards the isovalent series of KFe2As2, RbFe2As2 and CsFe2As2, which we propose to show even stronger effects of electronic correlations due

  15. Density functional theory and dynamical mean-field theory. A way to model strongly correlated systems

    Energy Technology Data Exchange (ETDEWEB)

    Backes, Steffen

    2017-04-15

    -local fluctuations. It has been successfully used to study the whole range of weakly to strongly correlated lattice models, including the metal-insulator transition, since even in the relevant dimensions of d = 2 and d = 3 spatial fluctuations are often small. The extension of DMFT towards realistic systems by the use of DFT has been termed LDA+DMFT and has since allowed for a significant improvement of the understanding of strongly correlated materials. We dedicate this thesis to the LDA+DMFT method and the study of the recently discovered iron-pnictide superconductors, which are known to show effects of strong electronic correlations. Thus, in many cases these materials cannot be adequately described by a pure DFT approach alone, and they provide an ideal case for an investigation of their electronic properties within LDA+DMFT. We will first review the DFT method and point out what kind of approximations have to be made in practical calculations and what deficits they entail. Then we will give an introduction into the Green's function formalism in the real and imaginary time representation and discuss the resulting consequences like analytic continuation, to pave the way for the derivation of the DMFT equations. After that, we will discuss the combination of DFT and DMFT into the LDA+DMFT method and how to set up the effective lattice models for practical calculations. Then we will apply the LDA+DMFT method to the hole-doped iron-pnictide superconductor KFe2As2, which we find to be a rather strongly correlated material that can only be reasonably described when electronic correlations are treated on a proper level beyond the standard DFT approach. Our results show that the LDA+DMFT method is able to significantly improve the agreement of the theoretical calculation with experimental observations. Then we expand our study towards the isovalent series of KFe2As2, RbFe2As2 and CsFe2As2, which we propose to show even stronger

  16. Is Personality Fixed? Personality Changes as Much as "Variable" Economic Factors and More Strongly Predicts Changes to Life Satisfaction

    Science.gov (United States)

    Boyce, Christopher J.; Wood, Alex M.; Powdthavee, Nattavudh

    2013-01-01

    Personality is the strongest and most consistent cross-sectional predictor of high subjective well-being. Less predictive economic factors, such as higher income or improved job status, are often the focus of applied subjective well-being research due to a perception that they can change whereas personality cannot. As such there has been limited…

  17. Poisson Mixture Regression Models for Heart Disease Prediction.

    Science.gov (United States)

    Mufudza, Chipo; Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its lower Bayesian Information Criterion (BIC) value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model.
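
    The workhorse for fitting such mixtures is the EM algorithm; below is a minimal intercept-only sketch for a two-component Poisson mixture (the regression and concomitant-variable structure of the paper is omitted; data are simulated):

        import numpy as np
        from scipy.stats import poisson

        def poisson_mixture_em(y, n_iter=200):
            y = np.asarray(y, float)
            lam = np.array([y.mean() * 0.5, y.mean() * 1.5])  # initial rates
            pi = np.array([0.5, 0.5])                         # mixing weights
            for _ in range(n_iter):
                # E-step: posterior responsibility of each component per count
                p = pi * poisson.pmf(y[:, None], lam)
                r = p / p.sum(axis=1, keepdims=True)
                # M-step: update mixing weights and component rates
                pi = r.mean(axis=0)
                lam = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
            return pi, lam

        rng = np.random.default_rng(1)
        y = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 200)])
        print(poisson_mixture_em(y))  # recovers weights ~(0.6, 0.4), rates ~(1, 6)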

  18. Predictive integrated modelling for ITER scenarios

    International Nuclear Information System (INIS)

    Artaud, J.F.; Imbeaux, F.; Aniel, T.; Basiuk, V.; Eriksson, L.G.; Giruzzi, G.; Hoang, G.T.; Huysmans, G.; Joffrin, E.; Peysson, Y.; Schneider, M.; Thomas, P.

    2005-01-01

    The uncertainty in the prediction of ITER scenarios is evaluated. Two transport models, which have been extensively validated against the multi-machine database, are used for the computation of the transport coefficients. The first model is GLF23; the second, called Kiauto, is a model in which the profile of the diffusion coefficient is a gyro-Bohm-like analytical function, renormalized in order to get profiles consistent with a given global energy confinement scaling. The package of codes CRONOS is used; it gives access to the dynamics of the discharge and allows the study of the interplay between heat transport, current diffusion and sources. The main motivation of this work is to study the influence of parameters such as plasma current and of heat, density, impurity and toroidal momentum transport. We can draw the following conclusions: 1) the target Q = 10 can be obtained in the ITER hybrid scenario at I_p = 13 MA, using either the DS03 two-term scaling or the GLF23 model based on the same pedestal; 2) at I_p = 11.3 MA, Q = 10 can be reached only by assuming a very peaked pressure profile and a low pedestal; 3) at fixed Greenwald fraction, Q increases with density peaking; 4) achieving a stationary q-profile with q > 1 requires a large non-inductive current fraction (80%) that could be provided by 20 to 40 MW of LHCD; and 5) owing to the high temperature, the q-profile penetration is delayed and q = 1 is reached at about 600 s in the ITER hybrid scenario at I_p = 13 MA, in the absence of active q-profile control. (A.C.)

  19. Analytical modeling of equilibrium of strongly anisotropic plasma in tokamaks and stellarators

    International Nuclear Information System (INIS)

    Lepikhin, N. D.; Pustovitov, V. D.

    2013-01-01

    Theoretical analysis of equilibrium of anisotropic plasma in tokamaks and stellarators is presented. The anisotropy is assumed strong, which includes the cases with essentially nonuniform distributions of plasma pressure on magnetic surfaces. Such distributions can arise at neutral beam injection or at ion cyclotron resonance heating. Then the known generalizations of the standard theory of plasma equilibrium that treat p_∥ and p_⊥ (parallel and perpendicular plasma pressures) as almost constant on magnetic surfaces are not applicable anymore. Explicit analytical prescriptions of the profiles of p_∥ and p_⊥ are proposed that allow modeling of the anisotropic plasma equilibrium even with large ratios of p_∥/p_⊥ or p_⊥/p_∥. A method for deriving the equation for the Shafranov shift is proposed that does not require introduction of the flux coordinates and calculation of the metric tensor. It is shown that for p_⊥ with nonuniformity described by a single poloidal harmonic, the equation for the Shafranov shift coincides with a known one derived earlier for almost constant p_⊥ on a magnetic surface. This does not happen in the other more complex case

  20. Angular structure of jet quenching within a hybrid strong/weak coupling model

    Energy Technology Data Exchange (ETDEWEB)

    Casalderrey-Solana, Jorge [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Departament de Física Quàntica i Astrofísica & Institut de Ciències del Cosmos (ICC),Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Gulhan, Doga Can [CERN, EP Department,CH-1211 Geneva 23 (Switzerland); Milhano, José Guilherme [CENTRA, Instituto Superior Técnico, Universidade de Lisboa,Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Laboratório de Instrumentação e Física Experimental de Partículas (LIP),Av. Elias Garcia 14-1, P-1000-149 Lisboa (Portugal); Theoretical Physics Department, CERN,Geneva (Switzerland); Pablos, Daniel [Departament de Física Quàntica i Astrofísica & Institut de Ciències del Cosmos (ICC),Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rajagopal, Krishna [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States)

    2017-03-27

    Within the context of a hybrid strong/weak coupling model of jet quenching, we study the modification of the angular distribution of the energy within jets in heavy ion collisions, as partons within jet showers lose energy and get kicked as they traverse the strongly coupled plasma produced in the collision. To describe the dynamics transverse to the jet axis, we add the effects of transverse momentum broadening into our hybrid construction, introducing a parameter K ≡ q̂/T^3 that governs its magnitude. We show that, because of the quenching of the energy of partons within a jet, even when K ≠ 0 the jets that survive with some specified energy in the final state are narrower than jets with that energy in proton-proton collisions. For this reason, many standard observables are rather insensitive to K. We propose a new differential jet shape ratio observable in which the effects of transverse momentum broadening are apparent. We also analyze the response of the medium to the passage of the jet through it, noting that the momentum lost by the jet appears as the momentum of a wake in the medium. After freezeout this wake becomes soft particles with a broad angular distribution but with net momentum in the jet direction, meaning that the wake contributes to what is reconstructed as a jet. This effect must therefore be included in any description of the angular structure of the soft component of a jet. We show that the particles coming from the response of the medium to the momentum and energy deposited in it lead to a correlation between the momentum of soft particles well separated from the jet in angle and the direction of the jet momentum, and find qualitative but not quantitative agreement with experimental data on observables designed to extract such a correlation. More generally, by confronting the results that we obtain upon introducing transverse momentum broadening and the response of the medium to the jet with available jet data, we highlight the

  1. Predictive modeling of reactive wetting and metal joining.

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B.

    2013-09-01

    The performance, reproducibility and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic mass transport (flow) inside the liquid, followed by subsequent cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity, and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.

  2. Ab initio optimization principle for the ground states of translationally invariant strongly correlated quantum lattice models.

    Science.gov (United States)

    Ran, Shi-Ju

    2016-05-01

    In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, and the other of which gives the ground state in a tensor network (TN) form. (2) In the sense of TN, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in an opposite way by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of different well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, density matrix embedding theory, etc., providing a unified perspective that was previously missing in this field. (4) AOP as well as TRD give novel implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG that is based on the infinite projected entangled pair state. This paper is focused on one-dimensional quantum models to present AOP. The benchmark is given on a transverse Ising

  3. Nonlinear dynamical modeling and prediction of the terrestrial magnetospheric activity

    International Nuclear Information System (INIS)

    Vassiliadis, D.

    1992-01-01

    The irregular activity of the magnetosphere results from its complex internal dynamics as well as the external influence of the solar wind. The dominating self-organization of the magnetospheric plasma gives rise to repetitive, large-scale coherent behavior manifested in phenomena such as the magnetic substorm. Based on the nonlinearity of the global dynamics this dissertation examines the magnetosphere as a nonlinear dynamical system using time series analysis techniques. Initially the magnetospheric activity is modeled in terms of an autonomous system. A dimension study shows that its observed time series is self-similar, but the correlation dimension is high. The implication of a large number of degrees of freedom is confirmed by other state space techniques such as Poincare sections and search for unstable periodic orbits. At the same time a stability study of the time series in terms of Lyapunov exponents suggests that the series is not chaotic. The absence of deterministic chaos is supported by the low predictive capability of the autonomous model. Rather than chaos, it is an external input which is largely responsible for the irregularity of the magnetospheric activity. In fact, the external driving is so strong that the above state space techniques give results for magnetospheric and solar wind time series that are at least qualitatively similar. Therefore the solar wind input has to be included in a low-dimensional nonautonomous model. Indeed it is shown that such a model can reproduce the observed magnetospheric behavior up to 80-90 percent. The characteristic coefficients of the model show little variation depending on the external disturbance. The impulse response is consistent with earlier results of linear prediction filters. The model can be easily extended to contain nonlinear features of the magnetospheric activity and in particular the loading-unloading behavior of substorms

  4. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    A major purpose of groundwater modeling is to help decision-makers in efforts to manage the natural environment. Increasingly, it is recognized that both the predictions of interest and their associated uncertainties should be quantified to support robust decision making. In particular, decision... the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually... and the resulting predictions can be compared with predictions from the ‘true’ model. By performing this analysis we expect to give the modeler insight into how the uncertainty of model-based prediction can be reduced.

  5. The East Aegean Sea strong earthquake sequence of October–November 2005: lessons learned for earthquake prediction from foreshocks

    Directory of Open Access Journals (Sweden)

    G. A. Papadopoulos

    2006-01-01

    The seismic sequence of October–November 2005 in the Samos area, East Aegean Sea, was studied with the aim of showing how it is possible to establish criteria for (a) the rapid recognition of both the ongoing foreshock activity and the mainshock, and (b) the rapid discrimination between the foreshock and aftershock phases of activity. It has been shown that before the mainshock of 20 October 2005, foreshock activity is not recognizable in the standard earthquake catalogue. However, a detailed examination of the records at the SMG station, which is the closest to the activated area, revealed that hundreds of small shocks not listed in the standard catalogue were recorded in the time interval from 12 October 2005 up to 21 November 2005. The production of reliable relations between seismic signal duration and duration magnitude for earthquakes included in the standard catalogue made it possible to use signal durations in SMG records to determine duration magnitudes for 2054 small shocks not included in the standard catalogue. In this way a new catalogue with magnitude determinations for 3027 events was obtained, while the standard catalogue contains 1025 events. At least 55 of them occurred from 12 October 2005 up to the occurrence of the two strong foreshocks of 17 October 2005. This implies that foreshock activity developed a few days before the strong shocks of 17 October 2005 but escaped recognition by the routine procedure of seismic analysis. The onset of the foreshock phase of activity is recognizable by the significant increase of the mean seismicity rate, which increased exponentially with time. According to the least-squares approach, the b-value of the magnitude-frequency relation dropped significantly during the foreshock activity with respect to the b-value prevailing in the declustered background seismicity. However, the maximum likelihood approach does not indicate such a drop of b. The b-value found for the aftershocks that
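
    The maximum-likelihood b-value referred to above is conventionally computed with the Aki/Utsu estimator; a short sketch (magnitudes below are hypothetical):

        import numpy as np

        # Aki/Utsu maximum-likelihood b-value of the magnitude-frequency relation,
        # the estimator usually contrasted with least-squares fits.
        def b_value_mle(magnitudes, m_c, dm=0.1):
            m = np.asarray(magnitudes, float)
            m = m[m >= m_c]                    # completeness cutoff
            # dm/2 corrects for magnitude binning (Utsu correction)
            return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

        mags = [1.2, 1.5, 1.3, 2.0, 1.8, 2.4, 1.6, 3.1, 1.4, 1.9]  # hypothetical
        print(round(b_value_mle(mags, m_c=1.2), 2))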

  6. Mean Bias in Seasonal Forecast Model and ENSO Prediction Error.

    Science.gov (United States)

    Kim, Seon Tae; Jeong, Hye-In; Jin, Fei-Fei

    2017-07-20

    This study uses retrospective forecasts made using an APEC Climate Center seasonal forecast model to investigate the cause of errors in predicting the amplitude of El Niño Southern Oscillation (ENSO)-driven sea surface temperature variability. When utilizing Bjerknes coupled stability (BJ) index analysis, enhanced errors in ENSO amplitude with forecast lead times are found to be well represented by those in the growth rate estimated by the BJ index. ENSO amplitude forecast errors are most strongly associated with the errors in both the thermocline slope response and surface wind response to forcing over the tropical Pacific, leading to errors in thermocline feedback. This study concludes that upper ocean temperature bias in the equatorial Pacific, which becomes more intense with increasing lead times, is a possible cause of forecast errors in the thermocline feedback and thus in ENSO amplitude.

  7. Estimation of 1-D velocity models beneath strong-motion observation sites in the Kathmandu Valley using strong-motion records from moderate-sized earthquakes

    Science.gov (United States)

    Bijukchhen, Subeg M.; Takai, Nobuo; Shigefuji, Michiko; Ichiyanagi, Masayoshi; Sasatani, Tsutomu; Sugimura, Yokito

    2017-07-01

    The Himalayan collision zone experiences many seismic activities with large earthquakes occurring at certain time intervals. The damming of the proto-Bagmati River as a result of rapid mountain-building processes created a lake in the Kathmandu Valley that eventually dried out, leaving thick unconsolidated lacustrine deposits. Previous studies have shown that the sediments are 600 m thick in the center. A location in a seismically active region, and the possible amplification of seismic waves due to thick sediments, have made Kathmandu Valley seismically vulnerable. It has suffered devastation due to earthquakes several times in the past. The development of the Kathmandu Valley into the largest urban agglomerate in Nepal has exposed a large population to seismic hazards. This vulnerability was apparent during the Gorkha Earthquake (Mw7.8) on April 25, 2015, when the main shock and ensuing aftershocks claimed more than 1700 lives and nearly 13% of buildings inside the valley were completely damaged. Preparing safe and up-to-date building codes to reduce seismic risk requires a thorough study of ground motion amplification. Characterizing subsurface velocity structure is a step toward achieving that goal. We used the records from an array of strong-motion accelerometers installed by Hokkaido University and Tribhuvan University to construct 1-D velocity models of station sites by forward modeling of low-frequency S-waves. Filtered records (0.1-0.5 Hz) from one of the accelerometers installed at a rock site during a moderate-sized (mb4.9) earthquake on August 30, 2013, and three moderate-sized (Mw5.1, Mw5.1, and Mw5.5) aftershocks of the 2015 Gorkha Earthquake were used as input motion for modeling of low-frequency S-waves. We consulted available geological maps, cross-sections, and borehole data as the basis for initial models for the sediment sites. This study shows that the basin has an undulating topography and sediment sites have deposits of varying thicknesses

  8. Predictive models for pressure ulcers from intensive care unit electronic health records using Bayesian networks.

    Science.gov (United States)

    Kaewprag, Pacharmon; Newton, Cheryl; Vermillion, Brenda; Hyun, Sookyung; Huang, Kun; Machiraju, Raghu

    2017-07-05

    We develop predictive models enabling clinicians to better understand and explore patient clinical data along with risk factors for pressure ulcers in intensive care unit patients from electronic health record data. Identifying accurate risk factors for pressure ulcers is essential to determining appropriate prevention strategies; in this work we examine medication, diagnosis, and traditional Braden pressure ulcer assessment scale measurements as patient features. In order to predict pressure ulcer incidence and better understand the structure of related risk factors, we construct Bayesian networks from patient features. Bayesian network nodes (features) and edges (conditional dependencies) are simplified with statistical network techniques. Upon reviewing a network visualization of our model, our clinician collaborators were able to identify strong relationships between risk factors widely recognized as associated with pressure ulcers. We present a three-stage framework for predictive analysis of patient clinical data: 1) developing electronic health record feature extraction functions with the assistance of clinicians, 2) simplifying features, and 3) building Bayesian network predictive models. We evaluate all combinations of Bayesian network models from different search algorithms, scoring functions, prior structure initializations, and sets of features. From the EHRs of 7,717 ICU patients, we construct Bayesian network predictive models from 86 medication, diagnosis, and Braden scale features. Our model not only identifies known and suspected high PU risk factors, but also substantially increases the sensitivity of the prediction - nearly three times higher compared to logistic regression models - without sacrificing overall accuracy. We visualize a representative model with which our clinician collaborators identify strong relationships between risk factors widely recognized as associated with pressure ulcers. Given the strong adverse effect of pressure ulcers

  9. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method called the complex-plane strategy is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are only slightly distorted. For an accurate simulation of differential rotation, a versatile method called the multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  10. Data-based Modeling of the Dynamical Inner Magnetosphere During Strong Geomagnetic Storms

    Science.gov (United States)

    Tsyganenko, N.; Sitnov, M.

    2004-12-01

    This work builds on and extends our previous effort [Tsyganenko et al., 2003] to develop a dynamical model of the storm-time geomagnetic field in the inner magnetosphere, using space magnetometer data taken during 37 major events in 1996-2000 and concurrent observations of the solar wind and IMF. The essence of the approach is to derive from the data the temporal variation of all major current systems contributing to the geomagnetic field during the entire storm cycle, using a simple model of their growth and decay. Each principal source of the external magnetic field (magnetopause, cross-tail current sheet, axisymmetric and partial ring currents, Birkeland currents) is controlled by a separate driving variable that includes a combination of geoeffective parameters in the form N^λ V^β Bs^γ, where N, V, and Bs are the solar wind density, speed, and the magnitude of the southward component of the IMF, respectively. Each source was also assumed to have an individual relaxation timescale and residual quiet-time strength, so that its partial contribution to the total field was calculated for any moment as a time integral, taking into account the entire history of the external driving of the magnetosphere during each storm. In addition, the magnitudes of the principal field sources were assumed to saturate during extremely large storms with abnormally strong external driving. All the parameters of the model field sources, including their magnitudes, geometrical characteristics, solar wind/IMF driving functions, decay timescales, and saturation thresholds, were treated as free variables, to be derived from the data by least squares. The relaxation timescales of the individual magnetospheric field sources were found to differ largely from each other, from as large as ~30 hours for the symmetrical ring current to only ~50 min for the region 1 Birkeland current. The total magnitudes of the currents were also found to vary dramatically in the course of major storms
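
    The driving-and-relaxation idea above lends itself to a compact numerical sketch: each source obeys dS/dt = D(t) - S/τ with driver D = N^λ V^β Bs^γ, so slow (ring current) and fast (Birkeland) sources respond very differently to the same storm. The exponents, timescales and the synthetic storm below are illustrative only, not the fitted values from the model.

      import numpy as np

      def source_strength(N, V, Bs, lam, beta, gamma, tau_h, dt_h):
          """Integrate dS/dt = D(t) - S/tau with forward Euler."""
          D = N**lam * V**beta * Bs**gamma
          S, s = np.empty_like(D), 0.0
          for k, d in enumerate(D):
              s += dt_h * (d - s / tau_h)
              S[k] = s
          return S

      t = np.arange(0.0, 72.0, 0.25)                 # a 72 h storm, 15 min steps
      N = 5 + 3 * np.exp(-((t - 20) / 6) ** 2)       # synthetic solar wind density
      V = 400 + 200 * np.exp(-((t - 22) / 8) ** 2)   # synthetic speed
      Bs = 10 * np.exp(-((t - 24) / 5) ** 2)         # synthetic southward IMF
      ring = source_strength(N, V, Bs, 0.5, 1.0, 1.0, tau_h=30.0, dt_h=0.25)
      fac = source_strength(N, V, Bs, 0.5, 1.0, 1.0, tau_h=50.0 / 60.0, dt_h=0.25)
      # the slowly relaxing source peaks well after the fast one
      print("ring peak at t = %.1f h, Birkeland peak at t = %.1f h"
            % (t[ring.argmax()], t[fac.argmax()]))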

  11. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; the second part is the "neural fuzzy inference system", which builds on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. The need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models, which occupy large computation resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than traditional artificial neural networks that have low predictive accuracy.
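
    To make the fuzzy-rule ingredient concrete, here is a toy illustration (not the NFIS-WPM code): triangular membership functions map crisp inputs to fuzzy sets, and a rule fires with the minimum of its antecedent degrees. All sets and numbers are invented.

      def tri(x, a, b, c):
          """Triangular membership function with peak at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      humidity, pressure_drop = 82.0, 6.0
      mu_humid = tri(humidity, 60, 90, 100)       # degree of "humid"
      mu_falling = tri(pressure_drop, 2, 8, 14)   # degree of "pressure falling fast"
      # Rule: IF humid AND pressure falling fast THEN heavy precipitation
      firing = min(mu_humid, mu_falling)
      print("degree of 'heavy precipitation':", round(firing, 3))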

  12. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Prediction of foundation or subgrade settlement is very important during engineering construction. Motivated by the fact that many settlement-time sequences exhibit a nonhomogeneous index (exponential) trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive performance of the NGM(1,1,k,c) model for settlement prediction. The results show that this model achieves excellent prediction accuracy; thus, the model is well suited for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
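
    The NGM(1,1,k,c) derivation is in the paper; as a grounding example, the sketch below implements the classical GM(1,1) grey model it generalizes (accumulate, fit the whitenization equation by least squares, predict, then difference back), applied to a synthetic settlement series.

      import numpy as np

      def gm11(x0, horizon):
          """Classical GM(1,1): returns fitted plus forecast values of x0."""
          n = len(x0)
          x1 = np.cumsum(x0)                          # accumulated series (1-AGO)
          z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
          B = np.column_stack([-z1, np.ones(n - 1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(n + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.empty(n + horizon)
          x0_hat[0], x0_hat[1:] = x0[0], np.diff(x1_hat)
          return x0_hat

      settlement = np.array([2.1, 3.6, 4.6, 5.3, 5.8])   # synthetic values (mm)
      print(gm11(settlement, horizon=3).round(2))        # last 3 are forecasts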

  13. Modeling and notation of DEA with strong and weak disposable outputs.

    Science.gov (United States)

    Kuntz, Ludwig; Sülz, Sandra

    2011-12-01

    Recent articles published in Health Care Management Science have described DEA applications under the assumption of strong and weak disposable outputs. As these papers include some methodological deficiencies, we aim to illustrate a revised approach.

  14. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, consisting of several cooling units that share a common compressor and are used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in 5 or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
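
    A hedged toy version of the sequential convex idea (not the paper's refrigeration model, and assuming the cvxpy package is available): the nonconvex temperature dependence of efficiency is frozen at the previous iterate's temperature trajectory, the resulting convex program is solved, and the loop repeats until the trajectory stops changing. The plant, prices and efficiency curve are invented.

      import numpy as np
      import cvxpy as cp

      H, a, b = 24, 0.9, -0.5            # horizon (h), thermal decay, cooling gain
      T0, Tmin, Tmax = 5.0, 2.0, 6.0     # initial and allowed temperatures (deg C)
      price = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(H) / 24)  # synthetic prices
      load = 0.8                         # heat load per step (deg C)

      def efficiency(T):                 # invented nonconvex dependence on T
          return 1.0 / (2.0 + 0.3 * T)

      T_prev = np.full(H + 1, T0)
      for it in range(20):
          u = cp.Variable(H, nonneg=True)          # cooling effort
          T = cp.Variable(H + 1)                   # zone temperature
          w = price / efficiency(T_prev[:H])       # cost weights frozen at iterate
          cons = [T[0] == T0, T[1:] == a * T[:H] + b * u + load,
                  T[1:] >= Tmin, T[1:] <= Tmax, u <= 2.0]
          prob = cp.Problem(cp.Minimize(w @ u), cons)
          prob.solve()
          if np.max(np.abs(T.value - T_prev)) < 1e-4:
              break
          T_prev = T.value
      print("converged after %d iterations, cost %.2f" % (it + 1, prob.value))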

  15. The differences in hadronic cross-sections and the residues of secondary reggeons in the quark-gluon model for strong interactions

    International Nuclear Information System (INIS)

    Kaidalov, A.B.; Volkovitsky, P.E.

    1981-01-01

    In the framework of the quark-gluon picture of strong interactions based on the topological expansion and the string model, relations between the differences of hadronic cross-sections are obtained. The system of equations for the contributions of secondary reggeons (ρ, ω, f, A2, φ and f′ poles) to the elastic scattering amplitudes for arbitrary hadrons is derived. It is shown that this system has a factorized solution and that the secondary reggeon residues for all hadrons are expressed in terms of a universal function g(t). The model predictions are in good agreement with experimental data [ru

  16. Predictive Modelling of Heavy Metals in Urban Lakes

    OpenAIRE

    Lindström, Martin

    2000-01-01

    Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variabilities among lakes and processes controlling the major metal fluxes. Sediment and water data for this study were collected from ten small lakes in the ...

  17. SHORT-TERM PRECIPITATION OCCURRENCE PREDICTION FOR STRONG CONVECTIVE WEATHER USING FY2-G SATELLITE DATA: A CASE STUDY OF SHENZHEN, SOUTH CHINA

    Directory of Open Access Journals (Sweden)

    K. Chen

    2016-06-01

    Short-term precipitation commonly occurs in the southern part of China, bringing intense rainfall to local regions in a very short time. The resulting water can cause severe flooding inside a city when the precipitation amount exceeds the capacity of the city drainage system. Thousands of people's lives can be affected by these short-term disasters, and city managers must face these challenges. How to predict the occurrence of heavy precipitation accurately is one of the worthwhile scientific questions in meteorology. According to recent studies, the accuracy of short-term precipitation prediction based on numerical simulation models remains low; in areas lacking local observations, the accuracy may be as low as 10%. The methodology for short-term precipitation occurrence prediction thus remains a challenge. In this paper, a machine learning method based on SVM is presented to predict short-term precipitation occurrence by using FY2-G satellite imagery and ground in situ observation data. The results were validated with the traditional TS (threat score), which is commonly used in the evaluation of weather prediction. The results indicate that the proposed algorithm achieves an overall accuracy of up to 90% for one-hour to six-hour forecasts. This implies that prediction accuracy can be improved by combining machine learning methods with satellite imagery. The prediction model can be further evaluated for predicting other weather characteristics in Shenzhen in the future.
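
    As a sketch of the assumed workflow (synthetic stand-in features, not the FY2-G channels), the code below trains an SVM to predict precipitation occurrence and verifies it with the threat score TS = hits / (hits + misses + false alarms) mentioned above.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n = 2000
      X = rng.normal(size=(n, 4))       # e.g. brightness-temperature-like channels
      y = (X[:, 0] - 0.8 * X[:, 1]
           + rng.normal(scale=0.5, size=n) > 0.5).astype(int)  # rain occurrence

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)
      yhat = SVC(kernel="rbf", C=1.0).fit(Xtr, ytr).predict(Xte)

      hits = np.sum((yhat == 1) & (yte == 1))
      misses = np.sum((yhat == 0) & (yte == 1))
      false_alarms = np.sum((yhat == 1) & (yte == 0))
      print("TS = %.2f" % (hits / (hits + misses + false_alarms)))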

  18. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    The article considers the issues of traffic management using an intelligent "Car-Road" system (IVHS), which consists of interacting intelligent vehicles (IV) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for traffic on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.

  19. MJO prediction skill of the subseasonal-to-seasonal (S2S) prediction models

    Science.gov (United States)

    Son, S. W.; Lim, Y.; Kim, D.

    2017-12-01

    The Madden-Julian Oscillation (MJO), the dominant mode of tropical intraseasonal variability, provides the primary source of tropical and extratropical predictability on subseasonal to seasonal timescales. To better understand its predictability, this study conducts a quantitative evaluation of MJO prediction skill in the state-of-the-art operational models participating in the subseasonal-to-seasonal (S2S) prediction project. Based on a bivariate correlation coefficient of 0.5, the S2S models exhibit MJO prediction skill ranging from 12 to 36 days. These prediction skills are affected by both the MJO amplitude and phase errors, the latter becoming more important with forecast lead time. Consistent with previous studies, MJO events with stronger initial amplitude are typically better predicted. However, essentially no sensitivity to the initial MJO phase is observed. Overall MJO prediction skill and its inter-model spread are further related to the model mean biases in moisture fields and longwave cloud-radiation feedbacks. In most models, a dry bias quickly builds up in the deep tropics, especially across the Maritime Continent, weakening the horizontal moisture gradient. This likely dampens the organization and propagation of the MJO. Most S2S models also underestimate the longwave cloud-radiation feedbacks in the tropics, which may affect the maintenance of the MJO convective envelope. In general, the models with a smaller bias in horizontal moisture gradient and longwave cloud-radiation feedbacks show higher MJO prediction skill, suggesting that improving those processes would enhance MJO prediction skill.
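
    The skill metric here is the bivariate correlation between observed and forecast RMM indices (RMM1, RMM2), with skill defined as the lead time at which it drops below 0.5. A minimal implementation on an idealized MJO cycle (synthetic data, noise growing with lead time) looks like this:

      import numpy as np

      def bivariate_corr(a1, a2, b1, b2):
          """Bivariate correlation between two pairs of index time series."""
          num = np.sum(a1 * b1 + a2 * b2)
          den = np.sqrt(np.sum(a1**2 + a2**2)) * np.sqrt(np.sum(b1**2 + b2**2))
          return num / den

      rng = np.random.default_rng(2)
      t = np.arange(120)
      obs1, obs2 = np.cos(2*np.pi*t/45), np.sin(2*np.pi*t/45)  # idealized MJO cycle
      for lead in (5, 15, 30):
          noise = 0.05 * lead                    # forecast error grows with lead
          fc1 = obs1 + rng.normal(scale=noise, size=t.size)
          fc2 = obs2 + rng.normal(scale=noise, size=t.size)
          print("lead %2d d: COR = %.2f" % (lead, bivariate_corr(obs1, obs2, fc1, fc2)))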

  20. Influence of the Human Skin Tumor Type in Photodynamic Therapy Analysed by a Predictive Model

    Directory of Open Access Journals (Sweden)

    I. Salas-García

    2012-01-01

    Photodynamic Therapy (PDT) modeling allows prediction of the treatment results depending on the lesion properties, the photosensitizer distribution, and the optical source characteristics. We employ a predictive PDT model and apply it to different skin tumors. It takes into account the optical radiation distribution, a nonhomogeneous topical photosensitizer spatial-temporal distribution, and the time-dependent photochemical interaction. The predicted singlet oxygen molecular concentrations with varying optical irradiance are compared and can be directly related to the necrosis area. The results show a strong dependence on the particular lesion. This suggests the need to design optimal PDT treatment protocols adapted to the specific patient and lesion.

  1. Composite control for raymond mill based on model predictive control and disturbance observer

    Directory of Open Access Journals (Sweden)

    Dan Niu

    2016-03-01

    In the Raymond mill grinding process, precise control of the operating load is vital for high product quality. However, strong external disturbances, such as variations in ore size and ore hardness, usually cause great performance degradation, and it is not easy to hold the mill current constant. Several control strategies have been proposed; however, most of them (such as proportional-integral-derivative control and model predictive control) reject disturbances only through feedback regulation, which may lead to poor control performance in the presence of strong disturbances. To improve disturbance rejection, a control method based on model predictive control and a disturbance observer is put forward in this article. The scheme employs the disturbance observer as feedforward compensation and the model predictive controller as feedback regulation. The test results illustrate that, compared with the plain model predictive control method, the proposed disturbance observer-model predictive control method achieves significantly better disturbance rejection, such as shorter settling time and smaller peak overshoot under strong disturbances.
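
    A minimal sketch of the observer-plus-feedback structure on a first-order plant (illustrative only; the mill model and all gains are invented, and a proportional loop stands in for the MPC): the observer back-calculates the lumped disturbance from the nominal model, low-pass filters it, and feeds the estimate forward.

      import numpy as np

      a, b = 0.95, 0.1          # nominal plant: x+ = a*x + b*(u + d)
      kp = 2.0                  # proportional feedback (stand-in for the MPC)
      alpha = 0.2               # observer low-pass filter gain
      x, d_hat, ref = 0.0, 0.0, 1.0
      for k in range(200):
          d = 0.5 if k >= 100 else 0.0       # step disturbance (e.g. ore hardness)
          u = kp * (ref - x) - d_hat         # feedback plus feedforward compensation
          x_next = a * x + b * (u + d)
          d_raw = (x_next - a * x) / b - u   # disturbance back-calculated from model
          d_hat += alpha * (d_raw - d_hat)   # first-order low-pass filtering
          x = x_next
      # P control keeps its usual offset, but the observer absorbs the
      # disturbance: d_hat converges to the true step of 0.5.
      print("output %.3f, disturbance estimate %.3f" % (x, d_hat))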

  2. Butterfly, Recurrence, and Predictability in Lorenz Models

    Science.gov (United States)

    Shen, B. W.

    2017-12-01

    Over the span of 50 years, the original three-dimensional Lorenz model (3DLM; Lorenz, 1963) and its high-dimensional versions (e.g., Shen 2014a and references therein) have been used for improving our understanding of the predictability of weather and climate with a focus on chaotic responses. Although the Lorenz studies focus on nonlinear processes and chaotic dynamics, people often apply a "linear" conceptual model to understand the nonlinear processes in the 3DLM. In this talk, we present examples to illustrate the common misunderstandings regarding the butterfly effect and discuss the importance of solutions' recurrence and boundedness in the 3DLM and high-dimensional LMs. The first example concerns the following folklore that has been widely used as an analogy of the butterfly effect: "For want of a nail, the shoe was lost. For want of a shoe, the horse was lost. For want of a horse, the rider was lost. For want of a rider, the battle was lost. For want of a battle, the kingdom was lost. And all for the want of a horseshoe nail." However, in 2008, Prof. Lorenz stated that he did not feel that this verse described true chaos but that it better illustrated the simpler phenomenon of instability, and that the verse implicitly suggests that subsequent small events will not reverse the outcome (Lorenz, 2008). Lorenz's comments suggest that the verse neither describes negative (nonlinear) feedback nor indicates recurrence, the latter of which is required for the appearance of a butterfly pattern. The second example illustrates that the divergence of two nearby trajectories should be bounded and recurrent, as shown in Figure 1. Furthermore, we discuss how high-dimensional LMs were derived to illustrate (1) negative nonlinear feedback that stabilizes the system within the five- and seven-dimensional LMs (5D and 7D LMs; Shen 2014a; 2015a; 2016); (2) positive nonlinear feedback that destabilizes the system within the 6D and 8D LMs (Shen 2015b; 2017); and (3
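
    The boundedness point in the second example is easy to reproduce with the classical 3DLM parameters (sigma = 10, rho = 28, beta = 8/3): two trajectories started 1e-8 apart separate rapidly, yet their separation saturates at the attractor's size instead of growing without limit. A minimal sketch:

      import numpy as np
      from scipy.integrate import solve_ivp

      def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
          x, y, z = s
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      t_eval = np.linspace(0, 40, 4001)
      sol_a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0],
                        t_eval=t_eval, rtol=1e-9)
      sol_b = solve_ivp(lorenz, (0, 40), [1.0 + 1e-8, 1.0, 1.0],
                        t_eval=t_eval, rtol=1e-9)
      sep = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
      print("initial separation: %.1e" % sep[0])
      print("max separation:     %.1f  (bounded by the attractor size)" % sep.max())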

  3. Auditing predictive models : a case study in crop growth

    NARCIS (Netherlands)

    Metselaar, K.

    1999-01-01

    Methods were developed to assess and quantify the predictive quality of simulation models, with the intent to contribute to evaluation of model studies by non-scientists. In a case study, two models of different complexity, LINTUL and SUCROS87, were used to predict yield of forage maize

  4. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...

  5. Observations and Modeling of Turbulent Air-Sea Coupling in Coastal and Strongly Forced Condition

    Science.gov (United States)

    Ortiz-Suslow, David G.

    The turbulent fluxes of momentum, mass, and energy across the ocean-atmosphere boundary are fundamental to our understanding of a myriad of geophysical processes, such as wind-wave generation, oceanic circulation, and air-sea gas transfer. In order to better understand these fluxes, empirical relationships were developed to quantify the interfacial exchange rates in terms of easily observed parameters (e.g., wind speed). However, mounting evidence suggests that these empirical formulae are only valid over a relatively narrow parametric space, i.e., open ocean conditions in light to moderate winds. Several near-surface processes have been observed to cause significant variance in the air-sea fluxes not predicted by the conventional functions, such as heterogeneous surfaces, swell waves, and wave breaking. Further study is needed to fully characterize how these types of processes can modulate the interfacial exchange; in order to achieve this, a broad investigation into air-sea coupling was undertaken. The primary focus of this work was to use a combination of field and laboratory observations and numerical modeling in regimes where conventional theories would be expected to break down, namely the nearshore and very high winds. These seemingly disparate environments represent the marine atmospheric boundary layer (MABL) at its physical limit. In the nearshore, the convergence of land, air, and sea in a depth-limited domain marks the transition from a marine to a terrestrial boundary layer. Under extreme winds, the physical nature of the boundary layer remains unknown as an intermediate substrate layer, sea spray, develops between the atmosphere and the ocean surface. At these ends of the MABL physical spectrum, direct measurements of the near-surface processes were made and directly related to local sources of variance. Our results suggest that the conventional treatment of air-sea fluxes in terms of empirical relationships developed from a relatively narrow set of

  6. Statistical and Machine Learning Models to Predict Programming Performance

    OpenAIRE

    Bergin, Susan

    2006-01-01

    This thesis details a longitudinal study on factors that influence introductory programming success and on the development of machine learning models to predict incoming student performance. Although numerous studies have developed models to predict programming success, the models struggled to achieve high accuracy in predicting the likely performance of incoming students. Our approach overcomes this by providing a machine learning technique, using a set of three significant...

  7. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all economic stakeholders. The challenge of accurately assessing business failure, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we take a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...

  8. Accurate and dynamic predictive model for better prediction in medicine and healthcare.

    Science.gov (United States)

    Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S

    2018-05-01

    Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from limitations in their predictive performance. In order to improve predictive model performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious diseases worldwide and needs attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI dataset was developed, and its features approved, by neurologists. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.

  9. A new ensemble model for short term wind power prediction

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albu, Razvan-Daniel; Felea, Ioan

    2012-01-01

    As the objective of this study, a non-linear ensemble system is used to develop a new model for predicting wind speed on a short-term time scale. Short-term wind power prediction has become an extremely important field of research for the energy sector. Regardless of the recent advancements in the research on prediction models, it has been observed that different models have different capabilities and that no single model is suitable under all situations. The idea behind EPS (ensemble prediction systems) is to take advantage of the unique features of each subsystem to capture the diverse patterns that exist in the dataset

  10. Testing the predictive power of nuclear mass models

    International Nuclear Information System (INIS)

    Mendoza-Temis, J.; Morales, I.; Barea, J.; Frank, A.; Hirsch, J.G.; Vieyra, J.C. Lopez; Van Isacker, P.; Velazquez, V.

    2008-01-01

    A number of tests are introduced which probe the ability of nuclear mass models to extrapolate. Three models are analyzed in detail: the liquid drop model, the liquid drop model plus empirical shell corrections and the Duflo-Zuker mass formula. If predicted nuclei are close to the fitted ones, average errors in predicted and fitted masses are similar. However, the challenge of predicting nuclear masses in a region stabilized by shell effects (e.g., the lead region) is far more difficult. The Duflo-Zuker mass formula emerges as a powerful predictive tool
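
    For a concrete sense of the simplest of the three models, the sketch below evaluates the liquid drop (semi-empirical) binding energy with typical textbook coefficients, not the paper's fitted values. Near closed shells such as 208Pb the pure liquid drop estimate deviates from the measured binding energy (about 1636 MeV), which is precisely the shell correction the other models supply.

      import numpy as np

      def binding_energy(Z, A):
          """Liquid drop (semi-empirical) binding energy in MeV."""
          a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
          N = A - Z
          pairing = a_p / np.sqrt(A) * (1 if (Z % 2 == 0 and N % 2 == 0)
                                        else -1 if (Z % 2 == 1 and N % 2 == 1)
                                        else 0)
          return (a_v * A - a_s * A**(2/3) - a_c * Z * (Z - 1) / A**(1/3)
                  - a_a * (A - 2*Z)**2 / A + pairing)

      # Lead-208, a doubly magic nucleus in the region the abstract singles out
      print("B(208Pb) = %.1f MeV" % binding_energy(82, 208))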

  11. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…
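
    The mastery-threshold example can be made concrete with a standard Bayesian Knowledge Tracing student model (the parameter values below are invented for the demo): the policy keeps giving practice items until the inferred probability of mastery crosses a threshold.

      p_learn, p_slip, p_guess, threshold = 0.15, 0.1, 0.2, 0.95

      def bkt_update(p_mastery, correct):
          """Posterior over mastery given one answer, then a learning step."""
          if correct:
              num = p_mastery * (1 - p_slip)
              den = num + (1 - p_mastery) * p_guess
          else:
              num = p_mastery * p_slip
              den = num + (1 - p_mastery) * (1 - p_guess)
          post = num / den
          return post + (1 - post) * p_learn

      p = 0.1                                  # prior probability of mastery
      for step, answer in enumerate([1, 1, 0, 1, 1, 1], start=1):
          p = bkt_update(p, answer)
          if p >= threshold:                   # the policy: stop when mastered
              print("mastery declared after item", step)
              break
      else:
          print("keep practicing, p(mastery) = %.2f" % p)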

  12. The Complexity of Developmental Predictions from Dual Process Models

    Science.gov (United States)

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  13. From Kondo model and strong coupling lattice QCD to the Isgur-Wise function

    International Nuclear Information System (INIS)

    Patel, Apoorva

    1995-01-01

    Isgur-Wise functions parametrise the leading behaviour of weak decay form factors of mesons and baryons containing a single heavy quark. The form factors for the quark mass operator are calculated in strong coupling lattice QCD, and Isgur-Wise functions extracted from them. Based on renormalisation group invariance of the operators involved, it is argued that the Isgur-Wise functions would be the same in the weak coupling continuum theory. (author)

  14. Truncated exponential-rigid-rotor model for strong electron and ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.; Fleischmann, H.H.

    1979-01-01

    A comprehensive study of exponential-rigid-rotor equilibria for strong electron and ion rings indicates the presence of a sizeable percentage of untrapped particles in all equilibria with aspect ratios R/a ≲ 4. Such aspect ratios are required in fusion-relevant rings. Significant changes in the equilibria are observed when untrapped particles are excluded by the use of a truncated exponential-rigid-rotor distribution function. (author)

  15. Sweat loss prediction using a multi-model approach.

    Science.gov (United States)

    Xu, Xiaojiang; Santee, William R

    2011-07-01

    A new multi-model approach (MMA) for sweat loss prediction is proposed to improve prediction accuracy. MMA was computed as the average of sweat loss predicted by two existing thermoregulation models: i.e., the rational model SCENARIO and the empirical model Heat Strain Decision Aid (HSDA). Three independent physiological datasets, a total of 44 trials, were used to compare predictions by MMA, SCENARIO, and HSDA. The observed sweat losses were collected under different combinations of uniform ensembles, environmental conditions (15-40°C, RH 25-75%), and exercise intensities (250-600 W). Root mean square deviation (RMSD), residual plots, and paired t tests were used to compare predictions with observations. Overall, MMA reduced RMSD by 30-39% in comparison with either SCENARIO or HSDA, and increased the prediction accuracy to 66% from 34% or 55%. Of the MMA predictions, 70% fell within the range of mean observed value ± SD, while only 43% of SCENARIO and 50% of HSDA predictions fell within the same range. Paired t tests showed that differences between observations and MMA predictions were not significant, but differences between observations and SCENARIO or HSDA predictions were significantly different for two datasets. Thus, MMA predicted sweat loss more accurately than either of the two single models for the three datasets used. Future work will be to evaluate MMA using additional physiological data to expand the scope of populations and conditions.
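
    The MMA itself is a one-liner to reproduce: average the two model outputs and compare RMSD against observations. The numbers below are synthetic stand-ins chosen so the two models' errors partly cancel, not the study's data.

      import numpy as np

      obs = np.array([520., 610., 480., 700., 560.])       # observed sweat loss (g/h)
      scenario = np.array([480., 650., 430., 760., 520.])  # rational model output
      hsda = np.array([560., 540., 540., 650., 610.])      # empirical model output
      mma = 0.5 * (scenario + hsda)                        # multi-model average

      rmsd = lambda pred: np.sqrt(np.mean((pred - obs) ** 2))
      for name, pred in [("SCENARIO", scenario), ("HSDA", hsda), ("MMA", mma)]:
          print("%8s RMSD = %.1f" % (name, rmsd(pred)))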

  16. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is at the core of concrete pavement maintenance and design. Highway agencies are constantly faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, prediction accuracy has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov Chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper suggests that a further direction for developing performance prediction models is to combine the advantages of the different models to obtain better accuracy.
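
    To illustrate the Markov Chain option mentioned above, the sketch below bins faulting severity into condition states and propagates a made-up one-year transition matrix; the predicted state distribution at a horizon is the initial distribution times the matrix power.

      import numpy as np

      # States: 0 = low, 1 = medium, 2 = high faulting
      P = np.array([[0.85, 0.13, 0.02],    # row i: P(state next year | state i)
                    [0.00, 0.80, 0.20],
                    [0.00, 0.00, 1.00]])   # deterioration is one-way
      state = np.array([1.0, 0.0, 0.0])    # new pavement: all mass in "low"

      for year in (5, 10, 20):
          dist = state @ np.linalg.matrix_power(P, year)
          print("year %2d: low %.2f  medium %.2f  high %.2f" % (year, *dist))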

  17. Model for prediction of strip temperature in hot strip steel mill

    International Nuclear Information System (INIS)

    Panjkovic, Vladimir

    2007-01-01

    Proper functioning of set-up models in a hot strip steel mill requires reliable prediction of strip temperature. Temperature prediction is particularly important for accurate calculation of rolling force because of the strong dependence of yield stress and strip microstructure on temperature. A comprehensive model was developed to replace an obsolete model in the Western Port hot strip mill of BlueScope Steel. The new model predicts the strip temperature evolution from the roughing mill exit to the finishing mill exit. It takes into account the radiative and convective heat losses, forced flow boiling and film boiling of water at the strip surface, deformation heat in the roll gap, frictional sliding heat, heat of scale formation, and the heat transfer between strip and work rolls through an oxide layer. The significance of phase transformation was also investigated. The model was tested with plant measurements and benchmarked against other models in the literature, and its performance was very good.

  19. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Effective fault diagnosis and reasonable life expectancy prediction are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and working environments. At present, equipment life prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven approaches, and most of them require a large amount of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying the complex multifactor correlations of the life model, we have established a life prediction model of moderate complexity, modeled on cell division. Experiments show that our model can effectively simulate the state of cell division. This reference model can then be used for complex equipment life prediction.

  20. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation processes of risk prediction models. A qualitative review of the aims, methods, and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, currently only limited published literature discusses which approach is more accurate for risk prediction model development.

  1. Predictive modeling and reducing cyclic variability in autoignition engines

    Science.gov (United States)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  2. Statistical Models for Predicting Threat Detection From Human Behavior

    Science.gov (United States)

    Kelley, Timothy; Amon, Mary J.; Bertenthal, Bennett I.

    2018-01-01

    Participant accuracy in identifying spoof and non-spoof websites was best captured using a model that included real-time indicators of decision-making behavior, as compared to two-factor and survey-based models. Findings validate three widely applicable measures of user behavior derived from mouse tracking recordings, which can be utilized in cyber security and user intervention research. Survey data alone are not as strong at predicting risky Internet behavior as models that incorporate real-time measures of user behavior, such as mouse tracking. PMID:29713296

  5. Dynamic Simulation of Human Gait Model With Predictive Capability.

    Science.gov (United States)

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.

  6. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adapted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield-weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall-Butcher Model. Three sets of ...

  7. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risø National Lab., Roskilde (Denmark)

    1997-12-31

    This paper describes a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given. However, similar results from Europe will be given.

  8. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  9. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  10. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use observed microseism results from many seismic stations around the world, established to study the time series of continental crust activity, with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...

  11. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine a formula that predicts the injury severity score from parameters obtained in the emergency department on arrival, we reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05, and the Durbin-Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can easily predict the injury severity score in the emergency department.

  12. Predicting Career Advancement with Structural Equation Modelling

    Science.gov (United States)

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  13. Predicting weed problems in maize cropping by species distribution modelling

    Directory of Open Access Journals (Sweden)

    Bürger, Jana

    2014-02-01

    Increasing maize cultivation and changed cropping practices promote the selection of typical maize weeds that may also profit strongly from climate change. Predicting potential weed problems is of high interest for plant production. Within the project KLIFF, experiments were combined with species distribution modelling for this task in the region of Lower Saxony, Germany. For our study, we modelled the ecological and damage niches of nine weed species that are significant and widespread in maize cropping in a number of European countries. Species distribution models describe the ecological niche of a species, that is, the environmental conditions under which a species can maintain a vital population. It is also possible to estimate a damage niche, i.e., the conditions under which a species causes damage in agricultural crops. For this, we combined occurrence data from European national databases with high-resolution climate, soil and land use data. Models were also projected to simulated climate conditions for the time horizon 2070-2100 in order to estimate climate change effects. Modelling results indicate favourable conditions for typical maize weed occurrence virtually all over the study region, but only a few species are important in maize cropping. This is in good accordance with the findings of an earlier maize weed monitoring. Reaction to changing climate conditions is species-specific: for some species it is neutral (E. crus-galli), while other species may gain (Polygonum persicaria) or lose (Viola arvensis) large areas of suitable habitat. All species with damage potential under present conditions will remain important in maize cropping, and some more species will gain regional importance (Calystegia sepium, Setaria viridis).

  14. Predictive models for monitoring and analysis of the total zooplankton

    Directory of Open Access Journals (Sweden)

    Obradović Milica

    2014-01-01

    In recent years, modeling and prediction of total zooplankton abundance have been performed with various tools and techniques, among which data mining tools have been less frequent. The purpose of this paper is to automatically determine the degree of dependency and the influence of physical, chemical and biological parameters on total zooplankton abundance, through the design of specific data mining models. For this purpose, an analysis of key influencers was used. The analysis is based on data obtained from the SeLaR information system - specifically, data from two reservoirs (Gruža and Grošnica) with different morphometric characteristics and trophic states. The data were transformed into an optimal structure for analysis, upon which a data mining model based on the Naïve Bayes algorithm was constructed. The results of the analysis imply that in both reservoirs, parameters of groups and species of zooplankton have the greatest influence on total zooplankton abundance. If these inputs (group and zooplankton species) are left out, differences in the impact of physical, chemical and other biological parameters can be noted between the reservoirs. In the Grošnica reservoir, the analysis showed that the temporal dimension (month), nitrates, water temperature, chemical oxygen demand, chlorophyll and chlorides had the key influence, with strong relative impact. In the Gruža reservoir, the key influence parameters for total zooplankton were the spatial dimension (location), water temperature and physiological groups of bacteria. The results show that the presented data mining model is usable on any kind of aquatic ecosystem and can also serve for the detection of inputs which could be the basis for future analysis and modeling.

  15. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    A reference test dataset is used to test the model. The sensitivity of gender prediction has been increased relative to the current approach based on genotype composition in ChrX. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates higher quality of the sequenced data.

  16. A predictive pilot model for STOL aircraft landing

    Science.gov (United States)

    Kleinman, D. L.; Killingsworth, W. R.

    1974-01-01

    An optimal control approach has been used to model pilot performance during STOL flare and landing. The model is used to predict pilot landing performance for three STOL configurations, each having a different level of automatic control augmentation. Model predictions are compared with flight simulator data. It is concluded that the model can be an effective design tool for studying analytically the effects of display modifications, different stability augmentation systems, and proposed changes in the landing area geometry.

  17. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under the current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
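
    A minimal numerical reading of the two criteria on a toy linear "crop model" (all numbers invented): MSEP_fixed scores one fixed parameterization against data, while the model-variance part of MSEP_uncertain(X) is estimated by rerunning the prediction over sampled parameter sets.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(0, 10, 50)                 # input, e.g. growing degree days
      y_obs = 2.0 * x + rng.normal(scale=2.0, size=x.size)

      y_fixed = 2.1 * x                          # one model, fixed parameters
      msep_fixed = np.mean((y_obs - y_fixed) ** 2)

      # Parameter uncertainty: slope sampled around its estimate
      preds = np.stack([b * x for b in rng.normal(2.1, 0.15, size=500)])
      squared_bias = np.mean((y_obs - preds.mean(axis=0)) ** 2)
      model_var = preds.var(axis=0).mean()
      print("MSEP_fixed              = %.2f" % msep_fixed)
      print("squared bias + variance = %.2f + %.2f" % (squared_bias, model_var))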

  18. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions...

  19. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...

  20. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  1. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of "any fall" and "recurrent falls." Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized the AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  2. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems.

  3. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  4. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identifying new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  5. Prediction models for successful external cephalic version: a systematic review

    NARCIS (Netherlands)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M.; Molkenboer, Jan F. M.; van der Post, Joris A. M.; Mol, Ben W.; Kok, Marjolein

    2015-01-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015.

  6. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using nine daily recorded meteorological variables from the past 20 winters (1992–2012). There are six ...

  7. Mathematical model for dissolved oxygen prediction in Cirata ...

    African Journals Online (AJOL)

    This paper presents the implementation and performance of a mathematical model to predict the concentration of dissolved oxygen in Cirata Reservoir, West Java, by using an Artificial Neural Network (ANN). The simulation program was created using Visual Studio 2012 C# software with the ANN model implemented in it. Prediction ...

  8. A strongly nonlinear reaction-diffusion model for a deterministic diffusive epidemic

    International Nuclear Information System (INIS)

    Kirane, M.; Kouachi, S.

    1992-10-01

    In the present paper the mathematical validity of a model on the spread of an infectious disease is proved. This model was proposed by Bailey. The mathematical validity is proved by means of a positivity, uniqueness and existence theorem. In spite of the apparent simplicity of the problem, the solution requires a delicate set of techniques. It seems very difficult to extend these techniques to a model in more than one dimension without imposing conditions on the diffusivities. (author). 7 refs

  9. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  10. The strong non-reciprocity of metamaterial absorber: characteristic, interpretation and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Li Yuanxun; Xie Yunsong; Zhang Huaiwu; Liu Yingli; Wen Qiye; Ling Weiwei, E-mail: liyuanxun@uestc.edu.c [State Key Laboratory of Electronic Thin Film and Integrated Devices, University of Electronic Science and Technology of China, Chengdu, 610054 (China)

    2009-05-07

    We simulated the metamaterial absorbers in two propagation conditions and observed the universal phenomenon of strong non-reciprocity. It is found that this non-reciprocity cannot be well interpreted using the effective medium theory, which indicates that designing and understanding the metamaterial absorber on the basis of the proposed effective medium theory may not be applicable. The reason is that the metamaterial absorber does not satisfy the homogeneous-effective limit. We therefore put forward a three-parameter modified effective medium theory to fully describe the metamaterial absorbers. We have also investigated the relationships of S-parameters and absorptance among the metamaterial absorbers and the two components inside. The power absorption distributions in these three structures are then discussed in detail. It can be concluded that the absorption derives from the ERR structure and is enhanced largely by the coupling mechanism, and that the strong non-reciprocity results from the different roles the wire structure plays in the two propagation conditions.

  11. The strong non-reciprocity of metamaterial absorber: characteristic, interpretation and modelling

    International Nuclear Information System (INIS)

    Li Yuanxun; Xie Yunsong; Zhang Huaiwu; Liu Yingli; Wen Qiye; Ling Weiwei

    2009-01-01

    We simulated the metamaterial absorbers in two propagation conditions and observed the universal phenomenon of strong non-reciprocity. It is found that this non-reciprocity cannot be well interpreted using the effective medium theory, which indicates that designing and understanding the metamaterial absorber on the basis of the proposed effective medium theory may not be applicable. The reason is that the metamaterial absorber does not satisfy the homogeneous-effective limit. We therefore put forward a three-parameter modified effective medium theory to fully describe the metamaterial absorbers. We have also investigated the relationships of S-parameters and absorptance among the metamaterial absorbers and the two components inside. The power absorption distributions in these three structures are then discussed in detail. It can be concluded that the absorption derives from the ERR structure and is enhanced largely by the coupling mechanism, and that the strong non-reciprocity results from the different roles the wire structure plays in the two propagation conditions.

  12. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex seamless integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  13. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) to simulate discharge for an ungauged catchment and (ii) to use models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and had therefore to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of ...

  14. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Full Text Available Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to the lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
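
    The contrast drawn in the abstract can be reproduced in miniature. The toy below, on synthetic data with invented coefficients, compares a logistic model adjusting for all covariates against one that substitutes an estimated propensity score, scoring both by AUC and Brier score.

        # Toy comparison: full-covariate outcome model vs. propensity-score model.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import brier_score_loss, roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n = 5000
        X = rng.normal(size=(n, 5))
        treat = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0] + 0.5 * X[:, 1]))).astype(int)
        logit = -1 + treat + X @ np.array([0.8, -0.5, 0.4, 0.0, 0.3])
        y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        Xtr, Xte, ttr, tte, ytr, yte = train_test_split(X, treat, y, random_state=1)

        # Full model: treatment plus all covariates.
        full = LogisticRegression(max_iter=1000).fit(np.column_stack([ttr, Xtr]), ytr)
        p_full = full.predict_proba(np.column_stack([tte, Xte]))[:, 1]

        # Propensity model: covariates -> treatment; outcome model uses (treat, ps).
        ps_model = LogisticRegression(max_iter=1000).fit(Xtr, ttr)
        ps_tr = ps_model.predict_proba(Xtr)[:, 1]
        ps_te = ps_model.predict_proba(Xte)[:, 1]
        prop = LogisticRegression(max_iter=1000).fit(np.column_stack([ttr, ps_tr]), ytr)
        p_prop = prop.predict_proba(np.column_stack([tte, ps_te]))[:, 1]

        for name, p in [("full covariates", p_full), ("propensity score", p_prop)]:
            print(f"{name}: AUC={roc_auc_score(yte, p):.3f}  "
                  f"Brier={brier_score_loss(yte, p):.3f}")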

  15. NOx PREDICTION FOR FBC BOILERS USING EMPIRICAL MODELS

    Directory of Open Access Journals (Sweden)

    Jiří Štefanica

    2014-02-01

    Full Text Available Reliable prediction of NOx emissions can provide useful information for boiler design and fuel selection. Recently used kinetic prediction models for FBC boilers are overly complex and require large computing capacity. Even so, there are many uncertainties in the case of FBC boilers. An empirical modeling approach for NOx prediction has been used exclusively for PCC boilers. No reference is available for modifying this method for FBC conditions. This paper presents possible advantages of empirical modeling based prediction of NOx emissions for FBC boilers, together with a discussion of its limitations. Empirical models are reviewed, and are applied to operation data from FBC boilers used for combusting Czech lignite coal or coal-biomass mixtures. Modifications to the model are proposed in accordance with theoretical knowledge and prediction accuracy.

  16. Complex versus simple models: ion-channel cardiac toxicity prediction.

    Science.gov (United States)

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  17. Complex versus simple models: ion-channel cardiac toxicity prediction

    Directory of Open Access Journals (Sweden)

    Hitesh B. Mistry

    2018-02-01

    Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  18. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  19. A New Material Constitutive Model for Predicting Cladding Failure

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Joe; Dunham, Robert [ANATECH Corp., San Diego, CA (United States); Rashid, Mark [University of California Davis, Davis, CA (United States); Machiels, Albert [EPRI, Palo Alto, CA (United States)

    2009-06-15

    following dry storage, for which the development of valid failure criteria would require major efforts. 2. Cladding as a Three-Phase Composite with Constitutively Modeled Damage: In a three-phase composite, there are two distinct damage mechanisms represented in the model: in the first, strain normal to the platelets results in a progressive loss of stress-carrying capacity in both the matrix and the platelet phases, on the plane of the platelets. In the second damage mechanism, the degree of damage associated with one hydride orientation is dependent upon the amount of hydrides in the other, perpendicular orientation. This introduces coupling between the radial and circumferential hydrides that depends on the relative volume fraction of each. This coupled inter-dependence of damage is mathematically formulated as a continuous process assuming both hydride orientations to be always present, even if one of the hydride phases has zero volume-fraction. This mathematical construct allows the evolution of damage to gradually shift from one phase orientation to another, and the magnitude of this shift would depend on the relative volume fraction of each phase. 3. Model Predictions and Comparison to Data: The model is applied to the analysis of a cladding specimen irradiated to an average hydrogen concentration of 206 ppm and pressure tested at 25 °C. The second example is a hydrided cladding specimen with uniform distribution of circumferential hydrides, subjected to radial hydride treatment by cooling from 300 °C under a hoop stress of 225 MPa. The measured stress-strain response is not available, but the total elongation data are ≈17.5% in the axial direction and ≈1.5% in the hoop direction, which illustrates the strong effect of radial hydrides. (authors)

  20. [Application of ARIMA model on prediction of malaria incidence].

    Science.gov (United States)

    Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai

    2016-01-29

    To predict the incidence of local malaria in Hubei Province by applying the autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was applied to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA(1,1,1)(1,1,0)12 model was found to be the relatively optimal one, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence data were within the 95% CI of the predicted values of the model. The prediction effect of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
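
    For readers who want to reproduce the modelling step, the fragment below fits the same seasonal specification, ARIMA(1,1,1)(1,1,0)12, with statsmodels rather than SPSS; the monthly series is simulated rather than the Hubei data.

        # Hedged sketch: seasonal ARIMA fit and one-year-ahead forecast.
        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        rng = np.random.default_rng(2)
        months = 72                                   # six years of monthly counts
        t = np.arange(months)
        incidence = 5 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, months)

        model = SARIMAX(incidence, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12))
        result = model.fit(disp=False)
        print("AIC:", result.aic)                     # model selection used AIC/SBC
        print(result.forecast(steps=12))              # one-year-ahead forecast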

  1. Mobility Modelling through Trajectory Decomposition and Prediction

    OpenAIRE

    Faghihi, Farbod

    2017-01-01

    The ubiquity of mobile devices with positioning sensors makes it possible to derive the user's location at any time. However, constantly sensing the position in order to track the user's movement is not feasible, either due to the unavailability of sensors, or due to computational and storage burdens. In this thesis, we present and evaluate a novel approach for efficiently tracking users' movement trajectories using decomposition and prediction of trajectories. We facilitate tracking by taking advantage ...

  2. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    Science.gov (United States)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity of this formula and the homogeneity assumption for the prediction-error variance are investigated, and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data-fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).
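
    A minimal attenuation relationship with homogeneous prediction-error variance can be fit by ordinary least squares. The sketch below uses a synthetic record set with invented coefficients and omits the paper's Bayesian model class selection step.

        # Toy attenuation fit: ln(PGA) = a + b*M + c*ln(R) + error.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 200
        M = rng.uniform(4.0, 7.5, n)                  # moment magnitude
        R = rng.uniform(5.0, 150.0, n)                # site-to-fault distance (km)
        ln_pga = -1.0 + 0.9 * M - 1.2 * np.log(R) + rng.normal(0, 0.3, n)

        A = np.column_stack([np.ones(n), M, np.log(R)])
        coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)
        resid = ln_pga - A @ coef
        print("a, b, c =", coef)                      # recovers ~(-1.0, 0.9, -1.2)
        print("prediction-error sigma =", resid.std(ddof=3))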

  3. Questioning the quark model. Strong interaction, gravitation and time arrows. An approach to asymptotic freedom

    International Nuclear Information System (INIS)

    Basini, G.

    2003-01-01

    Asymptotic freedom, as a natural result of a theory based on a general approach, derived by a new interpretation of phenomena like the EPR paradox, the black-hole formation and the absence of primary cosmic antimatter is presented. In this approach, conservation laws are considered always and absolutely valid, leading to the possibility of topology changes, and recovering the mutual influence between fundamental forces. Moreover, a new consideration of time arrows leads to asymptotic freedom as a necessary consequence. In fact, asymptotic freedom of strong interactions seems to be a feature common also to gravitational interaction, if induced-gravity theories (t → ∞) are taken into account and a symmetric-time dynamics is recovered in the light of a general conservation principle. (authors)

  4. Questioning the quark model. Strong interaction, gravitation and time arrows. An approach to asymptotic freedom

    Energy Technology Data Exchange (ETDEWEB)

    Basini, G. [Istituto Nazionale di Fisica Nucleare, Frascati (Italy). Lab. Nazionale di Frascati; Capozziello, S. [E.R. Caianiello, Dipt. di Fisica, Roma (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Universita di Salerno, Boronissi, SA (Italy)

    2003-09-01

    Asymptotic freedom, as a natural result of a theory based on a general approach, derived by a new interpretation of phenomena like the EPR paradox, the black-hole formation and the absence of primary cosmic antimatter is presented. In this approach, conservation laws are considered always and absolutely valid, leading to the possibility of topology changes, and recovering the mutual influence between fundamental forces. Moreover, a new consideration of time arrows leads to asymptotic freedom as a necessary consequence. In fact, asymptotic freedom of strong interactions seems to be a feature common also to gravitational interaction, if induced-gravity theories (t → ∞) are taken into account and a symmetric-time dynamics is recovered in the light of a general conservation principle. (authors)

  5. Field-theoretic Methods in Strongly-Coupled Models of General Gauge Mediation

    CERN Document Server

    Fortin, Jean-Francois

    2013-01-01

    An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current-current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry-breaking arises both from a hidden sector and dynamically.

  6. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model for heart disease prediction over all models, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using the Poisson mixture regression model. PMID:27999611
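
    The clustering core of the approach can be shown without the regression part. The toy below fits a two-component Poisson mixture by EM on simulated counts, assigning individuals to low- and high-rate components; it is a simplified stand-in for the paper's concomitant-variable model.

        # Two-component Poisson mixture fitted by expectation-maximization.
        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(4)
        counts = np.concatenate([rng.poisson(1.0, 300), rng.poisson(6.0, 200)])

        w, lam = np.array([0.5, 0.5]), np.array([0.5, 5.0])    # initial guesses
        for _ in range(200):
            # E-step: responsibility of each component for each observation.
            pmf = poisson.pmf(counts[:, None], lam[None, :]) * w[None, :]
            resp = pmf / pmf.sum(axis=1, keepdims=True)
            # M-step: update mixing weights and component rates.
            w = resp.mean(axis=0)
            lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)

        print("weights:", w, "rates:", lam)           # ~ (0.6, 0.4) and (1, 6)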

  7. Functional integral and effective Hamiltonian t-J-V model of strongly correlated electron system

    International Nuclear Information System (INIS)

    Belinicher, V.I.; Chertkov, M.V.

    1990-09-01

    The functional integral representation for the generating functional of the t-J-V model is obtained. In the case close to half filling, this functional integral representation reduces the conventional Hamiltonian of the t-J-V model to the Hamiltonian of a system containing holes and spins 1/2 at each lattice site. This effective Hamiltonian coincides with the one obtained by one of the authors by a different method. This Hamiltonian and its dynamical variables can be used for the description of different magnetic phases of the t-J-V model. (author). 16 refs

  8. A strongly nonlinear reaction diffusion model for a deterministic diffusive epidemic

    International Nuclear Information System (INIS)

    Kirane, M.; Kouachi, S.

    1993-04-01

    In the present paper the mathematical validity of a model on the spread of an infectious disease is proved. This model was proposed by Bailey. The mathematical validity is proved by means of a positivity, uniqueness and existence theorem. Moreover the large time behaviour of the global solutions is analyzed. In spite of the apparent simplicity of the problem, the solution requires a delicate set of techniques. It seems very difficult to extend these techniques to a model in more than one dimension without imposing conditions on the data. (author). 9 refs

  9. Bentonite swelling pressure in strong NaCl solutions. Correlation between model calculations and experimentally determined data

    Energy Technology Data Exchange (ETDEWEB)

    Karnland, O. [Clay Technology, Lund (Sweden)

    1997-12-01

    A number of quite different quantitative models concerning swelling pressure in bentonite clay have been proposed by different researchers over the years. The present report examines some of the models which possibly may be used also for saline conditions. A discrepancy between calculated and measured values was noticed for all models at brine conditions. In general the models predicted a too low swelling pressure compared to what was experimentally found. An osmotic component in the clay/water system is proposed in order to improve the previous conservative use of the thermodynamic model. Calculations of this osmotic component is proposed to be made by use of the clay cation exchange capacity and Donnan equilibrium. Calculations made by this approach showed considerably better correlation to literature laboratory data, compared to calculations made by the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved thermodynamic model predicts substantial bentonite swelling pressures also in saturated sodium chloride solution if the density of the system is high enough. In practice, the model predicts a substantial swelling pressure for the buffer in a KBS-3 repository if the system is exposed to brines, but the positive effects of mixing bentonite into a backfill material will be lost, since the available compaction technique does not give a sufficiently high bentonite density. 37 refs, 15 figs

  10. Bentonite swelling pressure in strong NaCl solutions. Correlation between model calculations and experimentally determined data

    International Nuclear Information System (INIS)

    Karnland, O.

    1997-12-01

    A number of quite different quantitative models concerning swelling pressure in bentonite clay have been proposed by different researchers over the years. The present report examines some of the models which possibly may be used also for saline conditions. A discrepancy between calculated and measured values was noticed for all models at brine conditions. In general the models predicted a too low swelling pressure compared to what was experimentally found. An osmotic component in the clay/water system is proposed in order to improve the previous conservative use of the thermodynamic model. Calculations of this osmotic component is proposed to be made by use of the clay cation exchange capacity and Donnan equilibrium. Calculations made by this approach showed considerably better correlation to literature laboratory data, compared to calculations made by the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved thermodynamic model predicts substantial bentonite swelling pressures also in saturated sodium chloride solution if the density of the system is high enough. In practice, the model predicts a substantial swelling pressure for the buffer in a KBS-3 repository if the system is exposed to brines, but the positive effects of mixing bentonite into a backfill material will be lost, since the available compaction technique does not give a sufficiently high bentonite density

  11. Predicting birth weight with conditionally linear transformation models.

    Science.gov (United States)

    Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten

    2016-12-01

    Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
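
    The idea of reporting fetus-specific intervals rather than point estimates can be sketched with quantile regression, one of the comparators in the paper. Below, gradient boosting with a quantile loss yields a 90% interval on synthetic, heteroscedastic data; this is not the CLTM estimator itself.

        # Quantile models at 5%, 50% and 95% give a point prediction plus interval.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(5)
        n = 2000
        scans = rng.normal(size=(n, 3))               # stand-in ultrasound measures
        spread = 150 + 100 * (scans[:, 0] > 0)        # heteroscedastic noise
        bw = 3400 + 300 * scans @ np.array([0.6, 0.3, 0.4]) + rng.normal(0, spread)

        models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                               random_state=0).fit(scans, bw)
                  for q in (0.05, 0.5, 0.95)}
        x_new = scans[:3]
        lo, mid, hi = (models[q].predict(x_new) for q in (0.05, 0.5, 0.95))
        for l, m, h in zip(lo, mid, hi):
            print(f"predicted BW {m:.0f} g, 90% interval [{l:.0f}, {h:.0f}] g")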

  12. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of model on current time provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework started with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying pattern, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. Hence a procedure for pattern identification is developed to identify the proper pattern that fits the current period. Based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction result of the proposed framework is then compared to other techniques. It is shown that the proposed framework provides superior performance as compared to others
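
    A stripped-down version of the framework (segment, cluster, fit one model per cluster, route new data by cluster) might look like the sketch below on a synthetic daily-cycle series; the pattern-identification step is reduced to a nearest-centroid lookup.

        # Cluster daily subsequences, then train one predictor per cluster.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        t = np.arange(24 * 200)                       # hourly data, 200 days
        series = np.clip(np.sin(2 * np.pi * t / 24), 0, None) \
                 * (0.6 + 0.4 * rng.random(t.size))

        win = 24                                      # one-day subsequences
        segs = series.reshape(-1, win)
        X, y = segs[:-1], segs[1:, 0]                 # next morning from last day

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
        models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
                  for c in range(3)}

        c = km.predict(X[-1:])[0]                     # identify today's pattern
        print("cluster", c, "-> prediction:", models[c].predict(X[-1:])[0])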

  13. On a low energy, strong interaction model, unifying mesons and baryons

    International Nuclear Information System (INIS)

    Kalafatis, D.

    1993-03-01

    This thesis is concerned with the study of a unified theory of mesons and baryons. An effective Lagrangian with the low-mass mesons, generalizing the Skyrme model, is constructed. The vector meson fields are introduced as gauge fields in the linear sigma model instead of the non-linear sigma model. Topological soliton solutions of the model are examined and the nucleon-nucleon interaction in the product approximation is investigated. The leading correction to the classical skyrmion mass, the Casimir energy, is evaluated. The problem of the stability of topological solitons when vector fields enter the chiral Lagrangian is also studied. It is shown that the soliton is stable in very much the same way as with the ω meson and that peculiar classical doublet solutions do not exist

  14. A model-independent description of few-body system with strong interaction

    International Nuclear Information System (INIS)

    Simenog, I.V.

    1985-01-01

    In this contribution, the authors discuss the formulation of equations that provide model-independent description of systems of three and more nucleons irrespective of the details of the interaction, substantiate the approach, estimate the correction terms with respect to the force range, and give basic qualitative results obtained by means of the model-independent procedure. They consider three nucleons in the doublet state (spin S=1/2) taking into account only S-interaction. The elastic nd-scattering amplitude may be found from the model-independent equations that follow from the Faddeev equations in the short-range-force limit. They note that the solutions of several model-independent equations and basic results obtained with the use of this approach may serve both as a standard solution and starting point in the discussion of various conceptions concerning the details of nuclear interactions

  15. Oblique corrections in a model with neutrino masses and strong CP resolution

    International Nuclear Information System (INIS)

    Natale, A.A.; Rodrigues da Silva, P.S.

    1994-01-01

    Our intention in this work is to verify the order of the limits we can obtain on the light neutrino masses, through the calculation and comparison of the oblique corrections with the experimental data. The calculation will be performed for a specific model, although we expect it to be sufficiently general to give an idea of the limits that can be obtained on neutrino masses in this class of models. (author)

  16. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    J. Earth Syst. Sci. (2017) 126: 33 ... climate change, glaciology and crop models in agriculture. Different ... In areas where local topography strongly influences precipitation ... (vii) cloud amount, (viii) cloud type and (ix) sunshine hours.

  17. A Comprehensive Analysis of Jet Quenching via a Hybrid Strong/Weak Coupling Model for Jet-Medium Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Casalderrey-Solana, Jorge [Departament d' Estructura i Constituents de la Matèria and Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Gulhan, Doga Can [Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Milhano, José Guilherme [CENTRA, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Physics Department, Theory Unit, CERN, CH-1211 Genève 23 (Switzerland); Pablos, Daniel [Departament d' Estructura i Constituents de la Matèria and Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rajagopal, Krishna [Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2016-12-15

    Within a hybrid strong/weak coupling model for jets in strongly coupled plasma, we explore jet modifications in ultra-relativistic heavy ion collisions. Our approach merges the perturbative dynamics of hard jet evolution with the strongly coupled dynamics which dominates the soft exchanges between the fast partons in the jet shower and the strongly coupled plasma itself. We implement this approach in a Monte Carlo, which supplements the DGLAP shower with the energy loss dynamics as dictated by holographic computations, up to a single free parameter that we fit to data. We then augment the model by incorporating the transverse momentum picked up by each parton in the shower as it propagates through the medium, at the expense of adding a second free parameter. We use this model to discuss the influence of the transverse broadening of the partons in a jet on intra-jet observables. In addition, we explore the sensitivity of such observables to the back-reaction of the plasma to the passage of the jet.

  18. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
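
    The proposed meta-model can be imitated with a stacked regressor. The sketch below combines a dense (ridge) and a sparse (lasso) predictor on simulated SNP dosages; it illustrates the ensembling idea only, since the paper's meta-model combines whole-genome predictors with GWAMA risk scores.

        # Dense and sparse genomic predictors combined in a stacked meta-model.
        import numpy as np
        from sklearn.ensemble import StackingRegressor
        from sklearn.linear_model import Lasso, LinearRegression, Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        n, p = 800, 500
        G = rng.binomial(2, 0.3, size=(n, p)).astype(float)    # SNP dosages 0/1/2
        beta = np.zeros(p)
        beta[:20] = rng.normal(0, 0.3, 20)            # a few moderate effects
        pheno = G @ beta + rng.normal(0, 1.0, n)

        meta = StackingRegressor(
            estimators=[("ridge", Ridge(alpha=10.0)),
                        ("lasso", Lasso(alpha=0.05, max_iter=5000))],
            final_estimator=LinearRegression())
        for name, est in [("ridge", Ridge(alpha=10.0)),
                          ("lasso", Lasso(alpha=0.05, max_iter=5000)),
                          ("meta", meta)]:
            print(name, "mean CV R^2:", cross_val_score(est, G, pheno, cv=5).mean())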

  19. Model predictive control of a crude oil distillation column

    Directory of Open Access Journals (Sweden)

    Morten Hovd

    1999-04-01

    Full Text Available The project of designing and implementing model-based predictive control on the vacuum distillation column at the Nynäshamn Refinery of Nynäs AB is described in this paper. The paper describes in detail the modeling for the model-based control, covers the controller implementation, and documents the benefits gained from the model-based controller.
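
    The abstract describes an industrial deployment, but the receding-horizon mechanics at the heart of any MPC fit in a few lines: solve a finite-horizon tracking problem at each step and apply only the first input. The plant below is an invented two-state linear toy, not the column model.

        # Bare-bones unconstrained MPC loop for a linear plant.
        import numpy as np

        A = np.array([[0.9, 0.1], [0.0, 0.8]])
        B = np.array([[0.0], [0.5]])
        H = 10                                        # prediction horizon
        setpoint = np.array([1.0, 1.0])               # reachable steady state

        def mpc_input(x):
            # Predictions x_{k+1..k+H} are linear in the inputs u_{0..H-1}.
            Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(H)])
            Gam = np.zeros((2 * H, H))
            for i in range(H):
                for j in range(i + 1):
                    Gam[2 * i:2 * i + 2, j] = \
                        (np.linalg.matrix_power(A, i - j) @ B).ravel()
            target = np.tile(setpoint, H) - Phi @ x
            u = np.linalg.lstsq(Gam, target, rcond=None)[0]
            return u[0]                               # apply first input only

        x = np.zeros(2)
        for _ in range(30):
            x = A @ x + B.ravel() * mpc_input(x)
        print("state after 30 steps:", x)             # settles near the setpoint

    A production controller adds input and state constraints, turning each step into a small quadratic program; the least-squares solve above is the unconstrained special case.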

  20. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    T. Wu; E. Lester; M. Cloke [University of Nottingham, Nottingham (United Kingdom). Nottingham Energy and Fuel Centre

    2005-07-01

    Poor burnout in a coal-fired power plant has marked penalties in the form of reduced energy efficiency and elevated waste material that cannot be utilized. The prediction of coal combustion behaviour in a furnace is of great significance in providing valuable information not only for process optimization but also for coal buyers in the international market. Coal combustion models have been developed that can make predictions about burnout behaviour and burnout potential. Most of these kinetic models require standard parameters such as volatile content, particle size and assumed char porosity in order to make a burnout prediction. This paper presents a new model called the Char Burnout Model (ChB) that also uses detailed information about char morphology in its prediction. The model can use data input from one of two sources. Both sources are derived from image analysis techniques. The first from individual analysis and characterization of real char types using an automated program. The second from predicted char types based on data collected during the automated image analysis of coal particles. Modelling results were compared with a different carbon burnout kinetic model and burnout data from re-firing the chars in a drop tube furnace operating at 1300 °C, 5% oxygen, across several residence times. The improved agreement between the ChB model and the DTF experimental data showed that the inclusion of char morphology in combustion models can improve model predictions. 27 refs., 4 figs., 4 tabs.

  1. Questioning the Faith - Models and Prediction in Stream Restoration (Invited)

    Science.gov (United States)

    Wilcock, P.

    2013-12-01

    River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.

  2. Predicting Magazine Audiences with a Loglinear Model.

    Science.gov (United States)

    1987-07-01

    Predicting Magazine Audiences with a Loglinear Model, by Peter J. Danaher. ... An important use of e.d. estimates is in media selection (Aaker 1975; Lee 1962, 1963; Little and Lodish 1969). All advertising campaigns have a budget. ... BBD we obtain the modified BBD (MBBD). Let X be the number of exposures a person has to k insertions in a single magazine. The mass function of the ...

  3. Developing models for the prediction of hospital healthcare waste generation rate.

    Science.gov (United States)

    Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe

    2016-01-01

    An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to the increase of the healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, there is no mathematical model developed for Ethiopian hospitals to predict healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of a healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition and generation rate and to develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the number of inpatients and outpatients were revealed to be significant factors on the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. © The Author(s) 2015.
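
    The reported relationship is a one-variable linear regression; a hypothetical illustration with invented numbers (not the Ethiopian hospital data) follows:

        # Regress daily waste on inpatient count and report R^2.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(8)
        inpatients = rng.integers(50, 400, 120).astype(float)
        waste_kg = 5.0 + 0.6 * inpatients + rng.normal(0, 10, 120)

        X = inpatients.reshape(-1, 1)
        model = LinearRegression().fit(X, waste_kg)
        print(f"waste = {model.intercept_:.1f} + {model.coef_[0]:.2f} * inpatients,"
              f" R^2 = {model.score(X, waste_kg):.3f}")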

  4. Predicting and Modelling of Survival Data when Cox's Regression Model does not hold

    DEFF Research Database (Denmark)

    Scheike, Thomas H.; Zhang, Mei-Jie

    2002-01-01

    Aalen model; additive risk model; counting processes; competing risk; Cox regression; flexible modeling; goodness of fit; prediction of survival; survival analysis; time-varying effects

  5. In vitro fertilization and embryo culture strongly impact the placental transcriptome in the mouse model.

    Directory of Open Access Journals (Sweden)

    Patricia Fauque

    Full Text Available BACKGROUND: Assisted Reproductive Technologies (ART are increasingly used in humans; however, their impact is now questioned. At blastocyst stage, the trophectoderm is directly in contact with an artificial medium environment, which can impact placental development. This study was designed to carry out an in-depth analysis of the placental transcriptome after ART in mice. METHODOLOGY/PRINCIPAL FINDINGS: Blastocysts were transferred either (1) after in vivo fertilization and development (control group) or (2) after in vitro fertilization and embryo culture. Placentas were then analyzed at E10.5. Six percent of transcripts were altered at the two-fold threshold in placentas of manipulated embryos, 2/3 of transcripts being down-regulated. Strikingly, the X-chromosome harbors 11% of altered genes, 2/3 being induced. Imprinted genes were modified similarly to the X. Promoter composition analysis indicates that FOXA transcription factors may be involved in the transcriptional deregulations. CONCLUSIONS: For the first time, our study shows that in vitro fertilization associated with embryo culture strongly modifies the placental expression profile, long after embryo manipulations, meaning that the stress of the artificial environment is memorized after implantation. Expression of X and imprinted genes is also greatly modulated, probably to adapt to adverse conditions. Our results highlight the importance of studying human placentas from ART.

  6. Predicting Error Bars for QSAR Models

    International Nuclear Information System (INIS)

    Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Ter Laak, Antonius; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Mueller, Klaus-Robert

    2007-01-01

    Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early on in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees and ridge regression algorithms based on 14556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7013 new measurements from the last months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance based techniques for the other modelling approaches
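
    The error bars mentioned in the abstract fall straight out of the Gaussian Process posterior: each prediction comes back with a standard deviation. A one-dimensional toy, not the Bayer log D7 pipeline:

        # GP regression returns a predictive mean and standard deviation.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(9)
        X = rng.uniform(-3, 3, 40).reshape(-1, 1)     # stand-in for a descriptor
        y = np.sin(X).ravel() + rng.normal(0, 0.1, 40)

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
        X_new = np.linspace(-4, 4, 5).reshape(-1, 1)
        mean, std = gp.predict(X_new, return_std=True)
        for x, m, s in zip(X_new.ravel(), mean, std):
            print(f"x={x:+.1f}: prediction {m:+.2f} +/- {s:.2f}")  # bars widen off-domain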

  7. Prediction models for successful external cephalic version: a systematic review.

    Science.gov (United States)

    Velzel, Joost; de Hundt, Marcella; Mulder, Frederique M; Molkenboer, Jan F M; Van der Post, Joris A M; Mol, Ben W; Kok, Marjolein

    2015-12-01

    To provide an overview of existing prediction models for successful ECV, and to assess their quality, development and performance. We searched MEDLINE, EMBASE and the Cochrane Library to identify all articles reporting on prediction models for successful ECV published from inception to January 2015. We extracted information on study design, sample size, model-building strategies and validation. We evaluated the phases of model development and summarized their performance in terms of discrimination, calibration and clinical usefulness. We collected different predictor variables together with their defined significance, in order to identify important predictor variables for successful ECV. We identified eight articles reporting on seven prediction models. All models were subjected to internal validation. Only one model was also validated in an external cohort. Two prediction models had a low overall risk of bias, of which only one showed promising predictive performance at internal validation. This model also completed the phase of external validation. For none of the models was the impact on clinical practice evaluated. The most important predictor variables for successful ECV described in the selected articles were parity, placental location, breech engagement and the fetal head being palpable. One model was assessed for discrimination and calibration using internal (AUC 0.71) and external validation (AUC 0.64), while two other models were assessed with discrimination and calibration, respectively. We found one prediction model for breech presentation that was validated in an external cohort and had acceptable predictive performance. This model should be used to counsel women considering ECV. Copyright © 2015. Published by Elsevier Ireland Ltd.

  8. Risk Prediction Model for Severe Postoperative Complication in Bariatric Surgery.

    Science.gov (United States)

    Stenberg, Erik; Cao, Yang; Szabo, Eva; Näslund, Erik; Näslund, Ingmar; Ottosson, Johan

    2018-01-12

    Factors associated with risk for adverse outcome are important considerations in the preoperative assessment of patients for bariatric surgery. As yet, prediction models based on preoperative risk factors have not been able to predict adverse outcome sufficiently. This study aimed to identify preoperative risk factors and to construct a risk prediction model based on these. Patients who underwent a bariatric surgical procedure in Sweden between 2010 and 2014 were identified from the Scandinavian Obesity Surgery Registry (SOReg). Associations between preoperative potential risk factors and severe postoperative complications were analysed using a logistic regression model. A multivariate model for risk prediction was created and validated in the SOReg for patients who underwent bariatric surgery in Sweden, 2015. Revision surgery (standardized OR 1.19, 95% confidence interval (CI) 1.14-1.24) was among the strongest factors in the prediction model. Despite high specificity, the sensitivity of the model was low. Revision surgery, high age, low BMI, large waist circumference, and dyspepsia/GERD were associated with an increased risk for severe postoperative complication. The prediction model based on these factors, however, had a sensitivity that was too low to predict risk in the individual patient case.

  9. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while at the same time assuring quality of care. To achieve quality care under a budget constraint, an efficient prediction model is required. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring and emergency care. These models are not efficient because they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction approaches is that training suffers from local optima errors, which induces overhead and reduces prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model based on a resilient backpropagation neural network. Experiments were conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcomes show that the proposed model reduces RMSE and MAPE compared with an existing backpropagation-based artificial neural network. Overall, the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of health care management.
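
    The two error measures used in the evaluation are straightforward to compute; a minimal sketch (the inflow numbers are illustrative):

        import numpy as np

        # Observed vs. predicted daily patient inflow (illustrative values).
        observed  = np.array([120., 135., 150., 160., 148.])
        predicted = np.array([118., 140., 146., 158., 155.])

        rmse = np.sqrt(np.mean((observed - predicted) ** 2))
        mape = np.mean(np.abs((observed - predicted) / observed)) * 100.0
        print(f"RMSE = {rmse:.2f}, MAPE = {mape:.2f}%")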

  10. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the development and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.
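
    A sketch of fitting a Cox proportional hazards model and reading off Harrell's C; the lifelines library is one way to do this, and the toy columns below are assumptions, not NHIC fields.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Toy cohort: follow-up time (years), event indicator, two covariates.
        df = pd.DataFrame({
            "years":  [3.0, 11.4, 7.2, 9.9, 5.1, 12.0, 8.3, 10.5],
            "cancer": [1, 0, 1, 0, 1, 0, 0, 1],
            "age":    [61, 48, 52, 66, 57, 44, 59, 50],
            "bmi":    [24.1, 27.3, 22.8, 25.0, 26.2, 23.5, 24.8, 26.0],
        })
        cph = CoxPHFitter().fit(df, duration_col="years", event_col="cancer")
        print("Harrell's C:", cph.concordance_index_)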

  11. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult, due to features of the vehicles including strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative sum idea from gray theory, which weakens the effect of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online using gray identification. Finally, a mixed H2/H∞ robust predictive control law is proposed based on linear matrix inequalities (LMIs) and receding horizon optimization techniques. Because MPC actively handles system constraints, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically according to Lyapunov theory and illustrated by simulation results.
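
    The data-regeneration step (the accumulative sum, or 1-AGO, operation from gray theory) can be sketched in isolation; the signal and noise level below are illustrative, not the paper's flight data.

        import numpy as np

        rng = np.random.default_rng(1)
        k = np.arange(100)
        true_signal = 1.0 + 0.02 * k          # slowly varying identification data
        noisy = true_signal + 0.2 * rng.normal(size=k.size)

        # 1-AGO: cumulative summation regenerates a smoother, near-monotone
        # sequence, attenuating zero-mean noise relative to the trend.
        ago = np.cumsum(noisy)
        ago_true = np.cumsum(true_signal)

        print("relative noise, raw  :",
              np.std(noisy - true_signal) / np.std(true_signal))
        print("relative noise, 1-AGO:",
              np.std(ago - ago_true) / np.std(ago_true))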

  12. Kosterlitz-Thouless transitions in simple spin-models with strongly varying vortex densities

    NARCIS (Netherlands)

    Himbergen, J.E.J.M. van

    1985-01-01

    A generalized XY-model, consisting of a family of nearest neighbour potentials of varying shape, for classical planar spins on a two-dimensional square lattice is analysed by a combination of Migdal-Kadanoff real-space renormalization and Monte Carlo simulations on a sequence of finite lattices of

  13. Strong time-consistency in the cartel-versus-fringe model

    NARCIS (Netherlands)

    Groot, F.; Withagen, C.A.A.M.; Zeeuw, de A.J.

    2003-01-01

    Due to developments on the oil market in the 1970s, the theory of exhaustible resources was extended with the cartel-versus-fringe model to characterize markets with one big coherent cartel and a large number of small suppliers called the fringe. Because cartel and fringe are leader and follower,

  14. A model for quasi parity-doublet spectra with strong coriolis mixing

    International Nuclear Information System (INIS)

    Minkov, N.; Drenska, S.; Strecker, M.

    2013-01-01

    The model of coherent quadrupole and octupole motion (CQOM) is combined with the reflection-asymmetric deformed shell model (DSM) in a way allowing fully microscopic description of the Coriolis decoupling and K-mixing effects in the quasi parity-doublet spectra of odd-mass nuclei. In this approach the even-even core is considered within the CQOM model, while the odd nucleon is described within DSM with pairing interaction. The Coriolis decoupling/mixing factors are calculated through a parity-projection of the single-particle wave function. Expressions for the Coriolis mixed quasi parity-doublet levels are obtained in the second order of perturbation theory, while the K-mixed core plus particle wave function is obtained in the first order. Expressions for the B(E1), B(E2) and B(E3) reduced probabilities for transitions within and between different quasi-doublets are obtained by using the total K-mixed wave function. The model scheme is elaborated in a form capable of describing the yrast and non-yrast quasi parity-doublet spectra in odd-mass nuclei. (author)

  15. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. To investigate whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability, we used three different machine learning methods to build models predicting breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples from other stages during training did not help; in fact, it made things worse in some cases. The most important features for predicting survivability were also found to differ across stages. By evaluating the models separately on different stages we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
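
    A sketch of the experimental design: train one joint model on all stages pooled and one model per stage, then compare cross-validated performance. The data, stage effects and learner (logistic regression as a stand-in) are synthetic assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n = 600
        stage = rng.integers(1, 4, size=n)                  # stages 1-3
        X = rng.normal(size=(n, 6)) + stage[:, None] * 0.3
        # Outcome rate varies by stage, mimicking stage-dependent survivability.
        y = (rng.random(n) < 0.2 + 0.15 * (stage - 1)).astype(int)

        joint = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
        per_stage = [
            cross_val_score(LogisticRegression(),
                            X[stage == s], y[stage == s], cv=5).mean()
            for s in (1, 2, 3)
        ]
        print("joint accuracy:", joint)
        print("stage-specific accuracies:", per_stage)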

  16. Model predictions for auxiliary heating in spheromaks

    International Nuclear Information System (INIS)

    Fauler, T.K.; Khua, D.D.

    1997-01-01

    Calculations are presented of the plasma temperatures expected under auxiliary heating in spheromaks. A model that gives good agreement with earlier joule-heating experiments is used. The model includes heat losses due to magnetic fluctuations and shows that plasma temperatures of the order of a kilo-electron-volt may be achieved in a small device with a radius of only 0.3 m

  17. Validating predictions from climate envelope models.

    Directory of Open Access Journals (Sweden)

    James I Watling

    Full Text Available Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species' distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.

  18. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
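
    Sensitivity and specificity as used in these two records follow the standard confusion-matrix definitions; a minimal sketch with synthetic presence/absence data:

        import numpy as np

        observed  = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # presences/absences at t2
        predicted = np.array([1, 0, 0, 0, 1, 1, 1, 0])   # model-predicted occupancy

        tp = np.sum((observed == 1) & (predicted == 1))
        fn = np.sum((observed == 1) & (predicted == 0))
        tn = np.sum((observed == 0) & (predicted == 0))
        fp = np.sum((observed == 0) & (predicted == 1))

        print("sensitivity:", tp / (tp + fn))   # correctly classified presences
        print("specificity:", tn / (tn + fp))   # correctly classified absences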

  19. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
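
    The parameterized runup model referenced here relates runup to offshore wave height, wave period and beach slope; a widely used form is the Stockdon et al. (2006) parameterization, sketched below on the assumption that this is the model meant (coefficients quoted from that paper).

        import math

        def runup_2pct(H0, T, beta):
            """2% exceedance runup (m) from offshore wave height H0 (m),
            peak period T (s) and foreshore beach slope beta."""
            L0 = 9.81 * T**2 / (2.0 * math.pi)     # deep-water wavelength
            setup = 0.35 * beta * math.sqrt(H0 * L0)
            swash = math.sqrt(H0 * L0 * (0.563 * beta**2 + 0.004)) / 2.0
            return 1.1 * (setup + swash)

        # Illustrative storm conditions: 2 m waves, 10 s period, slope 0.08.
        print(runup_2pct(H0=2.0, T=10.0, beta=0.08))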

  20. A neighborhood statistics model for predicting stream pathogen indicator levels.

    Science.gov (United States)

    Pandey, Pramod K; Pasternack, Gregory B; Majumder, Mahbubul; Soupir, Michelle L; Kaiser, Mark S

    2015-03-01

    Because elevated levels of water-borne Escherichia coli in streams are a leading cause of water quality impairments in the U.S., water-quality managers need tools for predicting aqueous E. coli levels. Presently, E. coli levels may be predicted using complex mechanistic models that have a high degree of unchecked uncertainty or simpler statistical models. To assess spatio-temporal patterns of instream E. coli levels, herein we measured E. coli, a pathogen indicator, at 16 sites (at four different times) within the Squaw Creek watershed, Iowa, and subsequently, the Markov Random Field model was exploited to develop a neighborhood statistics model for predicting instream E. coli levels. Two observed covariates, local water temperature (degrees Celsius) and mean cross-sectional depth (meters), were used as inputs to the model. Predictions of E. coli levels in the water column were compared with independent observational data collected from 16 in-stream locations. The results revealed that spatio-temporal averages of predicted and observed E. coli levels were extremely close. Approximately 66 % of individual predicted E. coli concentrations were within a factor of 2 of the observed values. In only one event, the difference between prediction and observation was beyond one order of magnitude. The mean of all predicted values at 16 locations was approximately 1 % higher than the mean of the observed values. The approach presented here will be useful while assessing instream contaminations such as pathogen/pathogen indicator levels at the watershed scale.
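
    The factor-of-2 agreement measure reported above is easy to reproduce; a minimal sketch with illustrative concentrations:

        import numpy as np

        # Observed vs. predicted E. coli levels (illustrative, CFU/100 mL).
        observed  = np.array([120., 450., 80., 2000., 310.])
        predicted = np.array([100., 700., 210., 1500., 290.])

        ratio = predicted / observed
        within_2x = np.mean((ratio >= 0.5) & (ratio <= 2.0))
        print(f"{100 * within_2x:.0f}% of predictions within a factor of 2")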

  1. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe flooding in many countries worldwide. Advance prediction of their occurrence and spatial distribution is therefore essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from the 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, albeit with a bias in spatial distribution and intensity. Statistical parameters such as the mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend toward under-prediction. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution of the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
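
    The three verification statistics named above are standard; a minimal sketch with an illustrative forecast/observation pair:

        import numpy as np

        obs  = np.array([35., 60., 110., 85., 40.])   # rainfall, mm/day (illustrative)
        fcst = np.array([28., 70.,  90., 80., 55.])

        me   = np.mean(fcst - obs)                    # mean error (bias)
        rmse = np.sqrt(np.mean((fcst - obs) ** 2))    # root mean square error
        cc   = np.corrcoef(fcst, obs)[0, 1]           # correlation coefficient
        print(f"ME = {me:.1f}, RMSE = {rmse:.1f}, CC = {cc:.2f}")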

  2. Comparison of Vibrational Relaxation Modeling for Strongly Non-Equilibrium Flows

    Science.gov (United States)

    2014-01-01

    ... important process in a wide range of high speed flows. High temperature shock layers that form in front of hypersonic vehicles can lead to significant ... continuum flows for use in traditional Computational Fluid Dynamics (CFD) and non-continuum flows for use with rarefied flow descriptions, such as the ... The form of two vibrational relaxation models that are commonly used in DSMC and CFD simulations has been ...

  3. Breaking of SU(4) symmetry and interplay between strongly-correlated phases in the Hubbard model

    Czech Academy of Sciences Publication Activity Database

    Golubeva, A.; Sotnikov, A.; Cichy, A.; Kuneš, Jan; Hofstetter, W.

    2017-01-01

    Vol. 95, No. 12 (2017), p. 1-7, article no. 125108. ISSN 2469-9950. EU Projects: European Commission(XE) 646807 - EXMAG. Institutional support: RVO:68378271. Keywords: Hubbard model * SU(4). Subject RIV: BE - Theoretical Physics. OECD field: Atomic, molecular and chemical physics (physics of atoms and molecules including collision, interaction with radiation, magnetic resonances, Mössbauer effect). Impact factor: 3.836, year: 2016

  4. Strong constraint on modelled global carbon uptake using solar-induced chlorophyll fluorescence data.

    Science.gov (United States)

    MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias

    2018-01-31

    Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity, GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr-1 to 166 ± 10 PgC yr-1, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr-1. The strongest reductions in GPP are seen in boreal forests: the result is a shift in global GPP distribution, with a ~50% increase in the tropical to boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.

  5. Preclinical models used for immunogenicity prediction of therapeutic proteins.

    Science.gov (United States)

    Brinks, Vera; Weinbuch, Daniel; Baker, Matthew; Dean, Yann; Stas, Philippe; Kostense, Stefan; Rup, Bonita; Jiskoot, Wim

    2013-07-01

    All therapeutic proteins are potentially immunogenic. Antibodies formed against these drugs can decrease efficacy, leading to drastically increased therapeutic costs and, in rare cases, to serious and sometimes life-threatening side effects. Many efforts are therefore undertaken to develop therapeutic proteins with minimal immunogenicity. For this, immunogenicity prediction of candidate drugs during early drug development is essential. Several in silico, in vitro and in vivo models are used to predict the immunogenicity of drug leads, to modify potentially immunogenic properties and to continue development of drug candidates with expected low immunogenicity. Despite the extensive use of these predictive models, their actual predictive value varies. Important reasons for this uncertainty are the limited and insufficient knowledge of the immune mechanisms underlying the immunogenicity of therapeutic proteins, the fact that different predictive models explore different components of the immune system, and the lack of an integrated clinical validation. In this review, we discuss the predictive models in use, summarize the aspects of immunogenicity that these models predict, and explore the merits and limitations of each of the models.

  6. Development of Interpretable Predictive Models for BPH and Prostate Cancer.

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, J A

    2015-01-01

    Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. Statistical dependence with PC and BPH was found for prostate volume (P-value < ...), which supports its use in BPH prediction. PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without either of these pathologies. Our decision tree and logistic regression models outperform, in terms of AUC, the models in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced.
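
    A sketch of the leave-one-out validation of the two model types named above; the two predictors follow the text, but the data and labels are synthetic placeholders.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(3)
        n = 150
        X = np.column_stack([3 + 10 * rng.random(n),      # PSA (ng/mL)
                             20 + 80 * rng.random(n)])    # prostate volume (mL)
        y = rng.integers(0, 2, size=n)                    # 1 = PC, 0 = BPH (toy labels)

        loo = LeaveOneOut()
        for name, clf in [("tree", DecisionTreeClassifier(max_depth=3)),
                          ("logistic", LogisticRegression())]:
            acc = cross_val_score(clf, X, y, cv=loo).mean()
            print(name, "LOO accuracy:", round(acc, 3))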

  7. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing so, decisions need to be made in terms of the statistical distributions of walking parameters and in terms of the parameters describing those statistical distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect. This is useful...
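
    A minimal sketch of the stochastic treatment discussed above: walking parameters are drawn from assumed distributions (all distribution parameters below are illustrative assumptions, not the paper's calibrated values) and the spread of the resulting load quantities is inspected.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 10_000
        f_step = rng.normal(1.8, 0.1, n)      # step frequency, Hz (assumed)
        mass   = rng.normal(75.0, 15.0, n)    # pedestrian mass, kg (assumed)
        dlf    = rng.normal(0.4, 0.1, n)      # first-harmonic dynamic load factor

        amplitude = mass * 9.81 * dlf         # first-harmonic force amplitude, N
        print("mean amplitude (N):", round(float(amplitude.mean()), 1))
        print("95th percentile (N):", round(float(np.percentile(amplitude, 95)), 1))
        # Share of pedestrians stepping near a hypothetical 1.9 Hz bridge mode.
        print("near-resonant share:", float(np.mean(np.abs(f_step - 1.9) < 0.1)))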

  8. A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    DUMITRU CIOBANU

    2013-12-01

    Full Text Available Time series prediction is an open problem and many researchers are trying to find new predictive methods and improvements for the existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Support vector machines have also solved some of the problems faced by neural networks and have begun to be widely used for time series prediction. The main drawback of those two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. This paper presents a comparison between three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines, and one based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.
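
    As a concrete instance of one of the compared approaches, a support vector regression can be trained on lagged values for one-step-ahead prediction; the series below is synthetic.

        import numpy as np
        from sklearn.svm import SVR

        t = np.arange(300)
        series = np.sin(0.1 * t) + 0.05 * np.random.default_rng(5).normal(size=t.size)

        # Build (lagged window -> next value) training pairs.
        lags = 5
        X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
        y = series[lags:]

        model = SVR(kernel="rbf", C=10.0).fit(X[:250], y[:250])
        pred = model.predict(X[250:])
        print("test RMSE:", np.sqrt(np.mean((pred - y[250:]) ** 2)))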

  9. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Full Text Available Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, a Bayesian neural network, a Gaussian process, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  10. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models (neural networks, a Bayesian neural network, a Gaussian process, and support vector regression) to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data for the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.
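
    A sketch, on synthetic data, of comparing two of the nonparametric learners named above on a three-variable regression problem; the input variables are stand-ins, not the paper's features.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.svm import SVR

        rng = np.random.default_rng(6)
        n = 300
        X = rng.random((n, 3))                   # three input variables (stand-ins)
        y = 0.5 * X[:, 0] + 0.3 * np.sqrt(X[:, 1]) + 0.05 * rng.normal(size=n)

        for name, model in [("GP", GaussianProcessRegressor()),
                            ("SVR", SVR(C=5.0))]:
            model.fit(X[:200], y[:200])
            err = np.mean((model.predict(X[200:]) - y[200:]) ** 2)
            print(name, "test MSE:", round(float(err), 5))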

  11. A burnout prediction model based around char morphology

    Energy Technology Data Exchange (ETDEWEB)

    Tao Wu; Edward Lester; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical, Environmental and Mining Engineering

    2006-05-15

    Several combustion models have been developed that can make predictions about coal burnout and burnout potential. Most of these kinetic models require standard parameters such as volatile content and particle size to make a burnout prediction. This article presents a new model called the char burnout (ChB) model, which also uses detailed information about char morphology in its prediction. The input data to the model is based on information derived from two different image analysis techniques. One technique generates characterization data from real char samples, and the other predicts char types based on characterization data from image analysis of coal particles. The pyrolyzed chars in this study were created in a drop tube furnace operating at 1300°C, 200 ms, and 1% oxygen. Modeling results were compared with a different carbon burnout kinetic model as well as the actual burnout data from refiring the same chars in a drop tube furnace operating at 1300°C, 5% oxygen, and residence times of 200, 400, and 600 ms. A good agreement between the ChB model and experimental data indicates that the inclusion of char morphology in combustion models could well improve model predictions. 38 refs., 5 figs., 6 tabs.

  12. Building a Unified Computational Model for the Resonant X-Ray Scattering of Strongly Correlated Materials

    Energy Technology Data Exchange (ETDEWEB)

    Bansil, Arun [Northeastern Univ., Boston, MA (United States)

    2016-12-01

    Basic-Energy Sciences of the Department of Energy (BES/DOE) has made large investments in x-ray sources in the U.S. (NSLS-II, LCLS, NGLS, ALS, APS) as powerful enabling tools for opening up unprecedented new opportunities for exploring properties of matter at various length and time scales. The coming online of the pulsed photon source literally allows us to see and follow the dynamics of processes in materials at their natural timescales. There is an urgent need therefore to develop theoretical methodologies and computational models for understanding how x-rays interact with matter and the related spectroscopies of materials. The present project addressed aspects of this grand challenge of X-ray science. In particular, our Collaborative Research Team (CRT) focused on understanding and modeling of elastic and inelastic resonant X-ray scattering processes. We worked to unify the three different computational approaches currently used for modeling X-ray scattering—density functional theory, dynamical mean-field theory, and small-cluster exact diagonalization—to achieve a more realistic material-specific picture of the interaction between X-rays and complex matter. To achieve a convergence in the interpretation and to maximize complementary aspects of different theoretical methods, we concentrated on the cuprates, where most experiments have been performed. Our team included both US and international researchers, and it fostered new collaborations between researchers currently working with different approaches. In addition, we developed close relationships with experimental groups working in the area at various synchrotron facilities in the US. Our CRT thus helped toward enabling the US to assume a leadership role in the theoretical development of the field, and to create a global network and community of scholars dedicated to X-ray scattering research.

  13. Building a Unified Computational Model for the Resonant X-Ray Scattering of Strongly Correlated Materials

    International Nuclear Information System (INIS)

    Bansil, Arun

    2016-01-01

    Basic-Energy Sciences of the Department of Energy (BES/DOE) has made large investments in x-ray sources in the U.S. (NSLS-II, LCLS, NGLS, ALS, APS) as powerful enabling tools for opening up unprecedented new opportunities for exploring properties of matter at various length and time scales. The coming online of the pulsed photon source literally allows us to see and follow the dynamics of processes in materials at their natural timescales. There is an urgent need therefore to develop theoretical methodologies and computational models for understanding how x-rays interact with matter and the related spectroscopies of materials. The present project addressed aspects of this grand challenge of X-ray science. In particular, our Collaborative Research Team (CRT) focused on understanding and modeling of elastic and inelastic resonant X-ray scattering processes. We worked to unify the three different computational approaches currently used for modeling X-ray scattering (density functional theory, dynamical mean-field theory, and small-cluster exact diagonalization) to achieve a more realistic material-specific picture of the interaction between X-rays and complex matter. To achieve a convergence in the interpretation and to maximize complementary aspects of different theoretical methods, we concentrated on the cuprates, where most experiments have been performed. Our team included both US and international researchers, and it fostered new collaborations between researchers currently working with different approaches. In addition, we developed close relationships with experimental groups working in the area at various synchrotron facilities in the US. Our CRT thus helped toward enabling the US to assume a leadership role in the theoretical development of the field, and to create a global network and community of scholars dedicated to X-ray scattering research.

  14. ON THE USE OF FIELD THEORETICAL MODELS IN STRONG INTERACTION PHYSICS

    Energy Technology Data Exchange (ETDEWEB)

    Fubini, Sergio

    1963-06-15

    The effects of the short-range behavior in potential scattering upon the asymptotic behavior of the strong-interaction scattering amplitude and upon the validity of the methods of solution are discussed, using models. In particular, it is found that for certain singular potentials, the bound-state problem cannot be solved by a plane-wave expansion. For these singular potentials, an integral equation must be set up by means of an expansion in terms of eigenfunctions having the correct behavior at small distances. The study makes use of both the Schroedinger and Bethe-Salpeter equations. (T.F.H.)

  15. Modeling cavities exhibiting strong lateral confinement using open geometry Fourier modal method

    DEFF Research Database (Denmark)

    Häyrynen, Teppo; Gregersen, Niels

    2016-01-01

    We have developed a computationally efficient Fourier-Bessel expansion based open geometry formalism for modeling the optical properties of rotationally symmetric photonic nanostructures. The lateral computation domain is assumed infinite, so that no artificial boundary conditions are needed. Instead, ... around a geometry-specific dominant transverse wavenumber region. We will use the developed approach to investigate the Q factor and mode confinement in cavities where the top DBR mirror has a small rectangular defect confining the modes laterally to the defect region.

  16. A note on the strong formulation of stochastic control problems with model uncertainty

    OpenAIRE

    Sirbu, Mihai

    2014-01-01

    We consider a Markovian stochastic control problem with model uncertainty. The controller (intelligent player) observes only the state and, therefore, uses feedback (closed-loop) strategies. The adverse player (nature), who does not have a direct interest in the payoff, chooses open-loop controls that parametrize Knightian uncertainty. This creates a two-step optimization problem (like half of a game) over feedback strategies and open-loop controls. The main result is to sh...

  17. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, the posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
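
    The prediction step (projecting a random walk prior into the future) can be sketched as follows; the rates, noise level and horizon are illustrative, and a second-order random walk is assumed.

        import numpy as np

        rng = np.random.default_rng(7)
        rates = np.log([0.012, 0.013, 0.015, 0.014, 0.016, 0.018])  # observed log-rates

        sigma = 0.02              # random-walk standard deviation (assumed)
        horizon, n_draws = 5, 1000
        proj = np.empty((n_draws, horizon))
        for d in range(n_draws):
            x = list(rates)
            for _ in range(horizon):
                # RW2: extrapolate the local linear trend plus Gaussian noise.
                x.append(2 * x[-1] - x[-2] + sigma * rng.normal())
            proj[d] = x[-horizon:]

        print("median projected rates:", np.exp(np.median(proj, axis=0)).round(4))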

  18. Modeling for prediction of restrained shrinkage effect in concrete repair

    International Nuclear Information System (INIS)

    Yuan Yingshu; Li Guo; Cai Yue

    2003-01-01

    A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built based on a viscoelastic, three-element model. Tests of free shrinkage and tensile creep were carried out to determine some coefficients in the models. Two-dimensional FEM analysis based on these models and other constitutive relations can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The prediction from the analysis agrees with the test results. The cracking mechanism after repair is discussed

  19. Evaluation of two models for predicting elemental accumulation by arthropods

    International Nuclear Information System (INIS)

    Webster, J.R.; Crossley, D.A. Jr.

    1978-01-01

    Two different models have been proposed for predicting elemental accumulation by arthropods. Parameters of both models can be quantified from radioisotope elimination experiments. Our analysis of the 2 models shows that both predict identical elemental accumulation for a whole organism, though they differ in the accumulation in body and gut. We quantified both models with experimental data from 134Cs and 85Sr elimination by crickets. Computer simulations of radioisotope accumulation were then compared with actual accumulation experiments. Neither model showed an exact fit to the experimental data, though both showed the general pattern of elemental accumulation
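
    Quantifying model parameters from an elimination experiment typically means fitting retention curves; a sketch with illustrative whole-body retention data and an assumed two-component exponential form.

        import numpy as np
        from scipy.optimize import curve_fit

        # Fraction of initial activity retained at each time (illustrative).
        days = np.array([0., 1., 2., 4., 8., 16., 32.])
        retention = np.array([1.0, 0.74, 0.60, 0.46, 0.34, 0.22, 0.09])

        def two_compartment(t, a, k1, k2):
            # Fast pool (fraction a, rate k1) plus slow pool (rate k2).
            return a * np.exp(-k1 * t) + (1 - a) * np.exp(-k2 * t)

        (a, k1, k2), _ = curve_fit(two_compartment, days, retention,
                                   p0=(0.5, 1.0, 0.05))
        print(f"fast fraction {a:.2f}, rate constants {k1:.2f}, {k2:.3f} per day")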

  20. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
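
    The bootstrap procedure described here can be sketched as follows; the dose-outcome data, the linear outcome model and the residual noise level are illustrative assumptions, not the paper's fitted model.

        import numpy as np

        rng = np.random.default_rng(8)
        dose = rng.uniform(10, 60, 80)                    # mean parotid dose (Gy), toy
        outcome = np.clip(1 - dose / 80 + 0.1 * rng.normal(size=80), 0, 1)

        new_plan_dose = 35.0
        preds = []
        for _ in range(2000):
            idx = rng.integers(0, len(dose), len(dose))   # bootstrap resample
            slope, intercept = np.polyfit(dose[idx], outcome[idx], 1)
            # Add a residual-noise draw, as the abstract describes.
            preds.append(intercept + slope * new_plan_dose + 0.1 * rng.normal())

        print("median prediction:", np.median(preds))
        print("90% interval:", np.percentile(preds, [5, 95]))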