WorldWideScience

Sample records for model predicts strong

  1. Prediction of strong earthquake motions on rock surface using evolutionary process models

    International Nuclear Information System (INIS)

    Kameda, H.; Sugito, M.

    1984-01-01

    Stochastic process models are developed for the prediction of strong earthquake motions for engineering design purposes. Earthquake motions with nonstationary frequency content are modeled by using the concept of evolutionary processes. Discussion is focused on earthquake motions on bedrock, which are important for the construction of nuclear power plants in seismic regions. On this basis, two earthquake motion prediction models are developed: one (EMP-IB Model) for prediction with given magnitude and epicentral distance, and the other (EMP-IIB Model) to account for the successive fault ruptures and the site location relative to the fault of great earthquakes. (Author)

  2. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches

    International Nuclear Information System (INIS)

    Berge-Thierry, C.

    2007-05-01

    The defence of the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented within the seismic risk topic, particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whatever the regulatory context or the type of structure (conventional or high-risk construction), seismic hazard assessment requires one to: identify and locate the seismic sources (zones or faults), characterize their activity, and evaluate the seismic motion that the structure has to withstand (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work quickly on the different tasks of seismic hazard assessment. Thanks to expertise practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion when designing structures or verifying their stability. (author)

  3. Enhanced outage prediction modeling for strong extratropical storms and hurricanes in the Northeastern United States

    Science.gov (United States)

    Cerrai, D.; Anagnostou, E. N.; Wanik, D. W.; Bhuiyan, M. A. E.; Zhang, X.; Yang, J.; Astitha, M.; Frediani, M. E.; Schwartz, C. S.; Pardakhti, M.

    2016-12-01

    The overwhelming majority of human activities need reliable electric power. Severe weather events can cause power outages, resulting in substantial economic losses and a temporary worsening of living conditions. Accurate prediction of these events and the communication of forecasted impacts to the affected utilities is necessary for efficient emergency preparedness and mitigation. The University of Connecticut Outage Prediction Model (OPM) uses regression tree models, high-resolution weather reanalysis and real-time weather forecasts (WRF and NCAR ensemble), airport station data, vegetation and electric grid characteristics, and historical outage data to forecast the number and spatial distribution of outages in the power distribution grid located within dense vegetation. Recent OPM improvements consist of improved storm classification and the addition of new predictive weather-related variables, and are demonstrated using leave-one-storm-out cross-validation based on 130 severe extratropical storms and two hurricanes (Sandy and Irene) in the Northeast US. We show that it is possible to predict the number of trouble spots causing outages in the electric grid with a median absolute percentage error as low as 27% for some storm types, and at most around 40%, on a scale that spans four orders of magnitude, from a few outages to tens of thousands. This outage information can be communicated to the electric utility to manage the allocation of crews and equipment and minimize the recovery time for an upcoming storm hazard.
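    As context for the error statistic quoted above, a median absolute percentage error over storms can be computed as in the following minimal sketch; the outage counts are hypothetical, not the study's data.

```python
import numpy as np

def median_ape(observed, predicted):
    """Median absolute percentage error, in percent, across storm events."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.median(np.abs(predicted - observed) / observed * 100.0))

# Hypothetical leave-one-storm-out results: observed vs. predicted trouble spots
observed = [12, 350, 4800, 25000, 90]
predicted = [15, 260, 6100, 18000, 120]
print(f"median APE: {median_ape(observed, predicted):.1f}%")
```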

  4. Predicting long-term recovery of a strongly acidified stream using MAGIC and climate models (Litavka, Czech Republic)

    Directory of Open Access Journals (Sweden)

    D. W. Hardekopf

    2008-03-01

    Two branches forming the headwaters of a stream in the Czech Republic were studied. Both streams have similar catchment characteristics and historical deposition; however, one is rain-fed and strongly affected by acid atmospheric deposition, while the other is spring-fed and only moderately acidified. The MAGIC model was used to reconstruct past stream water and soil chemistry of the rain-fed branch, and to predict future recovery up to 2050 under currently proposed emissions levels. A future increase in air temperature calculated by a regional climate model was then used to derive climate-related scenarios to test possible factors affecting chemical recovery up to 2100. Macroinvertebrates were sampled from both branches, and differences in stream chemistry were reflected in the community structures. According to the modelled forecasts, recovery of the rain-fed branch will be gradual and limited, and continued high levels of sulphate release from the soils will continue to dominate stream water chemistry, while scenarios related to a predicted increase in temperature will have little impact. The likelihood of colonization by species from the spring-fed branch was evaluated considering the predicted extent of chemical recovery. The results suggest that colonization of the rain-fed branch by species from the spring-fed branch will be limited to only the acid-tolerant stonefly, caddisfly and dipteran taxa in the modelled period.

  5. Predictive Modeling for Strongly Correlated f-electron Systems: A first-principles and database driven machine learning approach

    Science.gov (United States)

    Ahmed, Towfiq; Khair, Adnan; Abdullah, Mueen; Harper, Heike; Eriksson, Olle; Wills, John; Zhu, Jian-Xin; Balatsky, Alexander

    Data-driven computational tools are being developed for the theoretical understanding of electronic properties in f-electron based materials, e.g., lanthanide and actinide compounds. Here we show our preliminary work on Ce compounds. Due to a complex interplay among the hybridization of f-electrons with the non-interacting conduction band, spin-orbit coupling, and the strong Coulomb repulsion of f-electrons, no model or first-principles based theory can fully explain all the structural and functional phases of f-electron systems. Motivated by the large need for predictive modeling of actinide compounds, we adopted a data-driven approach. We found a negative correlation between the hybridization and atomic volume. Mutual information between these two features was also investigated. In order to extend our search space with more features and predictability of new compounds, we are currently developing an electronic structure database. Our f-electron database will potentially be aided by machine learning (ML) algorithms to extract complex electronic, magnetic and structural properties in f-electron systems, and thus will open up new pathways for predictive capabilities and design principles of complex materials. NSEC, IMS at LANL.

  6. Strong interactions - quark models

    International Nuclear Information System (INIS)

    Goto, M.; Ferreira, P.L.

    1979-01-01

    The variational method is used to reproduce the psi and upsilon family spectra from the quark model, through several phenomenological potentials, viz.: linear, linear plus Coulomb term, and logarithmic. (L.C.)

  7. Right Heart End-Systolic Remodeling Index Strongly Predicts Outcomes in Pulmonary Arterial Hypertension: Comparison With Validated Models.

    Science.gov (United States)

    Amsallem, Myriam; Sweatt, Andrew J; Aymami, Marie C; Kuznetsova, Tatiana; Selej, Mona; Lu, HongQuan; Mercier, Olaf; Fadel, Elie; Schnittger, Ingela; McConnell, Michael V; Rabinovitch, Marlene; Zamanian, Roham T; Haddad, Francois

    2017-06-01

    Right ventricular (RV) end-systolic dimensions provide information on both size and function. We investigated whether an internally scaled index of end-systolic dimension is incremental to well-validated prognostic scores in pulmonary arterial hypertension. From 2005 to 2014, 228 patients with pulmonary arterial hypertension were prospectively enrolled. The RV end-systolic remodeling index (RVESRI) was defined as lateral length divided by septal height. The incremental values of RV free wall longitudinal strain and RVESRI to risk scores were determined. Mean age was 49±14 years, 78% were female, 33% had connective tissue disease, 52% were in New York Heart Association class ≥III, and mean pulmonary vascular resistance was 11.2±6.4 WU. RVESRI and right atrial area were strongly connected to the other right heart metrics. Three zones of adaptation (adapted, maladapted, and severely maladapted) were identified based on the RVESRI to RV systolic pressure relationship. During a mean follow-up of 3.9±2.4 years, the primary end point of death, transplant, or admission for heart failure was reached in 88 patients. RVESRI was incremental to risk prediction scores in pulmonary arterial hypertension, including the Registry to Evaluate Early and Long-Term PAH Disease Management score, the Pulmonary Hypertension Connection equation, and the Mayo Clinic model. Using multivariable analysis, New York Heart Association class III/IV, RVESRI, and log NT-proBNP (N-terminal pro-B-type natriuretic peptide) were retained (χ², 62.2; P<0.001). Among right heart metrics, RVESRI demonstrated the best test-retest characteristics. RVESRI is a simple, reproducible prognostic marker in patients with pulmonary arterial hypertension. © 2017 American Heart Association, Inc.
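    Since RVESRI is defined as the RV end-systolic lateral length divided by the septal height, the computation itself is trivial; the sketch below uses hypothetical echo measurements, not values from the cohort.

```python
def rvesri(lateral_length_mm: float, septal_height_mm: float) -> float:
    """RV end-systolic remodeling index: lateral length / septal height."""
    return lateral_length_mm / septal_height_mm

# Hypothetical end-systolic measurements from an apical four-chamber view
print(round(rvesri(lateral_length_mm=85.0, septal_height_mm=60.0), 2))  # 1.42
```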

  8. Strong ground motion prediction using virtual earthquakes.

    Science.gov (United States)

    Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C

    2014-01-24

    Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion.

  9. Is It Possible to Predict Strong Earthquakes?

    Science.gov (United States)

    Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.

    2015-07-01

    The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) who reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using the groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to the local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of the recent scientific reports confirming that rodents sense imminent earthquakes and the population-genetic model of K irshvink (Soc Am 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over the period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal animal behavior observations, enables one to apply the standard "input-sensor-response" approach to determine what input signals trigger specific seismic escape brain activity responses.

  10. Predictions for Boson-Jet Observables and Fragmentation Function Ratios from a Hybrid Strong/Weak Coupling Model for Jet Quenching

    CERN Document Server

    Casalderrey-Solana, Jorge; Milhano, José Guilherme; Pablos, Daniel; Rajagopal, Krishna

    2016-01-01

    We have previously introduced a hybrid strong/weak coupling model for jet quenching in heavy ion collisions that describes the production and fragmentation of jets at weak coupling, using PYTHIA, and describes the rate at which each parton in the jet shower loses energy as it propagates through the strongly coupled plasma, dE/dx, using an expression computed holographically at strong coupling. The model has a single free parameter that we fit to a single experimental measurement. We then confront our model with experimental data on many other jet observables, focusing here on boson-jet observables, finding that it provides a good description of present jet data. Next, we provide the predictions of our hybrid model for many measurements to come, including those for inclusive jet, dijet, photon-jet and Z-jet observables in heavy ion collisions with energy $\sqrt{s}=5.02$ ATeV coming soon at the LHC. As the statistical uncertainties on near-future measurements of photon-jet observables are expected to be much sm...

  11. Strong earthquakes can be predicted: a multidisciplinary method for strong earthquake prediction

    Directory of Open Access Journals (Sweden)

    J. Z. Li

    2003-01-01

    The imminent prediction of a group of strong earthquakes that occurred in Xinjiang, China in April 1997 is introduced in detail. The prediction was made on the basis of comprehensive analyses of the results obtained by multiple innovative methods, including measurements of crustal stress, observation of infrasonic waves in an ultra-low frequency range, and recording of abnormal behavior of certain animals. Other successful examples of prediction are also enumerated. Statistics show that over 40% of the 20 predictions jointly presented by J. Z. Li, Z. Q. Ren and others since 1995 can be regarded as effective. With the above methods, precursors of almost every strong earthquake around the world that occurred in recent years were recorded in our laboratory. However, the physical mechanisms of the observed precursors cannot yet be explained at this stage.

  12. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
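    The image-plane rms diagnostic used here is simply the root-mean-square offset between observed and model-predicted image positions; a minimal sketch with hypothetical coordinates (arcseconds) follows.

```python
import numpy as np

def image_plane_rms(observed_xy, predicted_xy):
    """RMS offset between observed and model-predicted image positions."""
    d = np.asarray(observed_xy, dtype=float) - np.asarray(predicted_xy, dtype=float)
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

# Hypothetical positions (arcsec) of one multiply imaged system
obs = [(10.2, -3.1), (8.7, 4.5), (-6.3, 1.9)]
pred = [(10.4, -3.0), (8.5, 4.9), (-6.0, 1.6)]
print(f"image-plane rms: {image_plane_rms(obs, pred):.2f} arcsec")
```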

  13. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  14. Seismic rupture modelling, strong motion prediction and seismic hazard assessment: fundamental and applied approaches; Modelisation de la rupture sismique, prediction du mouvement fort, et evaluation de l'alea sismique: approches fondamentale et appliquee

    Energy Technology Data Exchange (ETDEWEB)

    Berge-Thierry, C

    2007-05-15

    The defence of the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my PhD thesis in 1997. This synthesis covers the two years spent as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented within the seismic risk topic, particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whatever the regulatory context or the type of structure (conventional or high-risk construction), seismic hazard assessment requires one to: identify and locate the seismic sources (zones or faults), characterize their activity, and evaluate the seismic motion that the structure has to withstand (including site effects). I specialized in numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work quickly on the different tasks of seismic hazard assessment. Thanks to expertise practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of the input ground motion when designing structures or verifying their stability. (author)

  15. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple, one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions, to the complex, multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of the Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics over an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  16. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  17. Morphological modelling of strongly curved islands

    NARCIS (Netherlands)

    Roelvink, D.; Den Heijer, C.; Van Thiel De Vries, J.S.M.

    2013-01-01

    Land reclamations and island coasts often involve strongly curved shorelines, which are challenging to model properly with numerical morphological models. Evaluation of the long-term development of these types of coasts, as well as their response to storm conditions, requires proper representation

  18. Dynamic Heat Transfer Model of Refrigerated Foodstuff

    DEFF Research Database (Denmark)

    Cai, Junping; Risum, Jørgen; Thybo, Claus

    2006-01-01

    condition. The influence of different factors such as air velocity, type of food, size of food, or food packaging is investigated, and questions such as which kinds of food are more sensitive to changes in the surrounding temperature are answered. This model can serve as a prerequisite for modelling of food quality...

  19. Prediction of the occurrence of related strong earthquakes in Italy

    International Nuclear Information System (INIS)

    Vorobieva, I.A.; Panza, G.F.

    1993-06-01

    In the seismic flow it is often observed that a Strong Earthquake (SE) is followed by Related Strong Earthquakes (RSEs), which occur near the epicentre of the SE with origin times rather close to the origin time of the SE. An algorithm for the prediction of the occurrence of a RSE has been developed, applied first to the seismicity data of the California-Nevada region, and successfully tested in several regions of the world, the statistical significance of the result being 97%. So far, it has been possible to make five successful forward predictions, with no false alarms or failures to predict. The algorithm is applied here to the Italian territory, where the occurrence of RSEs is a particularly rare phenomenon. Our results show that the standard algorithm is successfully and directly applicable without any adjustment of the parameters. Eleven SEs are considered. Of them, three are followed by a RSE, as predicted by the algorithm; eight SEs are not followed by a RSE, and the algorithm predicts this behaviour for seven of them, giving rise to only one false alarm. Since, in Italy, the series of strong earthquakes are quite often relatively short, the algorithm has been extended to handle such situations. The result of this experiment indicates that it is possible to test a SE for the occurrence of a RSE soon after the occurrence of the SE itself, performing timely 'preliminary' recognition on reduced data sets. This fact, the high confidence level of the retrospective analysis, and the first successful forward predictions made in different parts of the world indicate that, even if additional tests are desirable, the algorithm can already be considered for routine application to Civil Defence. (author)

  20. Electroweak and Strong Interactions Phenomenology, Concepts, Models

    CERN Document Server

    Scheck, Florian

    2012-01-01

    Electroweak and Strong Interactions: Phenomenology, Concepts, Models begins with relativistic quantum mechanics and some quantum field theory, which lay the foundation for the rest of the text. The phenomenology and the physics of the fundamental interactions are emphasized through a detailed discussion of the empirical fundamentals of unified theories of strong, electromagnetic, and weak interactions. The principles of local gauge theories are described both in a heuristic and a geometric framework. The minimal standard model of the fundamental interactions is developed in detail and characteristic applications are worked out. Possible signals of physics beyond that model, notably in the physics of neutrinos, are also discussed. Among the applications, scattering on nucleons and on nuclei provides salient examples. Numerous exercises with solutions make the text suitable for advanced courses or individual study. This completely updated revised new edition contains an enlarged chapter on quantum chromodynamics an...

  1. Stochastic finite-fault modelling of strong earthquakes in Narmada ...

    Indian Academy of Sciences (India)

    It has been widely used to predict ground motion around the globe where earthquake recordings are scanty. The conventional point source approximation is unable to characterize key features of ground motions from large earthquakes, such as their ...

  2. Model predictions of the results of interferometric observations for stars under conditions of strong gravitational scattering by black holes and wormholes

    International Nuclear Information System (INIS)

    Shatskiy, A. A.; Kovalev, Yu. Yu.; Novikov, I. D.

    2015-01-01

    The characteristic and distinctive features of the visibility amplitude of interferometric observations for compact objects like stars in the immediate vicinity of the central black hole in our Galaxy are considered. These features are associated with the specifics of strong gravitational scattering of point sources by black holes, wormholes, or black-white holes. The revealed features will help to determine the most important topological characteristics of the central object in our Galaxy: whether this object possesses the properties of only a black hole or also has characteristics unique to wormholes or black-white holes. These studies can be used to interpret the results of optical, infrared, and radio interferometric observations

  3. Model predictions of the results of interferometric observations for stars under conditions of strong gravitational scattering by black holes and wormholes

    Energy Technology Data Exchange (ETDEWEB)

    Shatskiy, A. A., E-mail: shatskiy@asc.rssi.ru; Kovalev, Yu. Yu.; Novikov, I. D. [Russian Academy of Sciences, Astro Space Center, Lebedev Physical Institute (Russian Federation)

    2015-05-15

    The characteristic and distinctive features of the visibility amplitude of interferometric observations for compact objects like stars in the immediate vicinity of the central black hole in our Galaxy are considered. These features are associated with the specifics of strong gravitational scattering of point sources by black holes, wormholes, or black-white holes. The revealed features will help to determine the most important topological characteristics of the central object in our Galaxy: whether this object possesses the properties of only a black hole or also has characteristics unique to wormholes or black-white holes. These studies can be used to interpret the results of optical, infrared, and radio interferometric observations.

  4. Convex Modeling of Interactions with Strong Heredity.

    Science.gov (United States)

    Haris, Asad; Witten, Daniela; Simon, Noah

    2016-01-01

    We consider the task of fitting a regression model involving interactions among a potentially large set of covariates, in which we wish to enforce strong heredity. We propose FAMILY, a very general framework for this task. Our proposal is a generalization of several existing methods, such as VANISH [Radchenko and James, 2010], hierNet [Bien et al., 2013], the all-pairs lasso, and the lasso using only main effects. It can be formulated as the solution to a convex optimization problem, which we solve using an efficient alternating directions method of multipliers (ADMM) algorithm. This algorithm has guaranteed convergence to the global optimum, can be easily specialized to any convex penalty function of interest, and allows for a straightforward extension to the setting of generalized linear models. We derive an unbiased estimator of the degrees of freedom of FAMILY, and explore its performance in a simulation study and on an HIV sequence data set.
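    The strong heredity constraint itself is easy to state in code: an interaction is admitted only when both of its main effects are active. The snippet below illustrates only that filtering logic on a hypothetical fitted coefficient set; it is not the FAMILY/ADMM solver.

```python
def enforce_strong_heredity(main_effects, interactions, tol=1e-8):
    """Keep interaction (j, k) only if both main effects j and k are active."""
    active = {name for name, beta in main_effects.items() if abs(beta) > tol}
    return {pair: theta for pair, theta in interactions.items()
            if pair[0] in active and pair[1] in active}

# Hypothetical coefficients: x2's main effect is zero, so (x1, x2) is dropped
main = {"x1": 0.8, "x2": 0.0, "x3": -1.2}
inter = {("x1", "x2"): 0.5, ("x1", "x3"): 0.3}
print(enforce_strong_heredity(main, inter))  # {('x1', 'x3'): 0.3}
```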

  5. Strong coupling from the Hubbard model

    Science.gov (United States)

    Minahan, Joseph A.

    2006-10-01

    It was recently observed that the one-dimensional half-filled Hubbard model reproduces the known part of the perturbative spectrum of planar N = 4 super Yang-Mills in the SU(2) sector. Assuming that this identification is valid beyond perturbation theory, we investigate the behaviour of this spectrum as the 't Hooft parameter λ becomes large. We show that the full dimension Δ of the Konishi superpartner is the solution of a sixth-order polynomial, while Δ for a bare dimension 5 operator is the solution of a cubic. In both cases, the equations can be solved easily as a series expansion for both small and large λ, and the equations can be inverted to express λ as an explicit function of Δ. We then consider more general operators and show how Δ depends on λ in the strong coupling limit. We are also able to distinguish those states in the Hubbard model which correspond to the gauge-invariant operators for all values of λ. Finally, we compare our results with known results for strings on AdS5 × S5, where we find agreement for a range of R-charges.

  6. Diagnosing a Strong-Fault Model by Conflict and Consistency.

    Science.gov (United States)

    Zhang, Wenfeng; Zhao, Qi; Zhao, Hongbo; Zhou, Gan; Feng, Wenquan

    2018-03-29

    The diagnosis method for a weak-fault model, with only normal behaviors of each component, has evolved over decades. However, many systems now demand strong-fault models, whose fault modes have specific behaviors as well. It is difficult to diagnose a strong-fault model due to its non-monotonicity. Current diagnosis methods usually employ conflicts to isolate possible faults, and the process can be expedited when some observed output is consistent with the model's prediction, where the consistency indicates probably normal components. This paper solves the problem of efficiently diagnosing a strong-fault model by proposing a novel Logic-based Truth Maintenance System (LTMS) with two search approaches based on conflict and consistency. At the beginning, the original strong-fault model is encoded with Boolean variables and converted into Conjunctive Normal Form (CNF). The proposed LTMS is then employed to reason over the CNF and find multiple minimal conflicts and maximal consistencies when a fault exists. The search approaches efficiently generate the best candidates from the reasoning results until the diagnoses are obtained. The completeness, coverage, correctness and complexity of the proposals are analyzed theoretically to show their strengths and weaknesses. Finally, the proposed approaches are demonstrated by applying them to a real-world domain, the heat control unit of a spacecraft, where they perform significantly better than best-first and conflict-directed A* search methods.
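    As a toy illustration of diagnosis guided by consistency (a brute-force stand-in for the paper's LTMS, with a hypothetical two-component system), the search below returns the smallest fault sets consistent with an observation.

```python
from itertools import combinations

COMPONENTS = ["valve", "heater"]

def consistent(faulty, observation):
    """Hypothetical system model: the output is 'hot' iff the heater is healthy."""
    heater_ok = "heater" not in faulty
    return ("hot" if heater_ok else "cold") == observation

def minimal_diagnoses(observation):
    """Smallest sets of assumed-faulty components consistent with the observation."""
    for size in range(len(COMPONENTS) + 1):
        hits = [set(c) for c in combinations(COMPONENTS, size)
                if consistent(set(c), observation)]
        if hits:
            return hits
    return []

print(minimal_diagnoses("cold"))  # [{'heater'}]
```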

  7. The hadronic standard model for strong and electroweak interactions

    Energy Technology Data Exchange (ETDEWEB)

    Raczka, R. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)

    1993-12-31

    We propose a new model for strong and electro-weak interactions. First, we review various QCD predictions for hadron-hadron and lepton-hadron processes. We indicate that the present formulation of strong interactions in the framework of Quantum Chromodynamics encounters serious conceptual and numerical difficulties in a reliable description of hadron-hadron and lepton-hadron interactions. Next we propose to replace the strong sector of the Standard Model, based on unobserved quarks and gluons, by a strong sector based on the set of observed baryons and mesons, determined by the spontaneously broken SU(6) gauge field theory model. We analyse various properties of this model such as asymptotic freedom, Reggeization of gauge bosons and fundamental fermions, baryon-baryon and meson-baryon high energy scattering, generation of Λ-polarization in inclusive processes, and others. Finally we extend this model with an electro-weak sector. We demonstrate a remarkable lepton and hadron anomaly cancellation and we analyse a series of important lepton-hadron and hadron-hadron processes such as e⁺ + e⁻ → hadrons, e⁺ + e⁻ → W⁺ + W⁻, e⁺ + e⁻ → p + p̄, e + p → e + p and p + p̄ → p + p̄. We obtained a series of interesting new predictions in this model, especially for processes with polarized particles. We estimated the value of the strong coupling constant α(M_Z) and we predicted the top baryon mass M_Λt ≈ 240 GeV. Since in our model the proton, neutron, Λ-particles, vector mesons like ρ, ω, φ, J/ψ etc. and leptons are elementary, most of the experimentally analysed lepton-hadron and hadron-hadron processes in the LEP1, LEP2, LEAR, HERA, HERMES, LHC and SSC experiments may be relatively easily analysed in our model. (author)

  8. Cultural Resource Predictive Modeling

    Science.gov (United States)

    2017-10-01

    refining formal, inductive predictive models is the quality of the archaeological and environmental data. To build models efficiently, relevant... geomorphology, and historic information. Lessons Learned: The original model was focused on the identification of prehistoric resources. This... system but uses predictive modeling informally. For example, there is no probability for buried archaeological deposits on the Burton Mesa, but there is

  9. Inter-daily variability of a strong thermally-driven wind system over the Atacama Desert of South America: synoptic forcing and short-term predictability using the GFS global model

    Science.gov (United States)

    Jacques-Coper, Martín; Falvey, Mark; Muñoz, Ricardo C.

    2015-07-01

    Crucial aspects of a strong thermally-driven wind system in the Atacama Desert in northern Chile during the extended austral winter season (May-September) are studied using 2 years of measurement data from the Sierra Gorda 80-m meteorological mast (SGO, 22° 56' 24″ S; 69° 7' 58″ W, 2,069 m above sea level (a.s.l.)). Daily cycles of atmospheric variables reveal a diurnal (nocturnal) regime, with northwesterly (easterly) flow and maximum mean wind speed of 8 m/s (13 m/s) on average. These distinct regimes are caused by pronounced topographic conditions and the diurnal cycle of the local radiative balance. Wind speed extreme events of each regime are negatively correlated at the inter-daily time scale: High diurnal wind speed values are usually observed together with low nocturnal wind speed values and vice versa. The associated synoptic conditions indicate that upper-level troughs at the coastline of southwestern South America reinforce the diurnal northwesterly wind, whereas mean undisturbed upper-level conditions favor the development of the nocturnal easterly flow. We analyze the skill of the numerical weather model Global Forecast System (GFS) in predicting wind speed at SGO. Although forecasted wind speeds at 800 hPa do show the diurnal and nocturnal phases, observations at 80 m are strongly underestimated by the model. This causes a pronounced daily cycle of root-mean-squared error (RMSE) and bias in the forecasts. After applying a simple Model Output Statistics (MOS) post-processing, we achieve a good representation of the wind speed intra-daily and inter-daily variability, a first step toward reducing the uncertainties related to potential wind energy projects in the region.
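    The simple MOS post-processing mentioned here is typically a regression of observed wind speeds on the raw model forecasts, applied afterwards as a correction; a minimal linear sketch with hypothetical values follows.

```python
import numpy as np

# Hypothetical training pairs: raw GFS 800 hPa forecasts vs. observed 80 m speeds
forecast = np.array([4.0, 6.5, 8.0, 10.2, 12.5, 14.0])
observed = np.array([6.1, 9.0, 11.2, 13.8, 16.9, 19.5])

# Fit a linear MOS correction: observed ~ a * forecast + b
a, b = np.polyfit(forecast, observed, deg=1)

def mos_correct(raw_speed):
    """Apply the fitted linear MOS correction to a raw forecast."""
    return a * raw_speed + b

print(f"corrected 9.0 m/s forecast -> {mos_correct(9.0):.1f} m/s")
```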

  10. Predictive modeling of complications.

    Science.gov (United States)

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  11. Modeling and synthesis of strong ground motion

    Indian Academy of Sciences (India)

    Numerical examples are shown for illustration by taking the 2001 Kutch earthquake as a case study. Keywords: ground motion; source mechanism models; empirical Green's functions; seismological models; Kutch earthquake. ... a hybrid global search method which is a combination of simulated annealing and ...

  12. Archaeological predictive model set.

    Science.gov (United States)

    2015-03-01

    This report is the documentation for Task 7 of the Statewide Archaeological Predictive Model Set. The goal of this project is to develop a set of statewide predictive models to assist the planning of transportation projects. PennDOT is developing t...

  13. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
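    A standard way to realize the stochastic simulation model mentioned last, i.e. hourly samples that reproduce both the speed distribution and its serial correlation, is an AR(1) Gaussian driver mapped through the target quantile function. The Weibull parameters and correlation below are hypothetical placeholders; setting rho=0 recovers the uncorrelated interim model.

```python
import numpy as np
from scipy import stats

def hourly_wind_speeds(n_hours, shape=2.0, scale=7.0, rho=0.9, seed=0):
    """AR(1) Gaussian driver mapped to a Weibull marginal distribution."""
    rng = np.random.default_rng(seed)
    z = np.empty(n_hours)
    z[0] = rng.standard_normal()
    for t in range(1, n_hours):
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    u = stats.norm.cdf(z)                      # correlated uniform marginals
    return stats.weibull_min.ppf(u, c=shape, scale=scale)

speeds = hourly_wind_speeds(24 * 7)
print(f"mean {speeds.mean():.2f} m/s, max {speeds.max():.2f} m/s")
```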

  14. Strong coupling from the Hubbard model

    OpenAIRE

    Minahan, Joseph A.

    2006-01-01

    It was recently observed that the one-dimensional half-filled Hubbard model reproduces the known part of the perturbative spectrum of planar N = 4 super Yang-Mills in the SU(2) sector. Assuming that this identification is valid beyond perturbation theory, we investigate the behavior of this spectrum as the 't Hooft parameter λ becomes large. We show that the full dimension Δ of the Konishi superpartner is the solution of a sixth-order polynomial while Δ for a bare dimension 5 op...

  15. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Danish utilities as partners and users. The new models are evaluated for five wind farms in Denmark as well as one wind farm in Spain. It is shown that the predictions based on conditional parametric models are superior to the predictions obtained by state-of-the-art parametric models.

  16. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

    only present in melanoma patients and thus were strongly associated with melanoma. The percentage of correctly classified subjects in the LR model was 74.9%, sensitivity 71%, specificity 78.7% and AUC 0.805. For the ADT, the percentage of correctly classified instances was 71.9%, sensitivity 71.9%, specificity 79.4% and AUC 0.808. Conclusion. The application of different models for risk assessment and prediction of melanoma should provide an efficient and standardized tool in the hands of clinicians. The presented models offer effective discrimination of individuals at high risk, transparent decision making, and real-time implementation suitable for clinical practice. Continuous growth of the melanoma database would allow further adjustments and enhancements in model accuracy, as well as offering the possibility of successfully applying more advanced data mining algorithms.

  17. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soil waters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  18. Earthquake source model using strong motion displacement as ...

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  19. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO project called "Intelligent wind power prediction systems" (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines

  20. Earthquake source model using strong motion displacement as ...

    Indian Academy of Sciences (India)

    Earthquake source model using strong motion displacement as response of finite elastic media. R N IYENGAR* and SHAILESH KR AGRAWAL**. *Department of Civil Engineering, Indian Institute of Science, Bangalore 560 012, India. e-mail: rni@civil.iisc.ernet.in. **Central Building Research Institute, Roorkee, India.

  1. Triad pattern algorithm for predicting strong promoter candidates in bacterial genomes

    Directory of Open Access Journals (Sweden)

    Sakanyan Vehary

    2008-05-01

    Background: Bacterial promoters, which increase the efficiency of gene expression, differ from other promoters by several characteristics. This difference, not yet widely exploited in bioinformatics, looks promising for the development of relevant computational tools to search for strong promoters in bacterial genomes. Results: We describe a new triad pattern algorithm that predicts strong promoter candidates in annotated bacterial genomes by matching specific patterns for the group I σ70 factors of Escherichia coli RNA polymerase. It detects promoter-specific motifs by consecutively matching three patterns, consisting of an UP-element, required for interaction with the α subunit, and then optimally-separated patterns of -35 and -10 boxes, required for interaction with the σ70 subunit of RNA polymerase. Analysis of 43 bacterial genomes revealed that the frequency of candidate sequences depends on the A+T content of the DNA under examination. The accuracy of the in silico prediction was experimentally validated for the genome of a hyperthermophilic bacterium, Thermotoga maritima, by applying a cell-free expression assay using the predicted strong promoters. In this organism, the strong promoters govern genes for translation, energy metabolism, transport, cell movement, and other as-yet unidentified functions. Conclusion: The triad pattern algorithm developed for predicting strong bacterial promoters is well suited for analyzing bacterial genomes with an A+T content of less than 62%. This computational tool opens new prospects for investigating global gene expression, and individual strong promoters in bacteria of medical and/or economic significance.
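    The consecutive three-pattern matching described above maps naturally onto a regular-expression scan. The sketch below is a hedged illustration only: the motif consensus strings and spacing bounds are simplified stand-ins, not the paper's calibrated patterns.

```python
import re

# Simplified sigma-70 promoter triad: an AT-rich UP-element, a -35 box,
# a 15-19 bp spacer, then a -10 box (consensus strings are illustrative).
TRIAD = re.compile(
    r"[AT]{6,}"      # UP-element: AT-rich tract (interacts with the alpha subunit)
    r".{0,8}"        # short gap before the -35 box
    r"TTGAC[AT]"     # -35 box, simplified consensus
    r".{15,19}"      # spacer between the -35 and -10 boxes
    r"TATAAT"        # -10 box consensus
)

seq = "GCGC" + "ATATATAT" + "GC" + "TTGACA" + "A" * 17 + "TATAAT" + "GGCC"
for m in TRIAD.finditer(seq):
    print("candidate strong promoter at", m.start(), m.group())
```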

  2. Use of the Strong Collision Model to Calculate Spin Relaxation

    Science.gov (United States)

    Wang, D.; Chow, K. H.; Smadella, M.; Hossain, M. D.; MacFarlane, W. A.; Morris, G. D.; Ofer, O.; Morenzoni, E.; Salman, Z.; Saadaoui, H.; Song, Q.; Kiefl, R. F.

    The strong collision model is used to calculate the spin relaxation of a muon or polarized radioactive nucleus in contact with a fluctuating environment. We show that on a time scale much longer than the mean time between collisions (fluctuations), the longitudinal polarization decays exponentially with a relaxation rate equal to a sum of Lorentzians, one for each frequency component in the static polarization function p_s(t).
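    Writing the static polarization as p_s(t) = Σ_i a_i cos(ω_i t) with Σ_i a_i = 1 and collision rate ν, a common closed form consistent with this statement (an assumption here, not quoted from the paper) is λ = Σ_i a_i ν ω_i² / (ν² + ω_i²), one Lorentzian per component. A hedged numerical sketch with hypothetical amplitudes and frequencies:

```python
import numpy as np

def relaxation_rate(amplitudes, omegas, nu):
    """Assumed form: lambda = sum_i a_i * nu * w_i**2 / (nu**2 + w_i**2),
    one Lorentzian per frequency component of p_s(t)."""
    a, w = np.asarray(amplitudes), np.asarray(omegas)
    return float(np.sum(a * nu * w**2 / (nu**2 + w**2)))

# Hypothetical two-component static polarization; rates in rad/s
print(relaxation_rate([0.5, 0.5], omegas=[1.0e6, 5.0e6], nu=1.0e7))
```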

  3. On autostability of almost prime models relative to strong constructivizations

    International Nuclear Information System (INIS)

    Goncharov, Sergey S

    2011-01-01

    Questions of autostability and algorithmic dimension of models go back to papers by A.I. Malcev and by A. Froehlich and J.C. Shepherdson in which the effect of the existence of computable presentations which are non-equivalent from the viewpoint of their algorithmic properties was first discovered. Today there are many papers by various authors devoted to investigations of such questions. The present paper deals with the question of inheritance of the properties of autostability and non-autostability relative to strong constructivizations under elementary extensions for almost prime models. Bibliography: 37 titles.

  4. Prediction and discovery of extremely strong hydrodynamic instabilities due to a velocity jump: theory and experiments

    International Nuclear Information System (INIS)

    Fridman, A M

    2008-01-01

    The theory and the experimental discovery of extremely strong hydrodynamic instabilities are described, viz. the Kelvin-Helmholtz, centrifugal, and superreflection instabilities. The discovery of the last two instabilities was predicted by us, and the theory of the Kelvin-Helmholtz instability in real systems was revised. (reviews of topical problems)

  5. What Factors Predict Who Will Have a Strong Social Network Following a Stroke?

    Science.gov (United States)

    Northcott, Sarah; Marshall, Jane; Hilari, Katerina

    2016-01-01

    Purpose: Measures of social networks assess the number and nature of a person's social contacts, and strongly predict health outcomes. We explored how social networks change following a stroke and analyzed concurrent and baseline predictors of social networks 6 months poststroke. Method: We conducted a prospective longitudinal observational study.…

  6. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar, and it is thus difficult to decide on one model. Here, we investigate the variability of the prediction models that results when the same... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level, a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
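    One way to obtain such a subject-level confidence score in the bootstrap spirit described here is to refit the model on resampled training sets and report the spread of predicted risks for a given subject. The data and model choice below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(200) > 0).astype(int)

def bootstrap_risk_spread(X, y, x_new, n_boot=200):
    """Mean and spread of predicted risk for one subject across bootstrap refits."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        model = LogisticRegression().fit(X[idx], y[idx])
        preds.append(model.predict_proba(x_new.reshape(1, -1))[0, 1])
    return float(np.mean(preds)), float(np.std(preds))

print(bootstrap_risk_spread(X, y, X[0]))
```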

  7. Prediction of strong ground motion based on scaling law of earthquake

    International Nuclear Information System (INIS)

    Kamae, Katsuhiro; Irikura, Kojiro; Fukuchi, Yasunaga.

    1991-01-01

    In order to predict strong ground motion more practically, it is important to study how to use a semi-empirical method when no appropriate observation records of actual small events are available for use as empirical Green's functions. We propose a prediction procedure that uses artificially simulated small ground motions as a substitute for the actual motions. First, we simulate the small-event motion by means of the stochastic simulation method proposed by Boore (1983), accounting empirically for path effects, such as attenuation and the broadening of the waveform envelope, in the target region. Finally, we attempt to predict the strong ground motion due to a future large earthquake (M 7, Δ = 13 km) using the same summation procedure as the empirical Green's function method. We obtained the result that the characteristics of the synthetic motion based on the simulated M 5 motion were in good agreement with those obtained by the empirical Green's function method. (author)
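    A heavily simplified sketch of the Boore (1983)-style step, shaping enveloped Gaussian noise toward an omega-squared acceleration spectrum, is shown below; the corner frequency, duration and envelope are hypothetical placeholders, and the empirical-Green's-function summation over subevents is omitted.

```python
import numpy as np

def simulate_small_event(n=2048, dt=0.01, fc=2.0, seed=1):
    """Enveloped Gaussian noise filtered to an omega-squared spectral shape."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    envelope = t * np.exp(-t / 1.5)              # simple time-domain envelope
    noise = rng.standard_normal(n) * envelope
    f = np.fft.rfftfreq(n, dt)
    shape = f**2 / (1.0 + (f / fc) ** 2)         # omega-squared acceleration shape
    acc = np.fft.irfft(np.fft.rfft(noise) * shape, n)
    return t, acc / np.abs(acc).max()            # normalized acceleration trace

t, acc = simulate_small_event()
print(acc.shape, float(acc.max()))
```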

  8. Ruling out a strongly interacting standard Higgs model

    International Nuclear Information System (INIS)

    Riesselmann, K.; Willenbrock, S.

    1997-01-01

    Previous work has suggested that perturbation theory is unreliable for Higgs- and Goldstone-boson scattering, at energies above the Higgs-boson mass, for relatively small values of the Higgs quartic coupling λ(μ). By performing a summation of nonlogarithmic terms, we show that perturbation theory is in fact reliable up to relatively large coupling. This eliminates the possibility of a strongly interacting standard Higgs model at energies above the Higgs-boson mass, complementing earlier studies which excluded strong interactions at energies near the Higgs-boson mass. The summation can be formulated in terms of an appropriate scale in the running coupling, μ = √s/e ≈ √s/2.7, so it can be incorporated easily in renormalization-group-improved tree-level amplitudes as well as higher-order calculations. © 1996 The American Physical Society

  9. Thermodynamic prediction of glass formation tendency, cluster-in-jellium model for metallic glasses, ab initio tight-binding calculations, and new density functional theory development for systems with strong electron correlation

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yongxin [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Solidification of liquid is a very rich and complicated field, although there is always the famous homogeneous nucleation theory in a standard physics or materials science textbook. Depending on the material and processing conditions, a liquid may solidify to a single crystal, a polycrystal with different textures, a quasicrystal, or an amorphous solid or glass (glass is a kind of amorphous solid in general, which has short-range and medium-range order). Traditional oxide glass may easily be formed since its directionally bonded covalent network is apt to be disturbed. In other words, the energy landscape of an oxide glass is so complicated that the system needs an extremely long time to explore the whole configuration space. On the other hand, metallic liquids usually crystallize upon cooling because of the nature of metallic bonding. However, Klement et al. (1960) reported that an Au-Si liquid underwent an amorphous or "glassy" phase transformation with rapid quenching. In the recent two decades, bulk metallic glasses have also been found in several multicomponent alloys [Inoue et al. (2002)]. Both thermodynamic factors (e.g., free energy of various competitive phases, interfacial free energy, free energy of local clusters, etc.) and kinetic factors (e.g., long-range mass transport, local atomic position rearrangement, etc.) play important roles in the metallic glass formation process. Metallic glass is fundamentally different from nanocrystalline alloys. Metallic glasses have to undergo a nucleation process upon heating in order to crystallize. Thus the short-range and medium-range order of metallic glasses has to be completely different from that of a crystal. Hence a method to calculate the energetics of different local clusters in the undercooled liquid or glass becomes important for setting up a statistical model to describe metallic glass formation. Scattering techniques like X-ray and neutron scattering have been widely used to study the structures of metallic glasses. Meanwhile, computer simulation

  10. Risperidone and Venlafaxine Metabolic Ratios Strongly Predict a CYP2D6 Poor Metabolizing Genotype.

    Science.gov (United States)

    Mannheimer, Buster; Haslemo, Tore; Lindh, Jonatan D; Eliasson, Erik; Molden, Espen

    2016-02-01

    To investigate the predictive value of the risperidone and venlafaxine metabolic ratios for the CYP2D6 genotype. The determination of risperidone, 9-hydroxyrisperidone, venlafaxine, O-desmethylvenlafaxine, N-desmethylvenlafaxine and the CYP2D6 genotype was performed in 425 and 491 patients, respectively. The receiver operating characteristic method and the area under the receiver operating characteristic curve were used to illustrate the predictive value of the risperidone metabolic ratio for the individual CYP2D6 genotype. To evaluate the proposed cutoff level of >1 for identifying individuals with a poor-metabolizer CYP2D6 genotype, the sensitivity, specificity, positive predictive values, and negative predictive values were calculated. The area under the receiver operating characteristic curve to predict poor metabolizers from the risperidone/9-hydroxyrisperidone and N-desmethylvenlafaxine/O-desmethylvenlafaxine ratios was 93% and 99%, respectively. The sensitivity, specificity, positive predictive value, and negative predictive value (confidence interval) of a risperidone/9-hydroxyrisperidone ratio >1 to predict a CYP2D6 poor-metabolizer genotype were 91% (76%-97%), 86% (83%-89%), 35% (26%-46%), and 99% (97%-100%), respectively. The corresponding measures for N-desmethylvenlafaxine/O-desmethylvenlafaxine were 93% (76%-97%), 87% (83%-89%), 40% (32%-51%), and 99% (98%-100%). Risperidone/9-hydroxyrisperidone and N-desmethylvenlafaxine/O-desmethylvenlafaxine metabolic ratios >1 strongly predict individuals with a poor-metabolizer genotype, which could guide psychotropic drug treatment to avoid adverse drug reactions and increase therapeutic efficacy in patients prescribed these drugs.
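    All four reported measures derive from the 2x2 table of the ratio-above-1 flag against the genotype. A minimal sketch of that computation, with hypothetical counts rather than the study's, is:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: ratio > 1 flag vs. CYP2D6 poor-metabolizer genotype
print(diagnostic_metrics(tp=29, fp=54, fn=3, tn=339))
```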

  11. Strongly Coupled Models with a Higgs-like Boson*

    Directory of Open Access Journals (Sweden)

    Pich Antonio

    2013-11-01

    Full Text Available Considering the one-loop calculation of the oblique S and T parameters, we have presented a study of the viability of strongly-coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation has been done using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we have demonstrated that strongly coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule.

  12. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion

    Science.gov (United States)

    Hirata, N.; Sato, H.; Koketsu, K.; Umeda, Y.; Iwata, T.; Kasahara, K.

    2003-12-01

    Introduction: After the 1995 Kobe earthquake, the Japanese government increased its focus on, and funding of, earthquake hazard evaluation, studies of the integrity of man-made structures, and emergency response planning in the major urban centers. A new agency, the Ministry of Education, Science, Sports and Culture (MEXT), started a five-year program titled the Special Project for Earthquake Disaster Mitigation in Urban Areas (abbreviated as Dai-dai-toku in Japanese) in 2002. The project includes four programs: I. Regional characterization of the crust in metropolitan areas for prediction of strong ground motion. II. Significant improvement of the seismic performance of structures. III. Advanced disaster management systems. IV. Investigation of earthquake disaster mitigation research results. We will present results from the first program, conducted in 2002 and 2003. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion: A long-term goal is to produce maps of reliable estimates of strong ground motion. This requires accurate determination of the ground motion response, which includes the source process, the effect of the propagation path, and the near-surface response. The new five-year project aims to characterize the "source" and "propagation path" in the Kanto (Tokyo) and Kinki (Osaka) regions. The 1923 Kanto Earthquake is one of the important targets addressed in the project. The proximity of the subducting Pacific and Philippine Sea plates requires study of the relationship between earthquakes and regional tectonics. This project focuses on the identification and geometry of: 1) source faults, 2) subducting plates and mega-thrust faults, 3) crustal structure, 4) the seismogenic zone, 5) sedimentary basins, and 6) 3D velocity properties. We have conducted a series of seismic reflection and refraction experiments in the Kanto region. In 2002 we completed the deployment of seismic profiling lines in the Boso peninsula (112 km) and the

  13. Monitoring of the future strong Vrancea events by using the CN formal earthquake prediction algorithm

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Novikova, O.V.; Panza, G.F.; Radulian, M.

    2003-06-01

    The preparation process of the strong subcrustal events originating in the Vrancea region, Romania, is monitored using an intermediate-term medium-range earthquake prediction method - the CN algorithm (Keilis-Borok and Rotwain, 1990). We present the results of the monitoring of the preparation of future strong earthquakes for the time interval from January 1, 1994 (1994.1.1), to January 1, 2003 (2003.1.1), using the updated catalogue of the Romanian local network. The database considered for the CN monitoring of the preparation of future strong earthquakes in Vrancea covers the period from 1966.3.1 to 2003.1.1 and the geographical rectangle 44.8 deg - 48.4 deg N, 25.0 deg - 28.0 deg E. The algorithm correctly identifies, by retrospective prediction, the TIPs (times of increased probability) for all three strong earthquakes (M ≥ 6.4) that occurred in Vrancea during this period. The cumulative duration of the TIPs represents 26.5% of the total period of time considered (1966.3.1-2003.1.1). The monitoring of current seismicity using the CN algorithm has been carried out since 1994. No strong earthquakes occurred from 1994.1.1 to 2003.1.1, but the CN algorithm declared an extended false alarm from 1999.5.1 to 2000.11.1. No alarm has currently been declared in the region (as of January 1, 2003), as can be seen from the TIPs diagram shown. (author)
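
    The scoring of such alarm-based predictions reduces to two numbers: the fraction of the monitored period covered by TIPs and the fraction of target events falling inside declared alarms. A small illustrative sketch (not the CN code itself; the intervals and event dates below are placeholders, apart from the quoted false-alarm window):

        from datetime import date

        def evaluate_alarms(start, end, alarms, events):
            """Return (fraction of time under alarm, fraction of events hit)."""
            total_days = (end - start).days
            alarm_days = sum((b - a).days for a, b in alarms)
            hits = sum(any(a <= e <= b for a, b in alarms) for e in events)
            return alarm_days / total_days, hits / len(events)

        alarms = [(date(1986, 5, 1), date(1987, 2, 1)),
                  (date(1990, 1, 1), date(1990, 9, 1)),
                  (date(1999, 5, 1), date(2000, 11, 1))]  # the last: a false alarm
        events = [date(1986, 8, 30), date(1990, 5, 30)]

        coverage, hit_rate = evaluate_alarms(date(1966, 3, 1), date(2003, 1, 1),
                                             alarms, events)
        print(f"alarm coverage: {coverage:.1%}, hit rate: {hit_rate:.1%}")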

  14. Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century.

    Science.gov (United States)

    Ganusov, Vitaly V

    2016-01-01

    While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to follow logically from a set of assumptions. Only when this tool is applied appropriately, just as a microscope is used to look at small objects, can it reveal the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why rejected models failed to explain the data; and (4) to suggest experiments that would discriminate between the remaining alternative models. The use of strong inference is likely to make the predictions of mathematical models more robust, and it should be strongly encouraged in mathematical modeling-based publications in the Twenty-First century.
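
    Step (2) of the recipe is routinely done by fitting the alternative models to the same data and comparing an information criterion. A self-contained sketch with synthetic data (the two candidate growth models and all parameter values are illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        def exponential(t, x0, r):
            return x0 * np.exp(r * t)

        def logistic(t, x0, r, K):
            return K / (1 + (K / x0 - 1) * np.exp(-r * t))

        t = np.linspace(0, 10, 40)
        rng = np.random.default_rng(1)
        data = logistic(t, 1.0, 0.9, 50.0) + rng.normal(0, 2.0, t.size)

        def aic(model, p0):
            """Gaussian AIC from the residual sum of squares of a fitted model."""
            popt, _ = curve_fit(model, t, data, p0=p0, maxfev=10000)
            rss = np.sum((data - model(t, *popt)) ** 2)
            n, k = t.size, len(popt)
            return n * np.log(rss / n) + 2 * k

        print("AIC exponential:", aic(exponential, [1.0, 0.5]))
        print("AIC logistic:   ", aic(logistic, [1.0, 0.5, 40.0]))  # lower wins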

  15. Hirshfeld atom refinement for modelling strong hydrogen bonds.

    Science.gov (United States)

    Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon

    2014-09-01

    High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.

  16. A multifluid model extended for strong temperature nonequilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-08

    We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.

  17. Bootstrap prediction and Bayesian prediction under misspecified models

    OpenAIRE

    Fushiki, Tadayoshi

    2005-01-01

    We consider a statistical prediction problem under misspecified models. In a sense, Bayesian prediction is an optimal prediction method when an assumed model is true. Bootstrap prediction is obtained by applying Breiman's `bagging' method to a plug-in prediction. Bootstrap prediction can be considered to be an approximation to the Bayesian prediction under the assumption that the model is true. However, in applications, there are frequently deviations from the assumed model. In this paper, bo...
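
    A hedged sketch of bootstrap ("bagged") prediction in Breiman's sense: average a plug-in predictor over bootstrap resamples. The data-generating process below is deliberately misspecified relative to the Gaussian working model (heavy-tailed noise); everything is synthetic:

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.normal(size=(100, 1))
        y = 2.0 * x[:, 0] + rng.standard_t(df=3, size=100)  # heavy-tailed noise

        def plugin_fit(xs, ys):
            """Ordinary least squares plug-in estimate (intercept, slope)."""
            X = np.column_stack([np.ones(len(ys)), xs[:, 0]])
            return np.linalg.lstsq(X, ys, rcond=None)[0]

        x_new = 1.5
        preds = []
        for _ in range(500):                    # bootstrap resamples
            idx = rng.integers(0, len(y), len(y))
            b = plugin_fit(x[idx], y[idx])
            preds.append(b[0] + b[1] * x_new)

        b = plugin_fit(x, y)
        print("plug-in prediction:  ", b[0] + b[1] * x_new)
        print("bootstrap prediction:", np.mean(preds))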

  18. Strong Local-Nonlocal Coupling for Integrated Fracture Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silling, Stewart A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, John A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Seleson, Pablo D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parks, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burnett, Damon J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-09-01

    Peridynamics, a nonlocal extension of continuum mechanics, is unique in its ability to capture pervasive material failure. Its use in the majority of system-level analyses carried out at Sandia, however, is severely limited, due in large part to computational expense and the challenge posed by the imposition of nonlocal boundary conditions. Combined analyses in which peridynamics is employed only in regions susceptible to material failure are therefore highly desirable, yet available coupling strategies have remained severely limited. This report is a summary of the Laboratory Directed Research and Development (LDRD) project "Strong Local-Nonlocal Coupling for Integrated Fracture Modeling," completed within the Computing and Information Sciences (CIS) Investment Area at Sandia National Laboratories. A number of challenges inherent to coupling local and nonlocal models are addressed. A primary result is the extension of peridynamics to facilitate a variable nonlocal length scale. This approach, termed the peridynamic partial stress, can greatly reduce the mathematical incompatibility between local and nonlocal equations through reduction of the peridynamic horizon in the vicinity of a model interface. A second result is the formulation of a blending-based coupling approach that may be applied either as the primary coupling strategy, or in combination with the peridynamic partial stress. This blending-based approach is distinct from general blending methods, such as the Arlequin approach, in that it is specific to the coupling of peridynamics and classical continuum mechanics. Facilitating the coupling of peridynamics and classical continuum mechanics has also required innovations aimed directly at peridynamic models. Specifically, the properties of peridynamic constitutive models near domain boundaries and shortcomings in available discretization strategies have been addressed. The results are a class of position-aware peridynamic constitutive laws for

  19. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...... the performance of HIRLAM, in particular with respect to wind predictions. To estimate the performance of the model, two spatial resolutions (0.5 deg. and 0.2 deg.) and different sets of HIRLAM variables were used to predict wind speed and energy production. The predictions of energy production for the wind farms...... are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production...

  20. Ising models of strongly coupled biological networks with multivariate interactions

    Science.gov (United States)

    Merchan, Lina; Nemenman, Ilya

    2013-03-01

    Biological networks consist of a large number of variables that can be coupled by complex multivariate interactions. However, several neuroscience and cell biology experiments have reported that the observed statistics of network states can be approximated surprisingly well by maximum entropy models that constrain correlations only within pairs of variables. We would like to verify whether this reduction in complexity results from intricacies of biological organization, or whether it is a more general attribute of these networks. We generate random networks with p-spin (p > 2) interactions, with N spins and M interaction terms. The probability distribution of the network states is then calculated and approximated with a maximum entropy model based on constraining pairwise spin correlations. Depending on the M/N ratio and the strength of the interaction terms, we observe a transition from a region where the pairwise approximation is very good to a region where it fails. This resembles the sat-unsat transition in constraint satisfaction problems. We argue that the pairwise model works when the number of highly probable states is small. We argue that many biological systems must operate in a strongly constrained regime, and hence we expect the pairwise approximation to be accurate for a wide class of problems. This research has been partially supported by the James S McDonnell Foundation grant No. 220020321.
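
    The experiment can be reproduced in miniature by exact enumeration: build a random 3-spin Hamiltonian on a handful of spins, then fit a pairwise maximum entropy model by matching first and second moments. Sizes, couplings, learning rates and iteration count below are arbitrary choices:

        import itertools
        import numpy as np

        N, M = 10, 15                      # spins, number of 3-spin terms
        rng = np.random.default_rng(3)
        triplets = [tuple(rng.choice(N, 3, replace=False)) for _ in range(M)]
        couplings = rng.normal(0, 1.0, M)

        states = np.array(list(itertools.product([-1, 1], repeat=N)))
        E = sum(Jt * states[:, i] * states[:, j] * states[:, k]
                for Jt, (i, j, k) in zip(couplings, triplets))
        p_true = np.exp(-E); p_true /= p_true.sum()
        m_true = p_true @ states                         # <s_i>
        C_true = states.T @ (states * p_true[:, None])   # <s_i s_j>

        # gradient ascent on the pairwise maximum entropy likelihood
        h, J = np.zeros(N), np.zeros((N, N))
        for _ in range(2000):
            logp = states @ h + np.einsum('si,ij,sj->s', states, J, states)
            p = np.exp(logp - logp.max()); p /= p.sum()
            h += 0.1 * (m_true - p @ states)
            J += 0.05 * np.triu(C_true - states.T @ (states * p[:, None]), k=1)

        mix = 0.5 * (p + p_true)           # Jensen-Shannon divergence of the fit
        js = 0.5 * np.sum(p_true * np.log(p_true / mix)) \
           + 0.5 * np.sum(p * np.log(p / mix))
        print("JS divergence, true p-spin vs pairwise fit:", js)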

  1. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... Linear MPC: (1) uses a linear model, ẋ = Ax + Bu; (2) quadratic cost function, F = x^T Q x + u^T R u; (3) linear constraints, Hx + Gu < 0; (4) solved as a quadratic program. Nonlinear MPC: (1) nonlinear model, ẋ = f(x, u); (2) the cost function can be non-quadratic, F = F(x, u); (3) nonlinear constraints, h(x, u) < 0; (4) solved as a nonlinear program.
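
    The linear case maps directly onto a quadratic program. A minimal receding-horizon sketch using cvxpy (the double-integrator system, weights and input bound are arbitrary illustrative values):

        import numpy as np
        import cvxpy as cp

        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
        B = np.array([[0.005], [0.1]])
        Q, R = np.eye(2), 0.1 * np.eye(1)
        T = 20                                    # prediction horizon
        x0 = np.array([1.0, 0.0])

        x = cp.Variable((2, T + 1))
        u = cp.Variable((1, T))
        cost, constraints = 0, [x[:, 0] == x0]
        for t in range(T):
            cost += cp.quad_form(x[:, t], Q) + cp.quad_form(u[:, t], R)
            constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                            cp.abs(u[:, t]) <= 2.0]      # linear input constraint
        cp.Problem(cp.Minimize(cost), constraints).solve()
        # receding horizon: apply only the first move, then re-solve at next step
        print("first control move:", u.value[:, 0])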

  2. Strongly interacting matter at high densities with a soliton model

    Science.gov (United States)

    Johnson, Charles Webster

    1998-12-01

    One of the major goals of modern nuclear physics is to explore the phase diagram of strongly interacting matter. The study of these 'extreme' conditions is the primary motivation for the construction of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory which will accelerate nuclei to a center of mass (c.m.) energy of about 200 GeV/nucleon. From a theoretical perspective, a test of quantum chromodynamics (QCD) requires the expansion of the conditions examined from one phase point to the entire phase diagram of strongly-interacting matter. In the present work we focus attention on what happens when the density is increased, at low excitation energies. Experimental results from the Brookhaven Alternating Gradient Synchrotron (AGS) indicate that this regime may be tested in the 'full stopping' (maximum energy deposition) scenario achieved at the AGS having a c.m. collision energy of about 2.5 GeV/nucleon for two equal-mass heavy nuclei. Since the solution of QCD on nuclear length-scales is computationally prohibitive even on today's most powerful computers, progress in the theoretical description of high densities has come through the application of models incorporating some of the essential features of the full theory. The simplest such model is the MIT bag model. We use a significantly more sophisticated model, a nonlocal confining soliton model developed in part at Kent. This model has proven its value in the calculation of the properties of individual mesons and nucleons. In the present application, the many-soliton problem is addressed with the same model. We describe nuclear matter as a lattice of solitons and apply the Wigner-Seitz approximation to the lattice. This means that we consider spherical cells with one soliton centered in each, corresponding to the average properties of the lattice. The average density is then varied by changing the size of the Wigner-Seitz cell. To arrive at a solution, we need to solve a coupled set of

  3. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. In spite of numerous studies forecasting bankruptcy using traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend towards machine learning models (support vector machines, bagging, boosting, and random forests) for predicting bankruptcy one year prior to the event. Comparing the performance of these unconventional approaches with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of older and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their predictive ability under specific conditions. Furthermore, these models are updated in line with new trends by calculating the influence of the elimination of selected variables on their overall predictive ability.
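
    A hedged sketch of the kind of comparison described, using scikit-learn on synthetic stand-ins for financial ratios (the class imbalance and feature counts are invented):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # ~10% "bankrupt one year ahead" labels, 12 financial-ratio features
        X, y = make_classification(n_samples=2000, n_features=12,
                                   n_informative=6, weights=[0.9, 0.1],
                                   random_state=0)

        models = [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=300,
                                                           random_state=0))]
        for name, model in models:
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
            print(f"{name}: mean ROC AUC = {auc:.3f}")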

  4. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Quantum simulation of the general semi-classical Rabi model in regimes of arbitrarily strong driving

    Science.gov (United States)

    Dai, Kunzhe; Wu, Haiteng; Zhao, Peng; Li, Mengmeng; Liu, Qiang; Xue, Guangming; Tan, Xinsheng; Yu, Haifeng; Yu, Yang

    2017-12-01

    We propose and experimentally demonstrate a scheme to simulate the interaction between a two-level system and a classical light field. Under the transversal driving of two microwave tones, the effective Hamiltonian in an appropriate rotating frame is identical to that of the general semi-classical Rabi model. We experimentally realize this Hamiltonian with a superconducting transmon qubit. By tuning the strength, phase, and frequency of the two microwave driving fields, we simulate the quantum dynamics from the weak to extremely strong driving regime. Under these conditions, we observe that, as a function of increased Rabi drive strength, the qubit evolution gradually deviates from the normal sinusoidal Rabi oscillation, in accordance with the predictions of the general semi-classical Rabi model far beyond the weak driving limit. Our scheme provides an effective approach to investigate the extremely strong interaction between a two-level system and a classical light field. Such strong interactions are usually inaccessible in experiments.
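
    The deviation from sinusoidal Rabi oscillations at strong drive can be reproduced by direct integration of the semi-classical Hamiltonian without the rotating-wave approximation. A sketch with arbitrary parameters (scipy's solve_ivp accepts complex state vectors):

        import numpy as np
        from scipy.integrate import solve_ivp

        w0 = 2 * np.pi          # qubit transition frequency
        wd = w0                 # resonant drive
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        def schrodinger(t, psi, omega):
            H = -0.5 * w0 * sz + omega * np.cos(wd * t) * sx
            return -1j * (H @ psi)

        t_eval = np.linspace(0, 10, 500)
        for omega in [0.05 * w0, 1.0 * w0]:  # weak versus extremely strong drive
            sol = solve_ivp(schrodinger, (0, 10),
                            np.array([1, 0], dtype=complex),  # start in ground state
                            t_eval=t_eval, args=(omega,), rtol=1e-8, atol=1e-8)
            p_excited = np.abs(sol.y[1]) ** 2
            print(f"Omega/w0 = {omega / w0:.2f}: "
                  f"max excited population = {p_excited.max():.3f}")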

  6. Predictive models of moth development

    Science.gov (United States)

    Degree-day models link ambient temperature to insect life-stages, making such models valuable tools in integrated pest management. These models increase management efficacy by predicting pest phenology. In Wisconsin, the top insect pest of cranberry production is the cranberry fruitworm, Acrobasis v...
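
    The core of a degree-day model is a running accumulation of daily heat units above a base temperature. A minimal sketch using the simple averaging method (the 10 °C base and the temperatures are illustrative, not calibrated for the cranberry fruitworm):

        def degree_days(tmin, tmax, base=10.0):
            """Daily degree-days via the averaging method, floored at zero."""
            return max((tmin + tmax) / 2.0 - base, 0.0)

        daily_temps = [(8, 18), (10, 22), (12, 26), (9, 20)]  # (min, max) in deg C
        accumulated = 0.0
        for tmin, tmax in daily_temps:
            accumulated += degree_days(tmin, tmax)
            print(f"cumulative degree-days: {accumulated:.1f}")
        # a management action would be triggered once `accumulated` crosses the
        # threshold associated with a pest life-stage transition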

  7. Predictive Models and Computational Embryology

    Science.gov (United States)

    EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...

  8. Voussoir beam model for lower strong roof strata movement in longwall mining – Case study

    Directory of Open Access Journals (Sweden)

    Chuang Liu

    2017-12-01

    Full Text Available The paper presents the influence of varying immediate roof thickness on the movement and failure pattern of the lower strong roof strata in longwall coal mining with large mining height. The investigation is based on 58 geological drill holes and hydraulic shield pressure measurements around longwall Panel 42105 of the Buertai Mine in the Inner Mongolia Autonomous Region, China. Longwall Panel 42105 is characterized by relatively soft immediate roof strata of varying thickness overlain by strong strata, herein defined as the lower strong roof. A voussoir beam model is adopted to interpret the structural movement of the lower strong roof strata and the shield pressure measurements. It is shown that when the immediate roof is relatively thick, the broken overlying lower strong roof tends to form a stable voussoir beam with the previously broken layer, thus not exerting high pressure on the hydraulic shield and working face. When the immediate roof is relatively thin, the broken overlying lower strong roof tends to behave as a cantilever beam, thus exerting higher pressure on the hydraulic shield and working face. Comparison of model predictions with the measured time-weighted average shield pressure (TWAP) shows good agreement.

  9. Predictions models with neural nets

    Directory of Open Access Journals (Sweden)

    Vladimír Konečný

    2008-01-01

    Full Text Available The contribution addresses the prediction of basic trends in economic indicators using neural networks. The problems include the choice of a suitable model and, consequently, the configuration of the neural network, the choice of the neurons' computational functions, and the training method for prediction. The contribution contains two basic models that use multilayer neural network structures and a way of determining their configuration. A simple rule is postulated for the training period of the neural network in order to obtain the most credible prediction. Experiments are carried out with real data on the evolution of the Kč/Euro exchange rate. The main reason for choosing this time series is its availability over a sufficiently long period. In the experiments, both of the given basic kinds of prediction models, with the most frequently used neuron functions, are verified. The prediction results achieved are presented in both numerical and graphical form.
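
    A hedged sketch of the kind of model described: a small multilayer network predicting the next value of a series from a window of lagged values. The series here is a synthetic random walk standing in for the Kč/Euro rate:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(4)
        rate = 25 + np.cumsum(rng.normal(0, 0.05, 600))   # synthetic series

        lags = 5
        X = np.array([rate[i:i + lags] for i in range(len(rate) - lags)])
        y = rate[lags:]
        split = 500                                        # train/test split
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                             random_state=0)
        model.fit(X[:split], y[:split])
        pred = model.predict(X[split:])
        rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
        print(f"one-step-ahead RMSE on held-out data: {rmse:.4f}")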

  10. Ionosphere TEC disturbances before strong earthquakes: observations, physics, modeling (Invited)

    Science.gov (United States)

    Namgaladze, A. A.

    2013-12-01

    The phenomenon of pre-earthquake ionospheric disturbances is discussed. A number of typical relative TEC (Total Electron Content) disturbances are presented for several recent strong earthquakes that occurred under different ionospheric conditions. Stable, typical TEC deviations from the quiet background state are observed a few days before strong seismic events in the vicinity of the earthquake epicenter and are treated as ionospheric earthquake precursors. They do not move away from the source, in contrast to the disturbances related to geomagnetic activity. In the sunlit ionosphere the disturbances are reduced, up to their full disappearance, and the effects regenerate at night. The TEC disturbances are often observed in the magnetically conjugated areas as well. At low latitudes they are accompanied by modifications of the equatorial anomaly. The hypothesis of an electromagnetic channel for the creation of the pre-earthquake ionospheric disturbances is discussed. The lithosphere and ionosphere are coupled by vertical external electric currents arising from ionization of the near-Earth air layer and vertical transport of charged particles through the atmosphere over the fault. External electric current densities exceeding the regular fair-weather electric currents by several orders of magnitude are required to produce stable, long-lived seismogenic electric fields such as those observed by onboard measurements of the 'Intercosmos-Bulgaria 1300' satellite over seismically active zones. Numerical calculations using the Upper Atmosphere Model demonstrate the ability of external electric currents with densities of 10^-8 to 10^-9 A/m^2 to produce such electric fields. The simulations reproduce the basic features of typical pre-earthquake relative TEC disturbances. It is shown that plasma ExB drift under the action of the seismogenic electric field leads to changes of the F2-region electron number density and TEC. The upward drift velocity component enhances NmF2 and TEC and

  11. Procedure to predict the storey where plastic drift dominates in two-storey building under strong ground motion

    DEFF Research Database (Denmark)

    Hibino, Y.; Ichinose, T.; Costa, J.L.D.

    2009-01-01

    A procedure is presented to predict the storey where plastic drift dominates in two-storey buildings under strong ground motion. The procedure utilizes the yield strength and the mass of each storey as well as the peak ground acceleration. The procedure is based on two different assumptions: (1) the seismic force distribution is of inverted triangular form and (2) the rigid-plastic model represents the system. The first and the second assumptions, respectively, lead to lower and upper estimates of the base shear coefficient under which the drift of the first storey exceeds that of the second storey...

  12. CN earthquake prediction algorithm and the monitoring of the future strong Vrancea events

    International Nuclear Information System (INIS)

    Moldoveanu, C.L.; Radulian, M.; Novikova, O.V.; Panza, G.F.

    2002-01-01

    The strong earthquakes originating at intermediate depth in the Vrancea region (located in the SE corner of the highly bent Carpathian arc) represent one of the most important natural disasters able to induce heavy effects (a high toll of casualties and extensive damage) on Romanian territory. The occurrence of these earthquakes is irregular, but not infrequent. Their effects are felt over a large territory, from Central Europe to Moscow and from Greece to Scandinavia. The largest cultural and economic center exposed to the seismic risk of Vrancea earthquakes is Bucharest. This metropolitan area (230 km² in area) is characterized by the presence of 2.5 million inhabitants (10% of the country's population) and by a considerable number of high-risk structures and infrastructures. The best way to face strong earthquakes is to mitigate the seismic risk by using the two possible complementary approaches represented by (a) the antiseismic design of structures and infrastructures (able to withstand strong earthquakes without significant damage), and (b) strong earthquake prediction (in terms of alarm intervals declared for long-, intermediate- or short-term space and time windows). Intermediate-term medium-range earthquake prediction represents the most realistic target to be reached at the present state of knowledge. The alarm declared in this case extends over a time window of about one year or more, and a space window of a few hundred kilometers. In the case of Vrancea events the spatial uncertainty is much less, about 100 km. The main measures for the mitigation of the seismic risk allowed by intermediate-term medium-range prediction are: (a) verification of the stability of buildings and infrastructures, with reinforcement measures when required, (b) elaboration of emergency action plans, and (c) scheduling of the main actions required to restore normal social and economic life after the earthquake. The paper presents the

  13. Strong ground motion prediction applying dynamic rupture simulations for Beppu-Haneyama Active Fault Zone, southwestern Japan

    Science.gov (United States)

    Yoshimi, M.; Matsushima, S.; Ando, R.; Miyake, H.; Imanishi, K.; Hayashida, T.; Takenaka, H.; Suzuki, H.; Matsuyama, H.

    2017-12-01

    We conducted strong ground motion prediction for the active Beppu-Haneyama Fault zone (BHFZ), Kyushu island, southwestern Japan. Since the BHFZ runs through Oita and Beppu cities, strong ground motion as well as fault displacement could severely affect them. We constructed a 3-dimensional velocity model of the Beppu Bay sedimentary basin, through which the fault zone runs and in which Oita and Beppu cities are located. The minimum shear wave velocity of the 3D model is 500 m/s. Additional 1D structures are modeled for sites with softer sediments in the Holocene plain areas. We observed, collected, and compiled data from microtremor surveys, ground motion observations, boreholes, etc., including phase velocities and H/V ratios. A finer, 250 m mesh structure of the Oita Plain is modeled with an empirical relation among N-value, lithology, depth and Vs derived from borehole data, and then validated against the phase velocity data obtained by the dense microtremor array observations (Yoshimi et al., 2016). Synthetic ground motion has been calculated with a hybrid technique composed of a stochastic Green's function method (for the high-frequency waves), a 3D finite-difference method (low-frequency waves), and a 1D amplification calculation. The fault geometry was determined based on reflection surveys and the active fault map. The rake angles are calculated with a dynamic rupture simulation considering three fault segments under a stress field estimated from the source mechanisms of earthquakes around the faults (Ando et al., JpGU-AGU2017). Fault parameters such as the average stress drop and the size of asperities are determined based on the empirical relations proposed by Irikura and Miyake (2001). As a result, strong ground motion exceeding 100 cm/s is predicted on the hanging-wall side of the Oita Plain. This work is supported by the Comprehensive Research on the Beppu-Haneyama Fault Zone funded by the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.

  14. Review of strongly-coupled composite dark matter models and lattice simulations

    Science.gov (United States)

    Kribs, Graham D.; Neil, Ethan T.

    2016-08-01

    We review models of new physics in which dark matter arises as a composite bound state from a confining strongly-coupled non-Abelian gauge theory. We discuss several qualitatively distinct classes of composite candidates, including dark mesons, dark baryons, and dark glueballs. We highlight some of the promising strategies for direct detection, especially through dark moments, using the symmetries and properties of the composite description to identify the operators that dominate the interactions of dark matter with matter, as well as dark matter self-interactions. We briefly discuss the implications of these theories at colliders, especially the (potentially novel) phenomenology of dark mesons in various regimes of the models. Throughout the review, we highlight the use of lattice calculations in the study of these strongly-coupled theories, to obtain precise quantitative predictions and new insights into the dynamics.

  15. What do saliency models predict?

    Science.gov (United States)

    Koehler, Kathryn; Guo, Fei; Zhang, Sheng; Eckstein, Miguel P.

    2014-01-01

    Saliency models have been frequently used to predict eye movements made during image viewing without a specified task (free viewing). Use of a single image set to systematically compare free viewing to other tasks has never been performed. We investigated the effect of task differences on the ability of three models of saliency to predict the performance of humans viewing a novel database of 800 natural images. We introduced a novel task where 100 observers made explicit perceptual judgments about the most salient image region. Other groups of observers performed a free viewing task, saliency search task, or cued object search task. Behavior on the popular free viewing task was not best predicted by standard saliency models. Instead, the models most accurately predicted the explicit saliency selections and eye movements made while performing saliency judgments. Observers' fixations varied similarly across images for the saliency and free viewing tasks, suggesting that these two tasks are related. The variability of observers' eye movements was modulated by the task (lowest for the object search task and greatest for the free viewing and saliency search tasks) as well as the clutter content of the images. Eye movement variability in saliency search and free viewing might be also limited by inherent variation of what observers consider salient. Our results contribute to understanding the tasks and behavioral measures for which saliency models are best suited as predictors of human behavior, the relationship across various perceptual tasks, and the factors contributing to observer variability in fixational eye movements. PMID:24618107

  16. Empirical equations for the prediction of PGA and pseudo spectral accelerations using Iranian strong-motion data

    Science.gov (United States)

    Zafarani, H.; Luzi, Lucia; Lanzano, Giovanni; Soghrat, M. R.

    2018-01-01

    A recently compiled, comprehensive, and good-quality strong-motion database of Iranian earthquakes has been used to develop local empirical equations for the prediction of peak ground acceleration (PGA) and 5%-damped pseudo-spectral accelerations (PSA) up to 4.0 s. The equations account for style of faulting and four site classes, and use the horizontal distance from the surface projection of the rupture plane as the distance measure. The model predicts the geometric mean of the horizontal components and the vertical-to-horizontal ratio. A total of 1551 free-field acceleration time histories recorded at distances of up to 200 km from 200 shallow earthquakes were used in the regression analysis, which employed the random-effects algorithm of Abrahamson and Youngs (Bull Seism Soc Am 82:505-510, 1992) to account for between-event as well as within-event errors. Due to the limited data used in the development of previous Iranian ground motion prediction equations (GMPEs) and strong trade-offs between different terms of the GMPEs, the previously determined models probably have less precise coefficients than those of the current study. The richer database of the current study improves on prior work by allowing additional variables, which previously could not be adequately constrained, to be considered. Here, a functional form used by Boore and Atkinson (Earthquake Spect 24:99-138, 2008) and Bindi et al. (Bull Seism Soc Am 9:1899-1920, 2011) has been adopted that accounts for the saturation of ground motions at close distances. A regression has also been performed for the V/H ratio in order to retrieve vertical components by scaling horizontal spectra. In order to take epistemic uncertainty into account, the new model can be used along with other appropriate GMPEs in a logic-tree framework for seismic hazard assessment in Iran and the Middle East region.
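
    The saturation-capable functional form referred to is, schematically, a magnitude polynomial plus a geometric-spreading term with a pseudo-depth that caps the distance decay near the source. An illustrative evaluation (all coefficients below are invented for demonstration, not the fitted Iranian values):

        import numpy as np

        def ln_psa(M, R_jb, a=1.2, b=0.7, c=-0.05, d=1.1, h=6.0):
            """ln PSA = a + b(M-6) + c(M-6)^2 - d*ln(sqrt(Rjb^2 + h^2));
            the pseudo-depth h saturates the motion at short distances."""
            return (a + b * (M - 6) + c * (M - 6) ** 2
                    - d * np.log(np.sqrt(R_jb ** 2 + h ** 2)))

        for R in [1, 10, 50, 200]:                 # distances in km
            print(f"Rjb = {R:4d} km -> ln(PSA) = {ln_psa(6.5, R):+.2f}")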

  17. Prediction and design of first super-strong liquid-crystalline polymers

    International Nuclear Information System (INIS)

    Dowell, F.

    1989-01-01

    This paper presents the details of the theoretical prediction and design (atom by atom, bond by bond) of the molecular chemical structures of the first candidate super-strong liquid-crystalline polymers (SS LCPs). These LCPs are the first designed to have good compressive strengths, as well as tensile strengths and tensile moduli significantly larger than those of existing strong LCPs (such as Kevlar). The key feature of this new class of LCPs is that the exceptional strength is three-dimensional on a microscopic, molecular level (and thus on a macroscopic level), in contrast to present LCPs (such as Kevlar) with their one-dimensional exceptional strength. These SS LCPs also have some solubility and processing advantages over existing strong LCPs. These SS LCPs are specially designed combined LCPs such that the side chains of a molecule interdigitate with the side chains of other molecules. This paper also presents other essential general and specific features required for SS LCPs. Considerations in the design of SS LCPs include the spacing distance between side chains along the backbone, the need for rigid sections in the backbone and side chains, the degree of polymerization, the length of the side chains, the regularity of spacing of the side chains along the backbone, the interdigitation of side chains in submolecular strips, the packing of the side chains on one or two sides of the backbone, the symmetry of the side chains, the points of attachment of the side chains to the backbone, the flexibility and size of the chemical group connecting each side chain to the backbone, the effect of semiflexible sections in the backbone and side chains, and the choice of types of dipolar and/or hydrogen bonding forces in the backbones and side chains for easy alignment

  18. Stochastic finite-fault modelling of strong earthquakes in Narmada ...

    Indian Academy of Sciences (India)

    The Narmada South Fault in the Indian peninsular shield region is associated with moderate-to-strong earthquakes. The prevailing hazard evidenced by the earthquake-related fatalities in the region imparts significance to the investigations of the seismogenic environment. In the present study, the prevailing seismotectonic ...

  19. A nonlinear efficient layerwise finite element model for smart piezolaminated composites under strong applied electric field

    International Nuclear Information System (INIS)

    Kapuria, S; Yaqoob Yasin, M

    2013-01-01

    In this work, we present an electromechanically coupled efficient layerwise finite element model for the static response of piezoelectric laminated composite and sandwich plates, considering the nonlinear behavior of piezoelectric materials under strong electric field. The nonlinear model is developed consistently using a variational principle, considering a rotationally invariant second order nonlinear constitutive relationship, and full electromechanical coupling. In the piezoelectric layer, the electric potential is approximated to have a quadratic variation across the thickness, as observed from exact three dimensional solutions, and the equipotential condition of electroded piezoelectric surfaces is modeled using the novel concept of an electric node. The results predicted by the nonlinear model compare very well with the experimental data available in the literature. The effect of the piezoelectric nonlinearity on the static response and deflection/stress control is studied for piezoelectric bimorph as well as hybrid laminated plates with isotropic, angle-ply composite and sandwich substrates. For high electric fields, the difference between the nonlinear and linear predictions is large, and cannot be neglected. The error in the prediction of the smeared counterpart of the present theory with the same number of primary displacement unknowns is also examined. (paper)

  20. Prediction of strong acceleration motion depended on focal mechanism; Shingen mechanism wo koryoshita jishindo yosoku ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Kaneda, Y.; Ejiri, J. [Obayashi Corp., Tokyo (Japan)

    1996-10-01

    This paper describes simulation results for strong acceleration motion with varying uncertain fault parameters, mainly for a fault model of the Hyogo-ken Nanbu earthquake. Based on the fault parameters, the strong acceleration motion was simulated using the radiation patterns and the rupture time differences of composite faults as parameters. A statistical waveform composition method was used for the simulation. For the theoretical radiation patterns, the directivity that depends on the strike of the faults was emphasized, and the maximum acceleration was more than 220 gal. For the homogeneous radiation patterns, by contrast, the maximum accelerations were distributed isotropically around the fault. For variations in the maximum acceleration and the predominant frequency due to the rupture time differences of three faults, the ratio of maximum to minimum response spectral values was about 1.7. From the viewpoint of seismic disaster prevention, subsurface structures, including potential faults and irregular features, can be assessed using this simulation. The significance of strong acceleration motion prediction was also demonstrated through this simulation, which treats uncertain factors, such as the rupture times of composite faults, as parameters. 4 refs., 4 figs., 1 tab.

  1. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
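
    The linear benchmark mentioned above amounts to a one-predictor regression of seasonal rainfall anomalies on the forecast Niño3.4 index, scored out-of-sample. A sketch with synthetic placeholders for the 1985-2005 hindcasts:

        import numpy as np

        rng = np.random.default_rng(5)
        nino34 = rng.normal(0, 1, 21)                  # predicted May Nino3.4
        rain = -0.6 * nino34 + rng.normal(0, 0.8, 21)  # JJAS rainfall anomaly

        # leave-one-out cross-validated skill of the linear model
        preds = []
        for i in range(21):
            mask = np.arange(21) != i
            slope, intercept = np.polyfit(nino34[mask], rain[mask], 1)
            preds.append(slope * nino34[i] + intercept)
        skill = np.corrcoef(preds, rain)[0, 1]
        print(f"leave-one-out correlation skill: {skill:.2f}")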

  2. Self-Conscious Shyness: Growth during Toddlerhood, Strong Role of Genetics, and No Prediction from Fearful Shyness.

    Science.gov (United States)

    Eggum-Wilkens, Natalie D; Lemery-Chalfant, Kathryn; Aksan, Nazan; Goldsmith, H Hill

    2015-01-01

    Fearful and self-conscious subtypes of shyness have received little attention in the empirical literature. Study aims included: 1) determining if fearful shyness predicted self-conscious shyness, 2) describing development of self-conscious shyness, and 3) examining genetic and environmental contributions to fearful and self-conscious shyness. Observed self-conscious shyness was examined at 19, 22, 25, and 28 months in same-sex twins (MZ = 102, DZ = 111, missing zygosity = 3 pairs). Self-conscious shyness increased across toddlerhood, but onset was earlier than predicted by theory. Fearful shyness (observed [6 and 12 months] and parents' reports [12 and 22 months]) was not predictive of self-conscious shyness. Independent genetic factors made strong contributions to parent-reported (but not observed) fearful shyness (additive genetic influence = .69 and .72 at 12 and 22 months, respectively) and self-conscious shyness (additive genetic influence = .90 for the growth model intercept). Results encourage future investigation of patterns of change and interrelations in shyness subtypes.

  3. Hydrodynamic modelling of small upland lakes under strong wind forcing

    Science.gov (United States)

    Morales, L.; French, J.; Burningham, H.

    2012-04-01

    Small upland lakes are an important source of water supply. Lakes also provide an important sedimentary archive of environmental and climatic changes and of ecosystem function. Hydrodynamic controls on the transport and distribution of lake sediments, seasonal variations in thermal structure due to solar radiation, precipitation, evaporation and mixing, and the complex vertical and horizontal circulation patterns induced by the action of wind are not very well understood. The work presented here analyses the hydrodynamic motions present in small upland lakes due to circulation and internal-scale waves, and their linkages with the distribution of bottom sediment accumulation in the lake. For this purpose, a 3D hydrodynamic model is calibrated and implemented for Llyn Conwy, a small oligotrophic upland lake in North Wales, UK. The model, based around the FVCOM open source community model code, resolves the Navier-Stokes equations using a 3D unstructured mesh and a finite volume scheme. The model is forced by meteorological boundary conditions. Improvements made to the FVCOM code include a new graphical user interface to pre- and post-process the model input and results, respectively, and a JONSWAP wave model to include the effects of wind-wave induced bottom stresses on lake sediment dynamics. Modelled internal-scale waves are validated against summer temperature measurements acquired from a thermistor chain deployed at the deepest part of the lake. Seiche motions were validated using data recorded by high-frequency level sensors around the lake margins, and the velocity field and the circulation patterns were validated using the data recorded by an ADCP and GPS drifters. The model is shown to reproduce the lake hydrodynamics and reveals well-developed seiches at different frequencies superimposed on wind-driven circulation patterns that appear to control the distribution of bottom sediments in this small upland lake.

  4. WHY WE CANNOT PREDICT STRONG EARTHQUAKES IN THE EARTH’S CRUST

    Directory of Open Access Journals (Sweden)

    Iosif L. Gufeld

    2011-01-01

    Full Text Available In the past decade, earthquake disasters have caused numerous fatalities and significant economic losses and have challenged modern civilization. The well-known achievements and growing power of civilization count for little in the face of Nature. The question arises of what hinders the solution of the earthquake prediction problem, while long-term and continuous seismic monitoring systems are in place in many regions of the world. For instance, there was no forecast of the Great Japan Earthquake of March 11, 2011, despite the fact that monitoring conditions for its prediction were unique: its focal zone was 100-200 km away from the monitoring network installed in an area of permanent seismic hazard that is subject to nonstop, long-term seismic monitoring. Lessons should be learned from our common fiasco in forecasting, taking into account research results obtained during the past 50-60 years. It is now evident that we have failed to identify precursors of earthquakes. Prior to earthquake occurrence, the observed local anomalies of various fields reflected other processes that were mistakenly viewed as processes of preparation for large-scale faulting. For many years, geotectonic situations were analyzed on the basis of the physics of the destruction of laboratory specimens, applied to lithospheric conditions. Many researchers realize that such an approach is inaccurate. Nonetheless, persistent attempts are being undertaken, with the application of modern computation, to detect anomalies of various fields that may be interpreted as earthquake precursors. In our opinion, such illusory intentions were smashed by the Great Japan Earthquake (Figure 6). It is also obvious that sufficient attention has not yet been given to fundamental studies of seismic processes. This review presents the authors' opinion concerning the origin of the seismic process and of strong earthquakes, which are part of that process. The authors realize that a wide discussion is

  5. A strong viscous–inviscid interaction model for rotating airfoils

    DEFF Research Database (Denmark)

    Ramos García, Néstor; Sørensen, Jens Nørkær; Shen, Wen Zhong

    2014-01-01

    version, a parametric study on rotational effects induced by the Coriolis and centrifugal forces in the boundary-layer equations shows that the effects of rotation are to decrease the growth of the boundary-layer and delay the onset of separation, hence increasing the lift coefficient slightly while...... the viscous and inviscid parts. The inviscid part is modeled by a 2D panel method, and the viscous part is modeled by solving the integral form of the laminar and turbulent boundary-layer equations with extension for 3D rotational effects. Laminar-to-turbulent transition is either forced by employing...

  6. An Efficient Algorithm for Modelling Duration in Hidden Markov Models, with a Dramatic Application

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sloth, Jakob

    2008-01-01

    For many years, the hidden Markov model (HMM) has been one of the most popular tools for analysing sequential data. One frequently used special case is the left-right model, in which the order of the hidden states is known. If knowledge of the duration of a state is available it is not possible...... to represent it explicitly with an HMM. Methods for modelling duration with HMMs do exist (Rabiner in Proc. IEEE 77(2):257-286, 1989), but they come at the price of increased computational complexity. Here we present an efficient and robust algorithm for modelling duration in HMMs, and this algorithm...
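
    For reference, the likelihood computation that any such algorithm builds on is the scaled forward pass. A self-contained numpy sketch for a left-right HMM (all parameters are illustrative):

        import numpy as np

        # left-right transition matrix: states may only persist or advance
        A = np.array([[0.7, 0.3, 0.0],
                      [0.0, 0.8, 0.2],
                      [0.0, 0.0, 1.0]])
        B = np.array([[0.9, 0.1],       # per-state emission probabilities
                      [0.2, 0.8],
                      [0.6, 0.4]])
        pi = np.array([1.0, 0.0, 0.0])  # a left-right model starts in state 0

        def forward(obs):
            """Log-likelihood of an observation sequence (scaled forward pass)."""
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum()); alpha /= alpha.sum()
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                loglik += np.log(alpha.sum()); alpha /= alpha.sum()
            return loglik

        print(forward([0, 0, 1, 1, 0, 0]))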

  7. Variational Boussinesq model for strongly nonlinear dispersive waves

    NARCIS (Netherlands)

    Lawrence, C.; Adytia, D.; van Groesen, E.

    2018-01-01

    For wave tank, coastal and oceanic applications, a fully nonlinear Variational Boussinesq model with optimized dispersion is derived and a simple Finite Element implementation is described. Improving a previous weakly nonlinear version, high waves over flat and varying bottom are shown to be

  8. Coexistence of two species in a strongly coupled cooperating model

    DEFF Research Database (Denmark)

    Pedersen, Michael

    In this paper, the cooperating two-species Lotka-Volterra model is discussed. We study the existence of solutions to an elliptic system with homogeneous Dirichlet boundary conditions. Our results show that this problem possesses at least one coexistence state if the birth rates are big and self...
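
    For intuition, the kinetic counterpart of the elliptic system can be integrated directly; with weak cooperation the populations settle at a coexistence equilibrium. A sketch with arbitrary coefficients (boundedness requires c1*c2 < b1*b2):

        import numpy as np
        from scipy.integrate import solve_ivp

        a1, a2 = 1.0, 1.0     # birth rates
        b1, b2 = 1.0, 1.0     # self-limitation
        c1, c2 = 0.5, 0.5     # cooperation strength

        def lotka_volterra(t, u):
            x, y = u
            return [x * (a1 - b1 * x + c1 * y),
                    y * (a2 - b2 * y + c2 * x)]

        sol = solve_ivp(lotka_volterra, (0, 50), [0.1, 0.2])
        print("state at t = 50:", sol.y[:, -1])  # tends to the coexistence state (2, 2)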

  9. Modeling strong-field above-threshold ionization

    International Nuclear Information System (INIS)

    Sundaram, B.; Armstrong, L. Jr.

    1990-01-01

    Above-threshold ionization (ATI) by intense, short-pulse lasers is studied numerically, using the stretched hydrogen atom Hamiltonian. Within our model system, we isolate several mechanisms that contribute to the ATI process. These mechanisms, which involve both excited bound states and continuum states, all invoke intermediate, off-energy-shell transitions. In particular, the importance of excited bound states and off-energy-shell bound-free processes to the ionization mechanism is shown to relate to a simple physical criterion. These processes point to important differences between the interpretation of ionization characteristics for short pulses and that for longer pulses. Our analysis concludes that although components of ATI admit of simple, few-state modeling, the ultimate synthesis points to a highly complex mechanism.

  10. Modelling of strongly coupled particle growth and aggregation

    International Nuclear Information System (INIS)

    Gruy, F; Touboul, E

    2013-01-01

    The mathematical modelling of the dynamics of particle suspensions is based on the population balance equation (PBE). The PBE is an integro-differential equation for the population density, which is a function of time t, space coordinates and internal parameters. Usually, the particle is characterized by a single parameter, e.g. the matter volume v. The PBE consists of several terms, for instance the growth rate and the aggregation rate, so the growth rate is a function of v and t. In classical modelling, growth and aggregation are considered independently, i.e. they are not coupled. However, applications occur in which growth and aggregation are coupled, i.e. the change of the particle volume with time depends on its initial value v0, which in turn is related to an aggregation event. As a consequence, the dynamics of the suspension does not obey the classical Von Smoluchowski equation. This paper revisits this problem by proposing a new model using a bivariate PBE (with two internal variables: v and v0) and by solving the PBE by means of a numerical method and Monte Carlo simulations. This is applied to a physicochemical system with a simple growth law and a constant aggregation kernel.
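
    For contrast with the coupled model proposed above, the classical uncoupled aggregation limit is the discrete Smoluchowski system, which is easy to integrate for a constant kernel. A truncated sketch with illustrative values:

        import numpy as np
        from scipy.integrate import solve_ivp

        K = 1.0        # constant aggregation kernel
        n_max = 50     # truncate at clusters of 50 primary particles

        def smoluchowski(t, n):
            dn = np.zeros_like(n)
            for k in range(n_max):
                # gain: pairs (i+1) + (k-i) merging into size k+1; loss: any merger
                gain = 0.5 * K * sum(n[i] * n[k - 1 - i] for i in range(k))
                loss = K * n[k] * n.sum()
                dn[k] = gain - loss
            return dn

        n0 = np.zeros(n_max); n0[0] = 1.0   # monodisperse initial condition
        sol = solve_ivp(smoluchowski, (0, 5), n0, t_eval=[0, 1, 5])
        print("total number density over time:", sol.y.sum(axis=0))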

  11. Note on the hydrodynamic description of thin nematic films: Strong anchoring model

    KAUST Repository

    Lin, Te-Sheng

    2013-01-01

    We discuss the long-wave hydrodynamic model for a thin film of nematic liquid crystal in the limit of strong anchoring at the free surface and at the substrate. We rigorously clarify how the elastic energy enters the evolution equation for the film thickness in order to provide a solid basis for further investigation: several conflicting models exist in the literature that predict qualitatively different behaviour. We consolidate the various approaches and show that the long-wave model derived through an asymptotic expansion of the full nemato-hydrodynamic equations with consistent boundary conditions agrees with the model one obtains by employing a thermodynamically motivated gradient dynamics formulation based on an underlying free energy functional. As a result, we find that in the case of strong anchoring the elastic distortion energy is always stabilising. To support the discussion in the main part of the paper, an appendix gives the full derivation of the evolution equation for the film thickness via asymptotic expansion. © 2013 AIP Publishing LLC.

  12. The strong interactions beyond the standard model of particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Bergner, Georg [Muenster Univ. (Germany). Inst. for Theoretical Physics

    2016-11-01

    SuperMUC is one of the most convenient high-performance machines for our project, since it offers high performance and flexibility across different applications. This is of particular importance for investigations of new theories, where on the one hand the parameters and systematic uncertainties have to be estimated in smaller simulations, and on the other hand a large computational performance is needed for the estimation of the scale at zero temperature. Our project is only a first investigation of new physics beyond the standard model of particle physics, and we hope to proceed with our studies towards more involved Technicolour candidates, supersymmetric QCD, and extended supersymmetry.

  13. Artificial Neural Network Model for Predicting Compressive

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
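
    A minimal sketch of such a back-propagation network using scikit-learn's MLPRegressor. The feature set mirrors the inputs named above (mix proportions, MAS, slump), but the column choices, value ranges, and the synthetic strength law are hypothetical; the paper trains on data sets gathered from the literature.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # columns: cement, water, fine agg., coarse agg. (kg/m3), MAS (mm), slump (mm)
      X = rng.uniform([250, 140, 600, 900, 10, 25],
                      [500, 220, 900, 1200, 40, 150], (200, 6))
      w_c = X[:, 1] / X[:, 0]  # water/cement ratio, the dominant factor above
      y = 90 * np.exp(-2.5 * w_c) + rng.normal(0, 2, 200)  # toy 28-day strength (MPa)

      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0))
      model.fit(X, y)
      print("predicted strength (MPa):", model.predict(X[:3]))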

  14. Evaluating predictive models of software quality

    International Nuclear Information System (INIS)

    Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D

    2014-01-01

    Applications from the High Energy Physics scientific community are constantly growing and are implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this: stability and predictability are of paramount importance, and a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, and to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two predictive quality models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and we conclude by suggesting directions for further studies.

  15. Modelling and prediction of non-stationary optical turbulence behaviour

    NARCIS (Netherlands)

    Doelman, N.J.; Osborn, J.

    2016-01-01

    There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument

  16. Modeling loss and backscattering in a photonic-bandgap fiber using strong perturbation

    Science.gov (United States)

    Zamani Aghaie, Kiarash; Digonnet, Michel J. F.; Fan, Shanhui

    2013-02-01

    We use coupled-mode theory with strong perturbation to model the loss and backscattering coefficients of a commercial hollow-core fiber (NKT Photonics' HC-1550-02 fiber) induced by the frozen-in longitudinal perturbations of the fiber cross section. Strong perturbation is used, for the first time to the best of our knowledge, because the large difference between the refractive indices of the two fiber materials (silica and air) makes conventional weak-perturbation theory less accurate. We first study the loss and backscattering using the mathematical description of conventional surface-capillary waves (SCWs). This model implicitly assumes that the mechanical waves on the core wall of a PBF have the same power spectral density (PSD) as the waves that develop on an infinitely thick cylindrical tube with the same diameter as the PBF core. The loss and backscattering coefficients predicted with this thick-wall SCW roughness are 0.5 dB/km and 1.1×10^-10 mm^-1, respectively. These values are more than one order of magnitude smaller than the measured values (20-30 dB/km and ~1.5×10^-9 mm^-1, respectively). This result suggests that the thick-wall SCW PSD is not representative of the roughness of our fiber. We found that this discrepancy occurs at least in part because the effect of the finite thickness of the silica membranes (only ~120 nm) is neglected. We present a new expression for the PSD that takes this finite thickness into account and demonstrate that the finite thickness substantially increases the roughness. The loss and backscattering coefficients predicted with this thin-film SCW PSD are 30 dB/km and 1.3×10^-9 mm^-1, both close to the measured values. We also show that the thin-film SCW PSD accurately predicts the roughness PSD measured by others in a solid-core photonic-crystal fiber.

  17. Departures from predicted type II behavior in dirty strong-coupling superconductors

    International Nuclear Information System (INIS)

    Park, J.C.; Neighbor, J.E.; Shiffman, C.A.

    1976-01-01

    Calorimetric measurements of the Ginzburg-Landau parameters for Pb-Sn and Pb-Bi alloys show good agreement with the calculations of Rainer and Bergmann for κ_1(t)/κ_1(1). However, the calculations of Rainer and Usadel for κ_2(t)/κ_2(1) substantially underestimate the enhancements due to strong coupling. (Auth.)

  18. Exchange and spin-fluctuation superconducting pairing in the strong correlation limit of the Hubbard model

    International Nuclear Information System (INIS)

    Plakida, N. M.; Anton, L.; Adam, S.; Adam, Gh. (Department of Theoretical Physics, Horia Hulubei National Institute for Physics and Nuclear Engineering, PO Box MG-6, RO-76900 Bucharest-Magurele, Romania)

    2001-01-01

    A microscopic theory of superconductivity in the two-band singlet-hole Hubbard model, in the strong coupling limit in a paramagnetic state, is developed. The model Hamiltonian is obtained by projecting the p-d model onto an asymmetric Hubbard model with the lower Hubbard subband occupied by one-hole Cu d-like states and the upper Hubbard subband occupied by two-hole p-d singlet states. The model requires only two microscopic parameters, the p-d hybridization parameter t and the charge-transfer gap Δ. It was previously shown to secure an appropriate description of the normal-state properties of the high-T_c cuprates. To treat the strong correlations rigorously, the Hubbard operator technique within the projection method for the Green function is used. The Dyson equation is derived. In the molecular field approximation, d-wave superconducting pairing of conventional hole (electron) pairs in one Hubbard subband is found, which is mediated by the exchange interaction given by the interband hopping, J_ij = 4 t_ij^2 / Δ. The normal and anomalous components of the self-energy matrix are calculated in the self-consistent Born approximation for the electron-spin-fluctuation scattering mediated by the kinematic interaction of second order in the intraband hopping. The derived numerical and analytical solutions predict the occurrence of singlet d_{x^2-y^2}-wave pairing both in the d-hole and singlet Hubbard subbands. The gap functions and T_c are calculated for different hole concentrations. The exchange interaction is shown to be the most important pairing interaction in the Hubbard model in the strong correlation limit, while the spin-fluctuation coupling results only in a moderate enhancement of T_c. The smaller weight of the latter comes from two specific features: its vanishing inside the Brillouin zone (BZ) along the lines |k_x| + |k_y| = π pointing towards the hot spots, and the existence of a small energy shell within which the pairing is effective. By

  19. Serum MHPG Strongly Predicts Conversion to Alzheimer's Disease in Behaviorally Characterized Subjects with Down Syndrome

    NARCIS (Netherlands)

    Dekker, Alain D.; Coppus, Antonia M. W.; Vermeiren, Yannick; Aerts, Tony; van Duijn, Cornelia M.; Kremer, Berry P.; Naude, Pieter J. W.; Van Dam, Debby; De Deyn, Peter P.

    2015-01-01

    Background: Down syndrome (DS) is the most prevalent genetic cause of intellectual disability. Early-onset Alzheimer's disease (AD) frequently develops in DS and is characterized by progressive memory loss and behavioral and psychological signs and symptoms of dementia (BPSD). Predicting and

  20. Strong homing does not predict high site fidelity in juvenile reef fishes

    Science.gov (United States)

    Streit, Robert P.; Bellwood, David R.

    2018-03-01

    After being displaced, juvenile reef fishes are able to return home over large distances. This strong homing behaviour is extraordinary and may allow insights into the longer-term spatial ecology of fish communities. For example, it appears intuitive that strong homing behaviour should be indicative of long-term site fidelity. However, this connection has rarely been tested. We quantified the site fidelity of juvenile fishes of four species after they returned home following displacement. Two species, parrotfishes and Pomacentrus moluccensis, showed significantly reduced site fidelity after returning home. On average, they disappeared from their home sites almost 3 d earlier than expected. Mortality and competitive exclusion do not seem to be the main reasons for their disappearance. Rather, we suggest an increased propensity to relocate after encountering alternative reef locations while homing. It appears that some juvenile fishes may have a higher innate spatial flexibility than their strict homing drive suggests.

  1. Strong dynamics in a classically scale invariant extension of the standard model with a flat potential

    Science.gov (United States)

    Haba, Naoyuki; Yamada, Toshifumi

    2017-06-01

    We investigate the scenario where the standard model is extended with classical scale invariance, which is broken by chiral symmetry breaking and confinement in a new strongly coupled gauge theory that resembles QCD. The standard model Higgs field emerges as a result of the mixing of a scalar meson in the new strong dynamics and a massless elementary scalar field. The mass and scalar decay constant of that scalar meson, which are generated dynamically in the new gauge theory, give rise to the Higgs field mass term, automatically possessing the correct negative sign by the bosonic seesaw mechanism. Using analogy with QCD, we evaluate the dynamical scale of the new gauge theory and further make quantitative predictions for light pseudo-Nambu-Goldstone bosons associated with the spontaneous breaking of axial symmetry along chiral symmetry breaking in the new gauge theory. A prominent consequence of the scenario is that there should be a standard model gauge singlet pseudo-Nambu-Goldstone boson with mass below 220 GeV, which couples to two electroweak gauge bosons through the Wess-Zumino-Witten term, whose strength is thus determined by the dynamical scale of the new gauge theory. Other pseudo-Nambu-Goldstone bosons, charged under the electroweak gauge groups, also appear. Concerning the theoretical aspects, it is shown that the scalar quartic coupling can vanish at the Planck scale with the top quark pole mass as large as 172.5 GeV, realizing the flatland scenario without being in tension with the current experimental data.

  2. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  3. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore

  4. Dst Index in the 2008 GEM Modeling Challenge - Model Performance for Moderate and Strong Magnetic Storms

    Science.gov (United States)

    Rastaetter, Lutz; Kuznetsova, Maria; Hesse, Michael; Chulaki, Anna; Pulkkinen, Antti; Ridley, Aaron J.; Gombosi, Tamas; Vapirev, Alexander; Raeder, Joachim; Wiltberger, Michael James

    2010-01-01

    The GEM 2008 modeling challenge efforts are expanding beyond comparing in-situ measurements in the magnetosphere and ionosphere to include the computation of indices to be compared. The Dst index measures the largest deviations of the horizontal magnetic field at 4 equatorial magnetometers from the quiet-time background field and is commonly used to track the strength of the magnetic disturbance of the magnetosphere during storms. Models can calculate a proxy Dst index in various ways, including using the Dessler-Parker-Sckopke relation and the energy of the ring current, and Biot-Savart integration of electric currents in the magnetosphere. The GEM modeling challenge investigates 4 space weather events, and we compare models available at CCMC against each other and against the observed values of Dst. Models used include SWMF/BATSRUS, OpenGGCM, LFM, GUMICS (3D magnetosphere MHD models), Fok-RC, CRCM, RAM-SCB (kinetic drift models of the ring current), WINDMI (magnetosphere-ionosphere electric circuit model), and predictions based on an impulse response function (IRF) model and analytic coupling functions with inputs of solar wind data. In addition to the analysis of model-observation comparisons, we look at the way Dst is computed in global magnetosphere models. The default value of Dst computed by the SWMF model uses Bz at the Earth's center. In addition to this, we present results obtained at different locations on the Earth's surface. We choose equatorial locations at local noon, dusk (18:00 hours), midnight, and dawn (6:00 hours). The different virtual observatory locations reveal the variation around the Earth-centered Dst value resulting from the distribution of electric currents in the magnetosphere during different phases of a storm.

  5. Atmospheric CO2 observations and models suggest strong carbon uptake by forests in New Zealand

    Science.gov (United States)

    Steinkamp, Kay; Mikaloff Fletcher, Sara E.; Brailsford, Gordon; Smale, Dan; Moore, Stuart; Keller, Elizabeth D.; Baisden, W. Troy; Mukai, Hitoshi; Stephens, Britton B.

    2017-01-01

    A regional atmospheric inversion method has been developed to determine the spatial and temporal distribution of CO2 sinks and sources across New Zealand for 2011-2013. This approach infers net air-sea and air-land CO2 fluxes from measurement records, using back-trajectory simulations from the Numerical Atmospheric dispersion Modelling Environment (NAME) Lagrangian dispersion model, driven by meteorology from the New Zealand Limited Area Model (NZLAM) weather prediction model. The inversion uses in situ measurements from two fixed sites, Baring Head on the southern tip of New Zealand's North Island (41.408° S, 174.871° E) and Lauder in the central South Island (45.038° S, 169.684° E), and shipboard data from monthly cruises between Japan, New Zealand, and Australia. A range of scenarios is used to assess the sensitivity of the inversion method to underlying assumptions and to ensure robustness of the results. The results indicate a strong seasonal cycle in terrestrial land fluxes from the South Island of New Zealand, especially in western regions covered by indigenous forest, suggesting higher photosynthetic and respiratory activity than is evident in the current a priori land process model. On the annual scale, the terrestrial biosphere in New Zealand is estimated to be a net CO2 sink, removing 98 (±37) Tg CO2 yr^-1 from the atmosphere on average during 2011-2013. This sink is much larger than the reported 27 Tg CO2 yr^-1 from the national inventory for the same time period. The difference can be partially reconciled when factors related to forest and agricultural management and exports, fossil fuel emission estimates, hydrologic fluxes, and soil carbon change are considered, but some differences are likely to remain. Baseline uncertainty, model transport uncertainty, and limited sensitivity to the northern half of the North Island are the main contributors to flux uncertainty.

  6. South African seasonal rainfall prediction performance by a coupled ocean-atmosphere model

    CSIR Research Space (South Africa)

    Landman, WA

    2010-12-01

    Evidence is presented that coupled ocean-atmosphere models can already outscore computationally less expensive atmospheric models. However, if the atmospheric models are forced with highly skillful SST predictions, they may still be a very strong...

  7. North Atlantic climate model bias influence on multiyear predictability

    Science.gov (United States)

    Wu, Y.; Park, T.; Park, W.; Latif, M.

    2018-01-01

    The influences of North Atlantic biases on the multiyear predictability of unforced surface air temperature (SAT) variability are examined in the Kiel Climate Model (KCM). When a freshwater flux correction is applied over the North Atlantic, which strongly alleviates both North Atlantic sea surface salinity (SSS) and sea surface temperature (SST) biases, the corrected integration exhibits significantly enhanced multiyear SAT predictability in the North Atlantic sector in comparison to the uncorrected one. The enhanced SAT predictability in the corrected integration is due to a stronger and more variable Atlantic Meridional Overturning Circulation (AMOC) and its enhanced influence on North Atlantic SST. Results obtained from preindustrial control integrations of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) support the findings obtained from the KCM: models with large North Atlantic biases tend to have a weak AMOC influence on SAT and exhibit a smaller SAT predictability over the North Atlantic sector.

  8. Staying Power of Churn Prediction Models

    NARCIS (Netherlands)

    Risselada, Hans; Verhoef, Peter C.; Bijmolt, Tammo H. A.

    In this paper, we study the staying power of various churn prediction models. Staying power is defined as the predictive performance of a model in a number of periods after the estimation period. We examine two methods, logit models and classification trees, both with and without applying a bagging

  9. The influence of fragmentation models on the determination of the strong coupling constant in e+e- annihilation into hadrons

    International Nuclear Information System (INIS)

    Behrend, H.J.; Chen, C.; Fenner, H.; Schachter, M.J.; Schroeder, V.; Sindt, H.; D'Agostini, G.; Apel, W.D.; Banerjee, S.; Bodenkamp, J.; Chrobaczek, D.; Engler, J.; Fluegge, G.; Fries, D.C.; Fues, W.; Gamerdinger, K.; Hopp, G.; Kuester, H.; Mueller, H.; Randoll, H.; Schmidt, G.; Schneider, H.; Boer, W. de; Buschhorn, G.; Grindhammer, G.; Grosse-Wiesmann, P.; Gunderson, B.; Kiesling, C.; Kotthaus, R.; Kruse, U.; Lierl, H.; Lueers, D.; Oberlack, H.; Schacht, P.; Colas, P.; Cordier, A.; Davier, M.; Fournier, D.; Grivaz, J.F.; Haissinski, J.; Journe, V.; Klarsfeld, A.; Laplanche, F.; Le Diberder, F.; Mallik, U.; Veillet, J.J.; Field, J.H.; George, R.; Goldberg, M.; Grossetete, B.; Hamon, O.; Kapusta, F.; Kovacs, F.; London, G.; Poggioli, L.; Rivoal, M.; Aleksan, R.; Bouchez, J.; Carnesecchi, G.; Cozzika, G.; Ducros, Y.; Gaidot, A.; Jadach, S.; Lavagne, Y.; Pamela, J.; Pansart, J.P.; Pierre, F.

    1983-01-01

    Hadronic events obtained with the CELLO detector at PETRA were compared with first-order QCD predictions using two different models for the fragmentation of quarks and gluons, the Hoyer model and the Lund model. Both models are in reasonable agreement with the data, although they do not completely reproduce the details of many distributions. Several methods have been applied to determine the strong coupling constant α_s. Although within one model the value of α_s varies by 20% among the different methods, the values determined using the Lund model are 30% or more larger (depending on the method used) than the values determined with the Hoyer model. Our results using the Hoyer model are in agreement with previous results based on this approach. (orig.)

  10. Diabetic Retinopathy Is Strongly Predictive of Cardiovascular Autonomic Neuropathy in Type 2 Diabetes.

    Science.gov (United States)

    Huang, Chih-Cheng; Lee, Jong-Jer; Lin, Tsu-Kung; Tsai, Nai-Wen; Huang, Chi-Ren; Chen, Shu-Fang; Lu, Cheng-Hsien; Liu, Rue-Tsuan

    2016-01-01

    A well-established, comprehensive, and simple test battery was used here to re-evaluate risk factors for cardiovascular autonomic neuropathy (CAN) in type 2 diabetes. One hundred and seventy-four patients with type 2 diabetes were evaluated through the methods of deep breathing and Valsalva maneuver for correlation with factors that might influence the presence and severity of CAN. The Composite Autonomic Scoring Scale (CASS) was used to grade the severity of autonomic impairment, and CAN was defined as a CASS score ≥2. Results showed that nephropathy, duration of diabetes, blood pressure, uric acid, and the presence of retinopathy and metabolic syndrome significantly correlated with the CASS score. Age may not be a risk factor for diabetic CAN. However, the effects of diabetes on CAN are more prominent in younger patients than in older ones. Diabetic retinopathy is the most significant risk factor predictive of the presence of CAN in patients with type 2 diabetes.

  11. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  12. Predicting the biological variability of environmental rhythms: weak or strong anticipation for sensorimotor synchronization?

    Science.gov (United States)

    Torre, Kjerstin; Varlet, Manuel; Marmelat, Vivien

    2013-12-01

    The internal processes involved in synchronizing our movements with environmental stimuli have traditionally been addressed using regular metronomic sequences. Regarding real-life environments, however, biological rhythms are known to have intrinsic variability, ubiquitously characterized as fractal long-range correlations. In our research we thus investigate to what extent the synchronization processes drawn from regular metronome paradigms can be generalized to other (biologically) variable rhythms. Participants performed synchronized finger tapping under five conditions of long-range and/or short-range correlated, randomly variable, and regular auditory sequences. Combining experimental data analysis and numerical simulation, we found that synchronizing with biologically variable rhythms involves the same internal processes as with other variable rhythms (whether totally random or comprising lawful regularities), but different from those involved with a regular metronome. This challenges both the generalizability of conclusions drawn from regular-metronome paradigms, and recent research assuming that biologically variable rhythms may trigger specific strong anticipatory processes to achieve synchronization. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Acute post cessation smoking. A strong predictive factor for metabolic syndrome among adult Saudis

    International Nuclear Information System (INIS)

    AlDaghri, Nasser M.

    2009-01-01

    To determine the influence of tobacco exposure on the development of metabolic syndrome (MS) in the adult Saudi population. Six hundred and sixty-four adults (305 males and 359 females) aged 25-70 years were included in this cross-sectional study conducted at the King Abdul Aziz University Hospital between June 2006 and May 2007. We classified the participants into non-smokers, smokers, and ex-smokers (defined as complete cessation for 1-2 years). All subjects were screened for the presence of MS using the modified American Heart Association/National Heart, Lung and Blood Institute (AHA/NHLBI), International Diabetes Federation (IDF), and World Health Organization (WHO) definitions. Metabolic syndrome was highest among ex-smokers regardless of the definition used. Compared to non-smokers, ex-smokers had a more than twofold relative risk of harboring MS (RR 2.23, 95% CI 1.06-4.73; RR 2.78, 95% CI 1.57-4.92; p=0.009). Acute post-cessation smoking is a strong predictor of MS among male and female Arabs. Smoking cessation programs should include a disciplined lifestyle and dietary intervention to counteract the MS-augmenting side-effect of smoking cessation. (author)

  14. Lifetime of dynamic heterogeneity in strong and fragile kinetically constrained spin models

    International Nuclear Information System (INIS)

    Leonard, Sebastien; Berthier, Ludovic

    2005-01-01

    Kinetically constrained spin models are schematic coarse-grained models for the glass transition which represent an efficient theoretical tool to study detailed spatio-temporal aspects of dynamic heterogeneity in supercooled liquids. Here, we study how spatially correlated dynamic domains evolve with time and compare our results to various experimental and numerical investigations. We find that strong and fragile models yield different results. In particular, the lifetime of dynamic heterogeneity remains constant and roughly equal to the alpha relaxation time in strong models, while it increases more rapidly in fragile models when the glass transition is approached.

  15. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models through calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  16. Predictive Model Assessment for Count Data

    National Research Council Canada - National Science Library

    Czado, Claudia; Gneiting, Tilmann; Held, Leonhard

    2007-01-01

    .... In case studies, we critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. Key words: Calibration...

  17. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    … that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed deterministic, able to predict the future perfectly. A more realistic approach would be to allow for randomness in the model, due to e.g. the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs) for modeling and forecasting. It is argued that this gives models and predictions which better reflect reality. The SDE approach also offers a more adequate framework for modeling and a number of efficient tools for model building. A software package (CTSM-R) for SDE-based modeling is briefly described.
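
    A minimal Euler-Maruyama sketch of the contrast drawn above between a deterministic ODE and its SDE counterpart. The toy linear model and its parameters are hypothetical; the paper itself works with grey-box models fitted in the R package CTSM-R.

      import numpy as np

      a, sigma = 0.5, 0.3  # drift rate and diffusion strength (assumed values)
      dt, n = 0.01, 1000
      rng = np.random.default_rng(1)

      x_ode = np.empty(n)
      x_sde = np.empty(n)
      x_ode[0] = x_sde[0] = 1.0
      for k in range(n - 1):
          # ODE: dx/dt = -a x, one fixed trajectory
          x_ode[k + 1] = x_ode[k] - a * x_ode[k] * dt
          # SDE: dx = -a x dt + sigma dW, system noise enters every step
          x_sde[k + 1] = (x_sde[k] - a * x_sde[k] * dt
                          + sigma * np.sqrt(dt) * rng.standard_normal())

      # the SDE yields a distribution of paths, so state predictions carry
      # uncertainty from system noise rather than only from measurement error
      print(x_ode[-1], x_sde[-1])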

  18. Predictive models for arteriovenous fistula maturation.

    Science.gov (United States)

    Al Shakarchi, Julien; McGrogan, Damian; Van der Veer, Sabine; Sperrin, Matthew; Inston, Nicholas

    2016-05-07

    Haemodialysis (HD) is a lifeline therapy for patients with end-stage renal disease (ESRD). A critical factor in the survival of renal dialysis patients is the surgical creation of vascular access, and international guidelines recommend arteriovenous fistulas (AVF) as the gold standard of vascular access for haemodialysis. Despite this, AVFs have been associated with high failure rates. Although risk factors for AVF failure have been identified, their utility for predicting AVF failure through predictive models remains unclear. The objectives of this review are to systematically and critically assess the methodology and reporting of studies developing prognostic predictive models for AVF outcomes, and to assess their suitability for clinical practice. Electronic databases were searched for studies reporting prognostic predictive models for AVF outcomes. Dual review was conducted to identify studies that reported on the development or validation of a model constructed to predict AVF outcome following creation. Data were extracted on study characteristics, risk predictors, statistical methodology, model type, and validation process. We included four different studies reporting five different predictive models. The parameters common to all scoring systems were age and cardiovascular disease. This review has found a small number of predictive models for vascular access. The disparity between the studies limits the development of a unified predictive model.

  19. Model Predictive Control Fundamentals | Orukpe | Nigerian Journal ...

    African Journals Online (AJOL)

  19. Model Predictive Control Fundamentals

  20. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    In this work, a new model predictive controller is developed that handles unreachable setpoints better than traditional model predictive control methods. The new controller induces an interesting fast/slow asymmetry in the tracking response of the system. Nominal asymptotic stability of the optim...

  1. Group Targets Tracking Using Multiple Models GGIW-CPHD Based on Best-Fitting Gaussian Approximation and Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Yun Wang

    2016-01-01

    A Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm has commonly been used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the shortcoming that the tracking error of the GGIW-CPHD algorithm increases when the group targets maneuver. The best-fitting Gaussian approximation method is used to implement the fusion of the multiple models, with the strong tracking filter correcting the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are derived to update the probabilities of the multiple tracking models. The simulation results show that the proposed tracking algorithm, MM-GGIW-CPHD, can effectively deal with the combination/spawning of groups, and the tracking error of group targets in the maneuvering stage is decreased.
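
    A minimal scalar sketch of the strong-tracking idea invoked above: a fading factor lambda >= 1 inflates the predicted covariance when the innovations grow, keeping the filter gain responsive during maneuvers. The 1-D model, the forgetting factor, and the simplified fading-factor formula are illustrative assumptions, not the MM-GGIW-CPHD equations.

      F, H, Q, R = 1.0, 1.0, 0.01, 0.25  # scalar state/measurement model (assumed)
      rho = 0.95                          # forgetting factor for innovation averaging

      def stf_step(x, P, V, z):
          innov = z - H * F * x
          V = innov**2 if V is None else (rho * V + innov**2) / (1 + rho)
          N = V - H * Q * H - R            # innovation power to be matched
          M = H * F * P * F * H            # innovation power from the propagated state
          lam = max(1.0, N / (M + 1e-12))  # fading factor (1 = standard Kalman)
          P_pred = lam * F * P * F + Q
          S = H * P_pred * H + R
          K = P_pred * H / S
          return F * x + K * innov, (1 - K * H) * P_pred, V

      x, P, V = 0.0, 1.0, None
      for k, z in enumerate([0.1, 0.0, 0.2, 2.0, 4.1, 6.2]):  # maneuver begins at k=3
          x, P, V = stf_step(x, P, V, z)
          print(f"step {k}: estimate {x:.2f}")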

  2. Clinical Prediction Models for Cardiovascular Disease: Tufts Predictive Analytics and Comparative Effectiveness Clinical Prediction Model Database.

    Science.gov (United States)

    Wessler, Benjamin S; Lai Yh, Lana; Kramer, Whitney; Cangelosi, Michael; Raman, Gowri; Lutz, Jennifer S; Kent, David M

    2015-07-01

    Clinical prediction models (CPMs) estimate the probability of clinical outcomes and hold the potential to improve decision making and individualize care. For patients with cardiovascular disease, there are numerous CPMs available, although the extent of this literature is not well described. We conducted a systematic review for articles containing CPMs for cardiovascular disease published between January 1990 and May 2012. Cardiovascular disease includes coronary heart disease, heart failure, arrhythmias, stroke, venous thromboembolism, and peripheral vascular disease. We created a novel database and characterized CPMs based on the stage of development, population under study, performance, covariates, and predicted outcomes. There are 796 models included in this database. The number of CPMs published each year is increasing steadily over time. Seven hundred seventeen (90%) are de novo CPMs, 21 (3%) are CPM recalibrations, and 58 (7%) are CPM adaptations. This database contains CPMs for 31 index conditions, including 215 CPMs for patients with coronary artery disease, 168 CPMs for population samples, and 79 models for patients with heart failure. There are 77 distinct index/outcome pairings. Of the de novo models in this database, 450 (63%) report a c-statistic and 259 (36%) report some information on calibration. There is an abundance of CPMs available for a wide assortment of cardiovascular disease conditions, with substantial redundancy in the literature. The comparative performance of these models, the consistency of effects and risk estimates across models, and the actual and potential clinical impact of this body of literature are poorly understood. © 2015 American Heart Association, Inc.
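
    Since the database characterizes CPMs largely by their reported c-statistics, a minimal sketch of how a c-statistic (identical to the ROC area under the curve for binary outcomes) is computed from predicted risks may be useful; the example data below are made up.

      from itertools import product

      def c_statistic(risks, outcomes):
          """Probability that a randomly chosen case with the event receives a
          higher predicted risk than one without it; ties count one half."""
          cases = [r for r, y in zip(risks, outcomes) if y == 1]
          controls = [r for r, y in zip(risks, outcomes) if y == 0]
          pairs = [(c > d) + 0.5 * (c == d) for c, d in product(cases, controls)]
          return sum(pairs) / len(pairs)

      print(c_statistic([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0]))  # 0.8333...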

  3. Predictive models for acute kidney injury following cardiac surgery.

    Science.gov (United States)

    Demirjian, Sevag; Schold, Jesse D; Navia, Jose; Mastracci, Tara M; Paganini, Emil P; Yared, Jean-Pierre; Bashour, Charles A

    2012-03-01

    Accurate prediction of cardiac surgery-associated acute kidney injury (AKI) would improve clinical decision making and facilitate timely diagnosis and treatment. The aim of the study was to develop predictive models for cardiac surgery-associated AKI using presurgical and combined pre- and intrasurgical variables. Prospective observational cohort. 25,898 patients who underwent cardiac surgery at Cleveland Clinic in 2000-2008. Presurgical and combined pre- and intrasurgical variables were used to develop predictive models. Dialysis therapy and a composite of doubling of serum creatinine level or dialysis therapy within 2 weeks (or discharge if sooner) after cardiac surgery. Incidences of dialysis therapy and the composite of doubling of serum creatinine level or dialysis therapy were 1.7% and 4.3%, respectively. Kidney function parameters were strong independent predictors in all 4 models. Surgical complexity, reflected by the type of surgery and history of previous cardiac surgery, was a robust predictor in models based on presurgical variables. However, the inclusion of intrasurgical variables accounted for all variance explained by procedure-related information. Models predictive of dialysis therapy showed good calibration and superb discrimination; a combined (pre- and intrasurgical) model performed better than the presurgical model alone (C statistics, 0.910 and 0.875, respectively). Models predictive of the composite end point also had excellent discrimination with both presurgical and combined (pre- and intrasurgical) variables (C statistics, 0.797 and 0.825, respectively). However, the presurgical model predictive of the composite end point showed suboptimal calibration. Validation of the predictive models in other cohorts is required before wide-scale application. We developed and internally validated 4 new models that accurately predict cardiac surgery-associated AKI. These models are based on readily available clinical information and can be used for patient counseling, clinical

  4. Hybrid approaches to physiologic modeling and prediction

    Science.gov (United States)

    Olengü, Nicholas O.; Reifman, Jaques

    2005-05-01

    This paper explores how the accuracy of a first-principles physiological model can be enhanced by integrating data-driven, "black-box" models with the original model to form a "hybrid" model system. Both linear (autoregressive) and nonlinear (neural network) data-driven techniques are separately combined with a first-principles model to predict human body core temperature. Rectal core temperature data from nine volunteers, subject to four 30/10-minute cycles of moderate exercise/rest regimen in both CONTROL and HUMID environmental conditions, are used to develop and test the approach. The results show significant improvements in prediction accuracy, with average improvements of up to 30% for prediction horizons of 20 minutes. The models developed from one subject's data are also used in the prediction of another subject's core temperature. Initial results for this approach for a 20-minute horizon show no significant improvement over the first-principles model by itself.
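
    A minimal sketch of the hybrid idea above: a first-principles model gives the baseline prediction, and a data-driven autoregressive model of its residuals corrects it. The placeholder physics model, the synthetic data, and the AR order are all hypothetical, not the paper's models or measurements.

      import numpy as np

      def first_principles(t):
          # placeholder physics model for core temperature (deg C) during exercise
          return 37.0 + 0.8 * (1 - np.exp(-t / 30.0))

      t = np.arange(0, 160.0)
      rng = np.random.default_rng(2)
      observed = first_principles(t) + 0.3 * np.sin(t / 15.0) + rng.normal(0, 0.02, t.size)

      resid = observed - first_principles(t)
      p = 3  # AR order (assumed)
      # fit AR(p) to the residuals by least squares: r_k ~ sum_i a_i * r_{k-i}
      rows = np.column_stack([resid[p - i - 1: -i - 1] for i in range(p)])
      coef, *_ = np.linalg.lstsq(rows, resid[p:], rcond=None)

      # one-step-ahead hybrid prediction: physics baseline + AR residual forecast
      hybrid = first_principles(t[p:]) + rows @ coef
      print("RMSE physics only:", np.sqrt(np.mean(resid[p:] ** 2)))
      print("RMSE hybrid      :", np.sqrt(np.mean((observed[p:] - hybrid) ** 2)))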

  5. Classical trajectory perspective of atomic ionization in strong laser fields semiclassical modeling

    CERN Document Server

    Liu, Jie

    2014-01-01

    The ionization of atoms and molecules in strong laser fields is an active field in modern physics, with versatile applications in areas such as attosecond physics, X-ray generation, inertial confinement fusion (ICF), and medical science. Classical Trajectory Perspective of Atomic Ionization in Strong Laser Fields covers the basic concepts in this field and discusses many interesting topics using the semiclassical model of classical trajectory ensemble simulation, which is one of the most successful ionization models and has the advantages of a clear picture, feasible computing, and accounting for many exquisite experiments quantitatively. The book also presents many applications of the model to topics such as single ionization, double ionization, neutral atom acceleration, and other timely issues in strong-field physics, and delivers useful messages to readers by presenting the classical trajectory perspective on strong-field atomic ionization. The book is intended for graduate students and researchers...

  6. Evaluating the Predictive Value of Growth Prediction Models

    Science.gov (United States)

    Murphy, Daniel L.; Gaertner, Matthew N.

    2014-01-01

    This study evaluates four growth prediction models--projection, student growth percentile, trajectory, and transition table--commonly used to forecast (and give schools credit for) middle school students' future proficiency. Analyses focused on vertically scaled summative mathematics assessments, and two performance standards conditions (high…

  7. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  8. Development of 3D ferromagnetic model of tokamak core with strong toroidal asymmetry

    DEFF Research Database (Denmark)

    Markovič, Tomáš; Gryaznevich, Mikhail; Ďuran, Ivan

    2015-01-01

    A fully 3D model of a strongly asymmetric tokamak core, based on the boundary integral method approach (i.e. characterization of the ferromagnet by its surface), is presented. The model is benchmarked against measurements on the GOLEM tokamak, as well as compared to a 2D axisymmetric core equivalent for this tokamak, pr

  9. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrödinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates

  10. A Global Model for Bankruptcy Prediction.

    Science.gov (United States)

    Alaminos, David; Del Castillo, Agustín; Fernández, Manuel Ángel

    2016-01-01

    The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy.
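
    A minimal logistic-regression sketch in the spirit of the global model described above. The predictor names, the synthetic firm data, and the coefficients are hypothetical; the study's actual covariates and estimates are not reproduced here.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(3)
      n = 500
      leverage = rng.uniform(0.0, 1.5, n)   # total debt / total assets
      roa = rng.normal(0.05, 0.1, n)        # return on assets
      liquidity = rng.uniform(0.2, 3.0, n)  # current ratio
      X = np.column_stack([leverage, roa, liquidity])
      true_logit = -2.0 + 3.0 * leverage - 8.0 * roa - 0.5 * liquidity
      y = rng.random(n) < 1 / (1 + np.exp(-true_logit))  # simulated bankruptcy labels

      clf = LogisticRegression().fit(X, y)
      print("coefficients:", clf.coef_, "intercept:", clf.intercept_)
      print("P(bankruptcy), highly leveraged firm:",
            clf.predict_proba([[1.2, -0.05, 0.5]])[0, 1])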

  11. Fingerprint verification prediction model in hand dermatitis.

    Science.gov (United States)

    Lee, Chew K; Chang, Choong C; Johor, Asmah; Othman, Puwira; Baba, Roshidah

    2015-07-01

    Hand dermatitis associated fingerprint changes is a significant problem and affects fingerprint verification processes. This study was done to develop a clinically useful prediction model for fingerprint verification in patients with hand dermatitis. A case-control study involving 100 patients with hand dermatitis. All patients verified their thumbprints against their identity card. Registered fingerprints were randomized into a model derivation and model validation group. Predictive model was derived using multiple logistic regression. Validation was done using the goodness-of-fit test. The fingerprint verification prediction model consists of a major criterion (fingerprint dystrophy area of ≥ 25%) and two minor criteria (long horizontal lines and long vertical lines). The presence of the major criterion predicts it will almost always fail verification, while presence of both minor criteria and presence of one minor criterion predict high and low risk of fingerprint verification failure, respectively. When none of the criteria are met, the fingerprint almost always passes the verification. The area under the receiver operating characteristic curve was 0.937, and the goodness-of-fit test showed agreement between the observed and expected number (P = 0.26). The derived fingerprint verification failure prediction model is validated and highly discriminatory in predicting risk of fingerprint verification in patients with hand dermatitis. © 2014 The International Society of Dermatology.
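
    The derived rule lends itself to a direct encoding, sketched below from the criteria as stated in the abstract; the qualitative risk labels paraphrase the text rather than reproduce the paper's table.

      def verification_failure_risk(dystrophy_pct, long_horizontal, long_vertical):
          if dystrophy_pct >= 25:  # major criterion
              return "almost always fails"
          minors = int(long_horizontal) + int(long_vertical)
          if minors == 2:
              return "high risk of failure"
          if minors == 1:
              return "low risk of failure"
          return "almost always passes"

      print(verification_failure_risk(30, False, False))  # almost always fails
      print(verification_failure_risk(10, True, True))    # high risk of failure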

  12. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  13. Model for Thermal Relic Dark Matter of Strongly Interacting Massive Particles.

    Science.gov (United States)

    Hochberg, Yonit; Kuflik, Eric; Murayama, Hitoshi; Volansky, Tomer; Wacker, Jay G

    2015-07-10

    A recent proposal is that dark matter could be a thermal relic of 3→2 scatterings in a strongly coupled hidden sector. We present explicit classes of strongly coupled gauge theories that admit this behavior. These are QCD-like theories of dynamical chiral symmetry breaking, where the pions play the role of dark matter. The number-changing 3→2 process, which sets the dark matter relic abundance, arises from the Wess-Zumino-Witten term. The theories give an explicit relationship between the 3→2 annihilation rate and the 2→2 self-scattering rate, which alters predictions for structure formation. This is a simple calculable realization of the strongly interacting massive-particle mechanism.
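
    Schematically, the 3→2 freeze-out that sets the relic abundance in this mechanism is governed by a Boltzmann equation of the form below (a standard form for number-changing self-interactions, written here for orientation rather than quoted from the paper):

      \frac{dn}{dt} + 3Hn = -\langle \sigma v^2 \rangle_{3\to 2}\,\left( n^3 - n^2 n_{\rm eq} \right),

    so the relic density is controlled by the 3→2 rate, while the accompanying 2→2 self-scattering feeds into structure-formation constraints.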

  14. In silico and cell-based analyses reveal strong divergence between prediction and observation of T-cell-recognized tumor antigen T-cell epitopes.

    Science.gov (United States)

    Schmidt, Julien; Guillaume, Philippe; Dojcinovic, Danijel; Karbach, Julia; Coukos, George; Luescher, Immanuel

    2017-07-14

    Tumor exomes provide comprehensive information on mutated, overexpressed genes and aberrant splicing, which can be exploited for personalized cancer immunotherapy. Of particular interest are mutated tumor antigen T-cell epitopes, because neoepitope-specific T cells often are tumoricidal. However, identifying tumor-specific T-cell epitopes is a major challenge. A widely used strategy relies on initial prediction of human leukocyte antigen-binding peptides by in silico algorithms, but the predictive power of this approach is unclear. Here, we used the human tumor antigen NY-ESO-1 (ESO) and the human leukocyte antigen variant HLA-A*0201 (A2) as a model and predicted in silico the 41 highest-affinity, A2-binding 8-11-mer peptides and assessed their binding, kinetic complex stability, and immunogenicity in A2-transgenic mice and on peripheral blood mononuclear cells from ESO-vaccinated melanoma patients. We found that 19 of the peptides strongly bound to A2, 10 of which formed stable A2-peptide complexes and induced CD8 + T cells in A2-transgenic mice. However, only 5 of the peptides induced cognate T cells in humans; these peptides exhibited strong binding and complex stability and contained multiple large hydrophobic and aromatic amino acids. These results were not predicted by in silico algorithms and provide new clues to improving T-cell epitope identification. In conclusion, our findings indicate that only a small fraction of in silico -predicted A2-binding ESO peptides are immunogenic in humans, namely those that have high peptide-binding strength and complex stability. This observation highlights the need for improving in silico predictions of peptide immunogenicity. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  15. Predictive Model of Systemic Toxicity (SOT)

    Science.gov (United States)

    In an effort to ensure chemical safety in light of regulatory advances away from reliance on animal testing, USEPA and L’Oréal have collaborated to develop a quantitative systemic toxicity prediction model. Prediction of human systemic toxicity has proved difficult and remains a ...

  16. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  17. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  18. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  19. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…

  8. Nonlocal response functions for predicting shear flow of strongly inhomogeneous fluids. I. Sinusoidally driven shear and sinusoidally driven inhomogeneity.

    Science.gov (United States)

    Glavatskiy, Kirill S; Dalton, Benjamin A; Daivis, Peter J; Todd, B D

    2015-06-01

    We present theoretical expressions for the density, strain rate, and shear pressure profiles in strongly inhomogeneous fluids undergoing steady shear flow with periodic boundary conditions. The expressions that we obtain take the form of truncated functional expansions. In these functional expansions, the independent variables are the spatially sinusoidal longitudinal and transverse forces that we apply in nonequilibrium molecular-dynamics simulations. The longitudinal force produces strong density inhomogeneity, and the transverse force produces sinusoidal shear. The functional expansions define new material properties, the response functions, which characterize the system's nonlocal response to the longitudinal force and the transverse force. We find that the sinusoidal longitudinal force, which is mainly responsible for the generation of density inhomogeneity, also modulates the strain rate and shear pressure profiles. Likewise, we find that the sinusoidal transverse force, which is mainly responsible for the generation of sinusoidal shear flow, can also modify the density. These cross couplings between density inhomogeneity and shear flow are also characterized by nonlocal response functions. We conduct nonequilibrium molecular-dynamics simulations to calculate all of the response functions needed to describe the response of the system for weak shear flow in the presence of strong density inhomogeneity up to the third order in the functional expansion. The response functions are then substituted directly into the truncated functional expansions and used to predict the density, velocity, and shear pressure profiles. The results are compared to the directly evaluated profiles from molecular-dynamics simulations, and we find that the predicted profiles from the truncated functional expansions are in excellent agreement with the directly computed density, velocity, and shear pressure profiles.

  9. Nonlocal response functions for predicting shear flow of strongly inhomogeneous fluids. I. Sinusoidally driven shear and sinusoidally driven inhomogeneity

    Science.gov (United States)

    Glavatskiy, Kirill S.; Dalton, Benjamin A.; Daivis, Peter J.; Todd, B. D.

    2015-06-01

    We present theoretical expressions for the density, strain rate, and shear pressure profiles in strongly inhomogeneous fluids undergoing steady shear flow with periodic boundary conditions. The expressions that we obtain take the form of truncated functional expansions. In these functional expansions, the independent variables are the spatially sinusoidal longitudinal and transverse forces that we apply in nonequilibrium molecular-dynamics simulations. The longitudinal force produces strong density inhomogeneity, and the transverse force produces sinusoidal shear. The functional expansions define new material properties, the response functions, which characterize the system's nonlocal response to the longitudinal force and the transverse force. We find that the sinusoidal longitudinal force, which is mainly responsible for the generation of density inhomogeneity, also modulates the strain rate and shear pressure profiles. Likewise, we find that the sinusoidal transverse force, which is mainly responsible for the generation of sinusoidal shear flow, can also modify the density. These cross couplings between density inhomogeneity and shear flow are also characterized by nonlocal response functions. We conduct nonequilibrium molecular-dynamics simulations to calculate all of the response functions needed to describe the response of the system for weak shear flow in the presence of strong density inhomogeneity up to the third order in the functional expansion. The response functions are then substituted directly into the truncated functional expansions and used to predict the density, velocity, and shear pressure profiles. The results are compared to the directly evaluated profiles from molecular-dynamics simulations, and we find that the predicted profiles from the truncated functional expansions are in excellent agreement with the directly computed density, velocity, and shear pressure profiles.

  10. On the model dependence of the determination of the strong coupling constant in second order QCD from e+e--annihilation into hadrons

    International Nuclear Information System (INIS)

    Achterberg, O.; D'Agostini, G.; Apel, W.D.; Engler, J.; Fluegge, G.; Forstbauer, B.; Fries, D.C.; Fues, W.; Gamerdinger, K.; Henkes, T.; Hopp, G.; Krueger, M.; Kuester, H.; Mueller, H.; Randoll, H.; Schmidt, G.; Schneider, H.; Boer, W. de; Buschhorn, G.; Grindhammer, G.; Grosse-Wiesmann, P.; Gunderson, B.; Kiesling, C.; Kotthaus, R.; Kruse, U.; Lierl, H.; Lueers, D.; Oberlack, H.; Schacht, P.; Bonneaud, G.; Colas, P.; Cordier, A.; Davier, M.; Fournier, D.; Grivaz, J.F.; Haissinski, J.; Journe, V.; Laplanche, F.; Le Diberder, F.; Mallik, U.; Ros, E.; Veillet, J.J.; Behrend, H.J.; Fenner, H.; Schachter, M.J.; Schroeder, V.; Sindt, H.

    1983-12-01

    Hadronic events obtained with the CELLO detector at PETRA are compared with second order QCD predictions using different models for the fragmentation of quarks and gluons into hadrons. We find that the model dependence in the determination of the strong coupling constant persists when going from first to second order QCD calculations. (orig.)

  11. Predicting and Modeling RNA Architecture

    Science.gov (United States)

    Westhof, Eric; Masquida, Benoît; Jossinet, Fabrice

    2011-01-01

    SUMMARY A general approach for modeling the architecture of large and structured RNA molecules is described. The method exploits the modularity and the hierarchical folding of RNA architecture that is viewed as the assembly of preformed double-stranded helices defined by Watson-Crick base pairs and RNA modules maintained by non-Watson-Crick base pairs. Despite the extensive molecular neutrality observed in RNA structures, specificity in RNA folding is achieved through global constraints like lengths of helices, coaxiality of helical stacks, and structures adopted at the junctions of helices. The Assemble integrated suite of computer tools allows for sequence and structure analysis as well as interactive modeling by homology or ab initio assembly with possibilities for fitting within electronic density maps. The local key role of non-Watson-Crick pairs guides RNA architecture formation and offers metrics for assessing the accuracy of three-dimensional models in a more useful way than usual root mean square deviation (RMSD) values. PMID:20504963

  12. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one step ahead prediction with such models, multiple steps prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned by past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...
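
    A common workaround is Monte Carlo simulation of future trajectories; the sketch below approximates a three-step-ahead prediction for a hypothetical NLARX one-step map f by averaging simulated futures. The model, noise level, and inputs are all assumed for illustration, not taken from the paper.

    # Minimal sketch of Monte Carlo multi-step prediction for a nonlinear ARX
    # model, assuming a known one-step model y[t] = f(y[t-1], u[t]) + noise.
    # Averaging simulated futures approximates the conditional expectation
    # that, as the abstract notes, has no closed form in general.
    import numpy as np

    rng = np.random.default_rng(1)

    def f(y_prev, u):           # hypothetical NLARX one-step map
        return 0.8 * np.tanh(y_prev) + 0.5 * u

    sigma = 0.1                 # assumed innovation standard deviation
    y_last = 0.3                # last measured output
    u_future = [1.0, 0.2, -0.5] # known future inputs, 3-step horizon

    n_mc = 10000
    y = np.full(n_mc, y_last)
    for u in u_future:          # propagate an ensemble of noisy trajectories
        y = f(y, u) + sigma * rng.standard_normal(n_mc)

    print(f"3-step-ahead prediction (MC mean): {y.mean():.4f} +/- {y.std():.4f}")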

  13. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
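
    For readers unfamiliar with finite-time Lyapunov exponents, the following sketch estimates one for the Lorenz-63 system, a standard toy stand-in for the geophysical models studied here, by tracking two nearby trajectories with periodic renormalization; parameters and integration settings are illustrative.

    # Finite-time Lyapunov exponent estimate for Lorenz-63 via two nearby
    # trajectories with periodic renormalization (Benettin-style method).
    import numpy as np

    def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
        x, y, z = v
        return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

    def step_rk4(v, dt):
        k1 = lorenz(v); k2 = lorenz(v + 0.5 * dt * k1)
        k3 = lorenz(v + 0.5 * dt * k2); k4 = lorenz(v + dt * k3)
        return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    dt, n_steps, d0 = 0.01, 1500, 1e-8   # 15 time units -> finite-time exponent
    v = np.array([1.0, 1.0, 1.0])
    w = v + np.array([d0, 0.0, 0.0])

    log_growth = 0.0
    for _ in range(n_steps):
        v, w = step_rk4(v, dt), step_rk4(w, dt)
        d = np.linalg.norm(w - v)
        log_growth += np.log(d / d0)
        w = v + (w - v) * (d0 / d)       # renormalize the separation

    ftle = log_growth / (n_steps * dt)
    print(f"finite-time Lyapunov exponent ~ {ftle:.3f}")  # near 0.9 for Lorenz-63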

  14. Model complexity control for hydrologic prediction

    Science.gov (United States)

    Schoups, G.; van de Giesen, N. C.; Savenije, H. H. G.

    2008-12-01

A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed. We compare three model complexity control methods for hydrologic prediction, namely, cross validation (CV), Akaike's information criterion (AIC), and structural risk minimization (SRM). Results show that simulation of water flow using non-physically-based models (polynomials in this case) leads to increasingly better calibration fits as the model complexity (polynomial order) increases. However, prediction uncertainty worsens for complex non-physically-based models because of overfitting of noisy data. Incorporation of physically based constraints into the model (e.g., storage-discharge relationship) effectively bounds prediction uncertainty, even as the number of parameters increases. The conclusion is that overparameterization and equifinality do not lead to a continued increase in prediction uncertainty, as long as models are constrained by such physical principles. Complexity control of hydrologic models reduces parameter equifinality and identifies the simplest model that adequately explains the data, thereby providing a means of hydrologic generalization and classification. SRM is a promising technique for this purpose, as it (1) provides analytic upper bounds on prediction uncertainty, hence avoiding the computational burden of CV, and (2) extends the applicability of classic methods such as AIC to finite data. The main hurdle in applying SRM is the need for an a priori estimation of the complexity of the hydrologic model, as measured by its Vapnik-Chervonenkis (VC) dimension. Further research is needed in this area.
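
    The trade-off the abstract describes, ever-better calibration fits but worsening predictions as polynomial order grows, is easy to reproduce; the sketch below compares training fit, AIC, and holdout error on synthetic data (not the hydrologic series used in the paper).

    # Complexity control for polynomial models: in-sample fit always improves
    # with order, while AIC and holdout error bottom out and then rise.
    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 60)
    y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
    x_tr, y_tr, x_te, y_te = x[::2], y[::2], x[1::2], y[1::2]

    for order in range(1, 10):
        coef = np.polyfit(x_tr, y_tr, order)
        rss = np.sum((np.polyval(coef, x_tr) - y_tr) ** 2)
        n, k = x_tr.size, order + 1
        aic = n * np.log(rss / n) + 2 * k          # Gaussian-likelihood AIC
        mse_te = np.mean((np.polyval(coef, x_te) - y_te) ** 2)
        print(f"order {order}: train RSS {rss:6.3f}  AIC {aic:7.2f}  holdout MSE {mse_te:.3f}")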

  15. Inert two-Higgs-doublet model strongly coupled to a non-Abelian vector resonance

    Science.gov (United States)

    Rojas-Abatte, Felipe; Mora, Maria Luisa; Urbina, Jose; Zerwekh, Alfonso R.

    2017-11-01

We study the possibility of a dark matter candidate having its origin in an extended Higgs sector which, at least partially, is related to a new strongly interacting sector. More concretely, we consider an i2HDM (i.e., a Type-I two-Higgs-doublet model supplemented with a Z2 symmetry under which the nonstandard scalar doublet is odd) based on the gauge group SU(2)_1 × SU(2)_2 × U(1)_Y. We assume that one of the scalar doublets and the standard fermions transform nontrivially under SU(2)_1, while the second doublet transforms under SU(2)_2. Our main hypothesis is that the standard sector is weakly coupled, while the gauge interactions associated with the second group are characterized by a large coupling constant. We explore the consequences of this construction for the phenomenology of the dark matter candidate and show that the presence of the new vector resonance reduces the relic density saturation region, compared to the usual i2HDM, in the high dark matter mass range. On the collider side, we argue that mono-Z production is the channel which offers the best chance of revealing the presence of the new vector field. We study the departures from the usual i2HDM predictions and show that the discovery of the heavy vector at the LHC is challenging even in the mono-Z channel, since the typical cross sections are of the order of 10⁻² fb.

  16. Quantifying predictive accuracy in survival models.

    Science.gov (United States)

    Lirette, Seth T; Aban, Inmaculada

    2017-12-01

For time-to-event outcomes in medical research, survival models are the most appropriate to use. Unlike for logistic regression models, quantifying the predictive accuracy of these models is not a trivial task. We present the classes of concordance (C) statistics and R² statistics often used to assess the predictive ability of these models. The discussion focuses on Harrell's C, Kent and O'Quigley's R², and Royston and Sauerbrei's R². We present similarities and differences between the statistics, discuss the software options from the most widely used statistical analysis packages, and give a practical example using the Worcester Heart Attack Study dataset.
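
    As a concrete reference point, here is a minimal O(n²) implementation of Harrell's C for right-censored data; the toy times, event indicators, and risk scores are invented for illustration.

    # Harrell's C: among usable pairs, count how often the model's risk score
    # ranks the earlier event higher. Ties in risk score contribute 1/2.
    import numpy as np

    def harrell_c(time, event, risk):
        time, event, risk = map(np.asarray, (time, event, risk))
        concordant, usable = 0.0, 0
        n = len(time)
        for i in range(n):
            if not event[i]:
                continue                  # usable pairs are anchored at an event
            for j in range(n):
                if time[j] > time[i]:     # j outlived i -> comparable pair
                    usable += 1
                    if risk[i] > risk[j]:
                        concordant += 1.0
                    elif risk[i] == risk[j]:
                        concordant += 0.5
        return concordant / usable

    # Toy data: higher risk should mean earlier event.
    t = [2, 4, 5, 7, 9, 12]
    e = [1, 1, 0, 1, 0, 1]      # 1 = event observed, 0 = censored
    r = [0.9, 0.7, 0.6, 0.5, 0.2, 0.1]
    print(f"Harrell's C = {harrell_c(t, e, r):.3f}")  # 1.0 for a perfect ranking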

  17. Predictive power of nuclear-mass models

    Directory of Open Access Journals (Sweden)

    Yu. A. Litvinov

    2013-12-01

Ten different theoretical models are tested for their predictive power in the description of nuclear masses. Two sets of experimental masses are used for the test: the older set of 2003 and the newer one of 2011. The predictive power is studied in two regions of nuclei: the global region (Z, N ≥ 8) and the heavy-nuclei region (Z ≥ 82, N ≥ 126). No clear correlation is found between the predictive power of a model and the accuracy of its description of the masses.

  18. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.

  19. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, this study identifies the leading industries that produce large volumes of cargo and further predicts the static logistics generation of Zhuanghe and its hinterland. By integrating the various factors that affect regional logistics requirements, the study establishes a logistics requirements potential model based on spatial economic principles, extending logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.

  20. Accuracy assessment of landslide prediction models

    International Nuclear Information System (INIS)

    Othman, A N; Mohd, W M N W; Noraini, S

    2014-01-01

The increasing population and expansion of settlements over hilly areas has greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four (4) different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history; the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones
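
    To make the pairwise comparison/AHP weighting step concrete, the sketch below derives normalized weights from a hypothetical 3×3 comparison matrix over three of the paper's parameters via the principal eigenvector, together with Saaty's consistency ratio; the matrix entries are invented, not taken from the paper.

    # AHP priority weights from a pairwise comparison matrix (principal
    # eigenvector method) for three illustrative parameters:
    # slope, land use, lithology.
    import numpy as np

    # A[i, j] = how much more important parameter i is than parameter j
    A = np.array([[1.0,  3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # normalized priority weights

    # Consistency ratio CI / RI, with RI = 0.58 for a 3x3 matrix (Saaty).
    ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
    print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))  # CR < 0.1 is acceptable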

  1. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999. Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type. Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  2. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability and depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive

  3. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)
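
    For contrast with the modified scheme the paper proposes, a plain (unmodified) Lanczos tridiagonalization without reorthogonalization looks roughly like the sketch below, applied here to a random symmetric test matrix rather than a lattice Hamiltonian.

    # Plain Lanczos iteration: build a small tridiagonal matrix T whose
    # extremal eigenvalues converge quickly to those of the large matrix H.
    import numpy as np

    def lanczos(H, m, seed=3):
        rng = np.random.default_rng(seed)
        n = H.shape[0]
        q = rng.standard_normal(n); q /= np.linalg.norm(q)
        alpha, beta, q_prev, b = [], [], np.zeros(n), 0.0
        for _ in range(m):
            r = H @ q - b * q_prev
            a = q @ r
            r -= a * q
            b = np.linalg.norm(r)
            alpha.append(a); beta.append(b)
            q_prev, q = q, r / b
        return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

    rng = np.random.default_rng(3)
    M = rng.standard_normal((200, 200)); H = (M + M.T) / 2  # symmetric test matrix
    T = lanczos(H, 30)
    print("Lanczos lowest eigenvalue:", np.linalg.eigvalsh(T)[0])
    print("exact lowest eigenvalue:  ", np.linalg.eigvalsh(H)[0])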

  4. Why do faultlines matter? A computational model of how strong demographic faultlines undermine team cohesion

    NARCIS (Netherlands)

    Flache, Andreas; Mas, Michael; Mäs, Michael

    Lau and Murnighan (LM) suggested that strong demographic faultlines threaten team cohesion and reduce consensus. However, it remains unclear which assumptions are exactly needed to derive faultline effects. We propose a formal computational model of the effects of faultlines that uses four

  5. Engineering the Dynamics of Effective Spin-Chain Models for Strongly Interacting Atomic Gases

    DEFF Research Database (Denmark)

    Volosniev, A. G.; Petrosyan, D.; Valiente, M.

    2015-01-01

    We consider a one-dimensional gas of cold atoms with strong contact interactions and construct an effective spin-chain Hamiltonian for a two-component system. The resulting Heisenberg spin model can be engineered by manipulating the shape of the external confining potential of the atomic gas. We...

  6. Posterior predictive checking of multiple imputation models.

    Science.gov (United States)

    Nguyen, Cattram D; Lee, Katherine J; Carlin, John B

    2015-07-01

    Multiple imputation is gaining popularity as a strategy for handling missing data, but there is a scarcity of tools for checking imputation models, a critical step in model fitting. Posterior predictive checking (PPC) has been recommended as an imputation diagnostic. PPC involves simulating "replicated" data from the posterior predictive distribution of the model under scrutiny. Model fit is assessed by examining whether the analysis from the observed data appears typical of results obtained from the replicates produced by the model. A proposed diagnostic measure is the posterior predictive "p-value", an extreme value of which (i.e., a value close to 0 or 1) suggests a misfit between the model and the data. The aim of this study was to evaluate the performance of the posterior predictive p-value as an imputation diagnostic. Using simulation methods, we deliberately misspecified imputation models to determine whether posterior predictive p-values were effective in identifying these problems. When estimating the regression parameter of interest, we found that more extreme p-values were associated with poorer imputation model performance, although the results highlighted that traditional thresholds for classical p-values do not apply in this context. A shortcoming of the PPC method was its reduced ability to detect misspecified models with increasing amounts of missing data. Despite the limitations of posterior predictive p-values, they appear to have a valuable place in the imputer's toolkit. In addition to automated checking using p-values, we recommend imputers perform graphical checks and examine other summaries of the test quantity distribution. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

The problem we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained in the Markov model for this task. Classifications that are purely based on statistical models might not always be biologically meaningful. We present combinatorial methods to incorporate biological background knowledge to enhance the prediction performance.
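
    A minimal version of the idea, one first-order Markov transition matrix per structure class with classification by log-likelihood, might look like the following sketch; the training fragments and query are synthetic, and the directional and combinatorial refinements of the paper are omitted.

    # Toy first-order Markov classifier for secondary structure: estimate one
    # amino-acid transition matrix per class, then assign a new window to the
    # class with the highest log-likelihood.
    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"
    IDX = {a: i for i, a in enumerate(AA)}

    def train(seqs):
        counts = np.ones((20, 20))              # add-one smoothing
        for s in seqs:
            for a, b in zip(s, s[1:]):
                counts[IDX[a], IDX[b]] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def loglik(seq, P):
        return sum(np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

    # Hypothetical training fragments per class (not real proteins).
    models = {
        "helix": train(["AELLKKLEEALKK", "LEELLKKLKEALE"]),
        "sheet": train(["VTVTVSVEVTVKV", "TVSVKVEVTVTVS"]),
        "coil":  train(["GPGSGGNPGDGSG", "SGGPGNGDGSPGG"]),
    }

    query = "LKELEEKLKALEE"
    pred = max(models, key=lambda c: loglik(query, models[c]))
    print("predicted class:", pred)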

  8. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA) as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed on underlying basic assumptions, such as diffuse fields, high modal overlap, resonant field being dominant, etc., and the consequences of these in terms of limitations in the theory and in the practical use of the models.

  9. Comparative Study of Bankruptcy Prediction Models

    Directory of Open Access Journals (Sweden)

    Isye Arieshanti

    2013-09-01

Early indication of bankruptcy is important for a company. If a company is aware of the potential for bankruptcy, it can take preventive action to anticipate it. In order to detect the potential for bankruptcy, a company can utilize a bankruptcy prediction model. Such a prediction model can be built using machine learning methods. However, the choice of machine learning method should be made carefully, because the suitability of a model depends on the specific problem. Therefore, in this paper we perform a comparative study of several machine learning methods for bankruptcy prediction. Comparing several models based on machine learning methods (k-NN, fuzzy k-NN, SVM, Bagging Nearest Neighbour SVM, Multilayer Perceptron (MLP), and a hybrid of MLP + Multiple Linear Regression), the study shows that the fuzzy k-NN method achieves the best performance, with an accuracy of 77.5%.

  10. Prediction Models for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos; Noor, Muhammad; Simmhan, Yogesh; Prasanna, Viktor K.

    2015-11-02

As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of
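
    The simple averaging baseline that the authors find competitive can be sketched as follows: predict each 15-minute slot as the mean of the same slot over the preceding days. The load data here are synthetic, generated only to exercise the code.

    # Time-of-day averaging baseline for very-short-term load prediction at
    # 15-minute granularity (96 slots per day).
    import numpy as np

    rng = np.random.default_rng(4)
    slots_per_day, n_days = 96, 10
    t = np.arange(slots_per_day * n_days)
    load = 50 + 20 * np.sin(2 * np.pi * t / slots_per_day) + 3 * rng.standard_normal(t.size)
    load = load.reshape(n_days, slots_per_day)  # rows: days, cols: 15-min slots

    history, target = load[:-1], load[-1]       # predict the last day
    pred = history.mean(axis=0)                 # same-slot average over past days

    mape = np.mean(np.abs((target - pred) / target)) * 100
    print(f"time-of-day averaging baseline MAPE: {mape:.2f}%")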

  11. Angular Structure of Jet Quenching Within a Hybrid Strong/Weak Coupling Model

    CERN Document Server

    Casalderrey-Solana, Jorge; Milhano, Guilherme; Pablos, Daniel; Rajagopal, Krishna

    2017-01-01

Within the context of a hybrid strong/weak coupling model of jet quenching, we study the modification of the angular distribution of the energy within jets in heavy ion collisions, as partons within jet showers lose energy and get kicked as they traverse the strongly coupled plasma produced in the collision. To describe the dynamics transverse to the jet axis, we add the effects of transverse momentum broadening into our hybrid construction, introducing a parameter K ≡ q̂/T³ that governs its magnitude. We show that, because of the quenching of the energy of partons within a jet, even when K…

  12. Are animal models predictive for humans?

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2009-01-01

It is one of the central aims of the philosophy of science to elucidate the meanings of scientific terms and also to think critically about their application. The focus of this essay is the scientific term predict and whether there is credible evidence that animal models, especially in toxicology and pathophysiology, can be used to predict human outcomes. Whether animals can be used to predict human response to drugs and other chemicals is apparently a contentious issue. However, when one empirically analyzes animal models using scientific tools they fall far short of being able to predict human responses. This is not surprising considering what we have learned from fields such as evolutionary and developmental biology, gene regulation and expression, epigenetics, complexity theory, and comparative genomics.

  13. Fuzzy-logic modeling of Fenton's strong chemical oxidation process treating three types of landfill leachates.

    Science.gov (United States)

    Sari, Hanife; Yetilmezsoy, Kaan; Ilhan, Fatih; Yazici, Senem; Kurt, Ugur; Apaydin, Omer

    2013-06-01

Three multiple-input, multiple-output fuzzy-logic-based models were developed as an artificial-intelligence approach to model a novel integrated process (UF-IER-EDBM-FO) consisting of ultrafiltration (UF), ion exchange resins (IER), electrodialysis with bipolar membrane (EDBM), and Fenton's oxidation (FO) units treating young, middle-aged, and stabilized landfill leachates. The FO unit was considered the key process for implementation of the proposed modeling scheme. Four input components, the H₂O₂/chemical oxygen demand ratio, the H₂O₂/Fe²⁺ ratio, reaction pH, and reaction time, were fuzzified in a Mamdani-type fuzzy inference system to predict the removal efficiencies of chemical oxygen demand, total organic carbon, color, and ammonia nitrogen. A total of 200 rules in the IF-THEN format were established within the framework of a graphical user interface for each fuzzy-logic model. The product (prod) and the center of gravity (centroid) methods were used as the inference operator and defuzzification method, respectively, for the proposed prognostic models. Fuzzy-logic predicted results were compared to the outputs of multiple regression models by means of various descriptive statistical indicators, and the proposed methodology was tested against the experimental data. The testing results clearly revealed that the proposed prognostic models showed superior predictive performance, with very high determination coefficients (R²) between 0.930 and 0.991. This study demonstrated a simple means of modeling and the potential of a knowledge-based approach for capturing complicated inter-relationships in a highly non-linear problem. Clearly, the proposed prognostic models provide a well-suited and cost-effective method to predict removal efficiencies of wastewater parameters prior to discharge to receiving streams.
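
    As a much-reduced illustration of Mamdani-type inference with product implication and centroid defuzzification, the sketch below runs a single-input, two-rule system; the membership functions, input value, and rule base are invented, unlike the four-input, 200-rule models of the paper.

    # Minimal Mamdani fuzzy step: triangular memberships, product inference,
    # centroid defuzzification.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    ratio = 2.4                                  # crisp input, e.g. an H2O2/Fe2+ ratio
    mu_low, mu_high = tri(ratio, 0, 1, 3), tri(ratio, 1, 3, 5)

    y = np.linspace(0, 100, 501)                 # output universe: removal (%)
    low_removal, high_removal = tri(y, 20, 40, 60), tri(y, 50, 75, 95)

    # Rule 1: IF ratio is low  THEN removal is low   (prod implication)
    # Rule 2: IF ratio is high THEN removal is high
    agg = np.maximum(mu_low * low_removal, mu_high * high_removal)

    removal = (agg * y).sum() / agg.sum()        # centroid defuzzification
    print(f"predicted removal efficiency: {removal:.1f}%")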

  14. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  15. Analog quantum simulation of the Rabi model in the ultra-strong coupling regime.

    Science.gov (United States)

    Braumüller, Jochen; Marthaler, Michael; Schneider, Andre; Stehli, Alexander; Rotzinger, Hannes; Weides, Martin; Ustinov, Alexey V

    2017-10-03

The quantum Rabi model describes the fundamental mechanism of light-matter interaction. It consists of a two-level atom or qubit coupled to a quantized harmonic mode via a transversal interaction. In the weak coupling regime, it reduces to the well-known Jaynes-Cummings model by applying a rotating wave approximation. The rotating wave approximation breaks down in the ultra-strong coupling regime, where the effective coupling strength g is comparable to the energy ω of the bosonic mode, and remarkable features in the system dynamics are revealed. Here we demonstrate an analog quantum simulation of an effective quantum Rabi model in the ultra-strong coupling regime, achieving a relative coupling ratio of g/ω ~ 0.6. The quantum hardware of the simulator is a superconducting circuit embedded in a cQED setup. We observe fast and periodic quantum state collapses and revivals of the initial qubit state, being the most distinct signature of the synthesized model. An analog quantum simulation scheme has been explored with quantum hardware based on a superconducting circuit. Here the authors investigate the time evolution of the quantum Rabi model at ultra-strong coupling conditions, which is synthesized by slowing down the system dynamics in an effective frame.

  16. Model predictive controller design of hydrocracker reactors

    OpenAIRE

    GÖKÇE, Dila

    2014-01-01

    This study summarizes the design of a Model Predictive Controller (MPC) in Tüpraş, İzmit Refinery Hydrocracker Unit Reactors. Hydrocracking process, in which heavy vacuum gasoil is converted into lighter and valuable products at high temperature and pressure is described briefly. Controller design description, identification and modeling studies are examined and the model variables are presented. WABT (Weighted Average Bed Temperature) equalization and conversion increase are simulate...

  17. The strong-weak coupling symmetry in 2D Φ⁴ field models

    Directory of Open Access Journals (Sweden)

    B.N.Shalaev

    2005-01-01

It is found that the exact beta-function β(g) of the continuous 2D gΦ⁴ model possesses two types of dual symmetries, these being the Kramers-Wannier (KW) duality symmetry and the strong-weak (SW) coupling symmetry f(g), or S-duality. All these transformations are explicitly constructed. The S-duality transformation f(g) is shown to connect domains of weak and strong couplings, i.e. above and below g*. Basically it means that there is a tempting possibility to compute multiloop Feynman diagrams for the β-function using high-temperature lattice expansions. The regular scheme developed is found to be strongly unstable. Approximate values of the renormalized coupling constant g* found from duality symmetry equations are in agreement with available numerical results.

  18. Global Behavior for a Strongly Coupled Predator-Prey Model with One Resource and Two Consumers

    Directory of Open Access Journals (Sweden)

    Yujuan Jiao

    2012-01-01

We consider a strongly coupled predator-prey model with one resource and two consumers, in which the first consumer species feeds on the resource according to the Holling II functional response, while the second consumer species feeds on the resource following the Beddington-DeAngelis functional response, and they compete for the common resource. Using the energy estimates and Gagliardo-Nirenberg-type inequalities, the existence and uniform boundedness of global solutions for the model are proved. Meanwhile, the sufficient conditions for global asymptotic stability of the positive equilibrium for this model are given by constructing a Lyapunov function.

  19. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    2013-01-01

By comparison with the solution of the time-dependent Schrödinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates with situations where the ensemble average of the force deviates considerably from the force calculated at the average position of the trajectories of the ensemble. We identify the general trends for the applicability of the semiclassical model in terms of intensity, ellipticity, and wavelength of the laser pulse.

  20. Chronic health conditions and depressive symptoms strongly predict persistent food insecurity among rural low-income families.

    Science.gov (United States)

    Hanson, Karla L; Olson, Christine M

    2012-08-01

    Longitudinal studies of food insecurity have not considered the unique circumstances of rural families. This study identified factors predictive of discontinuous and persistent food insecurity over three years among low-income families with children in rural counties in 13 U.S. states. Respondents reported substantial knowledge of community resources, food and finance skills, and use of formal public food assistance, yet 24% had persistent food insecurity, and another 41% were food insecure for one or two years. Multivariate multinomial regression models tested relationships between human capital, social support, financial resources, expenses, and food insecurity. Enduring chronic health conditions increased the risk of both discontinuous and persistent food insecurity. Lasting risk for depression predicted only persistent food insecurity. Education beyond high school was the only factor found protective against persistent food insecurity. Access to quality physical and mental health care services are essential to ameliorate persistent food insecurity among rural, low-income families.

  1. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  2. Thermodynamic modeling of activity coefficient and prediction of solubility: Part 1. Predictive models.

    Science.gov (United States)

    Mirmehrabi, Mahmoud; Rohani, Sohrab; Perry, Luisa

    2006-04-01

A new activity coefficient model was developed from the excess Gibbs free energy in the form G^ex = c·A^a·x_1^b … x_n^b. The constants of the proposed model were considered to be functions of the solute and solvent dielectric constants, Hildebrand solubility parameters and specific volumes of solute and solvent molecules. The proposed model obeys the Gibbs-Duhem condition for activity coefficient models. To generalize the model and make it purely predictive, without any adjustable parameters, its constants were found using the experimental activity coefficients and physical properties of 20 vapor-liquid systems. The predictive capability of the proposed model was tested by calculating the activity coefficients of 41 binary vapor-liquid equilibrium systems; it showed good agreement with the experimental data in comparison with two other predictive models, the UNIFAC and Hildebrand models. The only data used for the prediction of activity coefficients were dielectric constants, Hildebrand solubility parameters, and specific volumes of the solute and solvent molecules. Furthermore, the proposed model was used to predict the activity coefficient of an organic compound, stearic acid, whose physical properties were available, in methanol and 2-butanone. The predicted activity coefficient along with the thermal properties of stearic acid were used to calculate the solubility of stearic acid in these two solvents, resulting in better agreement with the experimental data compared to the UNIFAC and Hildebrand predictive models.

  3. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
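
    For readers new to this model family, the core GARCH(1,1) recursion and its one-step-ahead variance forecast can be written in a few lines of plain numpy; the parameters here are assumed rather than estimated, and the Model Confidence Set comparison itself is not reproduced.

    # GARCH(1,1): simulate a return series from the model, then forecast the
    # next conditional variance with the same recursion.
    import numpy as np

    rng = np.random.default_rng(5)
    omega, alpha, beta = 0.05, 0.08, 0.90       # assumed GARCH(1,1) parameters

    n = 1000
    sig2 = np.empty(n); r = np.empty(n)
    sig2[0] = omega / (1 - alpha - beta)        # unconditional variance
    r[0] = np.sqrt(sig2[0]) * rng.standard_normal()
    for t in range(1, n):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
        r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

    # One-step-ahead conditional variance forecast.
    sig2_next = omega + alpha * r[-1] ** 2 + beta * sig2[-1]
    print(f"sigma^2 forecast: {sig2_next:.4f} (unconditional: {sig2[0]:.4f})")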

  4. A revised prediction model for natural conception.

    Science.gov (United States)

    Bensdorp, Alexandra J; van der Steeg, Jan Willem; Steures, Pieternel; Habbema, J Dik F; Hompes, Peter G A; Bossuyt, Patrick M M; van der Veen, Fulco; Mol, Ben W J; Eijkemans, Marinus J C

    2017-06-01

One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis was to assess whether additional predictors can refine the Hunault model and extend its applicability. Consecutive subfertile couples with unexplained and mild male subfertility presenting in fertility clinics were asked to participate in a prospective cohort study. We constructed a multivariable prediction model with the predictors from the Hunault model and new potential predictors. The primary outcome, natural conception leading to an ongoing pregnancy, was observed in 1053 women of the 5184 included couples (20%). All predictors of the Hunault model were selected into the revised model, plus an additional seven: woman's body mass index, cycle length, basal FSH levels, tubal status, history of previous pregnancies in the current relationship (ongoing pregnancies after natural conception, fertility treatment or miscarriages), semen volume, and semen morphology. Predictions from the revised model seem to concur better with observed pregnancy rates compared with the Hunault model; c-statistic of 0.71 (95% CI 0.69 to 0.73) compared with 0.59 (95% CI 0.57 to 0.61). Copyright © 2017. Published by Elsevier Ltd.

  5. The moduli and gravitino (non)-problems in models with strongly stabilized moduli

    International Nuclear Information System (INIS)

    Evans, Jason L.; Olive, Keith A.; Garcia, Marcos A.G.

    2014-01-01

In gravity mediated models and in particular in models with strongly stabilized moduli, there is a natural hierarchy between gaugino masses, the gravitino mass and moduli masses: m_1/2 ≪ m_3/2 ≪ m_φ. Given this hierarchy, we show that 1) moduli problems associated with excess entropy production from moduli decay and 2) problems associated with moduli/gravitino decays to neutralinos are non-existent. Placed in an inflationary context, we show that the amplitude of moduli oscillations is severely limited by strong stabilization. Moduli oscillations may then never come to dominate the energy density of the Universe. As a consequence, moduli decay to gravitinos and their subsequent decay to neutralinos need not overpopulate the cold dark matter density

  6. Strong nucleon and Δ-isobar form factors in the quark-confinement model

    International Nuclear Information System (INIS)

    Efimov, G.V.; Ivanov, M.A.; Lubovitskij, V.E.

    1989-01-01

The nucleon and the Δ-isobar are investigated as three-quark systems in the quark-confinement model (QCM). This model is based on two hypotheses. First, quark confinement is accomplished through averaging over some vacuum gluon fields which are assumed to provide the confinement of any colour states. Second, hadrons are treated as collective colourless excitations of quark-gluon interactions. The QCM is applied to low-energy baryon physics. The nucleon magnetic moments and electromagnetic radii, the ratio G_A/G_V, and the decay width for Δ→pπ are calculated. The behaviour of the electromagnetic and strong meson-nucleon (meson-isobar) form factors is determined for space-like momentum transfers. The results are compared with experimental data for the electromagnetic form factors and phenomenological strong form factors as used in the Bonn potential. 32 refs.; 10 figs.; 4 tabs

  7. Global dynamics and bifurcation analysis of a host-parasitoid model with strong Allee effect.

    Science.gov (United States)

    Khan, Abdul Qadeer; Ma, Jiying; Xiao, Dongmei

    2017-12-01

    In this paper, we study the global dynamics and bifurcations of a two-dimensional discrete time host-parasitoid model with strong Allee effect. The existence of fixed points and their stability are analysed in all allowed parametric region. The bifurcation analysis shows that the model can undergo fold bifurcation and Neimark-Sacker bifurcation. As the parameters vary in a small neighbourhood of the Neimark-Sacker bifurcation condition, the unique positive fixed point changes its stability and an invariant closed circle bifurcates from the positive fixed point. From the viewpoint of biology, the invariant closed curve corresponds to the periodic or quasi-periodic oscillations between host and parasitoid populations. Furthermore, it is proved that all solutions of this model are bounded, and there exist some values of the parameters such that the model has a global attractor. These theoretical results reveal the complex dynamics of the present model.
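
    To get a feel for the host-parasitoid dynamics the analysis describes, one can iterate a Nicholson-Bailey-style map with a strong Allee term in the host growth, as sketched below; the functional forms and parameter values are illustrative, not the paper's exact model.

    # Iterate a discrete host-parasitoid map with a strong Allee effect in the
    # host and inspect the late-time behaviour of the orbit.
    import numpy as np

    r, K, A, a, c = 3.2, 1.0, 0.1, 2.0, 1.0   # hypothetical parameters (A: Allee threshold)

    def step(H, P):
        growth = np.exp(r * (1 - H / K) * (H - A) / K)  # strong Allee effect in growth
        escape = np.exp(-a * P)                         # Nicholson-Bailey escape term
        return H * growth * escape, c * H * (1 - escape)

    H, P = 0.6, 0.2
    traj = []
    for _ in range(200):
        H, P = step(H, P)
        traj.append((H, P))

    for H, P in traj[-5:]:                    # late-time states of the iteration
        print(f"H = {H:.4f}, P = {P:.4f}")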

  8. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of Goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI, micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality in the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness of fits and receiver operator characteristics during the examination of the robustness of the predictive power of these factors.

  9. Modelling language evolution: Examples and predictions

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  10. Strong Electroweak Phase Transitions in the Standard Model with a Singlet

    CERN Document Server

    Espinosa, Jose R; Riva, Francesco

    2012-01-01

    It is well known that the electroweak phase transition (EWPhT) in extensions of the Standard Model with one real scalar singlet can be first-order for realistic values of the Higgs mass. We revisit this scenario with the most general renormalizable scalar potential systematically identifying all regions in parameter space that develop, due to tree-level dynamics, a potential barrier at the critical temperature that is strong enough to avoid sphaleron wash-out of the baryon asymmetry. Such strong EWPhTs allow for a simple mean-field approximation and an analytic treatment of the free-energy that leads to very good theoretical control and understanding of the different mechanisms that can make the transition strong. We identify a new realization of such mechanism, based on a flat direction developing at the critical temperature, which could operate in other models. Finally, we discuss in detail some special cases of the model performing a numerical calculation of the one-loop free-energy that improves over the ...

  11. Combination of 24-Hour and 7-Day Relative Neurological Improvement Strongly Predicts 90-Day Functional Outcome of Endovascular Stroke Therapy.

    Science.gov (United States)

    Pu, Jie; Wang, Huaiming; Tu, Mingyi; Zi, Wenjie; Hao, Yonggang; Yang, Dong; Liu, Wenhua; Wan, Yue; Geng, Yu; Lin, Min; Jin, Ping; Xiong, Yunyun; Xu, Gelin; Yin, Qin; Liu, Xinfeng

    2018-01-03

    Early judgment of long-term prognosis is the key to making medical decisions in acute anterior circulation large-vessel occlusion stroke (LVOS) after endovascular treatment (EVT). We aimed to investigate the relationship between the combination of 24-hour and 7-day relative neurological improvement (RNI) and 90-day functional outcome. We selected the target population from a multicenter ischemic stroke registry. The National Institutes of Health Stroke Scale (NIHSS) scores at baseline, 24 hours, and 7 days were collected. RNI was calculated by the following equation: (baseline NIHSS - 24-hour/7-day NIHSS)/baseline NIHSS × 100%. A modified Rankin Scale score of 0-2 at 90 days was defined as a favorable outcome. Multivariable logistic regression analysis was used to evaluate the relationship between RNI and 90-day outcome. Receiver operator characteristic curve analysis was performed to identify the predictive power and cutoff point of RNI for functional outcome. A total of 568 patients were enrolled. Both 24-hour and 7-day RNI were independent predictors of 90-day outcome. The best cutoff points of 24-hour and 7-day RNI were 28% and 42%, respectively. Compared with those with 24-hour RNI of less than 28% and 7-day RNI of less than 42%, patients with 24-hour RNI of 28% or greater and 7-day RNI of 42% or greater had a 39.595-fold (95% confidence interval 22.388-70.026) increased probability of achieving 90-day favorable outcome. The combination of 24-hour and 7-day RNI very strongly predicts 90-day functional outcome in patients with acute anterior circulation LVOS who received EVT, and it can be used as an early accurate surrogate of long-term outcome. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
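
    The RNI definition and the reported cutoffs translate directly into code; the sketch below classifies one hypothetical patient (the NIHSS scores are invented for illustration).

    # Relative neurological improvement (RNI) from NIHSS scores, with the
    # 28% (24-hour) and 42% (7-day) cutoffs reported in the study.
    def rni(nihss_baseline, nihss_followup):
        return (nihss_baseline - nihss_followup) / nihss_baseline * 100.0

    baseline, h24, d7 = 18, 12, 9             # hypothetical NIHSS scores
    rni_24h, rni_7d = rni(baseline, h24), rni(baseline, d7)

    favourable = rni_24h >= 28.0 and rni_7d >= 42.0
    print(f"24-h RNI = {rni_24h:.1f}%, 7-day RNI = {rni_7d:.1f}%")
    print("predicts favourable 90-day outcome" if favourable
          else "predicts unfavourable 90-day outcome")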

  12. Model Predictive Control of Sewer Networks

    DEFF Research Database (Denmark)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik

    2016-01-01

The developments in solutions for management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the increase of the world's population and the change in the climate conditions. How a sewer network is structured, monitored and controlled have thus become essential factors for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and controlling a sewer network. A practical approach to the problem is used by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints the applied approach is based on Model Predictive Control.
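
    As a generic illustration of the receding-horizon principle (not the Barcelona benchmark itself), the sketch below controls a scalar linear storage model with an input constraint, re-optimizing at every step and applying only the first move; the dynamics, costs, and constraints are all assumed.

    # Bare-bones receding-horizon (MPC) loop for a scalar storage level.
    import numpy as np

    a, b = 0.95, 0.5            # assumed storage dynamics: x+ = a*x + b*u + inflow
    u_max = 1.0                 # actuator (pump) capacity constraint
    horizon, n_steps = 10, 40
    x, x_ref = 8.0, 2.0         # initial level and target level

    rng = np.random.default_rng(7)
    for t in range(n_steps):
        inflow = 0.3 + 0.1 * rng.standard_normal()
        # Finite-horizon optimisation by grid search over a constant input
        # (a crude surrogate for a proper QP solver, to keep the sketch short).
        candidates = np.linspace(-u_max, u_max, 201)
        costs = []
        for u in candidates:
            xs, cost = x, 0.0
            for _ in range(horizon):
                xs = a * xs + b * u + 0.3      # predict with nominal inflow
                cost += (xs - x_ref) ** 2 + 0.1 * u ** 2
            costs.append(cost)
        u_star = candidates[int(np.argmin(costs))]  # apply only the first move
        x = a * x + b * u_star + inflow             # true system with noisy inflow
    print(f"level after {n_steps} steps: {x:.2f} (target {x_ref})")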

  13. Bayesian Predictive Models for Rayleigh Wind Speed

    DEFF Research Database (Denmark)

    Shahirinia, Amir; Hajizadeh, Amin; Yu, David C

    2017-01-01

One of the major challenges with the increase in wind power generation is the uncertain nature of wind speed. So far the uncertainty about wind speed has been presented through probability distributions. Also the existing models that consider the uncertainty of the wind speed primarily view … predictive model of the wind speed aggregates the non-homogeneous distributions into a single continuous distribution. Therefore, the result is able to capture the variation among the probability distributions of the wind speeds at the turbines' locations in a wind farm. More specifically, instead of using a wind speed distribution whose parameters are known or estimated, the parameters are considered as random whose variations are according to probability distributions. The Bayesian predictive model for a Rayleigh distribution, which only has a single model scale parameter, has been proposed. Also closed-form posterior …
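
    One concrete way to realize such a Bayesian predictive model, assuming an inverse-gamma prior on the squared Rayleigh scale (conjugate for the Rayleigh likelihood) and Monte Carlo predictive draws, is sketched below with synthetic wind data; this illustrates the idea, not the paper's exact construction.

    # Bayesian predictive distribution for Rayleigh wind speed: average over
    # the posterior of sigma^2 instead of plugging in a point estimate.
    import numpy as np

    rng = np.random.default_rng(6)

    v = rng.rayleigh(scale=7.0, size=200)     # synthetic wind-speed record (m/s)

    # Prior sigma^2 ~ InvGamma(a0, b0); posterior is InvGamma(a0 + n, b0 + sum(v^2)/2).
    a0, b0 = 2.0, 30.0
    a_n, b_n = a0 + v.size, b0 + 0.5 * np.sum(v ** 2)

    # Monte Carlo predictive draws: sample sigma^2 from the posterior, then v_new.
    sigma2 = 1.0 / rng.gamma(shape=a_n, scale=1.0 / b_n, size=20000)
    v_new = rng.rayleigh(scale=np.sqrt(sigma2))

    print(f"sqrt of posterior-mean sigma^2: {np.sqrt(b_n / (a_n - 1)):.2f}")
    print(f"predictive mean wind speed: {v_new.mean():.2f} m/s")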

  14. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

… system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.

  15. Lattice Hamiltonian approach to the Schwinger model. Further results from the strong coupling expansion

    International Nuclear Information System (INIS)

    Szyniszewski, Marcin; Manchester Univ.; Cichy, Krzysztof; Poznan Univ.; Kujawa-Cichy, Agnieszka

    2014-10-01

    We apply exact diagonalization with a strong coupling expansion to the massless and massive Schwinger model. New results are presented for the ground state energy and scalar mass gap in the massless model, which improve the precision to nearly 10⁻⁹%. We also investigate the chiral condensate and compare our calculations to previous results available in the literature. Oscillations of the chiral condensate which are present while increasing the expansion order are also studied and are shown to be directly linked to the presence of flux loops in the system.

  16. Predictive analytics can support the ACO model.

    Science.gov (United States)

    Bradley, Paul

    2012-04-01

    Predictive analytics can be used to rapidly spot hard-to-identify opportunities to better manage care--a key tool in accountable care. When considering analytics models, healthcare providers should: Make value-based care a priority and act on information from analytics models. Create a road map that includes achievable steps, rather than major endeavors. Set long-term expectations and recognize that the effectiveness of an analytics program takes time, unlike revenue cycle initiatives that may show a quick return.

  17. Predictive modeling in homogeneous catalysis: a tutorial

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2010-01-01

    Predictive modeling has become a practical research tool in homogeneous catalysis. It can help to pinpoint ‘good regions’ in the catalyst space, narrowing the search for the optimal catalyst for a given reaction. Just like any other new idea, in silico catalyst optimization is accepted by some

  18. Model predictive control of smart microgrids

    DEFF Research Database (Denmark)

    Hu, Jiefeng; Zhu, Jianguo; Guerrero, Josep M.

    2014-01-01

    ... required to realise high performance of distributed generations, and will realise innovative control techniques utilising model predictive control (MPC) to assist in coordinating the plethora of generation and load combinations, thus enabling the effective exploitation of the clean renewable energy sources ...

  19. Feedback model predictive control by randomized algorithms

    NARCIS (Netherlands)

    Batina, Ivo; Stoorvogel, Antonie Arij; Weiland, Siep

    2001-01-01

    In this paper we present a further development of an algorithm for stochastic disturbance rejection in model predictive control with input constraints based on randomized algorithms. The algorithm presented in our work can solve the problem of stochastic disturbance rejection approximately but with

  20. A Robustly Stabilizing Model Predictive Control Algorithm

    Science.gov (United States)

    Acikmese, A. Behcet; Carson, John M., III

    2007-01-01

    A model predictive control (MPC) algorithm that differs from prior MPC algorithms has been developed for controlling an uncertain nonlinear system. This algorithm guarantees the resolvability of an associated finite-horizon optimal-control problem in a receding-horizon implementation.

  1. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high level MPC controller, a second level of so-called aggregators, controlled by an online MPC-like algorithm, and a lower level of autonomous...

  2. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ...
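
    At each sampling instant such a controller solves a small quadratic program: predict outputs over the horizon with the FIR convolution, penalize tracking error plus a regularization on input moves, and constrain the input and input rate. The Python/cvxpy sketch below shows that structure under illustrative assumptions; the FIR coefficients, horizon, weights, and limits are invented, not the paper's:

        import cvxpy as cp
        import numpy as np

        h = np.array([0.0, 0.4, 0.3, 0.15, 0.1])   # illustrative FIR impulse-response coefficients
        N = 20                                      # prediction horizon
        r = np.ones(N)                              # output setpoint over the horizon
        u_past = np.zeros(len(h) - 1)               # inputs already applied (oldest first)
        lam = 0.1                                   # weight on input-rate regularization

        u = cp.Variable(N)                          # future input moves to optimize
        u_full = cp.hstack([u_past, u])             # past inputs followed by future ones
        # FIR prediction y[k] = sum_i h[i] * u[k - i], realized as a sliding dot product.
        y = cp.hstack([cp.sum(cp.multiply(h[::-1], u_full[k:k + len(h)])) for k in range(N)])
        du = cp.diff(u)

        cost = cp.sum_squares(y - r) + lam * cp.sum_squares(du)
        constraints = [u >= 0, u <= 2, cp.abs(du) <= 0.5]   # input and input-rate limits
        cp.Problem(cp.Minimize(cost), constraints).solve()
        print("first move to apply:", float(u.value[0]))    # receding horizon: apply u[0], re-solve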

  3. The random transverse field Ising model in d = 2: analysis via boundary strong disorder renormalization

    Science.gov (United States)

    Monthus, Cécile; Garel, Thomas

    2012-09-01

    To avoid the complicated topology of surviving clusters induced by standard strong disorder RG in dimension d > 1, we introduce a modified procedure called ‘boundary strong disorder RG’ where the order of decimations is chosen a priori. We apply this modified procedure numerically to the random transverse field Ising model in dimension d = 2. We find that the location of the critical point, the activated exponent ψ ≃ 0.5 of the infinite-disorder scaling, and the finite-size correlation exponent ν_FS ≃ 1.3 are compatible with the values obtained previously using standard strong disorder RG. Our conclusion is thus that strong disorder RG is very robust with respect to changes in the order of decimations. In addition, we analyze the RG flows within the two phases in more detail, to show explicitly the presence of various correlation length exponents: we measure the typical correlation exponent ν_typ ≃ 0.64 for the disordered phase (this value is very close to the correlation exponent ν_Q^pure(d = 2) ≅ 0.63 of the pure two-dimensional quantum Ising model), and the typical exponent ν_h ≃ 1 for the ordered phase. These values satisfy the relations between critical exponents imposed by the expected finite-size scaling properties at infinite-disorder critical points. We also measure, within the disordered phase, the fluctuation exponent ω ≃ 0.35, which is compatible with the directed polymer exponent ω_DP(1+1) = 1/3 in (1 + 1) dimensions.

  4. Modeling a nonperturbative spinor vacuum interacting with a strong gravitational wave

    Energy Technology Data Exchange (ETDEWEB)

    Dzhunushaliev, Vladimir [Al-Farabi Kazakh National University, Department of Theoretical and Nuclear Physics, Almaty (Kazakhstan); Al-Farabi Kazakh National University, Institute of Experimental and Theoretical Physics, Almaty (Kazakhstan); Folomeev, Vladimir [Institute of Physicotechnical Problems and Material Science, NAS of the Kyrgyz Republic, Bishkek (Kyrgyzstan)

    2015-07-15

    We consider the propagation of strong gravitational waves interacting with a nonperturbative vacuum of spinor fields. To describe the latter, we suggest an approximate model. The corresponding Einstein equation has the form of the Schroedinger equation. Its gravitational-wave solution is analogous to the solution of the Schroedinger equation for an electron moving in a periodic potential. The general solution for the periodic gravitational waves is found. The analog of the Kronig-Penney model for gravitational waves is considered. It is shown that the suggested gravitational-wave model permits the existence of weak electric charge and current densities concomitant with the gravitational wave. Based on this observation, a possible experimental verification of the model is suggested. (orig.)

  5. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone Some Verification or Validation method, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness Levels.

  6. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees was entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low, medium or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p = 0.000). Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
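
    The two statistical comparisons quoted (Friedman across all three models, Wilcoxon between Previser and CAT) can be reproduced in outline with scipy; the risk categories below are synthetic stand-ins for the study's 109 examinees, coded 0 = low, 1 = medium, 2 = high:

        import numpy as np
        from scipy.stats import friedmanchisquare, wilcoxon

        rng = np.random.default_rng(1)
        n = 109                                    # number of examinees in the study

        # Synthetic per-patient risk categories standing in for the three models.
        cariogram = rng.choice([0, 1, 2], size=n, p=[0.2, 0.7, 0.1])
        previser  = rng.choice([0, 1, 2], size=n, p=[0.2, 0.4, 0.4])
        cat       = rng.choice([0, 1, 2], size=n, p=[0.2, 0.4, 0.4])

        # Do the three models assign risk differently? (paired, non-parametric)
        stat, p = friedmanchisquare(cariogram, previser, cat)
        print(f"Friedman chi-square = {stat:.1f}, p = {p:.4f}")

        # Pairwise follow-up between Previser and CAT, as in the study.
        stat, p = wilcoxon(previser, cat)
        print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")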

  7. Interaction effects in a microscopic quantum wire model with strong spin-orbit interaction

    Science.gov (United States)

    Winkler, G. W.; Ganahl, M.; Schuricht, D.; Evertz, H. G.; Andergassen, S.

    2017-06-01

    We investigate the effect of strong interactions on the spectral properties of quantum wires with strong Rashba spin-orbit (SO) interaction in a magnetic field, using a combination of matrix product state and bosonization techniques. Quantum wires with strong Rashba SO interaction and magnetic field exhibit a partial gap in one-half of the conducting modes. Such systems have attracted widespread experimental and theoretical attention due to their unusual physical properties, among which are spin-dependent transport, or a topological superconducting phase when under the proximity effect of an s-wave superconductor. As a microscopic model for the quantum wire we study an extended Hubbard model with SO interaction and Zeeman field. We obtain spin-resolved spectral densities from the real-time evolution of excitations, and calculate the phase diagram. We find that interactions increase the pseudogap at k = 0 and thus also enhance the Majorana-supporting phase and stabilize the helical spin order. Furthermore, we calculate the optical conductivity and compare it with the low-energy spiral Luttinger liquid result, obtained from field-theoretical calculations. With interactions, the optical conductivity is dominated by an exotic excitation, a bound soliton-antisoliton pair known as a breather state. We visualize the oscillating motion of the breather state, which could provide the route to their experimental detection in e.g. cold atom experiments.

  8. Link Prediction via Sparse Gaussian Graphical Model

    Directory of Open Access Journals (Sweden)

    Liangliang Zhang

    2016-01-01

    Full Text Available Link prediction is an important task in complex network analysis. Traditional link prediction methods are limited by network topology and a lack of node property information, which makes predicting links challenging. In this study, we address link prediction using a sparse Gaussian graphical model and demonstrate its theoretical and practical effectiveness. In theory, link prediction is executed by estimating the inverse covariance matrix of samples to overcome information limits. The proposed method was evaluated with four small and four large real-world datasets. The experimental results show that the area under the curve (AUC) value obtained by the proposed method improved by an average of 3% on the small and 12.5% on the large datasets, compared to 13 mainstream similarity methods. This method outperforms the baseline method, and the prediction accuracy is superior to mainstream methods when using only 80% of the training set. The method also provides significantly higher AUC values when using only 60% of the training set in the Dolphin and Taro datasets. Furthermore, the error rate of the proposed method demonstrates superior performance with all datasets compared to mainstream methods.
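
    In outline, the method fits a sparse precision (inverse covariance) matrix to per-node samples and scores candidate links by the implied partial correlations. A minimal Python sketch with scikit-learn's GraphicalLasso (synthetic data and regularization; not the paper's datasets or tuning):

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)

        # Synthetic "samples per node": rows are observations, columns are nodes.
        # Nodes 0 and 1 are made dependent so an edge should be recovered between them.
        n_samples, n_nodes = 500, 6
        X = rng.normal(size=(n_samples, n_nodes))
        X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.normal(size=n_samples)

        model = GraphicalLasso(alpha=0.05).fit(X)
        P = model.precision_                      # sparse inverse covariance estimate

        # Score each node pair by the magnitude of its partial correlation.
        def partial_corr(P, i, j):
            return -P[i, j] / np.sqrt(P[i, i] * P[j, j])

        scores = {(i, j): abs(partial_corr(P, i, j))
                  for i in range(n_nodes) for j in range(i + 1, n_nodes)}
        print(max(scores, key=scores.get))        # expected: (0, 1)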

  9. A theoretical model of strong and moderate El Niño regimes

    Science.gov (United States)

    Takahashi, Ken; Karamperidou, Christina; Dewitte, Boris

    2018-02-01

    The existence of two regimes for El Niño (EN) events, moderate and strong, has been previously shown in the GFDL CM2.1 climate model and also suggested in observations. The two regimes have been proposed to originate from the nonlinearity in the Bjerknes feedback, associated with a threshold in sea surface temperature (T_c) that needs to be exceeded for deep atmospheric convection to occur in the eastern Pacific. However, although the recent 2015-16 EN event provides a new data point consistent with the sparse strong EN regime, it is not enough to statistically reject the null hypothesis of a unimodal distribution based on observations alone. Nevertheless, we consider the possibility suggestive enough to explore it with a simple theoretical model based on the nonlinear Bjerknes feedback. In this study, we implemented this nonlinear mechanism in the recharge-discharge (RD) ENSO model and show that it is sufficient to produce the two EN regimes, i.e. a bimodal distribution in peak surface temperature (T) during EN events. The only modification introduced to the original RD model is that the net damping is suppressed when T exceeds T_c, resulting in a weak nonlinearity in the system. Due to the damping, the model is globally stable and it requires stochastic forcing to maintain the variability. The sustained low-frequency component of the stochastic forcing plays a key role in the onset of strong EN events (i.e. for T > T_c), at least as important as the precursor positive heat content anomaly (h). High-frequency forcing helps some EN events to exceed T_c, increasing the number of strong events, but the rectification effect is small and the overall number of EN events is little affected by this forcing. Using the Fokker-Planck equation, we show how the bimodal probability distribution of EN events arises from the nonlinear Bjerknes feedback and also propose that the increase in the net feedback with increasing T is a necessary condition for bimodality in the RD model.
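
    The mechanism is easy to illustrate numerically: a damped, stochastically forced recharge-discharge oscillator whose damping is switched off above the convective threshold. A toy Python sketch (Euler-Maruyama integration; all coefficients are illustrative, not the paper's):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy recharge-discharge (RD) oscillator with the threshold nonlinearity
        # described above:
        #   dT/dt = a*h - damping(T)*T + noise,    dh/dt = -b*T - r*h
        a, b, r, d0, T_c = 0.8, 0.8, 0.1, 0.4, 1.5
        dt, n_steps, sigma = 0.1, 200_000, 0.5

        T, h = 0.0, 0.0
        traj = np.empty(n_steps)
        for k in range(n_steps):
            damping = 0.0 if T > T_c else d0            # damping suppressed above T_c
            noise = sigma * np.sqrt(dt) * rng.normal()  # stochastic forcing term
            T, h = (T + dt * (a * h - damping * T) + noise,
                    h + dt * (-b * T - r * h))
            traj[k] = T

        # The distribution of warm-event peaks extracted from `traj` is what
        # develops the moderate/strong bimodality discussed in the abstract.
        print(f"fraction of time above T_c: {(traj > T_c).mean():.3f}")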

  10. Modelling alongshore flow in a semi-enclosed lagoon strongly forced by tides and waves

    Science.gov (United States)

    Taskjelle, Torbjørn; Barthel, Knut; Christensen, Kai H.; Furaca, Noca; Gammelsrød, Tor; Hoguane, António M.; Nharreluga, Bilardo

    2014-08-01

    Alongshore flows strongly driven by tides and waves are studied in the context of a one-dimensional numerical model. Observations from field surveys performed in a semi-enclosed lagoon (1.7 km × 0.2 km) outside Xai-Xai, Mozambique, are used to validate the model results. The model is able to capture most of the observed temporal variability of the current, but sea surface height tends to be overestimated at high tide, especially during high wave events. Inside the lagoon we observed a mainly uni-directional alongshore current, with speeds up to 1 m s⁻¹. The current varies primarily with the tide, being close to zero near low tide, generally increasing during flood and decreasing during ebb. The observations revealed a local minimum in the alongshore flow at high tide, which the model was successful in reproducing. Residence times in the lagoon were calculated to be less than one hour, with wave forcing dominating the flushing. At this beach a high number of drowning casualties have occurred, but no connection was found between them and strong current events in a simulation covering the period 2011-2012.

  11. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and spacecraft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules, a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  12. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Genetic models of homosexuality: generating testable predictions

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  14. The random transverse field Ising model in d = 2: analysis via boundary strong disorder renormalization

    International Nuclear Information System (INIS)

    Monthus, Cécile; Garel, Thomas

    2012-01-01

    To avoid the complicated topology of surviving clusters induced by standard strong disorder RG in dimension d > 1, we introduce a modified procedure called ‘boundary strong disorder RG’ where the order of decimations is chosen a priori. We apply this modified procedure numerically to the random transverse field Ising model in dimension d = 2. We find that the location of the critical point, the activated exponent ψ ≃ 0.5 of the infinite-disorder scaling, and the finite-size correlation exponent ν_FS ≃ 1.3 are compatible with the values obtained previously using standard strong disorder RG. Our conclusion is thus that strong disorder RG is very robust with respect to changes in the order of decimations. In addition, we analyze the RG flows within the two phases in more detail, to show explicitly the presence of various correlation length exponents: we measure the typical correlation exponent ν_typ ≃ 0.64 for the disordered phase (this value is very close to the correlation exponent ν_Q^pure(d = 2) ≅ 0.63 of the pure two-dimensional quantum Ising model), and the typical exponent ν_h ≃ 1 for the ordered phase. These values satisfy the relations between critical exponents imposed by the expected finite-size scaling properties at infinite-disorder critical points. We also measure, within the disordered phase, the fluctuation exponent ω ≃ 0.35, which is compatible with the directed polymer exponent ω_DP(1+1) = 1/3 in (1 + 1) dimensions. (paper)

  15. Ginzburg-Landau expansion in strongly disordered attractive Anderson-Hubbard model

    Science.gov (United States)

    Kuchinskii, E. Z.; Kuleeva, N. A.; Sadovskii, M. V.

    2017-07-01

    We have studied disordering effects on the coefficients of Ginzburg-Landau expansion in powers of superconducting order parameter in the attractive Anderson-Hubbard model within the generalized DMFT+Σ approximation. We consider the wide region of attractive potentials U from the weak coupling region, where superconductivity is described by BCS model, to the strong coupling region, where the superconducting transition is related with Bose-Einstein condensation (BEC) of compact Cooper pairs formed at temperatures essentially larger than the temperature of superconducting transition, and a wide range of disorder—from weak to strong, where the system is in the vicinity of Anderson transition. In the case of semielliptic bare density of states, disorder's influence upon the coefficients A and B of the square and the fourth power of the order parameter is universal for any value of electron correlation and is related only to the general disorder widening of the bare band (generalized Anderson theorem). Such universality is absent for the gradient term expansion coefficient C. In the usual theory of "dirty" superconductors, the C coefficient drops with the growth of disorder. In the limit of strong disorder in BCS limit, the coefficient C is very sensitive to the effects of Anderson localization, which lead to its further drop with disorder growth up to the region of the Anderson insulator. In the region of BCS-BEC crossover and in BEC limit, the coefficient C and all related physical properties are weakly dependent on disorder. In particular, this leads to relatively weak disorder dependence of both penetration depth and coherence lengths, as well as of related slope of the upper critical magnetic field at superconducting transition, in the region of very strong coupling.
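
    For orientation, the coefficients A, B, and C discussed above are those of the standard Ginzburg-Landau expansion of the free energy in powers of the superconducting order parameter Δ (written here schematically; the paper's normalization conventions may differ):

        F[\Delta] - F_n = \int d^3 r \,\Big[\, A\,|\Delta(\mathbf{r})|^2
            + \tfrac{B}{2}\,|\Delta(\mathbf{r})|^4
            + C\,|\nabla\Delta(\mathbf{r})|^2 \,\Big]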

  16. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
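
    The predictor itself, the mean magnitude of the AR(5) poles, is straightforward to compute: fit the AR coefficients to an SEMG window by least squares and take the roots of the characteristic polynomial. A Python sketch with a synthetic signal standing in for the ME3000 recordings:

        import numpy as np

        def ar_pole_mean_magnitude(x, order=5):
            """Fit an AR(order) model by least squares and return the mean
            magnitude of its poles (roots of the characteristic polynomial)."""
            # Lagged design matrix: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]
            X = np.column_stack([x[order - i - 1 : len(x) - i - 1] for i in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            # Poles are roots of z^p - a1*z^(p-1) - ... - ap = 0.
            poles = np.roots(np.concatenate(([1.0], -a)))
            return np.abs(poles).mean()

        # Synthetic SEMG-like signal standing in for one exercise repetition.
        rng = np.random.default_rng(0)
        t = np.arange(2000) / 1000.0
        semg = np.sin(2 * np.pi * 60 * t) * rng.normal(1, 0.3, t.size) + 0.1 * rng.normal(size=t.size)
        print(ar_pole_mean_magnitude(semg))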

  17. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
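
    Scoring a mean-plus-variance model "in terms of the data likelihood" usually amounts to the average negative log predictive density (NLPD) of held-out measurements under the model's per-location Gaussian prediction. A minimal Python sketch (all numbers illustrative):

        import numpy as np

        def nlpd(y, mu, var):
            """Average negative log predictive density of held-out measurements y
            under per-location Gaussian predictions N(mu, var)."""
            return np.mean(0.5 * np.log(2 * np.pi * var) + (y - mu) ** 2 / (2 * var))

        # Illustrative held-out concentrations and two rival models' predictions.
        y  = np.array([0.10, 0.80, 0.30, 0.05])
        mu = np.array([0.12, 0.60, 0.35, 0.07])        # both models' predicted means
        var_flat  = np.full_like(y, 0.05)              # constant-variance model
        var_adapt = np.array([0.01, 0.20, 0.02, 0.01]) # variance tracks fluctuations
        print(nlpd(y, mu, var_flat), nlpd(y, mu, var_adapt))  # lower NLPD = better model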

  18. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to

  19. A Simple Model of Fields Including the Strong or Nuclear Force and a Cosmological Speculation

    Directory of Open Access Journals (Sweden)

    David L. Spencer

    2016-10-01

    Full Text Available Reexamining the assumptions underlying the General Theory of Relativity and calling an object's gravitational field its inertia, and acceleration simply resistance to that inertia, yields a simple field model where the potential (kinetic) energy of a particle at rest is its capacity to move itself when its inertial field becomes imbalanced. The model then attributes electromagnetic and strong forces to the effects of changes in basic particle shape. Following up on the model's assumption that the relative intensity of a particle's gravitational field is always inversely related to its perceived volume, and assuming that all black holes spin, may create the possibility of a cosmic rebound where a final spinning black hole ends with a new Big Bang.

  20. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress of prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
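
    The flavor of such linguistic rules is easy to convey in code: fuzzify crisp financial ratios through membership functions, then take a rule's firing strength as the risk score. A toy one-rule Python sketch (membership breakpoints and the rule itself are invented for illustration, not taken from the paper):

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def bankruptcy_risk(liquidity_ratio, debt_ratio):
            # Fuzzify the crisp inputs into linguistic terms (illustrative breakpoints).
            liquidity_low = tri(liquidity_ratio, -0.5, 0.0, 1.0)
            debt_high     = tri(debt_ratio, 0.5, 1.0, 2.5)
            # Rule: IF liquidity is LOW AND debt is HIGH THEN risk is HIGH.
            # (min acts as the fuzzy AND; the firing strength is the risk score.)
            return min(liquidity_low, debt_high)

        print(bankruptcy_risk(liquidity_ratio=0.4, debt_ratio=1.2))  # 0.6 -> elevated risk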

  1. Strong motion modeling at the Paducah Diffusion Facility for a large New Madrid earthquake

    International Nuclear Information System (INIS)

    Herrmann, R.B.

    1991-01-01

    The Paducah Diffusion Facility is within 80 kilometers of the location of the very large New Madrid earthquakes which occurred during the winter of 1811-1812. Because of their size, seismic moment of 2.0 × 10²⁷ dyne-cm or moment magnitude Mw = 7.5, the possible recurrence of these earthquakes is a major element in the assessment of seismic hazard at the facility. Probabilistic hazard analysis can provide uniform hazard response spectra estimates for structure evaluation, but a deterministic modeling of such a large earthquake can provide strong constraints on the expected duration of motion. The large earthquake is modeled by specifying the earthquake fault and its orientation with respect to the site, and by specifying the rupture process. Synthetic time histories, based on forward modeling of the wavefield, from each subelement are combined to yield a three-component time history at the site. Various simulations are performed to sufficiently exercise possible spatial and temporal distributions of energy release on the fault. Preliminary results demonstrate the sensitivity of the method to various assumptions, and also indicate strongly that the total duration of ground motion at the site is controlled primarily by the length of the rupture process on the fault.
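
    The quoted figures are linked by the standard moment-magnitude relation of Hanks and Kanamori (with the seismic moment M_0 in dyne-cm):

        M_w = \tfrac{2}{3}\,\log_{10} M_0 - 10.7,
        \qquad
        M_0 = 2.0 \times 10^{27}\ \text{dyne·cm}
        \;\Rightarrow\;
        M_w = \tfrac{2}{3}(27.30) - 10.7 \approx 7.5 .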

  2. Predictive Models in Differentiating Vertebral Lesions Using Multiparametric MRI.

    Science.gov (United States)

    Rathore, R; Parihar, A; Dwivedi, D K; Dwivedi, A K; Kohli, N; Garg, R K; Chandra, A

    2017-12-01

    Conventional MR imaging has high sensitivity but limited specificity in differentiating various vertebral lesions. We aimed to assess the ability of multiparametric MR imaging in differentiating spinal vertebral lesions and to develop statistical models for predicting the probability of malignant vertebral lesions. One hundred twenty-six consecutive patients underwent multiparametric MRI (conventional MR imaging, diffusion-weighted MR imaging, and in-phase/opposed-phase imaging) for vertebral lesions. Vertebral lesions were divided into 3 subgroups: infectious, noninfectious benign, and malignant. The cutoffs for apparent diffusion coefficient (expressed as 10⁻³ mm²/s) and signal intensity ratio values were calculated, and 3 predictive models were established for differentiating these subgroups. Of the lesions of the 126 patients, 62 were infectious, 22 were noninfectious benign, and 42 were malignant. The mean ADC was 1.23 ± 0.16 for infectious, 1.41 ± 0.31 for noninfectious benign, and 1.01 ± 0.22 × 10⁻³ mm²/s for malignant lesions. The mean signal intensity ratio was 0.80 ± 0.13 for infectious, 0.75 ± 0.19 for noninfectious benign, and 0.98 ± 0.11 for the malignant group. The combination of ADC and signal intensity ratio showed strong discriminatory ability to differentiate lesion type. We found an area under the curve of 0.92 for the predictive model in differentiating infectious from malignant lesions and an area under the curve of 0.91 for the predictive model in differentiating noninfectious benign from malignant lesions. On the basis of the mean ADC and signal intensity ratio, we established automated statistical models that would be helpful in differentiating vertebral lesions. Our study shows that multiparametric MRI differentiates various vertebral lesions, and we established prediction models for the same. © 2017 by American Journal of Neuroradiology.
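
    In outline, each predictive model is a two-predictor classifier on ADC and signal intensity ratio, judged by the area under the ROC curve. A Python/scikit-learn sketch with synthetic values drawn around the group means quoted above (not the study's actual data or modeling details):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Synthetic stand-ins: infectious lesions (label 0) vs malignant (label 1),
        # drawn around the group means/SDs quoted in the abstract.
        n = 60
        adc = np.concatenate([rng.normal(1.23, 0.16, n), rng.normal(1.01, 0.22, n)])
        sir = np.concatenate([rng.normal(0.80, 0.13, n), rng.normal(0.98, 0.11, n)])
        y = np.concatenate([np.zeros(n), np.ones(n)])

        X = np.column_stack([adc, sir])
        model = LogisticRegression().fit(X, y)

        # Predicted probability of malignancy and discriminatory ability (AUC).
        p_malignant = model.predict_proba(X)[:, 1]
        print(f"AUC = {roc_auc_score(y, p_malignant):.2f}")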

  3. Predictive Modelling of Contagious Deforestation in the Brazilian Amazon

    Science.gov (United States)

    Rosa, Isabel M. D.; Purves, Drew; Souza, Carlos; Ewers, Robert M.

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation are uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges “bottom up”, as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated–pre- and post-PPCDAM (“Plano de Ação para Proteção e Controle do Desmatamento na Amazônia”)–the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is
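
    The contagious ingredient is, in essence, a stochastic cellular automaton: a pixel's clearing probability grows with the number of already-deforested neighbours and shrinks under protection. A toy Python sketch of that local rule (grid, rates, and the protected strip are all invented for illustration; periodic edges for brevity):

        import numpy as np

        rng = np.random.default_rng(0)
        size, steps = 100, 40
        forest = np.ones((size, size), dtype=bool)        # True = forested pixel
        forest[size // 2, size // 2] = False              # seed clearing (e.g. a road cell)
        protected = np.zeros((size, size), dtype=bool)
        protected[:, :20] = True                          # a protected strip

        base_p, contagion, protection_factor = 0.001, 0.05, 0.1
        for _ in range(steps):
            d = (~forest).astype(np.int8)                 # deforested indicator
            nbrs = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                    np.roll(d, 1, 1) + np.roll(d, -1, 1)) # deforested 4-neighbours
            p = base_p + contagion * nbrs                 # contagious local clearing rate
            p[protected] *= protection_factor             # protection lowers the rate
            forest &= rng.random(forest.shape) >= p       # clear with probability p

        print(f"forest remaining after {steps} steps: {forest.mean():.1%}")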

  4. Predictive modelling of contagious deforestation in the Brazilian Amazon.

    Science.gov (United States)

    Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M

    2013-01-01

    Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation are uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated-pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia")-the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently

  5. Three-loop Standard Model effective potential at leading order in strong and top Yukawa couplings

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Stephen P. [Santa Barbara, KITP

    2014-01-08

    I find the three-loop contribution to the effective potential for the Standard Model Higgs field, in the approximation that the strong and top Yukawa couplings are large compared to all other couplings, using dimensional regularization with modified minimal subtraction. Checks follow from gauge invariance and renormalization group invariance. I also briefly comment on the special problems posed by Goldstone boson contributions to the effective potential, and on the numerical impact of the result on the relations between the Higgs vacuum expectation value, mass, and self-interaction coupling.

  6. Simple model of a Feshbach resonance in the strong-coupling regime

    Science.gov (United States)

    Wasak, T.; Krych, M.; Idziaszek, Z.; Trippenbach, M.; Avishai, Y.; Band, Y. B.

    2014-11-01

    We use the dressed potentials obtained in the adiabatic representation of two coupled channels to calculate s-wave Feshbach resonances in a three-dimensional spherically symmetric potential with an open channel interacting with a closed channel. Analytic expressions for the s-wave scattering length a and the number of resonances are obtained for a piecewise constant model with a piecewise constant interaction of the open and closed channels near the origin. We show analytically and numerically that, for strong enough coupling strength, Feshbach resonances can exist even when the closed channel does not have a bound state.

  7. Predictive Models for Carcinogenicity and Mutagenicity ...

    Science.gov (United States)

    Mutagenicity and carcinogenicity are endpoints of major environmental and regulatory concern. These endpoints are also important targets for development of alternative methods for screening and prediction due to the large number of chemicals of potential concern and the tremendous cost (in time, money, animals) of rodent carcinogenicity bioassays. Both mutagenicity and carcinogenicity involve complex, cellular processes that are only partially understood. Advances in technologies and generation of new data will permit a much deeper understanding. In silico methods for predicting mutagenicity and rodent carcinogenicity based on chemical structural features, along with current mutagenicity and carcinogenicity data sets, have performed well for local prediction (i.e., within specific chemical classes), but are less successful for global prediction (i.e., for a broad range of chemicals). The predictivity of in silico methods can be improved by improving the quality of the database and endpoints used for modelling. In particular, in vitro assays for clastogenicity need to be improved to reduce false positives (relative to rodent carcinogenicity) and to detect compounds that do not interact directly with DNA or have epigenetic activities. New assays emerging to complement or replace some of the standard assays include Vitotox™, GreenScreen GC, and RadarScreen. The needs of industry and regulators to assess thousands of compounds necessitate the development of high-throughput ...

  8. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

    INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the

  9. A MAGNIFIED GLANCE INTO THE DARK SECTOR: PROBING COSMOLOGICAL MODELS WITH STRONG LENSING IN A1689

    International Nuclear Information System (INIS)

    Magaña, Juan; Motta, V.; Cárdenas, Victor H.; Verdugo, T.; Jullo, Eric

    2015-01-01

    In this paper we constrain four alternative models to the late cosmic acceleration in the universe: Chevallier–Polarski–Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), and cosmic microwave background (CMB). We found that the CPL constraints obtained for the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SN Ia, and CMB estimations. We also compute the figure of merit as a tool to quantify the goodness of fit of the data. Our results suggest that the SL method provides statistically significant constraints on the CPL parameters but is weak for those of the other models. Finally, we show that the use of the SL measurements in galaxy clusters is a promising and powerful technique to constrain cosmological models. The advantage of this method is that cosmological parameters are estimated by modeling the SL features for each underlying cosmology. These estimations could be further improved by SL constraints coming from other galaxy clusters

  10. Targeting G-quadruplex DNA Structures by EMICORON has a strong antitumor efficacy against advanced models of human colon cancer

    DEFF Research Database (Denmark)

    Porru, Manuela; Artuso, Simona; Salvati, Erica

    2015-01-01

    ... of human colon cancer that could adequately predict human clinical outcomes. Our results showed that EMICORON was well tolerated in mice, as no adverse effects were reported, and a low ratio of sensitivity across human and mouse bone marrow cells was observed, indicating a good potential for reaching similar blood levels in humans. Moreover, EMICORON showed a marked therapeutic efficacy, as it inhibited the growth of patient-derived xenografts (PDX) and orthotopic colon cancer and strongly reduced the dissemination of tumor cells to lymph nodes, intestine, stomach, and liver. Finally, activation of DNA damage and impairment of proliferation and angiogenesis are proved to be key determinants of EMICORON antitumoral activity. Altogether, our results, performed on advanced experimental models of human colon cancer that bridge the translational gap between preclinical and clinical studies ...

  11. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

    This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...

  12. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as tool for modeling the FDM dimensional behavior in a wide range of deposition angles....

  13. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two values of L (0.254 mm, 0.330 mm) was produced by comparing predicted values with external face-to-face measurements. After removing outliers, the results show that the developed two-parameter model can serve as a tool for modeling the FDM dimensional behavior in a wide range of deposition angles.

  14. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  15. Predictive modelling of evidence informed teaching

    OpenAIRE

    Zhang, Dell; Brown, C.

    2017-01-01

    In this paper, we analyse the questionnaire survey data collected from 79 English primary schools about the situation of evidence informed teaching, where the evidences could come from research journals or conferences. Specifically, we build a predictive model to see what external factors could help to close the gap between teachers’ belief and behaviour in evidence informed teaching, which is the first of its kind to our knowledge. The major challenge, from the data mining perspective, is th...

  16. A Predictive Model for Cognitive Radio

    Science.gov (United States)

    2006-09-14

    Vadde et al. have applied response surface methodology to produce a model for prediction of the response in a given situation ... configurations to those that best meet our communication ... resulting set of configurations randomly or apply additional screening criteria. [3] K. K. Vadde and V. R. Syrotiuk, "Factor interaction on service delivery in mobile ad hoc networks," 2004. [4] K. K. Vadde, M.-V. R. Syrotiuk, and D. C. Montgomery ...

  17. Tectonic predictions with mantle convection models

    Science.gov (United States)

    Coltice, Nicolas; Shephard, Grace E.

    2018-04-01

    Over the past 15 yr, numerical models of convection in Earth's mantle have made a leap forward: they can now produce self-consistent plate-like behaviour at the surface together with deep mantle circulation. These digital tools provide a new window into the intimate connections between plate tectonics and mantle dynamics, and can therefore be used for tectonic predictions, in principle. This contribution explores this assumption. First, initial conditions at 30, 20, 10 and 0 Ma are generated by driving a convective flow with imposed plate velocities at the surface. We then compute instantaneous mantle flows in response to the guessed temperature fields without imposing any boundary conditions. Plate boundaries self-consistently emerge at correct locations with respect to reconstructions, except for small plates close to subduction zones. As already observed for other types of instantaneous flow calculations, the structure of the top boundary layer and upper-mantle slab is the dominant character that leads to accurate predictions of surface velocities. Perturbations of the rheological parameters have little impact on the resulting surface velocities. We then compute fully dynamic model evolution from 30 and 10 to 0 Ma, without imposing plate boundaries or plate velocities. Contrary to instantaneous calculations, errors in kinematic predictions are substantial, although the plate layout and kinematics in several areas remain consistent with the expectations for the Earth. For these calculations, varying the rheological parameters makes a difference for plate boundary evolution. Also, identified errors in initial conditions contribute to first-order kinematic errors. This experiment shows that the tectonic predictions of dynamic models over 10 My are highly sensitive to uncertainties of rheological parameters and initial temperature field in comparison to instantaneous flow calculations. Indeed, the initial conditions and the rheological parameters can be good enough

  18. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert F.; Knox, James C.

    2016-01-01

    As part of NASA's Advanced Exploration Systems (AES) program and the Life Support Systems Project (LSSP), fully predictive models of the Four Bed Molecular Sieve (4BMS) of the Carbon Dioxide Removal Assembly (CDRA) on the International Space Station (ISS) are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  19. Models of the Strongly Lensed Quasar DES J0408-5354

    Energy Technology Data Exchange (ETDEWEB)

    Agnello, A.; et al.

    2017-02-01

    We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch grizY images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A, B, D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a 'grey' dimming of ≈ 0.8 mag. In order to understand the lens configuration, we fit different models to the relative positions of A, B, D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy (R_E ≈ 0.2″) in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and grey dimming. All models predict a main deflector with Einstein radius between 1.7″ and 2.0″, velocity dispersion 267-280 km/s and enclosed mass ≈ 6 × 10¹¹ M_⊙, even though higher resolution imaging data are needed to break residual degeneracies in model parameters. The longest time-delay (B-A) is estimated as ≈ 85 (resp. ≈ 125) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time-delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.

  20. Single-particle model of a strongly driven, dense, nanoscale quantum ensemble

    Science.gov (United States)

    DiLoreto, C. S.; Rangan, C.

    2018-01-01

    We study the effects of interatomic interactions on the quantum dynamics of a dense, nanoscale, atomic ensemble driven by a strong electromagnetic field. We use a self-consistent, mean-field technique based on the pseudospectral time-domain method and a full, three-directional basis to solve the coupled Maxwell-Liouville equations. We find that interatomic interactions generate a decoherence in the state of an ensemble on a much faster time scale than the excited-state lifetime of individual atoms. We present a single-particle model of the driven, dense ensemble by incorporating interactions into a dephasing rate. This single-particle model reproduces the essential physics of the full simulation and is an efficient way of rapidly estimating the collective dynamics of a dense ensemble.

  1. BGK-type models in strong reaction and kinetic chemical equilibrium regimes

    International Nuclear Information System (INIS)

    Monaco, R; Bianchi, M Pandolfi; Soares, A J

    2005-01-01

    A BGK-type procedure is applied to multi-component gases undergoing chemical reactions of bimolecular type. The relaxation process towards local Maxwellians, depending on the mass and number densities of each species as well as on a common velocity and temperature, is investigated in two different chemical regimes. These are the strong reaction regime, characterized by slow reactions, and the kinetic chemical equilibrium regime, where fast reactions take place. The consistency properties of both models are stated in detail. The trend to equilibrium is numerically tested and the two regimes are compared within the hydrogen-air and carbon-oxygen reaction mechanisms. In the spatially homogeneous case, it is also shown that the thermodynamical equilibrium of the models satisfactorily recovers the asymptotic equilibrium solutions of the reactive Euler equations.

  2. Oblique S and T constraints on electroweak strongly-coupled models with a light Higgs

    Energy Technology Data Exchange (ETDEWEB)

    Pich, A. [Departament de Física Teòrica, IFIC, Universitat de València - CSIC, Apt. Correus 22085, E-46071 València (Spain); Rosell, I. [Departament de Física Teòrica, IFIC, Universitat de València - CSIC, Apt. Correus 22085, E-46071 València (Spain); Departamento de Ciencias Físicas, Matemáticas y de la Computación, Universidad CEU Cardenal Herrera, c/ Sant Bartomeu 55, E-46115 Alfara del Patriarca, València (Spain); Sanz-Cillero, J.J. [Departamento de Física Teórica, Instituto de Física Teórica, Universidad Autónoma de Madrid - CSIC, c/ Nicolás Cabrera 13-15, E-28049 Cantoblanco, Madrid (Spain)

    2014-01-28

    Using a general effective Lagrangian implementing the chiral symmetry breaking SU(2){sub L}⊗SU(2){sub R}→SU(2){sub L+R}, we present a one-loop calculation of the oblique S and T parameters within electroweak strongly-coupled models with a light scalar. Imposing a proper ultraviolet behaviour, we determine S and T at next-to-leading order in terms of a few resonance parameters. The constraints from the global fit to electroweak precision data force the massive vector and axial-vector states to be heavy, with masses above the TeV scale, and suggest that the W{sup +}W{sup −} and ZZ couplings of the Higgs-like scalar should be close to the Standard Model value. Our findings are generic, since they only rely on soft requirements on the short-distance properties of the underlying strongly-coupled theory, which are widely satisfied in more specific scenarios.

  3. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, which regulate the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. Almost all existing station blackout (SBO) accident analyses assume that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, which would then disable the turbine. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models of the RCIC system components are needed to understand extended SBOs in BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  4. Predictive Modeling by the Cerebellum Improves Proprioception

    Science.gov (United States)

    Bhanpuri, Nasir H.; Okamura, Allison M.

    2013-01-01

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance. PMID:24005283

  5. On the prediction of solar activity using different neural network models

    Directory of Open Access Journals (Sweden)

    F. Fessant

    1996-01-01

    Accurate prediction of ionospheric parameters is crucial for telecommunication companies. These parameters rely strongly on solar activity. In this paper, we analyze the use of neural networks for sunspot time series prediction. Three types of models are tested and experimental results are reported for a particular sunspot time series: the IR5 index.
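
    As a minimal illustration of the neural-network approach to sunspot-series prediction, the sketch below trains a small multi-layer perceptron on lagged values of a synthetic cyclic series. This is not the IR5 data or the authors' architectures; the lag length and network size are arbitrary choices.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Synthetic sunspot-like index: an ~11-year (132-month) cycle plus noise.
    rng = np.random.default_rng(5)
    t = np.arange(600)                               # months
    series = 60 * (1 + np.sin(2 * np.pi * t / 132)) + rng.normal(0, 8, t.size)
    series /= 100.0                                  # scale to O(1) for training

    lag = 12                                         # predict from the last year
    X = np.array([series[i:i + lag] for i in range(series.size - lag)])
    y = series[lag:]

    model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
    model.fit(X[:-60], y[:-60])                      # hold out the last 5 years
    print(f"held-out R^2: {model.score(X[-60:], y[-60:]):.2f}")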

  7. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning-based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi

  8. Gamma-Ray Pulsars: Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

    Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...

  9. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under the curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
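
    For readers unfamiliar with the method, the sketch below shows the generic shape of such a random-forest analysis: cross-validated predictions followed by the sensitivity, specificity and AUC statistics quoted above. The data are synthetic stand-ins, not the study's 198-patient chart review, and the feature construction is purely illustrative.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import confusion_matrix, roc_auc_score

    # Rows are patients, columns candidate risk factors; y = 1 marks recurrence.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(198, 10))
    y = (X[:, 0] + X[:, 1] + rng.normal(size=198) > 0).astype(int)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    label = cross_val_predict(clf, X, y, cv=5)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

    tn, fp, fn, tp = confusion_matrix(y, label).ravel()
    print(f"sensitivity {tp / (tp + fn):.3f}, specificity {tn / (tn + fp):.3f}, "
          f"AUC {roc_auc_score(y, proba):.3f}")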

  10. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which are used to predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent intelligence, surveillance, and reconnaissance (ISR), since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  11. PREDICTION MODELS OF GRAIN YIELD AND CHARACTERIZATION

    Directory of Open Access Journals (Sweden)

    Narciso Ysac Avila Serrano

    2009-06-01

    With the objective of characterizing the grain yield of five cowpea cultivars and finding linear regression models to predict it, a study was developed in La Paz, Baja California Sur, Mexico. A complete randomized blocks design was used. Simple and multivariate analyses of variance were carried out using the canonical variables to characterize the cultivars. The variables clusters per plant, pods per plant, pods per cluster, seed weight per plant, seed hectoliter weight, 100-seed weight, seed length, seed width, seed thickness, pod length, pod width, pod weight, seeds per pod, and seed weight per pod showed significant differences (P≤0.05) among cultivars. The Paceño and IT90K-277-2 cultivars showed the highest seed weight per plant. The linear regression models showed correlation coefficients ≥0.92. In these models, seed weight per plant, pods per cluster, pods per plant, clusters per plant and pod length showed significant correlations (P≤0.05). In conclusion, the results showed that grain yield differs among cultivars and that, for its estimation, the prediction models give highly dependable coefficients of determination.
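
    The modeling step itself is ordinary multiple linear regression. A minimal sketch, with synthetic trait data in place of the cowpea measurements and invented effect sizes, looks like this:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 120                                          # plants
    pods_per_plant = rng.normal(25, 5, n)
    clusters_per_plant = rng.normal(8, 2, n)
    pod_length = rng.normal(15, 2, n)
    # Hypothetical yield response; the true effect sizes are in the paper.
    yield_g = (2.0 * pods_per_plant + 1.5 * clusters_per_plant
               + 0.8 * pod_length + rng.normal(0, 5, n))

    X = np.column_stack([pods_per_plant, clusters_per_plant, pod_length])
    model = LinearRegression().fit(X, yield_g)
    print(f"R^2 = {model.score(X, yield_g):.2f}")    # cf. the >=0.92 reported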

  12. Modeling of strongly heat-driven flow in partially saturated fractured porous media

    International Nuclear Information System (INIS)

    Pruess, K.; Tsang, Y.W.; Wang, J.S.Y.

    1985-01-01

    The authors have performed modeling studies on the simultaneous transport of heat, liquid water, vapor, and air in partially saturated fractured porous media, with particular emphasis on strongly heat-driven flow. The presence of fractures makes the transport problem very complex, both in terms of flow geometry and physics. The numerical simulator used for their flow calculations takes into account most of the physical effects which are important in multi-phase fluid and heat flow. It has provisions to handle the extreme non-linearities which arise in phase transitions, component disappearances, and capillary discontinuities at fracture faces. They model a region around an infinite linear string of nuclear waste canisters, taking into account both the discrete fractures and the porous matrix. From an analysis of the results obtained with explicit fractures, they develop equivalent continuum models which can reproduce the temperature, saturation, and pressure variation, and gas and liquid flow rates of the discrete fracture-porous matrix calculations. The equivalent continuum approach makes use of a generalized relative permeability concept to take into account the fracture effects. This results in a substantial simplification of the flow problem which makes larger scale modeling of complicated unsaturated fractured porous systems feasible. Potential applications for regional scale simulations and limitations of the continuum approach are discussed. 27 references, 13 figures, 2 tables

  13. Modeling of strongly heat-driven flow in partially saturated fractured porous media

    International Nuclear Information System (INIS)

    Pruess, K.; Tsang, Y.W.; Wang, J.S.Y.

    1984-10-01

    We have performed modeling studies on the simultaneous transport of heat, liquid water, vapor, and air in partially saturated fractured porous media, with particular emphasis on strongly heat-driven flow. The presence of fractures makes the transport problem very complex, both in terms of flow geometry and physics. The numerical simulator used for our flow calculations takes into account most of the physical effects which are important in multi-phase fluid and heat flow. It has provisions to handle the extreme non-linearities which arise in phase transitions, component disappearances, and capillary discontinuities at fracture faces. We model a region around an infinite linear string of nuclear waste canisters, taking into account both the discrete fractures and the porous matrix. From an analysis of the results obtained with explicit fractures, we develop equivalent continuum models which can reproduce the temperature, saturation, and pressure variation, and gas and liquid flow rates of the discrete fracture-porous matrix calculations. The equivalent continuum approach makes use of a generalized relative permeability concept to take fracture effects into account. This results in a substantial simplification of the flow problem which makes larger scale modeling of complicated unsaturated fractured porous systems feasible. Potential applications for regional scale simulations and limitations of the continuum approach are discussed. 27 references, 13 figures, 2 tables

  14. Interpreting the Strongly Lensed Supernova iPTF16geu: Time Delay Predictions, Microlensing, and Lensing Rates

    Energy Technology Data Exchange (ETDEWEB)

    More, Anupreeta; Oguri, Masamune; More, Surhud [Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), University of Tokyo, Chiba 277-8583 (Japan); Suyu, Sherry H. [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany); Lee, Chien-Hsiu, E-mail: anupreeta.more@ipmu.jp [Subaru Telescope, National Astronomical Observatory of Japan, 650 North Aohoku Place, Hilo, HI 96720 (United States)

    2017-02-01

    We present predictions for time delays between multiple images of the gravitationally lensed supernova, iPTF16geu, which was recently discovered from the intermediate Palomar Transient Factory (iPTF). As the supernova is of Type Ia where the intrinsic luminosity is usually well known, accurately measured time delays of the multiple images could provide tight constraints on the Hubble constant. According to our lens mass models constrained by the Hubble Space Telescope F814W image, we expect the maximum relative time delay to be less than a day, which is consistent with the maximum of 100 hr reported by Goobar et al. but places a stringent upper limit. Furthermore, the fluxes of most of the supernova images depart from expected values suggesting that they are affected by microlensing. The microlensing timescales are small enough that they may pose significant problems to measure the time delays reliably. Our lensing rate calculation indicates that the occurrence of a lensed SN in iPTF is likely. However, the observed total magnification of iPTF16geu is larger than expected, given its redshift. This may be a further indication of ongoing microlensing in this system.

  15. Predictive Models for Normal Fetal Cardiac Structures.

    Science.gov (United States)

    Krishnan, Anita; Pike, Jodi I; McCarter, Robert; Fulgium, Amanda L; Wilson, Emmanuel; Donofrio, Mary T; Sable, Craig A

    2016-12-01

    Clinicians rely on age- and size-specific measures of cardiac structures to diagnose cardiac disease. No universally accepted normative data exist for fetal cardiac structures, and most fetal cardiac centers do not use the same standards. The aim of this study was to derive predictive models for Z scores for 13 commonly evaluated fetal cardiac structures using a large heterogeneous population of fetuses without structural cardiac defects. The study used archived normal fetal echocardiograms in representative fetuses aged 12 to 39 weeks. Thirteen cardiac dimensions were remeasured by a blinded echocardiographer from digitally stored clips. Studies with inadequate imaging views were excluded. Regression models were developed to relate each dimension to estimated gestational age (EGA) by dates, biparietal diameter, femur length, and estimated fetal weight by the Hadlock formula. Dimension outcomes were transformed (e.g., using the logarithm or square root) as necessary to meet the normality assumption. Higher order terms, quadratic or cubic, were added as needed to improve model fit. Information criteria and adjusted R² values were used to guide final model selection. Each Z-score equation is based on measurements derived from 296 to 414 unique fetuses. EGA yielded the best predictive model for the majority of dimensions; adjusted R² values ranged from 0.72 to 0.893. However, each of the other highly correlated (r > 0.94) biometric parameters was an acceptable surrogate for EGA. In most cases, the best fitting model included squared and cubic terms to introduce curvilinearity. For each dimension, models based on EGA provided the best fit for determining normal measurements of fetal cardiac structures. Nevertheless, other biometric parameters, including femur length, biparietal diameter, and estimated fetal weight provided results that were nearly as good. Comprehensive Z-score results are available on the basis of highly predictive models derived from gestational
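
    The Z-score construction described here (a polynomial fit of a transformed dimension against a biometric predictor, then standardization by the residual spread) can be sketched in a few lines. The data, the square-root transform and the cubic degree below are assumptions for illustration, not the published equations:

    import numpy as np

    rng = np.random.default_rng(7)
    ega = rng.uniform(12, 39, 350)                   # gestational age, weeks
    # Transformed cardiac dimension (square root keeps residuals ~normal here).
    dim = np.sqrt(0.02 * ega**2 + 0.5 * ega) + rng.normal(0, 0.15, ega.size)

    coef = np.polyfit(ega, dim, 3)                   # cubic fit, as in the text
    resid_sd = np.std(dim - np.polyval(coef, ega), ddof=4)

    def z_score(measured_transformed, ega_weeks):
        """Measurement must be on the same transformed scale as the fit."""
        return (measured_transformed - np.polyval(coef, ega_weeks)) / resid_sd

    print(f"{z_score(1.6, 24.0):+.2f}")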

  16. An analytical model for climatic predictions

    International Nuclear Information System (INIS)

    Njau, E.C.

    1990-12-01

    A climatic model based upon analytical expressions is presented. This model is capable of making long-range predictions of heat energy variations on regional or global scales. These variations can then be transformed into corresponding variations of some other key climatic parameters since weather and climatic changes are basically driven by differential heating and cooling around the earth. On the basis of the mathematical expressions upon which the model is based, it is shown that the global heat energy structure (and hence the associated climatic system) are characterized by zonally as well as latitudinally propagating fluctuations at frequencies downward of 0.5 day⁻¹. We have calculated the propagation speeds for those particular frequencies that are well documented in the literature. The calculated speeds are in excellent agreement with the measured speeds. (author). 13 refs

  17. An Anisotropic Hardening Model for Springback Prediction

    International Nuclear Information System (INIS)

    Zeng, Danielle; Xia, Z. Cedric

    2005-01-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or drawbeads during the sheet metal forming process. This model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.

  18. Advanced Practice Nursing Committee on Process Improvement in Trauma: An Innovative Application of the Strong Model.

    Science.gov (United States)

    West, Sarah Katherine

    2016-01-01

    This article aims to summarize the successes and future implications for a nurse practitioner-driven committee on process improvement in trauma. The trauma nurse practitioner is uniquely positioned to recognize the need for clinical process improvement and enact change within the clinical setting. Application of the Strong Model of Advanced Practice proves to actively engage the trauma nurse practitioner in process improvement initiatives. Through enhancing nurse practitioner professional engagement, the committee aims to improve health care delivery to the traumatically injured patient. A retrospective review of the committee's first year reveals trauma nurse practitioner success in the domains of direct comprehensive care, support of systems, education, and leadership. The need for increased trauma nurse practitioner involvement has been identified for the domains of research and publication.

  19. Web tools for predictive toxicology model building.

    Science.gov (United States)

    Jeliazkova, Nina

    2012-07-01

    The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by advances in Internet technologies, the current generation of web systems is starting to expand into areas traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web are compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide a comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access, and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.

  20. [Endometrial cancer: Predictive models and clinical impact].

    Science.gov (United States)

    Bendifallah, Sofiane; Ballester, Marcos; Daraï, Emile

    2017-12-01

    In France, in 2015, endometrial cancer (EC) was the most frequent gynecological cancer in terms of incidence and the fourth leading cause of cancer in women, with about 8151 new cases and nearly 2179 deaths reported. Treatments (surgery, external radiotherapy, brachytherapy and chemotherapy) are currently delivered on the basis of an estimate of the recurrence risk, of lymph node metastasis, or of survival probability. This risk is determined from prognostic factors (clinical, histological, imaging, biological) taken alone or grouped together in classification systems, which are currently insufficient to account for the evolutionary and prognostic heterogeneity of endometrial cancer. For endometrial cancer, the concept of mathematical modeling and its application to prediction have developed in recent years. These biomathematical tools have opened a new era of care oriented towards the promotion of targeted therapies and personalized treatments. Many predictive models have been published to estimate the risk of recurrence and lymph node metastasis, but only a tiny fraction of them are sufficiently relevant and of clinical utility. The possible refinements are multiple and varied, suggesting that these mathematical models could find a place in clinical practice in the near future. The development of high-throughput genomics is likely to offer a more detailed molecular characterization of the disease and its heterogeneity. Copyright © 2017 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  1. A boundary condition to the Khokhlov-Zabolotskaya equation for modeling strongly focused nonlinear ultrasound fields

    Energy Technology Data Exchange (ETDEWEB)

    Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru [Physics Faculty, Moscow State University, Leninskie Gory, 119991 Moscow (Russian Federation)

    2015-10-28

    An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.

  2. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.
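
    In data terms the PCMM is a rubric: six elements, each scored on four maturity levels (0-3). A toy assessment record under that reading, with an invented scoring, might look like:

    # The six contributing elements named in the abstract, scored 0-3.
    PCMM_ELEMENTS = (
        "representation and geometric fidelity",
        "physics and material model fidelity",
        "code verification",
        "solution verification",
        "model validation",
        "uncertainty quantification and sensitivity analysis",
    )

    def weakest_element(scores: dict) -> str:
        assert set(scores) == set(PCMM_ELEMENTS)
        assert all(0 <= s <= 3 for s in scores.values())
        low = min(scores, key=scores.get)
        return f"lowest maturity: {low} (level {scores[low]})"

    print(weakest_element(dict(zip(PCMM_ELEMENTS, (2, 1, 3, 2, 1, 0)))))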

  3. Predictions of models for environmental radiological assessment

    International Nuclear Information System (INIS)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa; Mahler, Claudio Fernando

    2011-01-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation doses and the risk to human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, obtaining them can in fact be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite: the subjectivity of modelers, exposure scenarios and pathways, the codes used and general parameters. The various models available utilize different mathematical approaches with different complexities that can result in different predictions. Thus, for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. A model intercomparison exercise supplied incompatible results for 137 Cs and 60 Co, reinforcing the need for reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  4. Two adaptive radiative transfer schemes for numerical weather prediction models

    Directory of Open Access Journals (Sweden)

    V. Venema

    2007-11-01

    Radiative transfer calculations in atmospheric models are computationally expensive, even if based on simplifications such as the δ-two-stream approximation. In most weather prediction models these parameterisation schemes are therefore called infrequently, accepting additional model error due to the persistence assumption between calls. This paper presents two so-called adaptive parameterisation schemes for radiative transfer in a limited area model: A perturbation scheme that exploits temporal correlations and a local-search scheme that mainly takes advantage of spatial correlations. Utilising these correlations and with similar computational resources, the schemes are able to predict the surface net radiative fluxes more accurately than a scheme based on the persistence assumption. An important property of these adaptive schemes is that their accuracy does not decrease much in case of strong reductions in the number of calls to the δ-two-stream scheme. It is hypothesised that the core idea can also be employed in parameterisation schemes for other processes and in other dynamical models.

  5. Nonlocal response functions for predicting shear flow of strongly inhomogeneous fluids. II. Sinusoidally driven shear and multisinusoidal inhomogeneity.

    Science.gov (United States)

    Dalton, Benjamin A; Glavatskiy, Kirill S; Daivis, Peter J; Todd, B D

    2015-07-01

    We use molecular-dynamics computer simulations to investigate the density, strain-rate, and shear-pressure responses of a simple model atomic fluid to transverse and longitudinal external forces. We have previously introduced a response function formalism for describing the density, strain-rate, and shear-pressure profiles in an atomic fluid when it is perturbed by a combination of longitudinal and transverse external forces that are independent of time and have a simple sinusoidal spatial variation. In this paper, we extend the application of the previously introduced formalism to consider the case of a longitudinal force composed of multiple sinusoidal components in combination with a single-component sinusoidal transverse force. We find that additional harmonics are excited in the density, strain-rate, and shear-pressure profiles due to couplings between the force components. By analyzing the density, strain-rate, and shear-pressure profiles in Fourier space, we are able to evaluate the Fourier coefficients of the response functions, which now have additional components describing the coupling relationships. Having evaluated the Fourier coefficients of the response functions, we are then able to accurately predict the density, velocity, and shear-pressure profiles for fluids that are under the influence of a longitudinal force composed of two or three sinusoidal components combined with a single-component sinusoidal transverse force. We also find that in the case of a multisinusoidal longitudinal force, it is sufficient to include only pairwise couplings between different longitudinal force components. This means that it is unnecessary to include couplings between three or more force components in the case of a longitudinal force composed of many Fourier components, and this paves the way for a highly accurate but tractable treatment of nonlocal transport phenomena in fluids with density and strain-rate inhomogeneities on the molecular length scale.
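
    The Fourier-space analysis mentioned above reduces, in practice, to reading harmonic amplitudes off a spatially periodic profile. A self-contained sketch with a synthetic density profile (a fundamental plus a weak coupling-induced second harmonic, not the paper's simulation data) is:

    import numpy as np

    L = 1.0                                   # spatial period of the force
    x = np.linspace(0.0, L, 256, endpoint=False)
    k = 2 * np.pi / L
    density = 0.8 + 0.10 * np.cos(k * x) + 0.03 * np.cos(2 * k * x)

    c = np.fft.rfft(density) / x.size         # one-sided FFT, normalized
    for n in range(1, 4):                     # amplitude of harmonic n is 2|c_n|
        print(f"harmonic {n}: amplitude {2 * np.abs(c[n]):.4f}")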

  6. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models in one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  7. Modeling brittle fracture, slip weakening, and variable friction in geomaterials with an embedded strong discontinuity finite element.

    Energy Technology Data Exchange (ETDEWEB)

    Regueiro, Richard A. (University of Colorado, Boulder, CO); Borja, R. I. (Stanford University, Stanford, CA); Foster, C. D. (Stanford University, Stanford, CA)

    2006-10-01

    Localized shear deformation plays an important role in a number of geotechnical and geological processes. Slope failures, the formation and propagation of faults, cracking in concrete dams, and shear fractures in subsiding hydrocarbon reservoirs are examples of important effects of shear localization. Traditional engineering analyses of these phenomena, such as limit equilibrium techniques, make certain assumptions on the shape of the failure surface as well as other simplifications. While these methods may be adequate for the applications for which they were designed, it is difficult to extrapolate the results to more general scenarios. An alternative approach is to use a numerical modeling technique, such as the finite element method, to predict localization. While standard finite elements can model a wide variety of loading situations and geometries quite well, for numerical reasons they have difficulty capturing the softening and anisotropic damage that accompanies localization. By introducing an enhancement to the element in the form of a fracture surface at an arbitrary position and orientation in the element, we can regularize the solution, model the weakening response, and track the relative motion of the surfaces. To properly model the slip along these surfaces, the traction-displacement response must be properly captured. This report focuses on the development of a constitutive model appropriate to localizing geomaterials, and the embedding of this model into the enhanced finite element framework. This modeling covers two distinct phases. The first, usually brief, phase is the weakening response as the material transitions from intact continuum to a body with a cohesionless fractured surface. Once the cohesion has been eliminated, the response along the surface is completely frictional. We have focused on a rate- and state-dependent frictional model that captures stable and unstable slip along the surface. This model is embedded numerically into the

  8. Combining GPS measurements and IRI model predictions

    International Nuclear Information System (INIS)

    Hernandez-Pajares, M.; Juan, J.M.; Sanz, J.; Bilitza, D.

    2002-01-01

    The free electrons distributed in the ionosphere (between one hundred and thousands of km in height) produce a frequency-dependent effect on Global Positioning System (GPS) signals: a delay in the pseudo-range and an advance in the carrier phase. These effects are proportional to the columnar electron density between the satellite and receiver, i.e. the integrated electron density along the ray path. Global ionospheric TEC (total electron content) maps can be obtained with GPS data from a network of ground IGS (International GPS Service) reference stations with an accuracy of a few TEC units. The comparison with the TOPEX TEC, mainly measured over the oceans far from the IGS stations, shows a mean bias and standard deviation of about 2 and 5 TECU, respectively. The discrepancies between the STEC predictions and the observed values show an RMS typically below 5 TECU (which also includes the alignment code noise). The existence of a growing database of 2-hourly global TEC maps with a resolution of 5x2.5 degrees in longitude and latitude can be used to improve the IRI prediction capability of the TEC. When the IRI predictions and the GPS estimations are compared for a three-month period around the solar maximum, they are in good agreement for middle latitudes. An overestimation of the IRI TEC has been found at the extreme latitudes, the IRI predictions being typically two times higher than the GPS estimations. Finally, local fits of the IRI model can be done by tuning the SSN from STEC GPS observations.
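
    The bias and scatter statistics quoted above are straightforward to reproduce once two TEC maps are on a common grid; the sketch below does so for synthetic stand-ins for the GPS-derived and model-predicted maps (in TECU) on the 5x2.5 degree grid mentioned in the text:

    import numpy as np

    rng = np.random.default_rng(0)
    lon = np.arange(-180.0, 180.0, 5.0)
    lat = np.arange(-87.5, 90.0, 2.5)
    # Synthetic 'GPS' TEC map and a biased, noisier 'model' map.
    gps = 20 + 15 * np.cos(np.radians(lat))[:, None] \
          + rng.normal(0, 3, (lat.size, lon.size))
    iri = gps + 2 + rng.normal(0, 5, gps.shape)

    diff = iri - gps
    print(f"mean bias {diff.mean():.1f} TECU, std {diff.std():.1f} TECU")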

  9. Mathematical models for indoor radon prediction

    International Nuclear Information System (INIS)

    Malanca, A.; Pessina, V.; Dallara, G.

    1995-01-01

    It is known that the indoor radon (Rn) concentration can be predicted by means of mathematical models. The simplest model relies on two variables only: the Rn source strength and the air exchange rate. In the Lawrence Berkeley Laboratory (LBL) model several environmental parameters are combined into a complex equation; in addition, a correlation between the ventilation rate and the Rn entry rate from the soil is admitted. The measurements were carried out using activated carbon canisters. Seventy-five measurements of Rn concentrations were made inside two rooms placed on the second floor of a building block. One of the rooms had a single-glazed window whereas the other room had a double-pane window. During three different experimental protocols, the mean Rn concentration was always higher in the room with the double-glazed window. That behavior can be accounted for by the simplest model. A further set of 450 Rn measurements was collected inside a ground-floor room with a grounding well in it. This trend may be accounted for by the LBL model.
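
    The two-variable model mentioned first is a steady-state mass balance: concentration equals source strength divided by the ventilated volume. A one-function sketch, with illustrative numbers rather than the paper's measurements:

    def indoor_radon(source_bq_per_h: float, volume_m3: float,
                     air_changes_per_h: float) -> float:
        """Steady-state Rn concentration in Bq/m^3. Radioactive decay is
        neglected: the Rn decay constant (~0.0076 per hour) is small
        compared with typical air exchange rates."""
        return source_bq_per_h / (volume_m3 * air_changes_per_h)

    # A 50 m^3 room with a 2000 Bq/h entry rate and 0.5 air changes per hour:
    print(indoor_radon(2000.0, 50.0, 0.5))   # -> 80.0 Bq/m^3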

  10. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    This paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time......). Five technical and economic aspects are taken into account to schedule tamping: (1) track degradation of the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality...... recovery on the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used in the proposed maintenance model for time periods of two to four years. The total cost can be reduced by up to 50...
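
    A heavily simplified version of such a tamping MIP can be written with an off-the-shelf solver. In the toy model below the longitudinal-level standard deviation degrades linearly and each tamping restores a fixed amount (the paper's quality-dependent recovery is omitted); all rates, the threshold and the costs are invented:

    import pulp

    T, segments = range(8), range(3)            # 8 periods, 3 track segments
    rate = {0: 0.10, 1: 0.18, 2: 0.14}          # degradation, mm per period
    sigma0, recovery, limit = 0.9, 0.6, 1.5     # initial SD, reset, threshold

    prob = pulp.LpProblem("tamping", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("tamp", (segments, T), cat="Binary")
    prob += pulp.lpSum(x[s][t] for s in segments for t in T)  # machine usage

    for s in segments:
        for t in T:
            # untamped quality minus a fixed recovery per earlier tamping
            prob += (sigma0 + rate[s] * (t + 1)
                     - recovery * pulp.lpSum(x[s][u] for u in T if u <= t)
                     ) <= limit

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([[int(x[s][t].value()) for t in T] for s in segments])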

  11. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    Energy Technology Data Exchange (ETDEWEB)

    Andrade-Ines, Eduardo [Institute de Mécanique Céleste et des Calcul des Éphémérides—Observatoire de Paris, 77 Avenue Denfert Rochereau, F-75014 Paris (France); Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, 91109 Pasadena, CA (United States)

    2017-04-01

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters and the limits of their applicability are given.
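
    The workflow, as described, amounts to rescaling Heppenheimer's first-order forced eccentricity by a fitted polynomial in the system parameters. The sketch below uses the standard first-order expression for the coplanar restricted problem; the correction factor is a placeholder, not the fitted function from the paper:

    def forced_ecc_heppenheimer(a_ratio: float, e_pert: float) -> float:
        """First-order forced eccentricity e_F = (5/4)(a1/a2) e2 / (1 - e2^2)."""
        return 1.25 * a_ratio * e_pert / (1.0 - e_pert**2)

    def correction(a_ratio: float, e_pert: float) -> float:
        return 1.0 - 0.3 * a_ratio - 0.1 * e_pert   # hypothetical polynomial

    a_ratio, e_pert = 0.10, 0.35   # planet at one tenth of the binary separation
    e_f = forced_ecc_heppenheimer(a_ratio, e_pert)
    print(e_f, correction(a_ratio, e_pert) * e_f)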

  12. Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions

    Directory of Open Access Journals (Sweden)

    Jabar H. Yousif

    2017-07-01

    The process of finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors affecting the increase in the proportion of renewable energy production and reducing the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R²). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R² value, which is equal to 0.9555. Finally, the comparison results of MAE for the three models show that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
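
    The four validation statistics used above are one-liners given paired measured/predicted series; here they are computed for a synthetic half-day PV power curve, just to fix the definitions:

    import numpy as np
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

    rng = np.random.default_rng(3)
    measured = 5.0 * np.clip(np.sin(np.linspace(0, np.pi, 48)), 0, None)  # kW
    predicted = measured + rng.normal(0, 0.2, measured.size)

    mse = mean_squared_error(measured, predicted)
    mae = mean_absolute_error(measured, predicted)
    r = np.corrcoef(measured, predicted)[0, 1]
    r2 = r2_score(measured, predicted)
    print(f"MSE={mse:.4f}  MAE={mae:.4f}  R={r:.4f}  R2={r2:.4f}")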

  13. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules including a jet exhaust model, jet centerline decay model and aircraft motion model. The final analysis was compared with d...

  14. Damping at positive frequencies in the limit J⊥ → 0 in the strongly correlated Hubbard model

    Science.gov (United States)

    Mohan, Minette M.

    1992-08-01

    I show damping in the two-dimensional strongly correlated Hubbard model within the retraceable-path approximation, using an expansion around dominant poles for the self-energy. The damping half-width ~J_z^{2/3} occurs only at positive frequencies ω > (5/2)J_z, the excitation energy of a pure ``string'' state of length one, where J_z is the Ising part of the superexchange interaction, and occurs even in the absence of spin-flip terms ~J_⊥, in contrast to other theoretical treatments. The dispersion relation for both damped and undamped peaks near the upper band edge is found and is shown to have lost the simple J_z^{2/3} dependence characteristic of the peaks near the lower band edge. The position of the first three peaks near the upper band edge agrees well with numerical simulations on the t-J model. The weight of the undamped peaks near the upper band edge is ~J_z^{4/3}, contrasting with J_z for the weight near the lower band edge.

  15. Dynamical models of hadrons based on string model and behaviour of strongly interacting matter at high density

    International Nuclear Information System (INIS)

    Senda Ikuo.

    1991-05-01

    We propose dynamical models of hadrons, the nucleation model and the free-decay model, in which results of the string model are used to represent interactions. The dynamical properties of hadrons obtained from the string model are examined and their parameters are fitted to experimental data. The equilibrium properties of hadrons at high density are investigated with the nucleation model, and we find a singular behaviour at an energy density of 3-5 GeV/fm³, where hadrons coalesce to create highly excited states. We argue that this singular behaviour corresponds to the phase transition to the quark-gluon plasma. The possibility of observing the production of high-density strongly interacting matter in collider experiments is discussed using the free-decay model, which produces pion distributions as decay products of resonances. We show that our free-decay model recovers features of hadron distributions obtained in hadron collision experiments. Finally, perspectives and extensions are discussed. (author). 34 refs, 19 figs, 2 tabs

  16. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space... model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model...

  17. Methods for prediction of strong earthquake ground motion. Final technical report, October 1, 1976--September 30, 1977

    International Nuclear Information System (INIS)

    Trifunac, M.D.

    1977-09-01

    The purpose of this report is to summarize the results of the work on the characterization of strong earthquake ground motion. The objective of this effort has been to initiate the presentation of a simple yet detailed methodology for characterizing strong earthquake ground motion for use in the licensing and evaluation of operating nuclear power plants. This report emphasizes the simplicity of the methodology by presenting only the end results in a format that may be useful for the development of site-specific criteria in seismic risk analysis, for work on the development of modern standards and regulatory guides, and for re-evaluation of existing power plant sites.

  18. A model for the training effects in swimming demonstrates a strong relationship between parasympathetic activity, performance and index of fatigue.

    Directory of Open Access Journals (Sweden)

    Sébastien Chalencon

    Competitive swimming as a physical activity results in changes to the activity level of the autonomic nervous system (ANS). However, the precise relationship between ANS activity, fatigue and sports performance remains contentious. To address this problem and build a model to support a consistent relationship, data were gathered from national and regional swimmers during two periods of 30 consecutive weeks of training. Nocturnal ANS activity was measured weekly and quantified through wavelet transform analysis of the recorded heart rate variability. Performance was then measured through a subsequent morning 400 meters freestyle time-trial. A model was proposed in which indices of fatigue were computed using Banister's two antagonistic component model of fatigue and adaptation, applied to both the ANS activity and the performance. This demonstrated that a logarithmic relationship existed between performance and ANS activity for each subject. There was a high degree of model fit between the measured and calculated performance (R²=0.84±0.14, p<0.01) and the measured and calculated high-frequency (HF) power of the ANS activity (R²=0.79±0.07, p<0.01). During the taper periods, improvements in measured performance and measured HF were strongly related. In the model, variations in performance were related to significant reductions in the level of 'Negative Influences' rather than increases in 'Positive Influences'. Furthermore, the delay needed to return to the initial performance level was highly correlated to the delay required to return to the initial HF power level (p<0.01). The delay required to reach peak performance was highly correlated to the delay required to reach the maximal level of HF power (p=0.02). Building the ANS/performance identity of a subject, including the time to peak HF, may help predict the maximal performance that could be obtained at a given time.
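
    Banister's two-antagonistic-component model, which the study fits to both performance and HF power, is an impulse-response sum over past training loads. A compact sketch with invented gains, time constants and a tapered load series:

    import numpy as np

    def banister(loads, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0, p0=100.0):
        """p(t) = p0 + 'Positive Influences' - 'Negative Influences', each an
        exponentially decaying sum of past training impulses (time unit and
        parameters are illustrative, not the fitted swimmers' values)."""
        p = np.full(loads.size, p0, dtype=float)
        for n in range(1, loads.size):
            age = np.arange(n, 0, -1)              # days since each past load
            p[n] += k1 * np.sum(loads[:n] * np.exp(-age / tau1)) \
                  - k2 * np.sum(loads[:n] * np.exp(-age / tau2))
        return p

    daily = np.concatenate([np.full(150, 1.0), np.full(30, 0.3)])  # 30-day taper
    print(banister(daily)[[149, 179]])   # performance rises as fatigue decays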

  19. Cyclone-track based seasonal prediction for South Pacific tropical cyclone activity using APCC multi-model ensemble prediction

    Science.gov (United States)

    Kim, Ok-Yeon; Chan, Johnny C. L.

    2018-01-01

    This study aims to predict the seasonal TC track density over the South Pacific by combining the Asia-Pacific Economic Cooperation (APEC) Climate Center (APCC) multi-model ensemble (MME) dynamical prediction system with a statistical model. The hybrid dynamical-statistical model is developed for each of the three clusters that represent major groups of TC best tracks in the South Pacific. The cross-validation result from the MME hybrid model demonstrates moderate but statistically significant skill in predicting TC numbers across all TC clusters, with correlation coefficients of 0.4 to 0.6 between the hindcasts and observations for 1982/1983 to 2008/2009. The prediction skill in the area east of about 170°E is significantly influenced by strong El Niño, whereas the skill in the southwest Pacific region mainly comes from the linear trend of TC number. The prediction skill for TC track density is particularly high in the region of climatologically high TC track density around 160°E-180° and 20°S. Since this area has a mixed response with respect to ENSO, the prediction skill for TC track density is higher in non-ENSO years than in ENSO years. Even though the cross-validation prediction skill is higher in the area east of about 170°E than in other areas, this region shows less skill for track density based on the categorical verification, due to the strong influence of El Niño years. While the prediction skill of the developed methodology varies across the region, it is important that the model demonstrates skill in the area where TC activity is high. Such a result has an important practical implication: improving the accuracy of seasonal forecasts and providing communities at risk with advance information which could assist with preparedness and disaster risk reduction.
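
    The statistical half of such a hybrid scheme is typically a cross-validated regression of observed seasonal TC counts on MME-predicted large-scale indices. A leave-one-out sketch with synthetic stand-ins for the 27 hindcast seasons:

    import numpy as np

    rng = np.random.default_rng(9)
    mme_index = rng.normal(size=27)                 # e.g. a predicted ENSO index
    tc_count = 6 + 2.0 * mme_index + rng.normal(0, 1.5, 27)

    pred = np.empty_like(tc_count)
    for i in range(tc_count.size):                  # leave-one-out cross-validation
        keep = np.arange(tc_count.size) != i
        slope, intercept = np.polyfit(mme_index[keep], tc_count[keep], 1)
        pred[i] = slope * mme_index[i] + intercept

    print(f"LOO correlation: {np.corrcoef(pred, tc_count)[0, 1]:.2f}")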

  20. Modeling consequences of prolonged strong unpredictable stress in zebrafish: Complex effects on behavior and physiology.

    Science.gov (United States)

    Song, Cai; Liu, Bai-Ping; Zhang, Yong-Ping; Peng, Zhilan; Wang, JiaJia; Collier, Adam D; Echevarria, David J; Savelieva, Katerina V; Lawrence, Robert F; Rex, Christopher S; Meshalkina, Darya A; Kalueff, Allan V

    2018-02-02

    Chronic stress is a major pathogenetic factor in human anxiety and depression. The zebrafish (Danio rerio) has become a popular novel model species for neuroscience research and CNS drug discovery, and its utility for mimicking human affective disorders is growing rapidly. Here, we present a new zebrafish model of clinically relevant, prolonged unpredictable strong chronic stress (PUCS). The 5-week PUCS induced overt anxiety-like and motor-retardation-like behaviors in adult zebrafish, also elevating whole-body cortisol and the proinflammatory cytokines interleukin-1β (IL-1β) and IL-6. PUCS also elevated whole-body levels of the anti-inflammatory cytokine IL-10 and increased the density of dendritic spines in zebrafish telencephalic neurons. Chronic treatment of fish with the antidepressant fluoxetine (0.1 mg/L for 8 days) normalized their behavioral and endocrine phenotypes and corrected stress-elevated IL-1β and IL-6 levels, in line with clinical and rodent data. The CNS expression of the bdnf gene, the genes of its two receptors (trkB, p75), and gfap, the gene of the glial biomarker glial fibrillary acidic protein, was unaltered in all three groups. However, PUCS elevated whole-body BDNF levels and telencephalic dendritic spine density (both corrected by fluoxetine), thereby differing somewhat from the effects of chronic stress in rodents. Together, these findings support zebrafish as a useful in vivo model of chronic stress and call for further cross-species studies of both shared/overlapping and distinct neurobiological responses to chronic stress.

  1. Strongly Coupled Systems: From Quantum Antiferromagnets To Unified Models For Superconductors

    CERN Document Server

    Chudnovsky, V

    2002-01-01

    I discuss the significance of the antiferromagnetic Heisenberg model (AFHM) in both high-energy and condensed-matter physics, and proceed to describe an efficient cluster algorithm used to simulate the AFHM. This is one of two algorithms with which my collaborators and I were able to obtain numerical results that definitively confirm that chiral perturbation theory, corrected for cutoff effects in the AFHM, leads to a correct field-theoretical description of the low-temperature behavior of the spin correlation length in various spin representations S. Using a finite-size-scaling technique, we explored correlation lengths of up to 10^5 lattice spacings for spins S = 1 and 5/2. We show how the recent prediction of cutoff effects by P. Hasenfratz is approached for moderate correlation lengths, and smoothly connects with other approaches to modeling the AFHM at smaller correlation lengths. I also simulate and discuss classical antiferromagnetic systems with simultaneous SO(M) and SO(N) symmetries, which have bee...

  3. Density functional theory and dynamical mean-field theory. A way to model strongly correlated systems

    International Nuclear Information System (INIS)

    Backes, Steffen

    2017-04-01

    -local fluctuations. It has been successfully used to study the whole range of weakly to strongly correlated lattice models, including the metal-insulator transition, since even in the relevant dimensions of d = 2 and d = 3 spatial fluctuations are often small. The extension of DMFT towards realistic systems by the use of DFT has been termed LDA+DMFT and has since allowed for a significant improvement in the understanding of strongly correlated materials. We dedicate this thesis to the LDA+DMFT method and the study of the recently discovered iron-pnictide superconductors, which are known to show effects of strong electronic correlations. In many cases these materials cannot be adequately described by a pure DFT approach alone, and they therefore provide an ideal case for an investigation of their electronic properties within LDA+DMFT. We first review the DFT method and point out what kinds of approximations have to be made in practical calculations and what deficits they entail. We then give an introduction to the Green's function formalism in the real- and imaginary-time representations and discuss the resulting consequences, such as analytic continuation, to pave the way for the derivation of the DMFT equations. After that, we discuss the combination of DFT and DMFT into the LDA+DMFT method and how to set up the effective lattice models for practical calculations. We then apply the LDA+DMFT method to the hole-doped iron-pnictide superconductor KFe₂As₂, which we find to be a rather strongly correlated material that can only be reasonably described when electronic correlations are treated on a proper level beyond the standard DFT approach. Our results show that the LDA+DMFT method is able to significantly improve the agreement of the theoretical calculation with experimental observations. We then expand our study towards the isovalent series KFe₂As₂, RbFe₂As₂ and CsFe₂As₂, which we propose to show even stronger effects of electronic correlations due

  4. Density functional theory and dynamical mean-field theory. A way to model strongly correlated systems

    Energy Technology Data Exchange (ETDEWEB)

    Backes, Steffen

    2017-04-15

    -local fluctuations. It has been successfully used to study the whole range of weakly to strongly correlated lattice models, including the metal-insulator transition, since even in the relevant dimensions of d = 2 and d = 3 spatial fluctuations are often small. The extension of DMFT towards realistic systems by the use of DFT has been termed LDA+DMFT and has since allowed for a significant improvement in the understanding of strongly correlated materials. We dedicate this thesis to the LDA+DMFT method and the study of the recently discovered iron-pnictide superconductors, which are known to show effects of strong electronic correlations. In many cases these materials cannot be adequately described by a pure DFT approach alone, and they therefore provide an ideal case for an investigation of their electronic properties within LDA+DMFT. We first review the DFT method and point out what kinds of approximations have to be made in practical calculations and what deficits they entail. We then give an introduction to the Green's function formalism in the real- and imaginary-time representations and discuss the resulting consequences, such as analytic continuation, to pave the way for the derivation of the DMFT equations. After that, we discuss the combination of DFT and DMFT into the LDA+DMFT method and how to set up the effective lattice models for practical calculations. We then apply the LDA+DMFT method to the hole-doped iron-pnictide superconductor KFe₂As₂, which we find to be a rather strongly correlated material that can only be reasonably described when electronic correlations are treated on a proper level beyond the standard DFT approach. Our results show that the LDA+DMFT method is able to significantly improve the agreement of the theoretical calculation with experimental observations. We then expand our study towards the isovalent series KFe₂As₂, RbFe₂As₂ and CsFe₂As₂, which we propose to show even stronger

  5. Assessment of factors influencing finite element vertebral model predictions.

    Science.gov (United States)

    Jones, Alison C; Wilcox, Ruth K

    2007-12-01

    This study aimed to establish model construction and configuration procedures for future vertebral finite element analysis by studying convergence, sensitivity, and accuracy behaviors of semiautomatically generated models and comparing the results with manually generated models. During a previous study, six porcine vertebral bodies were imaged using a microcomputed tomography scanner and tested in axial compression to establish their stiffness and failure strength. Finite element models were built using a manual meshing method. In this study, the experimental agreement of those models was compared with that of semiautomatically generated models of the same six vertebrae. Both manually and semiautomatically generated models were assigned gray-scale-based, element-specific material properties. The convergence of the semiautomatically generated models was analyzed for the complete models along with material property and architecture control cases. A sensitivity study was also undertaken to test the reaction of the models to changes in material property values, architecture, and boundary conditions. In control cases, the element-specific material properties reduce the convergence of the models in comparison to homogeneous models. However, the full vertebral models showed strong convergence characteristics. The sensitivity study revealed a significant reaction to changes in architecture, boundary conditions, and load position, while the sensitivity to changes in material property values was proportional. The semiautomatically generated models produced stiffness and strength predictions of similar accuracy to the manually generated models with much shorter image segmentation and meshing times. Semiautomatic methods can provide a more rapid alternative to manual mesh generation techniques and produce vertebral models of similar accuracy. The representation of the boundary conditions, load position, and surrounding environment is crucial to the accurate prediction of the

  6. Predicting Biological Information Flow in a Model Oxygen Minimum Zone

    Science.gov (United States)

    Louca, S.; Hawley, A. K.; Katsev, S.; Beltran, M. T.; Bhatia, M. P.; Michiels, C.; Capelle, D.; Lavik, G.; Doebeli, M.; Crowe, S.; Hallam, S. J.

    2016-02-01

    Microbial activity drives marine biochemical fluxes and nutrient cycling at global scales. Geochemical measurements as well as molecular techniques such as metagenomics, metatranscriptomics and metaproteomics provide great insight into microbial activity. However, an integration of molecular and geochemical data into mechanistic biogeochemical models is still lacking. Recent work suggests that microbial metabolic pathways are, at the ecosystem level, strongly shaped by stoichiometric and energetic constraints. Hence, models rooted in fluxes of matter and energy may yield a holistic understanding of biogeochemistry. Furthermore, such pathway-centric models would allow a direct consolidation with meta'omic data. Here we present a pathway-centric biogeochemical model for the seasonal oxygen minimum zone in Saanich Inlet, a fjord off the coast of Vancouver Island. The model considers key dissimilatory nitrogen and sulfur fluxes, as well as the population dynamics of the genes that mediate them. By assuming a direct translation of biocatalyzed energy fluxes to biosynthesis rates, we make predictions about the distribution and activity of the corresponding genes. A comparison of the model to molecular measurements indicates that the model explains observed DNA, RNA, protein and cell depth profiles. This suggests that microbial activity in marine ecosystems such as oxygen minimum zones is well described by DNA abundance, which, in conjunction with geochemical constraints, determines pathway expression and process rates. Our work further demonstrates how meta'omic data can be mechanistically linked to environmental redox conditions and biogeochemical processes.

  7. Predictive modeling: potential application in prevention services.

    Science.gov (United States)

    Wilson, Moira L; Tumen, Sarah; Ota, Rissa; Simmers, Anthony G

    2015-05-01

    In 2012, the New Zealand Government announced a proposal to introduce predictive risk models (PRMs) to help professionals identify and assess children at risk of abuse or neglect as part of a preventive early intervention strategy, subject to further feasibility study and trialing. The purpose of this study is to examine the technical feasibility and predictive validity of the proposal, focusing on a PRM that would draw on population-wide linked administrative data to identify newborn children who are at high priority for intensive preventive services. Data analysis was conducted in 2013 based on data collected in 2000-2012. A PRM was developed using data for children born in 2010 and externally validated for children born in 2007, examining outcomes to age 5 years. Performance of the PRM in predicting administratively recorded substantiations of maltreatment was good compared to the performance of other tools reviewed in the literature, both overall and for indigenous Māori children. Some, but not all, of the children who go on to have recorded substantiations of maltreatment could be identified early using PRMs. PRMs should be considered as a potential complement to, rather than a replacement for, professional judgment. Trials are needed to establish whether risks can be mitigated and PRMs can make a positive contribution to frontline practice, engagement in preventive services, and outcomes for children. Deciding whether to proceed to trial requires balancing a range of considerations, including ethical and privacy risks and the risk of compounding surveillance bias.

  8. On the Strong Solution for the 3D Stochastic Leray-Alpha Model

    Directory of Open Access Journals (Sweden)

    Gabriel Deugoue

    2010-01-01

    We prove the existence and uniqueness of the strong solution to the stochastic Leray-α equations under appropriate conditions on the data. This is achieved by means of the Galerkin approximation scheme. We also study the asymptotic behaviour of the strong solution as α goes to zero. We show that a sequence of strong solutions converges in appropriate topologies to weak solutions of the 3D stochastic Navier-Stokes equations.
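    For orientation, the deterministic core of the Leray-α model regularizes Navier-Stokes by letting a smoothed velocity advect the flow; a common statement (shown here without the stochastic forcing treated in the paper) is

```latex
\begin{aligned}
&\partial_t v + (u\cdot\nabla)\,v - \nu\,\Delta v + \nabla p = f,\\
&v = u - \alpha^{2}\Delta u, \qquad \nabla\cdot u = 0 .
\end{aligned}
```

    As α → 0 the filter v = u − α²Δu collapses to v = u, consistent with the convergence to weak Navier-Stokes solutions stated in the abstract.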

  10. Analytical modeling of equilibrium of strongly anisotropic plasma in tokamaks and stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Lepikhin, N. D.; Pustovitov, V. D., E-mail: pustovit@nfi.kiae.ru [National Research Centre Kurchatov Institute (Russian Federation)

    2013-08-15

    Theoretical analysis of equilibrium of anisotropic plasma in tokamaks and stellarators is presented. The anisotropy is assumed strong, which includes cases with essentially nonuniform distributions of plasma pressure on magnetic surfaces. Such distributions can arise at neutral beam injection or at ion cyclotron resonance heating. Then the known generalizations of the standard theory of plasma equilibrium that treat p∥ and p⊥ (parallel and perpendicular plasma pressures) as almost constant on magnetic surfaces are no longer applicable. Explicit analytical prescriptions of the profiles of p∥ and p⊥ are proposed that allow modeling of the anisotropic plasma equilibrium even with large ratios of p∥/p⊥ or p⊥/p∥. A method for deriving the equation for the Shafranov shift is proposed that does not require introduction of the flux coordinates and calculation of the metric tensor. It is shown that for p⊥ with nonuniformity described by a single poloidal harmonic, the equation for the Shafranov shift coincides with a known one derived earlier for p⊥ almost constant on a magnetic surface. This does not happen in the other, more complex case.

  11. Heuristic Modeling for TRMM Lifetime Predictions

    Science.gov (United States)

    Jordan, P. S.; Sharer, P. J.; DeFazio, R. L.

    1996-01-01

    Analysis time for computing the expected mission lifetimes of proposed frequently maneuvering, tightly altitude-constrained, Earth-orbiting spacecraft has been significantly reduced by means of a heuristic modeling method implemented in a commercial off-the-shelf spreadsheet product (QuattroPro) running on a personal computer (PC). The method uses a look-up table to estimate the maneuver frequency per month as a function of the spacecraft ballistic coefficient and the solar flux index, then computes the associated fuel use with a simple engine model. Maneuver frequency data points are produced by means of a single 1-month run of traditional mission analysis software for each of the 12 to 25 data points required for the table. As the data-point computations are required only at mission design start-up and on the occasion of significant mission redesigns, the dependence on time-consuming traditional modeling methods is dramatically reduced. Results to date have agreed with traditional methods to within 1 to 1.5 percent. The spreadsheet approach is applicable to a wide variety of Earth-orbiting spacecraft with tight altitude constraints. It will be particularly useful to such missions as the Tropical Rainfall Measuring Mission scheduled for launch in 1997, whose mission lifetime calculations are heavily dependent on frequently revised solar flux predictions.
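    As a rough illustration of the look-up-table idea, the sketch below interpolates a maneuvers-per-month table over ballistic coefficient and solar flux and feeds the result into a rocket-equation fuel estimate. Every number, name, and the engine model itself are invented placeholders, not TRMM values.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical look-up table: maneuvers per month vs. ballistic
# coefficient and 10.7 cm solar flux index (all values invented).
bc_axis = np.array([50.0, 100.0, 200.0])           # kg/m^2
flux_axis = np.array([70.0, 130.0, 190.0, 250.0])  # solar flux units
table = np.array([[1.0, 2.5, 4.5, 7.0],
                  [0.6, 1.5, 2.8, 4.5],
                  [0.3, 0.8, 1.5, 2.5]])
maneuver_freq = RegularGridInterpolator((bc_axis, flux_axis), table)

def monthly_fuel(bc, flux, dv_per_burn=0.3, isp=220.0, wet_mass=3500.0):
    """Propellant per month from burns/month and the rocket equation."""
    g0 = 9.80665
    burns = float(maneuver_freq((bc, flux)))
    dv = burns * dv_per_burn                       # m/s per month
    return wet_mass * (1.0 - np.exp(-dv / (isp * g0)))

print(monthly_fuel(bc=120.0, flux=180.0))          # kg/month, illustrative
```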

  12. A Computational Model for Predicting Gas Breakdown

    Science.gov (United States)

    Gill, Zachary

    2017-10-01

    Pulsed-inductive discharges are a common method of producing a plasma. They provide a mechanism for quickly and efficiently generating a large volume of plasma for rapid use and are seen in applications including propulsion, fusion power, and high-power lasers. However, some common designs see a delayed response time due to the plasma forming when the magnitude of the magnetic field in the thruster is at a minimum. New designs are difficult to evaluate due to the amount of time needed to construct a new geometry and the high monetary cost of changing the power generation circuit. To more quickly evaluate new designs and better understand the shortcomings of existing designs, a computational model is developed. This model uses a modified single-electron model as the basis for a Mathematica code to determine how the energy distribution in a system changes with regards to time and location. By analyzing this energy distribution, the approximate time and location of initial plasma breakdown can be predicted. The results from this code are then compared to existing data to show its validity and shortcomings. Missouri S&T APLab.

  13. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  14. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...
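    The comparison the abstract describes can be prototyped in a few lines. The sketch below contrasts the two classical methods with one machine-learning method on synthetic data, since the study's conviction-history records are not public; the model choices and AUC as the metric are assumptions, not the authors' exact protocol.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic, class-imbalanced stand-in for offender features.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8],
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "linear discriminant": LinearDiscriminantAnalysis(),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```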

  15. Angular structure of jet quenching within a hybrid strong/weak coupling model

    Energy Technology Data Exchange (ETDEWEB)

    Casalderrey-Solana, Jorge [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Departament de Física Quàntica i Astrofísica & Institut de Ciències del Cosmos (ICC),Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Gulhan, Doga Can [CERN, EP Department,CH-1211 Geneva 23 (Switzerland); Milhano, José Guilherme [CENTRA, Instituto Superior Técnico, Universidade de Lisboa,Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Laboratório de Instrumentação e Física Experimental de Partículas (LIP),Av. Elias Garcia 14-1, P-1000-149 Lisboa (Portugal); Theoretical Physics Department, CERN,Geneva (Switzerland); Pablos, Daniel [Departament de Física Quàntica i Astrofísica & Institut de Ciències del Cosmos (ICC),Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rajagopal, Krishna [Center for Theoretical Physics, Massachusetts Institute of Technology,Cambridge, MA 02139 (United States)

    2017-03-27

    Within the context of a hybrid strong/weak coupling model of jet quenching, we study the modification of the angular distribution of the energy within jets in heavy ion collisions, as partons within jet showers lose energy and get kicked as they traverse the strongly coupled plasma produced in the collision. To describe the dynamics transverse to the jet axis, we add the effects of transverse momentum broadening into our hybrid construction, introducing a parameter K ≡ q̂/T³ that governs its magnitude. We show that, because of the quenching of the energy of partons within a jet, even when K ≠ 0 the jets that survive with some specified energy in the final state are narrower than jets with that energy in proton-proton collisions. For this reason, many standard observables are rather insensitive to K. We propose a new differential jet shape ratio observable in which the effects of transverse momentum broadening are apparent. We also analyze the response of the medium to the passage of the jet through it, noting that the momentum lost by the jet appears as the momentum of a wake in the medium. After freezeout this wake becomes soft particles with a broad angular distribution but with net momentum in the jet direction, meaning that the wake contributes to what is reconstructed as a jet. This effect must therefore be included in any description of the angular structure of the soft component of a jet. We show that the particles coming from the response of the medium to the momentum and energy deposited in it lead to a correlation between the momentum of soft particles well separated from the jet in angle and the direction of the jet momentum, and find qualitative but not quantitative agreement with experimental data on observables designed to extract such a correlation. More generally, by confronting the results that we obtain upon introducing transverse momentum broadening and the response of the medium to the jet with available jet data, we highlight the

  16. Predicting fatigue crack initiation through image-based micromechanical modeling

    International Nuclear Information System (INIS)

    Cheong, K.-S.; Smillie, Matthew J.; Knowles, David M.

    2007-01-01

    The influence of individual grain orientation on early fatigue crack initiation in a four-point-bend fatigue test was investigated numerically and experimentally. The 99.99% aluminium test sample was subjected to high-cycle fatigue (HCF), and the top-surface microstructure within the inner span of the sample was characterized using electron backscatter diffraction (EBSD). Applying a finite-element submodelling approach, the microstructure was digitally reconstructed and refined studies carried out in regions where fatigue damage was observed. The constitutive behaviour of aluminium was described by a crystal plasticity model which considers the evolution of dislocations and the accumulation of edge dislocation dipoles. Using an energy-based approach to quantify fatigue damage, the model correctly predicts regions in grains where early fatigue crack initiation was observed. The tendency for fatigue cracks to initiate in these grains appears to be strongly linked to the orientations of the grains relative to the direction of loading: grains less favourably aligned with respect to the loading direction appear more susceptible to fatigue crack initiation. The limitations of this modelling approach are also highlighted and discussed, as some grains predicted to initiate cracks did not show any visible signs of fatigue cracking in the same locations during testing

  17. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...

  18. Self-Conscious Shyness: Growth during Toddlerhood, Strong Role of Genetics, and No Prediction from Fearful Shyness

    OpenAIRE

    Eggum-Wilkens, Natalie D.; Lemery-Chalfant, Kathryn; Aksan, Nazan; Goldsmith, H. Hill

    2014-01-01

    Fearful and self-conscious subtypes of shyness have received little attention in the empirical literature. Study aims included: 1) determining if fearful shyness predicted self-conscious shyness, 2) describing development of self-conscious shyness, and 3) examining genetic and environmental contributions to fearful and self-conscious shyness. Observed self-conscious shyness was examined at 19, 22, 25, and 28 months in same-sex twins (MZ = 102, DZ = 111, missing zygosity = 3 pairs). Self-consc...

  19. Predictive modeling of reactive wetting and metal joining.

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B.

    2013-09-01

    The performance, reproducibility and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic mass transport (flow) inside the liquid, followed by subsequent cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.

  20. Generalised model-independent characterisation of strong gravitational lenses. I. Theoretical foundations

    Science.gov (United States)

    Wagner, J.

    2017-05-01

    We extend our model-independent approach for characterising strong gravitational lenses to its most general form to leading order and use the orientation angles of a set of multiple images with respect to their connection line(s) in addition to the relative distances between the images, their ellipticities, and time-delays. For two symmetric images that straddle the critical curve, the orientation angle additionally allows us to determine the slope of the critical curve and a second (reduced) flexion coefficient at the critical point on the connection line between the images. It also allows us to drop the symmetry assumption that the axis of largest image extension is orthogonal to the critical curve. For three images almost forming a giant arc, the degree of assumed image symmetry is also reduced to the most general case, describing image configurations for which the source need not be placed on the symmetry axis of the two folds that unite at the cusp. For a given set of multiple images, we set limits on the applicability of our approach, show which information can be obtained in cases of merging images, and analyse the accuracy achievable due to the Taylor expansion of the lensing potential for the fold case on a galaxy cluster scale Navarro-Frenk-White-profile, a fold and cusp case on a galaxy cluster scale singular isothermal ellipse, and compare the generalised approach with our previously published one. The position of the critical points is reconstructed with less than 5'' deviation for multiple images closer to the critical points than 30% of the (effective) Einstein radius. The slope of the critical curve at a fold and its shape in the vicinity of a cusp deviate less than 20% from the true values for distances of the images to the critical points less than 15% of the (effective) Einstein radius.

  1. NMR studies of strong hydrogen bonds in enzymes and in a model compound

    Science.gov (United States)

    Harris, T. K.; Zhao, Q.; Mildvan, A. S.

    2000-09-01

    Hydrogen bond lengths on enzymes have been derived with high precision (≤±0.05 Å) from both the proton chemical shifts (δ) and the fractionation factors (φ) of the proton involved and were compared with those obtained from protein X-ray crystallography. Hydrogen bond lengths derived from proton chemical shifts were obtained from a correlation of 59 O-H⋯O hydrogen bond lengths, measured by small molecule high resolution X-ray crystallography, with chemical shifts determined by solid-state NMR in the same crystals [A. McDermott, C.F. Ridenour, Encyclopedia of NMR, Wiley, Sussex, England, 1996, 3820pp]. Hydrogen bond lengths were independently obtained from fractionation factors which yield distances between the two proton wells in quartic double minimum potential functions [M.M. Kreevoy, T.M. Liang, J. Am. Chem. Soc. 102 (1980) 3315]. The high precision hydrogen bond lengths derived from their corresponding NMR-measured proton chemical shifts and fractionation factors agree well with each other and with those reported in protein X-ray structures within the larger errors (±0.2-0.8 Å) in lengths obtained by protein X-ray crystallography. The increased precision in measurements of hydrogen bond lengths by NMR has provided insight into the contributions of short, strong hydrogen bonds to catalysis for several enzymes including ketosteroid isomerase, triosephosphate isomerase, and serine proteases. The O-H⋯O hydrogen bond length derived from the proton chemical shift in a model dihydroxy-naphthalene compound in aqueous solution agreed well with lengths of such hydrogen bonds determined by high resolution, small molecule X-ray diffraction.

  2. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary-boundary-layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by co-authors have shown that the critical-layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  3. Predictive models for pressure ulcers from intensive care unit electronic health records using Bayesian networks.

    Science.gov (United States)

    Kaewprag, Pacharmon; Newton, Cheryl; Vermillion, Brenda; Hyun, Sookyung; Huang, Kun; Machiraju, Raghu

    2017-07-05

    We develop predictive models enabling clinicians to better understand and explore patient clinical data along with risk factors for pressure ulcers in intensive care unit (ICU) patients from electronic health record (EHR) data. Identifying accurate risk factors for pressure ulcers is essential to determining appropriate prevention strategies; in this work we examine medication, diagnosis, and traditional Braden pressure-ulcer assessment scale measurements as patient features. In order to predict pressure-ulcer incidence and better understand the structure of related risk factors, we construct Bayesian networks from patient features. Bayesian network nodes (features) and edges (conditional dependencies) are simplified with statistical network techniques. Upon reviewing a network visualization of our model, our clinician collaborators were able to identify strong relationships between risk factors widely recognized as associated with pressure ulcers. We present a three-stage framework for predictive analysis of patient clinical data: 1) developing EHR feature-extraction functions with the assistance of clinicians, 2) simplifying features, and 3) building Bayesian network predictive models. We evaluate all combinations of Bayesian network models from different search algorithms, scoring functions, prior structure initializations, and sets of features. From the EHRs of 7,717 ICU patients, we construct Bayesian network predictive models from 86 medication, diagnosis, and Braden scale features. Our model not only identifies known and suspected high pressure-ulcer (PU) risk factors, but also substantially increases the sensitivity of the prediction (nearly three times that of logistic regression models) without sacrificing overall accuracy. We visualize a representative model with which our clinician collaborators identify strong relationships between risk factors widely recognized as associated with pressure ulcers. Given the strong adverse effect of pressure ulcers
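    A minimal sketch of the modeling stages, assuming the pgmpy library (whose API varies across versions) and a hypothetical discretised feature file; the paper's own search algorithms, scoring functions and feature simplification are richer than this.

```python
import pandas as pd
from pgmpy.estimators import BayesianEstimator, BicScore, HillClimbSearch
from pgmpy.inference import VariableElimination
from pgmpy.models import BayesianNetwork

df = pd.read_csv("icu_features.csv")  # hypothetical discretised EHR features

# Learn structure (edges) by hill climbing against a BIC score,
# then fit conditional probability tables with a Bayesian estimator.
dag = HillClimbSearch(df).estimate(scoring_method=BicScore(df))
model = BayesianNetwork(dag.edges())
model.fit(df, estimator=BayesianEstimator, prior_type="BDeu")

# Query the (hypothetical) pressure-ulcer node given one piece of evidence.
posterior = VariableElimination(model).query(
    variables=["ulcer"], evidence={"braden_mobility": 1})
print(posterior)
```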

  4. Modeling and notation of DEA with strong and weak disposable outputs.

    Science.gov (United States)

    Kuntz, Ludwig; Sülz, Sandra

    2011-12-01

    Recent articles published in Health Care Management Science have described DEA applications under the assumption of strong and weak disposable outputs. As we believe that these papers include some methodological deficiencies, we illustrate a revised approach.

  5. Model Predictive Control for an Industrial SAG Mill

    DEFF Research Database (Denmark)

    Ohan, Valeriu; Steinke, Florian; Metzger, Michael

    2012-01-01

    We discuss Model Predictive Control (MPC) based on ARX models and a simple lower-order disturbance model. The advantage of this MPC formulation is that it has few tuning parameters and is based on an ARX prediction model that can readily be identified using standard technologies from system identification ...
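    To make the ARX idea concrete, the sketch below identifies the model by least squares and iterates it over a prediction horizon, which is the forecasting step inside a receding-horizon MPC loop. The disturbance model and the optimization over future inputs are omitted, and all names are illustrative.

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
    y, u = np.asarray(y, float), np.asarray(u, float)
    n = max(na, nb)
    rows = [np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]] for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]

def predict_horizon(a, b, y_hist, u_hist, u_future):
    """Iterate the identified ARX model over a horizon of future inputs."""
    y_hist, u_hist = list(y_hist), list(u_hist)
    y_pred = []
    for u in u_future:
        yk = (np.dot(a, y_hist[::-1][:len(a)])
              + np.dot(b, u_hist[::-1][:len(b)]))
        y_pred.append(yk)
        y_hist.append(yk)   # predicted output becomes 'past' for next step
        u_hist.append(u)    # candidate input applied at this step
    return np.array(y_pred)
```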

  6. Uncertainties in spatially aggregated predictions from a logistic regression model

    NARCIS (Netherlands)

    Horssen, P.W. van; Pebesma, E.J.; Schot, P.P.

    2002-01-01

    This paper presents a method to assess the uncertainty of an ecological spatial prediction model which is based on logistic regression models, using data from the interpolation of explanatory predictor variables. The spatial predictions are presented as approximate 95% prediction intervals. The

  7. Dealing with missing predictor values when applying clinical prediction models.

    NARCIS (Netherlands)

    Janssen, K.J.; Vergouwe, Y.; Donders, A.R.T.; Harrell Jr, F.E.; Chen, Q.; Grobbee, D.E.; Moons, K.G.

    2009-01-01

    BACKGROUND: Prediction models combine patient characteristics and test results to predict the presence of a disease or the occurrence of an event in the future. In the event that test results (predictor) are unavailable, a strategy is needed to help users applying a prediction model to deal with

  8. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the “fuzzy rule-based neural network”, which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the “neural fuzzy inference system”, which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the “accurate” prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we obtain more accurate precipitation predictions with simpler methods than the complex numerical forecasting models that occupy large computation resources, are time-consuming, and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  9. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
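    The classical GM(1,1) backbone that NGM(1,1,k,c) generalizes is compact enough to sketch: accumulate the series, fit the whitenization equation by least squares, solve, and difference back. The k and c extensions for nonhomogeneous trends are not reproduced here, and the sample settlement values are invented.

```python
import numpy as np

def gm11_forecast(x, horizon=3):
    """Classical GM(1,1) grey forecast: AGO accumulation, least-squares fit
    of dx1/dt + a*x1 = b, closed-form solution, then inverse AGO."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                              # accumulated series (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                   # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.empty_like(x1_hat)
    x_hat[0], x_hat[1:] = x[0], np.diff(x1_hat)    # inverse AGO
    return x_hat                                   # fitted values + forecast

print(gm11_forecast([2.87, 3.28, 3.34, 3.39, 3.43], horizon=2))
```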

  10. Predictive capabilities of various constitutive models for arterial tissue.

    Science.gov (United States)

    Schroeder, Florian; Polzer, Stanislav; Slažanský, Martin; Man, Vojtěch; Skácel, Pavel

    2018-02-01

    The aim of this study is to validate several constitutive models by assessing their capabilities in describing and predicting the uniaxial and biaxial behavior of porcine aortic tissue. 14 samples from porcine aortas were used to perform 2 uniaxial and 5 biaxial tensile tests. Transversal strains were furthermore recorded for the uniaxial data. The experimental data were fitted by four constitutive models: the Holzapfel-Gasser-Ogden model (HGO), a model based on the generalized structure tensor (GST), the Four-Fiber-Family model (FFF) and the Microfiber model. Fitting was performed on the uniaxial and biaxial data sets separately and the descriptive capabilities of the models were compared. Their predictive capabilities were assessed in two ways. First, each model was fitted to the biaxial data and its accuracy (in terms of R² and NRMSE) in predicting both uniaxial responses was evaluated. Then this procedure was performed conversely: each model was fitted to both uniaxial tests and its accuracy in predicting the 5 biaxial responses was observed. The descriptive capabilities of all models were excellent. In predicting the uniaxial response from biaxial data, the Microfiber model was the most accurate, while the other models also showed reasonable accuracy. The Microfiber and FFF models were capable of reasonably predicting biaxial responses from uniaxial data, while the HGO and GST models failed completely in this task. The HGO and GST models are not capable of predicting biaxial arterial wall behavior, while the FFF model is the most robust of the investigated constitutive models. Knowledge of transversal strains in uniaxial tests improves the robustness of constitutive models.
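    For reference, the HGO strain-energy density has a compact closed form; the sketch below evaluates it for a given right Cauchy-Green tensor and two fiber directions, with the usual tension-only switch on the fiber terms. Parameter values and the handling of incompressibility are left out, and the GST variant would replace I4 by κI1 + (1 − 3κ)I4.

```python
import numpy as np

def hgo_energy(C, mu, k1, k2, a4, a6):
    """Holzapfel-Gasser-Ogden strain energy: neo-Hookean matrix plus
    exponential terms for two fiber families that only bear tension.
    C is the right Cauchy-Green tensor; a4, a6 are unit fiber vectors."""
    I1 = np.trace(C)
    W = 0.5 * mu * (I1 - 3.0)
    for a in (a4, a6):
        I4 = a @ C @ a                    # squared fiber stretch
        if I4 > 1.0:                      # fiber engaged in tension only
            W += k1 / (2.0 * k2) * (np.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
    return W
```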

  11. Comparing National Water Model Inundation Predictions with Hydrodynamic Modeling

    Science.gov (United States)

    Egbert, R. J.; Shastry, A.; Aristizabal, F.; Luo, C.

    2017-12-01

    The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts, runoff, and other variables for 2.7 million reaches along the National Hydrography Dataset for the continental United States. NWM applies Muskingum-Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain better estimates of streamflow and stage in rivers, especially for applications such as flood inundation mapping. The Simulation Program for River NeTworks (SPRNT) is a fully dynamic model for large-scale river networks that solves the full nonlinear Saint-Venant equations for 1D flow and stage height in river channel networks with non-uniform bathymetry. For the current work, the steady-state version of the SPRNT model was leveraged. An evaluation of SPRNT's and NWM's abilities to predict inundation was conducted for the record flood of Hurricane Matthew in October 2016 along the Neuse River in North Carolina. This event was known to have been influenced by backwater effects from the hurricane's storm surge. Retrospective NWM discharge predictions were converted to stage using synthetic rating curves. The stages from both models were used to produce flood inundation maps using the Height Above Nearest Drainage (HAND) method, which uses local relative heights to provide a spatial representation of inundation depths. To validate the inundation produced by the models, Sentinel-1A synthetic aperture radar data in the VV and VH polarizations, along with auxiliary data, were used to produce a reference inundation map. A preliminary, binary comparison of the inundation maps to the reference, limited to the five HUC-12 areas of Goldsboro, NC, yielded flood inundation accuracies of 74.68% for NWM and 78.37% for SPRNT. The differences for all the relevant test statistics including accuracy, true positive rate, true negative rate, and positive predictive value were found
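    The binary comparison described above reduces to a 2x2 contingency table; a minimal sketch, assuming co-registered boolean rasters:

```python
import numpy as np

def binary_map_skill(pred, ref):
    """Contingency-table skill of a predicted inundation raster against a
    reference map (both boolean arrays of equal shape)."""
    tp = np.sum(pred & ref)          # wet in both
    tn = np.sum(~pred & ~ref)        # dry in both
    fp = np.sum(pred & ~ref)         # predicted wet, actually dry
    fn = np.sum(~pred & ref)         # predicted dry, actually wet
    return {
        "accuracy": (tp + tn) / pred.size,
        "true_positive_rate": tp / (tp + fn),
        "true_negative_rate": tn / (tn + fp),
        "positive_predictive_value": tp / (tp + fp),
    }
```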

  12. Predictive models for moving contact line flows

    Science.gov (United States)

    Rame, Enrique; Garoff, Stephen

    2003-01-01

    Modeling flows with moving contact lines poses the formidable challenge that the usual assumptions of a Newtonian fluid and the no-slip condition give rise to a well-known singularity. This singularity prevents one from satisfying the contact-angle condition needed to compute the shape of the fluid-fluid interface, a crucial calculation without which design parameters such as the pressure drop needed to move an immiscible two-fluid system through a solid matrix cannot be evaluated. Some progress has been made for low-capillary-number spreading flows. Combining experimental measurements of fluid-fluid interfaces very near the moving contact line with an analytical expression for the interface shape, we can determine a parameter that forms a boundary condition for the macroscopic interface shape when Ca ≪ 1. This parameter, which plays the role of an "apparent" or macroscopic dynamic contact angle, is shown by the theory to depend on the system geometry through the macroscopic length scale. This theoretically established dependence on geometry allows the parameter to be "transferable" from the geometry of the measurement to any other geometry involving the same material system. Unfortunately this prediction of the theory cannot be tested on Earth.
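    One standard low-Ca realization of such an apparent-angle relation is the classical Cox-Voinov law, used here purely as an illustration rather than as the authors' formulation; the length scales and values are placeholders.

```python
import numpy as np

def apparent_angle(theta_eq, ca, l_macro, l_micro):
    """Cox-Voinov estimate (radians) of the apparent dynamic contact angle:
    theta_app^3 = theta_eq^3 + 9*Ca*ln(L/l), valid for Ca << 1."""
    return np.cbrt(theta_eq ** 3 + 9.0 * ca * np.log(l_macro / l_micro))

# Example: 30 deg equilibrium angle, Ca = 1e-3, mm outer / nm inner scale.
print(np.degrees(apparent_angle(np.radians(30.0), ca=1e-3,
                                l_macro=1e-3, l_micro=1e-9)))
```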

  13. Developmental prediction model for early alcohol initiation in Dutch adolescents

    NARCIS (Netherlands)

    Geels, L.M.; Vink, J.M.; Beijsterveldt, C.E.M. van; Bartels, M.; Boomsma, D.I.

    2013-01-01

    Objective: Multiple factors predict early alcohol initiation in teenagers. Among these are genetic risk factors, childhood behavioral problems, life events, lifestyle, and family environment. We constructed a developmental prediction model for alcohol initiation below the Dutch legal drinking age

  14. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    The article considers the issues of traffic management using the intelligent "Car-Road" system (IVHS), which consists of interacting intelligent vehicles (IVs) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them, and all vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for traffic on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.

  15. Lower Serum Zinc and Higher CRP Strongly Predict Prenatal Depression and Physio-somatic Symptoms, Which All Together Predict Postnatal Depressive Symptoms.

    Science.gov (United States)

    Roomruangwong, Chutima; Kanchanatawan, Buranee; Sirivichayakul, Sunee; Mahieu, Boris; Nowak, Gabriel; Maes, Michael

    2017-03-01

    Pregnancy and delivery are associated with activation of immune-inflammatory pathways which may prime parturients to develop postnatal depression. There are, however, few data on the associations between immune-inflammatory pathways and prenatal depression and physio-somatic symptoms. This study examined the associations between serum zinc, C-reactive protein (CRP), and haptoglobin at the end of term and prenatal physio-somatic symptoms (fatigue, back pain, muscle pain, dyspepsia, obstipation) and prenatal and postnatal depressive and anxiety symptoms as measured using the Edinburgh Postnatal Depression Scale (EPDS), Beck Depression Inventory (BDI), Hamilton Depression Rating Scale (HAMD), and Spielberger's State Anxiety Inventory (STAI). Zinc and haptoglobin were significantly lower and CRP increased at the end of term as compared with non-pregnant women. Prenatal depression was predicted by lower zinc and a lifetime history of depression, anxiety, and premenstrual tension syndrome (PMS). The latter histories were also significantly and inversely related to lower zinc. The severity of prenatal EPDS, HAMD, BDI, STAI, and physio-somatic symptoms was predicted by fatigue in the first and second trimesters, a positive life history of depression, anxiety, and PMS, and lower zinc and higher CRP. Postnatal depressive symptoms were predicted by prenatal depression, physio-somatic symptoms, zinc and CRP. Prenatal depressive and physio-somatic symptoms have an immune-inflammatory pathophysiology, while postnatal depressive symptoms are highly predicted by prenatal immune activation, prenatal depression, and a lifetime history of depression and PMS. Previous episodes of depression, anxiety disorders, and PMS may prime pregnant females to develop prenatal and postnatal depressive symptoms via activated immune pathways.

  16. Predictability in models of the atmospheric circulation

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error

  17. Analytical modeling of light transport in scattering materials with strong absorption

    NARCIS (Netherlands)

    Meretska, M. L.; Uppu, R.; Vissenberg, Gilles; Lagendijk, A.; Ijzerman, W. L.; Vos, W. L.

    2017-01-01

    We have investigated the transport of light through slabs that both scatter and strongly absorb, a situation that occurs in diverse application fields ranging from biomedical optics, powder technology, to solid-state lighting. In particular, we study the transport of light in the visible wavelength

  18. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  19. Models for predicting compressive strength and water absorption of ...

    African Journals Online (AJOL)

    This work presents a mathematical model for predicting the compressive strength and water absorption of laterite-quarry dust cement block using augmented Scheffe's simplex lattice design. The statistical models developed can predict the mix proportion that will yield the desired property. The models were tested for lack of ...
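    A hedged sketch of the Scheffé fit: build the quadratic simplex-lattice design matrix over the mixture fractions and solve for the coefficients by least squares. The mixture fractions and strengths below are invented placeholders, not the paper's data, and the paper's augmented design adds further interior points.

```python
import numpy as np

def scheffe_quadratic_design(x):
    """Design matrix of the quadratic Scheffe polynomial (linear terms plus
    pairwise products, no intercept) for mixture fractions summing to one."""
    x = np.asarray(x, dtype=float)
    q = x.shape[1]
    cols = [x[:, i] for i in range(q)]
    cols += [x[:, i] * x[:, j] for i in range(q) for j in range(i + 1, q)]
    return np.column_stack(cols)

# Hypothetical mix fractions (cement, laterite, quarry dust) and strengths.
X = np.array([[0.50, 0.25, 0.25], [0.40, 0.40, 0.20], [0.60, 0.20, 0.20],
              [0.50, 0.30, 0.20], [0.45, 0.35, 0.20], [0.55, 0.15, 0.30]])
strength = np.array([22.1, 18.4, 25.0, 21.3, 19.8, 23.5])  # MPa, invented

beta, *_ = np.linalg.lstsq(scheffe_quadratic_design(X), strength, rcond=None)
pred = scheffe_quadratic_design(X) @ beta   # predicted strength per mix
```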

  20. A predictive model for swallowing dysfunction after curative radiotherapy in head and neck cancer

    NARCIS (Netherlands)

    Langendijk, Johannes A.; Doornaert, Patricia; Rietveld, Derek H. F.; Verdonck-de Leeuw, Irma M.; Leemans, C. Rene; Slotman, Ben J.

    Introduction: Recently, we found that swallowing dysfunction after curative (chemo)radiation ((CH)RT) has a strong negative impact on health-related quality of life (HRQoL), even more than xerostomia. The purpose of this study was to design a predictive model for swallowing dysfunction after

  1. A predictive model for swallowing dysfunction after curative radiotherapy in head and neck cancer.

    NARCIS (Netherlands)

    Langendijk, J.A.; Doornaert, P.A.H.; Rietveld, D.H.F.; de Leeuw, I.M.; Leemans, C.R.; Slotman, B.J.

    2009-01-01

    Introduction: Recently, we found that swallowing dysfunction after curative (chemo)radiation ((CH)RT) has a strong negative impact on health-related quality of life (HRQoL), even more than xerostomia. The purpose of this study was to design a predictive model for swallowing dysfunction after

  2. Truncated exponential-rigid-rotor model for strong electron and ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.; Fleischmann, H.H.

    1979-01-01

    A comprehensive study of exponential-rigid-rotor equilibria for strong electron and ion rings indicates the presence of a sizeable percentage of untrapped particles in all equilibria with aspect ratios R/a ≲ 4. Such aspect ratios are required in fusion-relevant rings. Significant changes in the equilibria are observed when untrapped particles are excluded by the use of a truncated exponential-rigid-rotor distribution function. (author)

  3. Regression models for predicting anthropometric measurements of ...

    African Journals Online (AJOL)

    measure anthropometric dimensions to predict difficult-to-measure dimensions required for ergonomic design of school furniture. A total of 143 students aged between 16 and 18 years from eight public secondary schools in Ogbomoso, Nigeria ...

  4. FINITE ELEMENT MODEL FOR PREDICTING RESIDUAL ...

    African Journals Online (AJOL)

    direction (σx) had a maximum value of 375 MPa (tensile) and a minimum value of ... These results show that the residual stresses obtained by prediction from the finite element method are in fair agreement with the experimental results.

  5. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful ... Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical ... visualization to improve our understanding of the different attained performances, effectively compiling all the conducted experiments in a meaningful way. We complete our study with an entropy-based analysis that highlights the uncertainty-handling properties provided by the GP, crucial for prediction tasks ...
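    Assuming the GP here denotes a Gaussian process (consistent with the "probabilistic interpretation" above), a minimal classification sketch on synthetic stand-in data; the study's real data and kernel choices are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

# Synthetic stand-in for financial-ratio features of healthy/failed firms.
X, y = make_classification(n_samples=600, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

gp = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X_tr, y_tr)
proba = gp.predict_proba(X_te)[:, 1]    # probabilistic failure predictions
print("accuracy:", gp.score(X_te, y_te))
```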

  6. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  7. From Predictive Models to Instructional Policies

    Science.gov (United States)

    Rollinson, Joseph; Brunskill, Emma

    2015-01-01

    At their core, Intelligent Tutoring Systems consist of a student model and a policy. The student model captures the state of the student and the policy uses the student model to individualize instruction. Policies require different properties from the student model. For example, a mastery threshold policy requires the student model to have a way…

  8. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and is not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to incorporate the advantages and disadvantages of different models to obtain better accuracy.
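
    A Markov chain pavement model of the kind compared above propagates a distribution over discrete condition states through a transition matrix estimated from repeated inspections. A minimal sketch with a purely illustrative four-state matrix:

    ```python
    import numpy as np

    # Hypothetical 4-state faulting condition scale (state 1 = best, 4 = worst).
    # In practice the transition probabilities are estimated from repeated
    # visual-inspection surveys; these values are invented for illustration.
    P = np.array([
        [0.85, 0.15, 0.00, 0.00],
        [0.00, 0.80, 0.20, 0.00],
        [0.00, 0.00, 0.75, 0.25],
        [0.00, 0.00, 0.00, 1.00],   # worst state is absorbing
    ])

    state = np.array([1.0, 0.0, 0.0, 0.0])   # new pavement: all mass in state 1
    for year in range(1, 11):
        state = state @ P                     # one-year deterioration step
        print(f"year {year:2d}: P(state) = {np.round(state, 3)}")
    ```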

  9. CFD modeling of particle behavior in supersonic flows with strong swirls for gas separation

    DEFF Research Database (Denmark)

    Yang, Yan; Wen, Chuang

    2017-01-01

    The supersonic separator is a novel technique to remove the condensable components from gas mixtures, but particle behavior in this complex supersonic flow is not well understood. The Discrete Particle Method was used here to study the particle motion in supersonic flows with a strong swirl. The results showed that the gas flow was accelerated to supersonic velocity, creating the low-pressure and low-temperature conditions required for gas removal. Most of the particles collided with the walls or entered the liquid-collection space directly, while only a few particles escaped together with the gas...

  10. A model to predict the beginning of the pollen season

    DEFF Research Database (Denmark)

    Toldam-Andersen, Torben Bo

    1991-01-01

    In order to predict the beginning of the pollen season, a model comprising the Utah phenoclimatography Chill Unit (CU) and ASYMCUR-Growing Degree Hour (GDH) submodels was used to predict the first bloom in Alnus, Ulmus and Betula. The model relates environmental temperatures to rest completion...... and bud development. As phenologic parameter 14 years of pollen counts were used. The observed dates for the beginning of the pollen seasons were defined from the pollen counts and compared with the model prediction. The CU and GDH submodels were used as: 1. A fixed day model, using only the GDH model...... for fruit trees are generally applicable, and give a reasonable description of the growth processes of other trees. This type of model can therefore be of value in predicting the start of the pollen season. The predicted dates were generally within 3-5 days of the observed. Finally the possibility of frost...
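
    A minimal sketch of the fixed-day GDH accumulation idea described above: heat units above a base temperature are summed hour by hour from a fixed start date until a species-specific threshold marks the predicted season start. The base temperature and threshold below are invented for illustration, not the paper's calibrated values.

    ```python
    import numpy as np

    def predict_season_start(hourly_temps, t_base=4.0, gdh_threshold=6000.0):
        """Return the hour index at which accumulated Growing Degree Hours
        (GDH) first reach the threshold, or None if it is never reached."""
        gdh = np.cumsum(np.maximum(np.asarray(hourly_temps) - t_base, 0.0))
        reached = np.nonzero(gdh >= gdh_threshold)[0]
        return int(reached[0]) if reached.size else None

    # Synthetic warming spring: one temperature per hour for 120 days
    hours = np.arange(120 * 24)
    temps = 2.0 + 10.0 * hours / hours.max() + 5.0 * np.sin(2 * np.pi * hours / 24)
    h = predict_season_start(temps)
    print(f"Threshold reached after {h / 24:.1f} days" if h else "Not reached")
    ```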

  11. Risk prediction model: Statistical and artificial neural network approach

    Science.gov (United States)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand the suitable approach to, and the development and validation process of, such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  12. Lightning Forecasts and Data Assimilation into Numerical Weather Prediction Models

    Science.gov (United States)

    MacGorman, D. R.; Mansell, E. R.; Fierro, A.; Ziegler, C.

    2012-12-01

    This presentation reviews two aspects of lightning in numerical weather prediction (NWP) models: forecasting lightning and assimilating lightning data into NWP models to improve weather forecasts. One of the earliest routine forecasts of lightning was developed for fire weather operations. This approach used a multi-parameter regression analysis of archived cloud-to-ground (CG) lightning data and archived NWP data to optimize the combination of model state variables to use in forecast equations for various CG rates. Since then, understanding of how storms produce lightning has improved greatly. As the treatment of ice in microphysics packages used by NWP models has improved and the horizontal resolution of models has begun approaching convection-permitting scales (with convection-resolving scales on the horizon), it is becoming possible to use this improved understanding in NWP models to predict lightning more directly. An important role for data assimilation in NWP models is to depict the location, timing, and spatial extent of thunderstorms during model spin-up so that the effects of prior convection that can strongly influence future thunderstorm activity, such as updrafts and outflow boundaries, can be included in the initial state of a NWP model run. Radar data have traditionally been used, but systems that map lightning activity with varying degrees of coverage, detail, and detection efficiency are now available routinely over large regions and reveal information about storms that is complementary to the information provided by radar. Because data from lightning mapping systems are compact, easily handled, and reliably indicate the location and timing of thunderstorms, even in regions with little or no radar coverage, several groups have investigated techniques for assimilating these data into NWP models. This application will become even more valuable with the launch of the Geostationary Lightning Mapper on the GOES-R satellite, which will extend routine

  13. Evaluation of the US Army fallout prediction model

    International Nuclear Information System (INIS)

    Pernick, A.; Levanon, I.

    1987-01-01

    The US Army fallout prediction method was evaluated against an advanced fallout prediction model--SIMFIC (Simplified Fallout Interpretive Code). The danger zone areas of the US Army method were found to be significantly greater (up to a factor of 8) than the areas of corresponding radiation hazard as predicted by SIMFIC. Nonetheless, because the US Army's method predicts danger zone lengths that are commonly shorter than the corresponding hot line distances of SIMFIC, the US Army's method is not reliably conservative

  14. Comparative Evaluation of Some Crop Yield Prediction Models ...

    African Journals Online (AJOL)

    A computer program was adopted from the work of Hill et al. (1982) to calibrate and test three of the existing yield prediction models using tropical cowpea yield–weather data. The models tested were the Hanks Model (first and second versions), the Stewart Model (first and second versions) and the Hall–Butcher Model. Three sets of ...

  16. Asymmetric response of the Atlantic Meridional Ocean Circulation to freshwater anomalies in a strongly-eddying global ocean model

    NARCIS (Netherlands)

    Brunnabend, Sandra Esther|info:eu-repo/dai/nl/371740878; Dijkstra, Henk A.|info:eu-repo/dai/nl/073504467

    2017-01-01

    The Atlantic Meridional Overturning Circulation (AMOC) responds sensitively to density changes in regions of deepwater formation. In this paper, we investigate the nonlinear response of the AMOC to large amplitude freshwater changes around Greenland using a strongly-eddying global ocean model. Due

  17. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII) are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory...

  18. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of

  19. Ocean wave prediction using numerical and neural network models

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Prabaharan, N.

    This paper presents an overview of the development of the numerical wave prediction models and recently used neural networks for ocean wave hindcasting and forecasting. The numerical wave models express the physical concepts of the phenomena...

  20. A Prediction Model of the Capillary Pressure J-Function.

    Directory of Open Access Journals (Sweden)

    W S Xu

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model, and the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative.
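
    A hedged sketch of the power-function idea in this record: compute the Leverett J-function from a capillary pressure curve and fit J(Sw) = a·Sw^b by least squares in log space. The rock and fluid properties below are invented for illustration.

    ```python
    import numpy as np

    def leverett_j(pc, k, phi, sigma, cos_theta):
        """Leverett J-function: J(Sw) = Pc * sqrt(k/phi) / (sigma * cos(theta))."""
        return pc * np.sqrt(k / phi) / (sigma * cos_theta)

    # Synthetic capillary-pressure curve (illustrative values only)
    sw = np.linspace(0.2, 0.9, 15)           # water saturation
    pc = 2.0e4 * sw ** -1.5                  # Pa, made-up drainage curve
    j = leverett_j(pc, k=1e-13, phi=0.2, sigma=0.03, cos_theta=1.0)

    # Fit the power-law form J = a * Sw**b via least squares in log space
    b, log_a = np.polyfit(np.log(sw), np.log(j), 1)
    print(f"fitted power function: J(Sw) = {np.exp(log_a):.3f} * Sw^{b:.3f}")
    ```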

  1. Predictive models for monitoring and analysis of the total zooplankton

    Directory of Open Access Journals (Sweden)

    Obradović Milica

    2014-01-01

    In recent years, modeling and prediction of total zooplankton abundance have been performed with various tools and techniques, among which data mining tools have been less frequent. The purpose of this paper is to automatically determine the dependency degree and the influence of physical, chemical and biological parameters on the total zooplankton abundance, through the design of specific data mining models. For this purpose, key-influencer analysis was used. The analysis is based on data obtained from the SeLaR information system - specifically, data from two reservoirs (Gruža and Grošnica) with different morphometric characteristics and trophic states. The data were transformed into an optimal structure for analysis, upon which a data mining model based on the Naïve Bayes algorithm was constructed. The results of the analysis imply that in both reservoirs, parameters of groups and species of zooplankton have the greatest influence on the total zooplankton abundance. If these inputs (groups and zooplankton species) are left out, differences in the impact of physical, chemical and other biological parameters between the reservoirs can be noted. In the Grošnica reservoir, the analysis showed that the temporal dimension (month), nitrates, water temperature, chemical oxygen demand, chlorophyll and chlorides had the key influence, with strong relative impact. In the Gruža reservoir, the key influence parameters for total zooplankton are the spatial dimension (location), water temperature and physiological groups of bacteria. The results show that the presented data mining model is usable on any kind of aquatic ecosystem and can also serve for the detection of inputs which could be the basis for future analysis and modeling.
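
    A rough sketch of a key-influencer analysis in the spirit of this record: a Naïve Bayes classifier over environmental parameters, with influencers ranked by permutation importance. The feature names and data are hypothetical stand-ins, not the SeLaR variables.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    features = ["month", "water_temp", "nitrates", "COD", "chlorophyll"]  # hypothetical
    X = rng.normal(size=(200, len(features)))
    # Synthetic binary target: high vs. low total zooplankton abundance
    y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

    nb = GaussianNB().fit(X, y)
    imp = permutation_importance(nb, X, y, n_repeats=30, random_state=0)
    for name, score in sorted(zip(features, imp.importances_mean),
                              key=lambda t: -t[1]):
        print(f"{name:12s} relative influence {score:.3f}")
    ```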

  2. Statistical model based gender prediction for targeted NGS clinical panels

    Directory of Open Access Journals (Sweden)

    Palani Kannan Kandavel

    2017-12-01

    A reference test dataset is used to test the model. The sensitivity of gender prediction has been increased relative to the current “genotype composition in ChrX” based approach. In addition, the prediction score given by the model can be used to evaluate the quality of a clinical dataset: a higher prediction score towards the respective gender indicates a higher quality of the sequenced data.

  3. Comparative analysis of two mathematical models for prediction

    African Journals Online (AJOL)

    A mathematical modeling for prediction of compressive strength of sandcrete blocks was performed using statistical analysis for the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of ...

  4. Comparison of predictive models for the early diagnosis of diabetes

    NARCIS (Netherlands)

    M. Jahani (Meysam); M. Mahdavi (Mahdi)

    2016-01-01

    Objectives: This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. Methods: We used memetic algorithms to update weights and to improve

  5. Testing and analysis of internal hardwood log defect prediction models

    Science.gov (United States)

    R. Edward. Thomas

    2011-01-01

    The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...

  6. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  7. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  8. Demonstrating the improvement of predictive maturity of a computational model

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory]; Unal, Cetin [Los Alamos National Laboratory]; Atamturktur, Huriye S [Clemson University]

    2010-01-01

    We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.

  9. Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling

    NARCIS (Netherlands)

    Kayastha, N.

    2014-01-01

    Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of

  11. Wind turbine control and model predictive control for uncertain systems

    DEFF Research Database (Denmark)

    Thomsen, Sven Creutz

    … as disturbance models for controller design. The theoretical study deals with Model Predictive Control (MPC). MPC is an optimal control method which is characterized by the use of a receding prediction horizon. MPC has risen in popularity due to its inherent ability to systematically account for time...
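
    A minimal sketch of the receding-horizon principle mentioned above, using cvxpy (an assumed tool, not one named in the thesis): at each sample a finite-horizon quadratic program is solved, only the first control move is applied, and the optimization repeats.

    ```python
    import numpy as np
    import cvxpy as cp

    # Toy discrete-time linear system x+ = A x + B u (illustrative matrices)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    N = 20                       # receding prediction horizon length
    x = np.array([1.0, 0.0])     # initial state

    for step in range(5):        # closed loop: re-solve at every sample
        X = cp.Variable((2, N + 1))
        U = cp.Variable((1, N))
        cost, constraints = 0, [X[:, 0] == x]
        for k in range(N):
            cost += cp.sum_squares(X[:, k]) + 0.1 * cp.sum_squares(U[:, k])
            constraints += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                            cp.abs(U[:, k]) <= 1.0]
        cp.Problem(cp.Minimize(cost), constraints).solve()
        u0 = U.value[:, 0]       # apply only the first move, then re-plan
        x = A @ x + B @ u0
        print(f"step {step}: u0={u0[0]:+.3f}, x={np.round(x, 3)}")
    ```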

  12. Hidden Markov Model for quantitative prediction of snowfall and ...

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  13. Model predictive control of a 3-DOF helicopter system using ...

    African Journals Online (AJOL)

    ... by simulation, and its performance is compared with that achieved by linear model predictive control (LMPC). Keywords: nonlinear systems, helicopter dynamics, MIMO systems, model predictive control, successive linearization. International Journal of Engineering, Science and Technology, Vol. 2, No. 10, 2010, pp. 9-19 ...

  14. Models for predicting fuel consumption in sagebrush-dominated ecosystems

    Science.gov (United States)

    Clinton S. Wright

    2013-01-01

    Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....

  15. Comparative Analysis of Two Mathematical Models for Prediction of ...

    African Journals Online (AJOL)

    A mathematical modeling for prediction of compressive strength of sandcrete blocks was performed using statistical analysis for the sandcrete block data obtained from experimental work done in this study. The models used are Scheffe's and Osadebe's optimization theories to predict the compressive strength of sandcrete ...

  16. A mathematical model for predicting earthquake occurrence ...

    African Journals Online (AJOL)

    We consider the continental crust under damage. We use the observed results of microseisms at many seismic stations of the world, established to study the time series of the activities of the continental crust, with a view to predicting the possible time of occurrence of an earthquake. We consider microseism time series ...

  17. Model for predicting the injury severity score.

    Science.gov (United States)

    Hagiwara, Shuichi; Oshima, Kiyohiro; Murata, Masato; Kaneko, Minoru; Aoki, Makoto; Kanbe, Masahiko; Nakamura, Takuro; Ohyama, Yoshio; Tamura, Jun'ichi

    2015-07-01

    To determine the formula that predicts the injury severity score from parameters that are obtained in the emergency department at arrival. We reviewed the medical records of trauma patients who were transferred to the emergency department of Gunma University Hospital between January 2010 and December 2010. The injury severity score, age, mean blood pressure, heart rate, Glasgow coma scale, hemoglobin, hematocrit, red blood cell count, platelet count, fibrinogen, international normalized ratio of prothrombin time, activated partial thromboplastin time, and fibrin degradation products were examined in those patients on arrival. To determine the formula that predicts the injury severity score, multiple linear regression analysis was carried out. The injury severity score was set as the dependent variable, and the other parameters were set as candidate objective variables. IBM SPSS Statistics 20 was used for the statistical analysis. Statistical significance was set at P < 0.05. The Durbin–Watson ratio was 2.200. A formula for predicting the injury severity score in trauma patients was developed with ordinary parameters such as fibrin degradation products and mean blood pressure. This formula is useful because we can predict the injury severity score easily in the emergency department.
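
    A hedged sketch of the regression setup this record describes: ordinary multiple linear regression of a synthetic injury severity score on stand-in admission parameters, with the Durbin–Watson statistic reported as in the paper. All data here are simulated, not patient records.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(1)
    n = 120
    # Hypothetical admission parameters (units illustrative)
    fdp = rng.gamma(2.0, 20.0, n)            # fibrin degradation products
    mbp = rng.normal(90.0, 15.0, n)          # mean blood pressure
    iss = 5 + 0.15 * fdp - 0.08 * mbp + rng.normal(0, 4, n)  # synthetic ISS

    X = sm.add_constant(np.column_stack([fdp, mbp]))
    model = sm.OLS(iss, X).fit()             # multiple linear regression
    print(model.params)                      # fitted formula coefficients
    print("Durbin-Watson:", durbin_watson(model.resid))
    ```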

  18. Bentonite swelling pressure in strong NaCl solutions. Correlation between model calculations and experimentally determined data

    Energy Technology Data Exchange (ETDEWEB)

    Karnland, O. [Clay Technology, Lund (Sweden)

    1997-12-01

    A number of quite different quantitative models concerning swelling pressure in bentonite clay have been proposed by different researchers over the years. The present report examines some of the models which possibly may be used also for saline conditions. A discrepancy between calculated and measured values was noticed for all models at brine conditions. In general the models predicted a too low swelling pressure compared to what was experimentally found. An osmotic component in the clay/water system is proposed in order to improve the previous conservative use of the thermodynamic model. Calculations of this osmotic component is proposed to be made by use of the clay cation exchange capacity and Donnan equilibrium. Calculations made by this approach showed considerably better correlation to literature laboratory data, compared to calculations made by the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved thermodynamic model predicts substantial bentonite swelling pressures also in saturated sodium chloride solution if the density of the system is high enough. In practice, the model predicts a substantial swelling pressure for the buffer in a KBS-3 repository if the system is exposed to brines, but the positive effects of mixing bentonite into a backfill material will be lost, since the available compaction technique does not give a sufficiently high bentonite density.

  20. Field-theoretic Methods in Strongly-Coupled Models of General Gauge Mediation

    CERN Document Server

    Fortin, Jean-Francois

    2013-01-01

    An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current-current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry-breaking arises both from a hidden sector and dynamically.

  1. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggressive province/state models outperformed individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  2. Consistent Particle-Continuum Modeling and Simulation of Flows in Strong Thermochemical Nonequilibrium

    Data.gov (United States)

    National Aeronautics and Space Administration — During hypersonic entry into a planetary atmosphere, a spacecraft transitions from free-molecular flow conditions to fully continuum conditions. When modeling and...

  3. A model-independent description of few-body system with strong interaction

    International Nuclear Information System (INIS)

    Simenog, I.V.

    1985-01-01

    In this contribution, the authors discuss the formulation of equations that provide model-independent description of systems of three and more nucleons irrespective of the details of the interaction, substantiate the approach, estimate the correction terms with respect to the force range, and give basic qualitative results obtained by means of the model-independent procedure. They consider three nucleons in the doublet state (spin S=I/2) taking into account only S-interaction. The elastic nd-scattering amplitude may be found from the model-independent equations that follow from the Faddeev equations in the short-range-force limit. They note that the solutions of several model-independent equations and basic results obtained with the use of this approach may serve both as a standard solution and starting point in the discussion of various conceptions concerning the details of nuclear interactions

  4. Fixed recurrence and slip models better predict earthquake behavior than the time- and slip-predictable models 1: repeating earthquakes

    Science.gov (United States)

    Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki

    2012-01-01

    The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the time and slip of these events is predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.

  5. Adding propensity scores to pure prediction models fails to improve predictive performance

    Directory of Open Access Journals (Sweden)

    Amy S. Nowacki

    2013-08-01

    Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores, with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
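
    A rough sketch of the comparison this record reports: cross-validated discrimination of a full-covariate logistic model versus a propensity-adjusted one, on synthetic data. The "treatment" definition and covariates are illustrative assumptions, not the authors' clinical scenarios or validation protocol.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    treatment = X[:, 0] > 0                  # stand-in "exposure" variable

    # Propensity score: P(treatment | remaining covariates)
    ps_model = LogisticRegression(max_iter=1000).fit(X[:, 1:], treatment)
    ps = ps_model.predict_proba(X[:, 1:])[:, 1]

    full = X                                            # all covariates
    ps_adjusted = np.column_stack([treatment, ps])      # treatment + propensity only

    for name, feats in [("full covariate model", full),
                        ("propensity-adjusted model", ps_adjusted)]:
        auc = cross_val_score(LogisticRegression(max_iter=1000), feats, y,
                              cv=5, scoring="roc_auc").mean()
        print(f"{name}: cross-validated AUC = {auc:.3f}")
    ```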

  6. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

    The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for the coherent and coordinated research infrastructures in the PEEX domain. The PEEX Modelling Platform is characterized by a complex seamless integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. The ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  7. Seismic attenuation relationship with homogeneous and heterogeneous prediction-error variance models

    Science.gov (United States)

    Mu, He-Qing; Xu, Rong-Rong; Yuen, Ka-Veng

    2014-03-01

    Peak ground acceleration (PGA) estimation is an important task in earthquake engineering practice. One of the most well-known models is the Boore-Joyner-Fumal formula, which estimates the PGA using the moment magnitude, the site-to-fault distance and the site foundation properties. In the present study, the complexity for this formula and the homogeneity assumption for the prediction-error variance are investigated and an efficiency-robustness balanced formula is proposed. For this purpose, a reduced-order Monte Carlo simulation algorithm for Bayesian model class selection is presented to obtain the most suitable predictive formula and prediction-error model for the seismic attenuation relationship. In this approach, each model class (a predictive formula with a prediction-error model) is evaluated according to its plausibility given the data. The one with the highest plausibility is robust since it possesses the optimal balance between the data fitting capability and the sensitivity to noise. A database of strong ground motion records in the Tangshan region of China is obtained from the China Earthquake Data Center for the analysis. The optimal predictive formula is proposed based on this database. It is shown that the proposed formula with heterogeneous prediction-error variance is much simpler than the attenuation model suggested by Boore, Joyner and Fumal (1993).

  8. A Comprehensive Analysis of Jet Quenching via a Hybrid Strong/Weak Coupling Model for Jet-Medium Interactions

    Energy Technology Data Exchange (ETDEWEB)

    Casalderrey-Solana, Jorge [Departament d'Estructura i Constituents de la Matèria and Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain); Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom)]; Gulhan, Doga Can [Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]; Milhano, José Guilherme [CENTRA, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, P-1049-001 Lisboa (Portugal); Physics Department, Theory Unit, CERN, CH-1211 Genève 23 (Switzerland)]; Pablos, Daniel [Departament d'Estructura i Constituents de la Matèria and Institut de Ciències del Cosmos (ICCUB), Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona (Spain)]; Rajagopal, Krishna [Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]

    2016-12-15

    Within a hybrid strong/weak coupling model for jets in strongly coupled plasma, we explore jet modifications in ultra-relativistic heavy ion collisions. Our approach merges the perturbative dynamics of hard jet evolution with the strongly coupled dynamics which dominates the soft exchanges between the fast partons in the jet shower and the strongly coupled plasma itself. We implement this approach in a Monte Carlo, which supplements the DGLAP shower with the energy loss dynamics as dictated by holographic computations, up to a single free parameter that we fit to data. We then augment the model by incorporating the transverse momentum picked up by each parton in the shower as it propagates through the medium, at the expense of adding a second free parameter. We use this model to discuss the influence of the transverse broadening of the partons in a jet on intra-jet observables. In addition, we explore the sensitivity of such observables to the back-reaction of the plasma to the passage of the jet.

  9. Models Predicting Success of Infertility Treatment: A Systematic Review

    Science.gov (United States)

    Zarinara, Alireza; Zeraati, Hojjat; Kamali, Koorosh; Mohammad, Kazem; Shahnazari, Parisa; Akhondi, Mohammad Mehdi

    2016-01-01

    Background: Infertile couples are faced with problems that affect their marital life. Infertility treatment is expensive and time consuming and occasionally is simply not possible. Prediction models for infertility treatment have been proposed, and prediction of treatment success is a new field in infertility treatment. Because prediction of treatment success is a new need for infertile couples, this paper reviewed previous studies to develop a general picture of the applicability of the models. Methods: This study was conducted as a systematic review at Avicenna Research Institute in 2015. Six databases were searched based on WHO definitions and MeSH keywords. Papers about prediction models in infertility were evaluated. Results: Eighty-one papers were eligible for the study. The papers covered years after 1986, and studies were designed both retrospectively and prospectively. IVF prediction models accounted for the largest share of the papers. The most common predictors were age, duration of infertility, and ovarian and tubal problems. Conclusion: A prediction model can be clinically applied if it can be statistically evaluated and has good validation for treatment success. To achieve better results, physicians' and couples' estimates of the treatment success rate should be based on history, examination and clinical tests. Models must be checked for theoretical soundness and appropriate validation. The advantages of applying prediction models are a decrease in cost and time, avoidance of painful treatment for patients, assessment of the treatment approach for physicians, and support for decision making by health managers. Careful selection of the approach for designing and using these models is therefore essential. PMID:27141461

  10. Long-term prediction of fish growth under varying ambient temperature using a multiscale dynamic model

    Directory of Open Access Journals (Sweden)

    Radde Nicole

    2009-11-01

    Background: Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results: For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion: We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models.

  11. Towards a generalized energy prediction model for machine tools.

    Science.gov (United States)

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
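
    A minimal sketch of GP regression with uncertainty intervals as described above, using synthetic process parameters in place of the collected machine-tool data; the kernel choice, parameter ranges and energy relation are assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    # Hypothetical process parameters: [spindle speed, feed rate, depth of cut]
    X = rng.uniform([1000, 50, 0.5], [5000, 500, 3.0], size=(80, 3))
    energy = (0.002 * X[:, 0] + 0.01 * X[:, 1] + 4.0 * X[:, 2]
              + rng.normal(0, 0.5, 80))              # synthetic energy (kJ)

    kernel = 1.0 * RBF(length_scale=[1000, 100, 1.0]) + WhiteKernel(1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, energy)

    X_new = np.array([[3000, 200, 1.5]])
    mean, std = gp.predict(X_new, return_std=True)   # mean and uncertainty
    print(f"predicted energy: {mean[0]:.2f} +/- {2 * std[0]:.2f} (95% interval)")
    ```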

  12. Boson-Jet Correlations in a Hybrid Strong/Weak Coupling Model for Jet Quenching in Heavy Ion Collisions

    CERN Document Server

    Casalderrey-Solana, Jorge; Milhano, Jose Guilherme; Pablos, Daniel; Rajagopal, Krishna

    2016-06-11

    We confront a hybrid strong/weak coupling model for jet quenching with data from LHC heavy ion collisions. The model combines the perturbative QCD physics at high momentum transfer and the strongly coupled dynamics of non-abelian gauge theory plasmas in a phenomenological way. By performing a full Monte Carlo simulation, and after fitting one single parameter, we successfully describe several jet observables at the LHC, including dijet and photon-jet measurements. Within current theoretical and experimental uncertainties, we find that such observables show little sensitivity to the specifics of the microscopic energy loss mechanism. We also present a new observable, the ratio of the fragmentation function of inclusive jets to that of the associated jets in dijet pairs, which can discriminate among different medium models. Finally, we discuss the importance of plasma response to jet passage in jet shapes.

  13. Poisson Mixture Regression Models for Heart Disease Prediction

    Science.gov (United States)

    Erol, Hamza

    2016-01-01

    Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models are addressed here under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease componentwise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks componentwise using a Poisson mixture regression model. PMID:27999611
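
    A hedged sketch of the mixture idea underlying this record: an EM fit of a two-component Poisson mixture (without the regression covariates of the paper), showing how counts are softly clustered into components. All data are synthetic.

    ```python
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(0)
    # Synthetic counts from two latent groups (e.g. low- vs. high-risk)
    data = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 200)])

    # EM for a two-component Poisson mixture
    w, lam = np.array([0.5, 0.5]), np.array([1.0, 5.0])   # initial guesses
    for _ in range(200):
        # E-step: posterior responsibility of each component for each count
        resp = w * poisson.pmf(data[:, None], lam)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component rates
        w = resp.mean(axis=0)
        lam = (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)

    print("weights:", np.round(w, 3), "rates:", np.round(lam, 3))
    ```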

  14. Comparison of Predictive Models for the Early Diagnosis of Diabetes.

    Science.gov (United States)

    Jahani, Meysam; Mahdavi, Mahdi

    2016-04-01

    This study develops neural network models to improve the prediction of diabetes using clinical and lifestyle characteristics. Prediction models were developed using a combination of approaches and concepts. We used memetic algorithms to update weights and to improve the prediction accuracy of the models. In the first step, the optimum values for neural network parameters such as momentum rate, transfer function, and error function were obtained through trial and error and based on the results of previous studies. In the second step, the optimum parameters were applied to memetic algorithms in order to improve the accuracy of prediction. This preliminary analysis showed that the accuracy of the neural networks is 88%. In the third step, the accuracy of the neural network models was improved using a memetic algorithm, and the resulting model was compared with a logistic regression model using a confusion matrix and receiver operating characteristic (ROC) curve. The memetic algorithm improved the accuracy from 88.0% to 93.2%. We also found that the memetic algorithm had a higher accuracy than the model from the genetic algorithm and a regression model. Among the models, the regression model had the least accuracy. For the memetic algorithm model, the sensitivity, specificity, positive predictive value, negative predictive value, and area under the ROC curve were 96.2%, 95.3%, 93.8%, 92.4%, and 0.958, respectively. The results of this study provide a basis to design a Decision Support System for risk management and planning of care for individuals at risk of diabetes.

  15. Applications of modeling in polymer-property prediction

    Science.gov (United States)

    Case, F. H.

    1996-08-01

    A number of molecular modeling techniques have been applied for the prediction of polymer properties and behavior. Five examples illustrate the range of methodologies used. A simple atomistic simulation of small polymer fragments is used to estimate drug compatibility with a polymer matrix. The analysis of molecular dynamics results from a more complex model of a swollen hydrogel system is used to study gas diffusion in contact lenses. Statistical mechanics is used to predict conformation-dependent properties; an example is the prediction of liquid-crystal formation. The effect of the molecular weight distribution on phase separation in polyalkanes is predicted using thermodynamic models. In some cases, the properties of interest cannot be directly predicted using simulation methods or polymer theory. Correlation methods may be used to bridge the gap between molecular structure and macroscopic properties. The final example shows how connectivity-indices-based quantitative structure-property relationships were used to predict properties for candidate polyimides in an electronics application.

  16. Artificial Neural Network Model for Predicting Compressive

    OpenAIRE

    Salim T. Yousif; Salwa M. Abdullah

    2013-01-01

      Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at early time is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum...

  17. Prediction of hourly solar radiation with multi-model framework

    International Nuclear Information System (INIS)

    Wu, Ji; Chan, Chee Keong

    2013-01-01

    Highlights: • A novel approach to predict solar radiation through the use of clustering paradigms. • Development of prediction models based on the intrinsic pattern observed in each cluster. • Prediction based on proper clustering and selection of a model for the current period provides better results than other methods. • Experiments were conducted on actual solar radiation data obtained from a weather station in Singapore. - Abstract: In this paper, a novel multi-model prediction framework for prediction of solar radiation is proposed. The framework starts with the assumption that there are several patterns embedded in the solar radiation series. To extract the underlying patterns, the solar radiation series is first segmented into smaller subsequences, and the subsequences are further grouped into different clusters. For each cluster, an appropriate prediction model is trained. A pattern identification procedure is then used to identify the pattern that fits the current period; based on this pattern, the corresponding prediction model is applied to obtain the prediction value. The prediction results of the proposed framework are then compared to those of other techniques, and it is shown that the proposed framework provides superior performance.
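
    A minimal sketch of the cluster-then-predict framework described above: subsequences are grouped by k-means, one model is trained per cluster, and prediction routes the current subsequence to its matching cluster's model. The data and window length are invented for illustration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Synthetic radiation subsequences: 24-hour windows (illustrative)
    X = rng.random((400, 24))
    y = X[:, -1] * 0.8 + rng.normal(0, 0.05, 400)    # next-hour target

    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
              for c in range(3)}                     # one model per pattern

    x_now = rng.random((1, 24))                      # current subsequence
    c = km.predict(x_now)[0]                         # identify the pattern
    print(f"cluster {c}, prediction {models[c].predict(x_now)[0]:.3f}")
    ```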

  18. In Vitro Fertilization and Embryo Culture Strongly Impact the Placental Transcriptome in the Mouse Model

    Science.gov (United States)

    Fauque, Patricia; Mondon, Françoise; Letourneur, Franck; Ripoche, Marie-Anne; Journot, Laurent; Barbaux, Sandrine; Dandolo, Luisa; Patrat, Catherine; Wolf, Jean-Philippe; Jouannet, Pierre; Jammes, Hélène; Vaiman, Daniel

    2010-01-01

    Background Assisted Reproductive Technologies (ART) are increasingly used in humans; however, their impact is now questioned. At blastocyst stage, the trophectoderm is directly in contact with an artificial medium environment, which can impact placental development. This study was designed to carry out an in-depth analysis of the placental transcriptome after ART in mice. Methodology/Principal Findings Blastocysts were transferred either (1) after in vivo fertilization and development (control group) or (2) after in vitro fertilization and embryo culture. Placentas were then analyzed at E10.5. Six percent of transcripts were altered at the two-fold threshold in placentas of manipulated embryos, 2/3 of transcripts being down-regulated. Strikingly, the X-chromosome harbors 11% of altered genes, 2/3 being induced. Imprinted genes were modified similarly to the X. Promoter composition analysis indicates that FOXA transcription factors may be involved in the transcriptional deregulations. Conclusions For the first time, our study shows that in vitro fertilization associated with embryo culture strongly modify the placental expression profile, long after embryo manipulations, meaning that the stress of artificial environment is memorized after implantation. Expression of X and imprinted genes is also greatly modulated probably to adapt to adverse conditions. Our results highlight the importance of studying human placentas from ART. PMID:20169163

  20. Posterior Predictive Model Checking for Multidimensionality in Item Response Theory

    Science.gov (United States)

    Levy, Roy; Mislevy, Robert J.; Sinharay, Sandip

    2009-01-01

    If data exhibit multidimensionality, key conditional independence assumptions of unidimensional models do not hold. The current work pursues posterior predictive model checking, a flexible family of model-checking procedures, as a tool for criticizing models due to unaccounted-for dimensions in the context of item response theory. Factors…

  1. Model predictive control of a crude oil distillation column

    Directory of Open Access Journals (Sweden)

    Morten Hovd

    1999-04-01

    Full Text Available The project of designing and implementing model based predictive control on the vacuum distillation column at the Nynäshamn Refinery of Nynäs AB is described in this paper. The paper describes in detail the modeling for the model based control, covers the controller implementation, and documents the benefits gained from the model based controller.

  2. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    Science.gov (United States)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to reliance on prediction from one model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global-based BMA (BMA_G) prediction, which is in turn superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
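
    A minimal sketch of the deterministic BMA combination step may help. Full BMA estimates member weights with an EM algorithm on the training event; as a simplification here, weights are taken inversely proportional to each member's training RMSE, and all numbers are synthetic stand-ins for the 81 LISFLOOD-FP configurations.

```python
import numpy as np

# Simplified BMA-style combination: weight each ensemble member by its
# skill on a training event, then form a weighted-average prediction.
def bma_mean(member_preds_train, obs_train, member_preds_new):
    rmse = np.sqrt(((member_preds_train - obs_train) ** 2).mean(axis=1))
    w = 1.0 / rmse            # proxy for EM-estimated BMA weights
    w /= w.sum()
    return w @ member_preds_new  # deterministic weighted prediction

# Rows: members; columns: time steps of the training flood event.
train = np.array([[2.0, 2.1, 2.3], [1.9, 2.0, 2.4], [2.5, 2.6, 2.9]])
obs = np.array([2.0, 2.1, 2.35])
new = np.array([[2.2], [2.1], [2.7]])  # members' forecasts for a new time
print(bma_mean(train, obs, new))
```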

  3. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167

  4. Modeling number of claims and prediction of total claim amount

    Science.gov (United States)

    Acar, Aslıhan Şentürk; Karabey, Uǧur

    2017-07-01

    In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson and negative binomial models, zero-inflated Poisson and zero-inflated negative binomial models are used to model the number of claims in order to account for excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
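
    The comparison the abstract describes can be sketched with statsmodels, which implements both Poisson and zero-inflated Poisson count models. The data below are synthetic claim counts with structural zeros, a stand-in for the insurer's data; the real study also considered negative binomial variants.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Simulate claim counts with excess zeros (e.g., insured who never file).
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
lam = np.exp(0.3 + 0.5 * x)
counts = rng.poisson(lam)
counts[rng.random(n) < 0.3] = 0  # structural zeros

X = sm.add_constant(x)
poisson_fit = sm.Poisson(counts, X).fit(disp=0)
zip_fit = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1))).fit(disp=0)

# Compare fitted-mean RMSE of the two distributional assumptions.
for name, fit in [("Poisson", poisson_fit), ("ZIP", zip_fit)]:
    rmse = np.sqrt(((counts - fit.predict()) ** 2).mean())
    print(name, "RMSE:", round(rmse, 3))
```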

  5. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in

  6. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult because of the vehicles' strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative-sum idea from gray theory, which weakens the effect of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online by gray identification. Finally, a mixed H2/H∞ robust predictive control law is proposed based on linear matrix inequalities (LMIs) and receding-horizon optimization techniques. Because MPC actively handles system constraints, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically according to Lyapunov theory and illustrated by simulation results.

  7. Breaking of SU(4) symmetry and interplay between strongly-correlated phases in the Hubbard model

    Czech Academy of Sciences Publication Activity Database

    Golubeva, A.; Sotnikov, A.; Cichy, A.; Kuneš, Jan; Hofstetter, W.

    2017-01-01

    Vol. 95, No. 12 (2017), pp. 1-7, article No. 125108. ISSN 2469-9950 EU Projects: European Commission(XE) 646807 - EXMAG Institutional support: RVO:68378271 Keywords: Hubbard model * SU(4) Subject RIV: BE - Theoretical Physics OBOR OECD: Atomic, molecular and chemical physics (physics of atoms and molecules including collision, interaction with radiation, magnetic resonances, Mössbauer effect) Impact factor: 3.836, year: 2016

  8. Strong constraint on modelled global carbon uptake using solar-induced chlorophyll fluorescence data.

    Science.gov (United States)

    MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias

    2018-01-31

    Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly-available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr⁻¹ to 166 ± 10 PgC yr⁻¹, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr⁻¹. Strongest reductions in GPP are seen in boreal forests: the result is a shift in global GPP distribution, with a ~50% increase in the tropical to boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.

  9. Model-based uncertainty in species range prediction

    DEFF Research Database (Denmark)

    Pearson, R. G.; Thuiller, Wilfried; Bastos Araujo, Miguel

    2006-01-01

    Aim Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ and discuss the implications that model uncertainty has for policy-guiding applications. Location The Western Cape of South Africa. Methods We applied nine of the most widely used modelling techniques to model potential distributions under current ... algorithm when extrapolating beyond the range of data used to build the model. The effects of these factors should be carefully considered when using this modelling approach to predict species ranges. Main conclusions We highlight an important source of uncertainty in assessments of the impacts of climate ...

  10. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Science.gov (United States)

    Eom, Bang Wool; Joo, Jungnam; Kim, Sohee; Shin, Aesun; Yang, Hye-Ryung; Park, Junghyun; Choi, Il Ju; Kim, Young-Woo; Kim, Jeongseon; Nam, Byung-Ho

    2015-01-01

    Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the developing and validation cohorts (C-statistics: 0.764 for men, 0.706 for women).
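
    The modeling pipeline (a Cox proportional hazards fit plus Harrell's C-statistic for discrimination) can be sketched with the lifelines library. The toy data frame below is an illustrative stand-in for the cohort's ten risk factors, not the study's data.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Fit a Cox proportional hazards model on (synthetic) follow-up data
# and report Harrell's C-statistic, as in the study's evaluation.
df = pd.DataFrame({
    "age": [52, 60, 45, 70, 58, 49, 66, 55],
    "bmi": [24.1, 27.3, 22.0, 26.5, 25.2, 23.8, 28.0, 24.9],
    "smoker": [1, 1, 0, 1, 0, 0, 1, 0],
    "years": [11.2, 4.5, 12.0, 2.1, 9.8, 11.9, 3.3, 10.5],  # follow-up
    "cancer": [0, 1, 0, 1, 0, 0, 1, 0],                     # event flag
})
cph = CoxPHFitter().fit(df, duration_col="years", event_col="cancer")
print("Harrell's C-statistic:", round(cph.concordance_index_, 3))
```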

  11. AN EFFICIENT PATIENT INFLOW PREDICTION MODEL FOR HOSPITAL RESOURCE MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Kottalanka Srikanth

    2017-07-01

    Full Text Available There has been increasing demand for improved service provisioning in hospital resource management. Hospitals work under strict budget constraints while at the same time assuring quality care. To achieve quality care under a budget constraint, an efficient prediction model is required. Recently, various time-series-based prediction models have been proposed to manage hospital resources such as ambulance monitoring and emergency care. These models are not efficient, because they do not consider the nature of the scenario, such as climate conditions. To address this, artificial intelligence is adopted. The issue with existing prediction models is that training suffers from local optima errors, which induces overhead and reduces prediction accuracy. To overcome the local minima error, this work presents a patient inflow prediction model that adopts a resilient backpropagation neural network. Experiments are conducted to evaluate the performance of the proposed model in terms of RMSE and MAPE. The outcome shows that the proposed model reduces RMSE and MAPE compared to an existing backpropagation-based artificial neural network. The overall outcomes show that the proposed prediction model improves the accuracy of prediction, which aids in improving the quality of healthcare management.
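
    The optimizer named in the abstract, resilient backpropagation (Rprop), is available in PyTorch; a minimal sketch of training a small feed-forward inflow predictor with it follows. The features and targets are synthetic assumptions, not the paper's hospital data.

```python
import torch

# Train a small network with Rprop, which adapts per-weight step sizes
# from gradient signs only, the property the abstract credits with
# mitigating local-minima issues of plain backpropagation.
torch.manual_seed(0)
X = torch.randn(200, 4)  # e.g., lagged inflow, day-of-week, season, weather
y = (X @ torch.tensor([3.0, -1.0, 2.0, 0.5])).unsqueeze(1) + 50

net = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
opt = torch.optim.Rprop(net.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()
print("final RMSE:", loss.sqrt().item())
```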

  12. Risk Prediction Model for Severe Postoperative Complication in Bariatric Surgery.

    Science.gov (United States)

    Stenberg, Erik; Cao, Yang; Szabo, Eva; Näslund, Erik; Näslund, Ingmar; Ottosson, Johan

    2018-01-12

    Factors associated with risk for adverse outcome are important considerations in the preoperative assessment of patients for bariatric surgery. As yet, prediction models based on preoperative risk factors have not been able to predict adverse outcome sufficiently. This study aimed to identify preoperative risk factors and to construct a risk prediction model based on these. Patients who underwent a bariatric surgical procedure in Sweden between 2010 and 2014 were identified from the Scandinavian Obesity Surgery Registry (SOReg). Associations between preoperative potential risk factors and severe postoperative complications were analysed using a logistic regression model. A multivariate model for risk prediction was created and validated in the SOReg for patients who underwent bariatric surgery in Sweden, 2015. Revision surgery (standardized OR 1.19, 95% confidence interval (CI) 1.14-1.24) was among the factors retained in the prediction model. Despite high specificity, the sensitivity of the model was low. Revision surgery, high age, low BMI, large waist circumference, and dyspepsia/GERD were associated with an increased risk for severe postoperative complication. The prediction model based on these factors, however, had a sensitivity that was too low to predict risk in the individual patient case.
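
    The underlying workflow (a logistic model on preoperative factors, then sensitivity and specificity at a decision threshold) can be sketched as follows. The coefficients and rare-outcome rate are assumptions chosen so the example reproduces the paper's qualitative finding that specificity can be high while sensitivity stays low.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Synthetic preoperative features (e.g., age, BMI, waist, revision flag)
# and a rare severe-complication outcome.
rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 4))
p = 1 / (1 + np.exp(-(-3.5 + X @ np.array([0.6, -0.4, 0.3, 0.8]))))
y = rng.random(n) < p  # rare outcome (~3% event rate)

clf = LogisticRegression().fit(X, y)
pred = clf.predict_proba(X)[:, 1] > 0.5  # classify at a 0.5 threshold
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```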

  13. Prediction Model for Gastric Cancer Incidence in Korean Population.

    Directory of Open Access Journals (Sweden)

    Bang Wool Eom

    Full Text Available Predicting high risk groups for gastric cancer and motivating these groups to receive regular checkups is required for the early detection of gastric cancer. The aim of this study was to develop a prediction model for gastric cancer incidence based on a large population-based cohort in Korea. Based on the National Health Insurance Corporation data, we analyzed 10 major risk factors for gastric cancer. The Cox proportional hazards model was used to develop gender-specific prediction models for gastric cancer development, and the performance of the developed model in terms of discrimination and calibration was also validated using an independent cohort. Discrimination ability was evaluated using Harrell's C-statistics, and the calibration was evaluated using a calibration plot and slope. During a median of 11.4 years of follow-up, 19,465 (1.4%) and 5,579 (0.7%) newly developed gastric cancer cases were observed among 1,372,424 men and 804,077 women, respectively. The prediction models included age, BMI, family history, meal regularity, salt preference, alcohol consumption, smoking and physical activity for men, and age, BMI, family history, salt preference, alcohol consumption, and smoking for women. This prediction model showed good accuracy and predictability in both the developing and validation cohorts (C-statistics: 0.764 for men, 0.706 for women). In this study, a prediction model for gastric cancer incidence was developed that displayed a good performance.

  14. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. We investigated whether there are differences in performance of machine learning models trained and evaluated across different stages for predicting breast cancer survivability. Using three different machine learning methods, we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all the stages. We also evaluated the models separately for each stage and together for all the stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained for that particular stage. In our experiments, using additional examples of other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to differ across stages. By evaluating the models separately on different stages we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all the stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
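
    The parameterized model referenced here, developed from runup observations, offshore wave height, wave period, and beach slope, is presumably of the widely cited Stockdon et al. (2006) form; a sketch of the 2% exceedance runup under that assumption follows.

```python
import numpy as np

# Stockdon-style parameterization of 2% exceedance runup R2 from
# offshore wave height H0 (m), peak period T (s), and beach slope bf.
def runup_r2(H0, T, bf, g=9.81):
    L0 = g * T**2 / (2 * np.pi)           # deep-water wavelength
    setup = 0.35 * bf * np.sqrt(H0 * L0)  # wave setup component
    swash = np.sqrt(H0 * L0 * (0.563 * bf**2 + 0.004)) / 2  # total swash
    return 1.1 * (setup + swash)

print(runup_r2(H0=2.0, T=10.0, bf=0.08))  # storm-scale example, ~1.4 m
```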

  16. Building a Unified Computational Model for the Resonant X-Ray Scattering of Strongly Correlated Materials

    Energy Technology Data Exchange (ETDEWEB)

    Bansil, Arun [Northeastern Univ., Boston, MA (United States)

    2016-12-01

    Basic-Energy Sciences of the Department of Energy (BES/DOE) has made large investments in x-ray sources in the U.S. (NSLS-II, LCLS, NGLS, ALS, APS) as powerful enabling tools for opening up unprecedented new opportunities for exploring properties of matter at various length and time scales. The coming online of the pulsed photon source literally allows us to see and follow the dynamics of processes in materials at their natural timescales. There is an urgent need therefore to develop theoretical methodologies and computational models for understanding how x-rays interact with matter and the related spectroscopies of materials. The present project addressed aspects of this grand challenge of X-ray science. In particular, our Collaborative Research Team (CRT) focused on understanding and modeling of elastic and inelastic resonant X-ray scattering processes. We worked to unify the three different computational approaches currently used for modeling X-ray scattering—density functional theory, dynamical mean-field theory, and small-cluster exact diagonalization—to achieve a more realistic material-specific picture of the interaction between X-rays and complex matter. To achieve a convergence in the interpretation and to maximize complementary aspects of different theoretical methods, we concentrated on the cuprates, where most experiments have been performed. Our team included both US and international researchers, and it fostered new collaborations between researchers currently working with different approaches. In addition, we developed close relationships with experimental groups working in the area at various synchrotron facilities in the US. Our CRT thus helped toward enabling the US to assume a leadership role in the theoretical development of the field, and to create a global network and community of scholars dedicated to X-ray scattering research.

  17. Modeling cavities exhibiting strong lateral confinement using open geometry Fourier modal method

    DEFF Research Database (Denmark)

    Häyrynen, Teppo; Gregersen, Niels

    2016-01-01

    We have developed a computationally efficient Fourier-Bessel expansion based open geometry formalism for modeling the optical properties of rotationally symmetric photonic nanostructures. The lateral computation domain is assumed infinite so that no artificial boundary conditions are needed. Instead, the leakage of the modes due to imperfect field confinement is taken into account by using basis functions that expand the whole infinite space. The computational efficiency is obtained by using a non-uniform discretization in the frequency space in which the lateral expansion modes are more densely sampled...

  18. Femtocells Sharing Management using mobility prediction model

    OpenAIRE

    Barth, Dominique; Choutri, Amira; Kloul, Leila; Marcé, Olivier

    2013-01-01

    The bandwidth sharing paradigm constitutes an incentive-based solution to the serious capacity management problem faced by operators, as femtocell owners are able to offer QoS-guaranteed network access to mobile users within their femtocell coverage. In this paper, we consider a technico-economic bandwidth sharing model based on a reinforcement learning algorithm. Because such a model does not allow the convergence of the learning algorithm, due to the small size of the femtocells, the mobile users velo...

  19. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.

  20. Validating predictions from climate envelope models.

    Directory of Open Access Journals (Sweden)

    James I Watling

    Full Text Available Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species' distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on

  1. Predictor characteristics necessary for building a clinically useful risk prediction model: a simulation study

    Directory of Open Access Journals (Sweden)

    Laura Schummers

    2016-09-01

    Full Text Available Abstract Background Compelled by the intuitive appeal of predicting each individual patient’s risk of an outcome, there is a growing interest in risk prediction models. While the statistical methods used to build prediction models are increasingly well understood, the literature offers little insight to researchers seeking to gauge a priori whether a prediction model is likely to perform well for their particular research question. The objective of this study was to inform the development of new risk prediction models by evaluating model performance under a wide range of predictor characteristics. Methods Data from all births to overweight or obese women in British Columbia, Canada from 2004 to 2012 (n = 75,225) were used to build a risk prediction model for preeclampsia. The data were then augmented with simulated predictors of the outcome with pre-set prevalence values and univariable odds ratios. We built 120 risk prediction models that included known demographic and clinical predictors, and one, three, or five of the simulated variables. Finally, we evaluated standard model performance criteria (discrimination, risk stratification capacity, calibration, and Nagelkerke’s r2) for each model. Results Findings from our models built with simulated predictors demonstrated the predictor characteristics required for a risk prediction model to adequately discriminate cases from non-cases and to adequately classify patients into clinically distinct risk groups. Several predictor characteristics can yield well performing risk prediction models; however, these characteristics are not typical of predictor-outcome relationships in many population-based or clinical data sets. Novel predictors must be both strongly associated with the outcome and prevalent in the population to be useful for clinical prediction modeling (e.g., one predictor with prevalence ≥20 % and odds ratio ≥8, or 3 predictors with prevalence ≥10 % and odds ratios ≥4). Area

  2. RETRACTED: Flap side edge noise modeling and prediction

    Science.gov (United States)

    Guo, Yueping

    2013-08-01

    This article has been retracted: please see Elsevier Policy on Article Withdrawal (http://www.elsevier.com/locate/withdrawalpolicy).This article has been retracted at the request of the first author because of the overlap with previously published papers. The first author takes full responsibility and sincerely apologizes for the error made.This article has been retracted at the request of the Editor-in-Chief.The article duplicates significant parts of an earlier paper by the same author, published in AIAA (Y.P. Guo, Aircraft flap side edge noise modeling and prediction. American Institute of Aeronautics and Astronautics, (2011), 10.2514/6.2011-2731). Prior to republication, conference papers should be comprehensively extended, and re-use of any data should be appropriately cited. As such this article represents a severe abuse of the scientific publishing system. The scientific community takes a very strong view on this matter and apologies are offered to readers of the journal that this was not detected during the submission process.

  3. The Vlasov equation with strong magnetic field and oscillating electric field as a model for isotope resonant separation

    Directory of Open Access Journals (Sweden)

    Emmanuel Frenod

    2002-01-01

    Full Text Available We study the qualitative behavior of solutions to the Vlasov equation with strong external magnetic field and oscillating electric field. This model is relevant to the understanding of isotope resonant separation. We show that the effective equation is a kinetic equation with a memory term. This memory term involves a pseudo-differential operator whose kernel is characterized by an integral equation involving Bessel functions. The kernel is explicitly given in some particular cases.

  4. Climate predictability and prediction skill on seasonal time scales over South America from CHFP models

    Science.gov (United States)

    Osman, Marisol; Vera, C. S.

    2017-10-01

    This work presents an assessment of the predictability and skill of climate anomalies over South America. The study was made considering a multi-model ensemble of seasonal forecasts for surface air temperature, precipitation and regional circulation, from coupled global circulation models included in the Climate Historical Forecast Project. Predictability was evaluated through the estimation of the signal-to-total variance ratio, while prediction skill was assessed by computing anomaly correlation coefficients. Over the continent, both indicators present higher values in the tropics than in the extratropics for both surface air temperature and precipitation. Moreover, predictability and prediction skill for temperature are slightly higher in DJF than in JJA, while for precipitation they exhibit similar levels in both seasons. The largest values of predictability and skill for both variables and seasons are found over northwestern South America, while modest but still significant values are found for extratropical precipitation at southeastern South America and the extratropical Andes. The predictability levels of both variables in ENSO years are slightly higher, with the same spatial distribution, than those obtained considering all years. Nevertheless, predictability at the tropics for both variables and seasons diminishes in both warm and cold ENSO years with respect to that in all years. The latter can be attributed to changes in the signal rather than in the noise. Predictability and prediction skill for low-level winds and upper-level zonal winds over South America were also assessed. Maximum levels of predictability for low-level winds were found where the maximum mean values are observed, i.e. the regions associated with the equatorial trade winds, the midlatitude westerlies and the South American Low-Level Jet. Predictability maxima for upper-level zonal winds are located where the subtropical jet peaks. Seasonal changes in wind predictability are observed that seem to be related to
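
    The two diagnostics used throughout (signal-to-total variance ratio for predictability, anomaly correlation coefficient for skill) can be sketched at a single grid point; the ensemble below is synthetic, not CHFP output.

```python
import numpy as np

# Ensemble of shape (members, years): a common forced signal plus
# member-specific noise; observations share the signal.
rng = np.random.default_rng(3)
signal = rng.normal(size=20)
ens = signal + 0.8 * rng.standard_normal((10, 20))
obs = signal + 0.5 * rng.standard_normal(20)

ens_mean = ens.mean(axis=0)
signal_var = ens_mean.var()            # variance of the predictable signal
noise_var = ens.var(axis=0).mean()     # mean intra-ensemble (noise) variance
print("signal-to-total ratio:", signal_var / (signal_var + noise_var))
print("anomaly correlation:",
      np.corrcoef(ens_mean - ens_mean.mean(), obs - obs.mean())[0, 1])
```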

  5. Prediction skill of rainstorm events over India in the TIGGE weather prediction models

    Science.gov (United States)

    Karuna Sagar, S.; Rajeevan, M.; Vijaya Bhaskara Rao, S.; Mitra, A. K.

    2017-12-01

    Extreme rainfall events pose a serious threat of severe flooding in many countries worldwide. Therefore, advance prediction of their occurrence and spatial distribution is essential. In this paper, an analysis has been made to assess the skill of numerical weather prediction models in predicting rainstorms over India. Using a gridded daily rainfall data set and objective criteria, 15 rainstorms were identified during the monsoon season (June to September). The analysis was made using three TIGGE (THe Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble) models: the European Centre for Medium-Range Weather Forecasts (ECMWF), the National Centre for Environmental Prediction (NCEP) and the UK Met Office (UKMO). Verification of the TIGGE models for 43 observed rainstorm days from 15 rainstorm events has been made for the period 2007-2015. The comparison reveals that rainstorm events are predictable up to 5 days in advance, although with a bias in spatial distribution and intensity. The statistical parameters mean error (ME) or bias, root mean square error (RMSE) and correlation coefficient (CC) have been computed over the rainstorm region using the multi-model ensemble (MME) mean. The study reveals that the spread is large in ECMWF and UKMO, followed by the NCEP model. Though the ensemble spread is quite small in NCEP, the ensemble member averages are not well predicted. The rank histograms suggest that the forecasts tend toward under-prediction. The modified Contiguous Rain Area (CRA) technique was used to verify the spatial as well as the quantitative skill of the TIGGE models. Overall, the contribution from the displacement and pattern errors to the total RMSE is found to be larger in magnitude. The volume error increases from the 24 hr forecast to the 48 hr forecast in all three models.
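
    The rank-histogram diagnostic mentioned above is easy to sketch: rank each observation within its sorted forecast ensemble and histogram the ranks; a right-heavy histogram signals systematic under-prediction. Data below are synthetic stand-ins for the 43 rainstorm days.

```python
import numpy as np

# Build a rank histogram for an ensemble forecast of rainstorm rainfall.
rng = np.random.default_rng(4)
n_events, n_members = 43, 20
ens = rng.gamma(2.0, 10.0, size=(n_events, n_members))  # forecast rain
obs = rng.gamma(2.0, 14.0, size=n_events)               # heavier observed

# Rank of each observation among its ensemble members (0..n_members).
ranks = (ens < obs[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_members + 1)
print(hist)  # pile-up at the right end indicates under-prediction
```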

  6. Micro-mechanical studies on graphite strength prediction models

    Science.gov (United States)

    Kanse, Deepak; Khan, I. A.; Bhasin, V.; Vaze, K. K.

    2013-06-01

    The influence of the type of loading and of size effects on the failure strength of graphite were studied using the Weibull model. It was observed that this model over-predicts the size effect in tension. However, incorporating the grain size effect into the Weibull model allows a more realistic simulation of size effects. A numerical prediction of the strength of a four-point bend specimen was made using Weibull parameters obtained from tensile test data. Effective volume calculations were carried out and the predicted strength was subsequently compared with experimental data. It was found that the Weibull model can predict mean flexural strength with reasonable accuracy even when the grain size effect is not incorporated. In addition, the effects of microstructural parameters on failure strength were analyzed using the Rose and Tucker model. Uni-axial tensile, three-point bend and four-point bend strengths were predicted using this model and compared with the experimental data. This model predicts flexural strength within 10%. For uni-axial tensile strength, the difference was 22%, which can be attributed to the smaller number of tests on tensile specimens. In order to develop the failure surface of graphite under a multi-axial state of stress, an open-ended hollow graphite tube was subjected to internal pressure and axial load, and the Batdorf model was employed to calculate the failure probability of the tube. A bi-axial failure surface was generated in the first and fourth quadrants for 50% failure probability by varying both internal pressure and axial load.
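
    The Weibull size effect invoked here follows the weakest-link scaling between effective volumes, sigma2/sigma1 = (V1/V2)**(1/m), with m the Weibull modulus. A short worked sketch with assumed numbers (not the paper's graphite data) follows.

```python
# Weibull weakest-link size-effect scaling of mean strength between two
# specimens with different effective volumes. All inputs are assumed.
def scaled_strength(sigma1, V1, V2, m):
    return sigma1 * (V1 / V2) ** (1.0 / m)

sigma_tension = 25.0    # MPa, strength at effective volume V1 (assumed)
V1, V2 = 100.0, 1000.0  # mm^3 effective volumes (assumed)
for m in (8, 12, 20):
    # Larger volumes sample more flaws, so predicted strength drops;
    # a larger modulus m weakens the size effect.
    print(m, round(scaled_strength(sigma_tension, V1, V2, m), 2))
```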

  7. Model-Based Water Wall Fault Detection and Diagnosis of FBC Boiler Using Strong Tracking Filter

    Directory of Open Access Journals (Sweden)

    Li Sun

    2014-01-01

    Full Text Available Fluidized bed combustion (FBC) boilers have received increasing attention in recent decades. Erosion of the water wall is one of the most common and serious faults in FBC boilers. Rather than measuring tube thickness directly with ultrasonic methods, the wastage of the water wall is treated here as a variation of the overall heat transfer coefficient in the furnace. In this paper, a model-based approach is presented to jointly estimate the internal states and the heat transfer coefficient from the noisy measurable outputs. The estimated parameter is compared with its normal value, and the modified Bayesian algorithm is then adopted for fault detection and diagnosis (FDD). The simulation results demonstrate that the approach is feasible and effective.

  8. Smooth approximation model of dispersion with strong space charge for continuous beams

    Directory of Open Access Journals (Sweden)

    S. Bernal

    2011-10-01

    Full Text Available We apply the Venturini-Reiser (V-R) envelope-dispersion equations [M. Venturini and M. Reiser, Phys. Rev. Lett. 81, 96 (1998)] to a continuous beam in a uniform focusing/bending lattice to study the combined effects of linear dispersion and space charge. Within this simple model we investigate the scaling of average dispersion and the effects on beam dimensions and show that the V-R equations lead to the correct zero-current limits. We also introduce a generalization of the space charge intensity parameter and apply it to the University of Maryland Electron Ring and other machines. In addition, we present results of calculations to test the smooth approximation by solving the V-R original equations and also through simulations with the matrix code ELEGANT.

  9. Modeling cavities exhibiting strong lateral confinement using open geometry Fourier modal method

    Science.gov (United States)

    Häyrynen, Teppo; Gregersen, Niels

    2016-04-01

    We have developed a computationally efficient Fourier-Bessel expansion based open geometry formalism for modeling the optical properties of rotationally symmetric photonic nanostructures. The lateral computation domain is assumed infinite so that no artificial boundary conditions are needed. Instead, the leakage of the modes due to imperfect field confinement is taken into account by using basis functions that expand the whole infinite space. The computational efficiency is obtained by using a non-uniform discretization in the frequency space in which the lateral expansion modes are more densely sampled around a geometry-specific dominant transverse wavenumber region. We use the developed approach to investigate the Q factor and mode confinement in cavities where the top DBR mirror has a small rectangular defect that confines the modes laterally to the defect region.

  10. New Approaches for Channel Prediction Based on Sinusoidal Modeling

    Directory of Open Access Journals (Sweden)

    Ekman Torbjörn

    2007-01-01

    Full Text Available Long-range channel prediction is considered to be one of the most important enabling technologies for future wireless communication systems. In this paper, the prediction of Rayleigh fading channels is studied within the framework of sinusoidal modeling. A stochastic sinusoidal model to represent a Rayleigh fading channel is proposed, and three different predictors based on the statistical sinusoidal model are derived. These methods outperform the standard linear predictor (LP) in Monte Carlo simulations, but underperform with real measurement data, probably due to nonstationary model parameters. To mitigate these modeling errors, a joint moving average and sinusoidal (JMAS) prediction model and the associated joint least-squares (LS) predictor are proposed. It combines the sinusoidal model with an LP to handle unmodeled dynamics in the signal. The joint LS predictor outperforms all the other sinusoidal LMMSE predictors in suburban environments, but still performs slightly worse than the standard LP in urban environments.
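
    The standard linear predictor (LP) baseline that the sinusoidal methods are compared against can be sketched as a least-squares autoregressive fit over past fading samples; the channel realization and LP order below are assumptions.

```python
import numpy as np

# Least-squares LP: regress the next channel sample on the previous p
# samples of a (synthetic) multi-sinusoid fading process.
rng = np.random.default_rng(5)
t = np.arange(400)
h = (np.cos(0.12 * t) + 0.5 * np.cos(0.31 * t + 1.0)
     + 0.05 * rng.standard_normal(400))

p = 8  # LP order (assumed)
rows = np.array([h[i:i + p] for i in range(len(h) - p)])
coeffs, *_ = np.linalg.lstsq(rows[:-1], h[p:-1], rcond=None)

pred = rows[-1] @ coeffs  # one-step-ahead prediction for the last sample
print("predicted:", pred, "actual:", h[-1])
```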

  11. Bayesian Age-Period-Cohort Modeling and Prediction - BAMP

    Directory of Open Access Journals (Sweden)

    Volker J. Schmid

    2007-10-01

    Full Text Available The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order are available, with and without an additional unstructured component. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.
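
    The projection step described last (extrapolating a random-walk prior to predict future rates) can be sketched directly; the effect values, noise scale, and link are illustrative assumptions, not output of BAMP.

```python
import numpy as np

# Project a second-order random walk (RW2) forward: each new period
# effect continues the local linear trend plus prior noise, and rates
# are recovered through an (assumed) logit link.
rng = np.random.default_rng(8)
period = np.array([-0.50, -0.35, -0.22, -0.10, -0.01])  # assumed effects
sigma = 0.05                                            # assumed RW2 scale

future = []
last2, last1 = period[-2], period[-1]
for _ in range(3):
    nxt = 2 * last1 - last2 + rng.normal(0, sigma)  # RW2 extrapolation
    future.append(nxt)
    last2, last1 = last1, nxt

age_effect = -4.0  # assumed age-group intercept on the logit scale
rates = 1 / (1 + np.exp(-(age_effect + np.array(future))))
print(rates)  # predicted death rates for the next three periods
```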

  12. Modeling for prediction of restrained shrinkage effect in concrete repair

    International Nuclear Information System (INIS)

    Yuan Yingshu; Li Guo; Cai Yue

    2003-01-01

    A general model of autogenous shrinkage caused by chemical reaction (chemical shrinkage) is developed by means of Arrhenius' law and a degree of chemical reaction. Models of tensile creep and relaxation modulus are built based on a viscoelastic, three-element model. Tests of free shrinkage and tensile creep were carried out to determine some coefficients in the models. Two-dimensional FEM analysis based on these models and other constitutive relations can predict the development of tensile strength and cracking. Three groups of patch-repaired beams were designed for analysis and testing. The prediction from the analysis agrees with the test results. The cracking mechanism after repair is discussed.

  13. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing so, decisions need to be made in terms of the statistical distributions of walking parameters and in terms of the parameters describing those statistical distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect. This is useful

  14. Bentonite swelling pressure in strong NaCl solutions. Correlation of model calculations to experimentally determined data

    Energy Technology Data Exchange (ETDEWEB)

    Karnland, O. [Clay Technology, Lund (Sweden)

    1998-01-01

    A number of quite different quantitative models concerning swelling pressure in bentonite clay have been proposed. This report discusses a number of models which can possibly be used also for saline conditions. A discrepancy between calculated and measured values was noticed for all models under brine conditions. In general the models predicted too low a swelling pressure compared to what was experimentally found. An osmotic component in the clay/water system is proposed in order to improve the previous conservative use of the thermodynamic model. It is proposed that this osmotic component be calculated using the clay cation exchange capacity and Donnan equilibrium. Calculations made by this approach showed considerably better correlation to literature laboratory data, compared to calculations made by the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved model predicts a substantial bentonite swelling pressure also in a saturated sodium chloride solution if the density of the system is sufficiently high. This means in practice that the buffer in a KBS-3 repository will give rise to an acceptable swelling pressure, but that the positive effects of mixing bentonite into a backfill material will be lost if the system is exposed to brines. (orig.). 14 refs.

  15. Bentonite swelling pressure in strong NaCl solutions. Correlation of model calculations to experimentally determined data

    International Nuclear Information System (INIS)

    Karnland, O.

    1998-01-01

    A number of quite different quantitative models concerning swelling pressure in bentonite clay have been proposed. This report discusses a number of models which can possibly be used also for saline conditions. A discrepancy between calculated and measured values was noticed for all models under brine conditions. In general the models predicted too low a swelling pressure compared to what was experimentally found. An osmotic component in the clay/water system is proposed in order to improve the previous conservative use of the thermodynamic model. It is proposed that this osmotic component be calculated using the clay cation exchange capacity and Donnan equilibrium. Calculations made by this approach showed considerably better correlation to literature laboratory data, compared to calculations made by the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved model predicts a substantial bentonite swelling pressure also in a saturated sodium chloride solution if the density of the system is sufficiently high. This means in practice that the buffer in a KBS-3 repository will give rise to an acceptable swelling pressure, but that the positive effects of mixing bentonite into a backfill material will be lost if the system is exposed to brines. (orig.)

  16. Strong disorder real-space renormalization for the many-body-localized phase of random Majorana models

    Science.gov (United States)

    Monthus, Cécile

    2018-03-01

    For the many-body-localized phase of random Majorana models, a general strong disorder real-space renormalization procedure known as RSRG-X (Pekker et al. 2014 Phys. Rev. X 4 011052) is described to produce the whole set of excited states, via the iterative construction of the local integrals of motion (LIOMs). The RG rules are then explicitly derived for arbitrary quadratic Hamiltonians (free-fermion models) and for the Kitaev chain with local interactions involving even numbers of consecutive Majorana fermions. The emphasis is put on the advantages of the Majorana language over the usual quantum spin language to formulate unified RSRG-X rules.

  17. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
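
    The bootstrap recipe described here (refit the outcome model on resampled data, apply every refitted parameter set to one treatment plan, add a residual noise component, and histogram the results) can be sketched with a linear stand-in for the dose-response model; all numbers are synthetic.

```python
import numpy as np

# Bootstrap a histogram of alternative outcome predictions for one plan.
rng = np.random.default_rng(6)
dose = rng.uniform(10, 60, 80)                             # mean gland dose
outcome = 100 - 1.2 * dose + 8 * rng.standard_normal(80)   # salivary function

plan_dose = 35.0
preds = []
for _ in range(1000):
    idx = rng.integers(0, len(dose), len(dose))     # resample patients
    slope, intercept = np.polyfit(dose[idx], outcome[idx], 1)
    noise = rng.normal(0, 8)                        # residual 'noise' term
    preds.append(intercept + slope * plan_dose + noise)

# Summary of the prediction histogram for this treatment plan.
print(np.percentile(preds, [5, 50, 95]))
```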

  18. Validation of a tuber blight (Phytophthora infestans) prediction model

    Science.gov (United States)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  19. Geospatial application of the Water Erosion Prediction Project (WEPP) Model

    Science.gov (United States)

    D. C. Flanagan; J. R. Frankenberger; T. A. Cochrane; C. S. Renschler; W. J. Elliot

    2011-01-01

    The Water Erosion Prediction Project (WEPP) model is a process-based technology for prediction of soil erosion by water at hillslope profile, field, and small watershed scales. In particular, WEPP utilizes observed or generated daily climate inputs to drive the surface hydrology processes (infiltration, runoff, ET) component, which subsequently impacts the rest of the...

  20. Reduced order modelling and predictive control of multivariable ...

    Indian Academy of Sciences (India)

    Anuj Abraham

    2018-03-16

    The performance of the constrained generalized predictive control scheme is found to be superior to that of the conventional PID controller in terms of overshoot, settling time and performance indices, mainly ISE, IAE and MSE. Keywords. Predictive control; distillation column; reduced order model; dominant pole; ...

  1. Evaluation of predictive models for delayed graft function of deceased kidney transplantation.

    Science.gov (United States)

    Zhang, Huanxi; Zheng, Linli; Qin, Shuhang; Liu, Longshan; Yuan, Xiaopeng; Fu, Qian; Li, Jun; Deng, Ronghai; Deng, Suxiong; Yu, Fangchao; He, Xiaoshun; Wang, Changxi

    2018-01-05

    This study aimed to evaluate the predictive power of five available delayed graft function (DGF) prediction models for kidney transplants in the Chinese population. Among the five models, the Irish 2010 model performed best for the Chinese population, with an area under the receiver operating characteristic (ROC) curve of 0.737. The Hosmer-Lemeshow goodness-of-fit test showed that the Irish 2010 model had a strong correlation between the calculated DGF risk and the observed DGF incidence (p = 0.887). When the Irish 2010 model was used in the clinic, the optimal upper cut-off was set to 0.5 with the best positive likelihood ratio, while the lower cut-off was set to 0.1 with the best negative likelihood ratio. In the subgroup of donors aged ≤ 5 years, the observed DGF incidence was significantly higher than the DGF risk calculated by the Irish 2010 model (27% vs. 9%). A total of 711 renal transplant cases using deceased donors from the China Donation after Citizen's Death Program at our center between February 2007 and August 2016 were included in the analysis using the five predictive models (Irish 2010, Irish 2003, Chaphal 2014, Zaza 2015, Jeldres 2009). The Irish 2010 model has the best predictive power for DGF risk in the Chinese population among the five models. However, it may not be suitable for allograft recipients whose donors are aged ≤ 5 years.

  2. The collision of a strong shock with a gas cloud: a model for Cassiopeia A

    International Nuclear Information System (INIS)

    Sgro, A.G.

    1975-01-01

    The result of the collision of the shock with the cloud is a shock traveling around the cloud, a shock transmitted into the cloud, and a shock reflected from the cloud. By equating the cooling time of the gas behind the transmitted shock to the time required for the transmitted shock to travel the length of the cloud, a critical cloud density n_c′ is defined. For clouds with density greater than n_c′, the gas behind the transmitted shock cools rapidly and then emits the lines of the lower ionization stages of its constituent elements. The structure of such a cloud and its expected appearance to an observer are discussed and compared with the quasi-stationary condensations of Cas A. Conversely, clouds with density less than n_c′ remain hot for several thousand years, and are sources of X-radiation whose temperatures are much less than that of the intercloud gas. After the transmitted shock passes, the cloud pressure is greater than the pressure in the surrounding gas, causing the cloud to expand and the emission to decrease from its value just after the collision. A model in which the soft X-radiation of Cas A is due to a collection of such clouds is discussed. The faint emission patches to the north of Cas A are interpreted as preshocked clouds which will probably become quasi-stationary condensations after being hit by the shock.

  3. Predicting 30-Day Readmissions in an Asian Population: Building a Predictive Model by Incorporating Markers of Hospitalization Severity.

    Science.gov (United States)

    Low, Lian Leng; Liu, Nan; Wang, Sijia; Thumboo, Julian; Ong, Marcus Eng Hock; Lee, Kheng Hock

    2016-01-01

    To reduce readmissions, it may be cost-effective to consider risk stratification, with targeting intervention programs to patients at high risk of readmissions. In this study, we aimed to derive and validate a prediction model including several novel markers of hospitalization severity, and compare the model with the LACE index (Length of stay, Acuity of admission, Charlson comorbidity index, Emergency department visits in past 6 months), an established risk stratification tool. This was a retrospective cohort study of all patients ≥ 21 years of age, who were admitted to a tertiary hospital in Singapore from January 1, 2013 through May 31, 2015. Data were extracted from the hospital's electronic health records. The outcome was defined as unplanned readmissions within 30 days of discharge from the index hospitalization. Candidate predictive variables were broadly grouped into five categories: Patient demographics, social determinants of health, past healthcare utilization, medical comorbidities, and markers of hospitalization severity. Multivariable logistic regression was used to predict the outcome, and receiver operating characteristic analysis was performed to compare our model with the LACE index. 74,102 cases were enrolled for analysis. Of these, 11,492 patient cases (15.5%) were readmitted within 30 days of discharge. A total of fifteen predictive variables were strongly associated with the risk of 30-day readmissions, including number of emergency department visits in the past 6 months, Charlson Comorbidity Index, and markers of hospitalization severity such as 'requiring inpatient dialysis during index admission' and 'treatment with intravenous furosemide 40 milligrams or more during index admission'. Our predictive model outperformed the LACE index by achieving larger area under the curve values: 0.78 (95% confidence interval [CI]: 0.77-0.79) versus 0.70 (95% CI: 0.69-0.71). Several factors are important for the risk of 30-day readmissions, including proxy
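
    A minimal sketch of this kind of model comparison, using synthetic stand-in predictors rather than the study's variables (scikit-learn based):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000

# Synthetic stand-ins: 4 LACE-style predictors plus 2 extra severity markers
X_lace = rng.normal(size=(n, 4))   # length of stay, acuity, comorbidity, ED visits
X_sev = rng.normal(size=(n, 2))    # e.g. inpatient dialysis, IV furosemide dose
logit = X_lace @ [0.5, 0.4, 0.6, 0.7] + X_sev @ [0.8, 0.6] - 2.0
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X_full = np.hstack([X_lace, X_sev])
Xl_tr, Xl_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_lace, X_full, y, test_size=0.25, random_state=0)

for name, (Xtr, Xte) in {"LACE-style": (Xl_tr, Xl_te),
                         "LACE + severity": (Xf_tr, Xf_te)}.items():
    m = LogisticRegression(max_iter=1000).fit(Xtr, y_tr)
    auc = roc_auc_score(y_te, m.predict_proba(Xte)[:, 1])
    print(name, "AUC =", round(auc, 3))
```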

  4. Predicting 30-Day Readmissions in an Asian Population: Building a Predictive Model by Incorporating Markers of Hospitalization Severity.

    Directory of Open Access Journals (Sweden)

    Lian Leng Low

    Full Text Available To reduce readmissions, it may be cost-effective to consider risk stratification, with targeting intervention programs to patients at high risk of readmissions. In this study, we aimed to derive and validate a prediction model including several novel markers of hospitalization severity, and compare the model with the LACE index (Length of stay, Acuity of admission, Charlson comorbidity index, Emergency department visits in past 6 months), an established risk stratification tool. This was a retrospective cohort study of all patients ≥ 21 years of age, who were admitted to a tertiary hospital in Singapore from January 1, 2013 through May 31, 2015. Data were extracted from the hospital's electronic health records. The outcome was defined as unplanned readmissions within 30 days of discharge from the index hospitalization. Candidate predictive variables were broadly grouped into five categories: Patient demographics, social determinants of health, past healthcare utilization, medical comorbidities, and markers of hospitalization severity. Multivariable logistic regression was used to predict the outcome, and receiver operating characteristic analysis was performed to compare our model with the LACE index. 74,102 cases were enrolled for analysis. Of these, 11,492 patient cases (15.5%) were readmitted within 30 days of discharge. A total of fifteen predictive variables were strongly associated with the risk of 30-day readmissions, including number of emergency department visits in the past 6 months, Charlson Comorbidity Index, and markers of hospitalization severity such as 'requiring inpatient dialysis during index admission' and 'treatment with intravenous furosemide 40 milligrams or more during index admission'. Our predictive model outperformed the LACE index by achieving larger area under the curve values: 0.78 (95% confidence interval [CI]: 0.77-0.79) versus 0.70 (95% CI: 0.69-0.71). Several factors are important for the risk of 30-day readmissions

  5. Driving forces behind the increasing cardiovascular treatment intensity. A dynamic epidemiologic model of trends in Danish cardiovascular drug utilization.

    DEFF Research Database (Denmark)

    Kildemoes, Helle Wallach; Andersen, Morten

    A three-state (untreated, treated, dead) semi-Markov model was used to analyse the dynamics of drug use. Transitions were from untreated to treated (incidence), the reverse (discontinuation), and from either untreated or treated to dead. Stratified by sex and age categories, prevalence trends of "growth driving" drug categories...

  6. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  7. Consensus models to predict endocrine disruption for all ...

    Science.gov (United States)

    Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). The Collaborative Estrogen Receptor Activity Prediction Project (CERAPP) was intended to be a demonstration of the use of predictive computational models on HTS data, including ToxCast and Tox21 assays, to prioritize a large chemical universe of 32,464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an exte
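
    A consensus built from many model outputs can be as simple as a vote. The sketch below uses an unweighted majority over hypothetical per-model calls; CERAPP's actual consensus was a weighted scheme, so treat this purely as an illustration of combining model outputs:

```python
# Per-model predicted classes for one chemical: 1 = ER-active, 0 = inactive.
# Model names and calls are hypothetical stand-ins.
calls = {"docking_A": 1, "qsar_B": 1, "qsar_C": 0, "qsar_D": 1, "qsar_E": 0}

# Simple unweighted majority vote across the model collection.
votes = sum(calls.values())
consensus = 1 if votes > len(calls) / 2 else 0
print("consensus call:", "active" if consensus else "inactive")  # 3/5 -> active
```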

  8. Dietary information improves cardiovascular disease risk prediction models.

    Science.gov (United States)

    Baik, I; Cho, N H; Kim, S H; Shin, C

    2013-01-01

    Data are limited on cardiovascular disease (CVD) risk prediction models that include dietary predictors. Using known risk factors and dietary information, we constructed and evaluated CVD risk prediction models. Data for modeling were from population-based prospective cohort studies comprised of 9026 men and women aged 40-69 years. At baseline, all were free of known CVD and cancer, and were followed up for CVD incidence during an 8-year period. We used Cox proportional hazard regression analysis to construct a traditional risk factor model, an office-based model, and two diet-containing models and evaluated these models by calculating Akaike information criterion (AIC), C-statistics, integrated discrimination improvement (IDI), net reclassification improvement (NRI) and calibration statistic. We constructed diet-containing models with significant dietary predictors such as poultry, legumes, carbonated soft drinks or green tea consumption. Adding dietary predictors to the traditional model yielded a decrease in AIC (delta AIC=15), a 53% increase in relative IDI, and an increase in NRI (category-free NRI=0.14); reclassification was also improved compared with the office-based model (category-free NRI=0.08, P<0.01). The calibration plots for risk prediction demonstrated that the inclusion of dietary predictors contributes to better agreement in persons at high risk for CVD. C-statistics for the four models were acceptable and comparable. We suggest that dietary information may be useful in constructing CVD risk prediction models.

  9. Scanpath Based N-Gram Models for Predicting Reading Behavior

    DEFF Research Database (Denmark)

    Mishra, Abhijit; Bhattacharyya, Pushpak; Carl, Michael

    2013-01-01

    Predicting reading behavior is a difficult task. Reading behavior depends on various linguistic factors (e.g., sentence length, structural complexity) and other factors (e.g., an individual's reading style, age). Ideally, a reading model should be similar to a language model where the model i...

  10. Unsupervised ship trajectory modeling and prediction using compression and clustering

    NARCIS (Netherlands)

    de Vries, G.; van Someren, M.; van Erp, M.; Stehouwer, H.; van Zaanen, M.

    2009-01-01

    In this paper we show how to build a model of ship trajectories in a certain maritime region and use this model to predict future ship movements. The presented method is unsupervised and based on existing compression (line-simplification) and clustering techniques. We evaluate the model with a

  11. Prediction of annual rainfall pattern using Hidden Markov Model ...

    African Journals Online (AJOL)

    A hidden Markov model to predict annual rainfall pattern has been presented in this paper. The model is developed to provide necessary information for the farmers, agronomists, water resource management scientists and policy makers to enable them plan for the uncertainty of annual rainfall. The model classified annual ...

  12. The Selection of Turbulence Models for Prediction of Room Airflow

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    This paper discusses the use of different turbulence models and their advantages in given situations. As an example, it is shown that a simple zero-equation model can be used for the prediction of special situations such as flow with a low level of turbulence. A zero-equation model with compensation...

  13. Model Predictive Control of Wind Turbines using Uncertain LIDAR Measurements

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Soltani, Mohsen; Poulsen, Niels Kjølstad

    2013-01-01

    The problem of Model predictive control (MPC) of wind turbines using uncertain LIDAR (LIght Detection And Ranging) measurements is considered. A nonlinear dynamical model of the wind turbine is obtained. We linearize the obtained nonlinear model for different operating points, which are determined...

  14. Strong influence of vapor pressure deficit on plants' water-use efficiency: a modelling approach

    Science.gov (United States)

    Yi, K.; Zhang, Q.; Novick, K. A.

    2017-12-01

    The plant's trade-off between carbon uptake and water loss, often represented as intrinsic water-use efficiency (iWUE), is an important determinant of how plants will respond to expected changes in climate. Here, we present work that assesses how the response of iWUE to climatic drivers differs across the isohydricity spectrum, and evaluates the relative influence of the climatic drivers vapor pressure deficit (D), soil moisture (θ), and atmospheric CO2 (ca) on iWUE. The results suggested noticeable differences in the response of iWUE to climatic drivers among the species. The iWUE of the isohydric species, which tends to regulate stomata more actively, was more responsive to variation in θ and D than that of the anisohydric species, whose stomatal regulation is less active. Among the climatic drivers, D was the most influential driver of iWUE for all species. These results are consistent with those from a complementary effort to leverage long-term eddy covariance flux records from the FLUXNET 2015 database to compare the influence of D and θ on iWUE across a wide range of biomes; this analysis revealed that D is a more influential driver of iWUE than θ in most cases. These findings highlight the importance of atmospheric dryness for trees' physiological response, which is important to understand given the large, global increases in D expected in coming decades. As a final step, we will report early results evaluating the performance of widely used ecosystem models in capturing the response of iWUE to climatic drivers across regions, and whether the projections agree with flux tower observations. We also examine whether the relationship between iWUE and climatic drivers can be generalized for each vegetation type or climate regime.

  15. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature on best practice for model generation and data integration, but the management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
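
    The Pareto-based idea can be sketched compactly: score each candidate model on several objectives and keep only the non-dominated ones. The objectives, model names, and scores below are hypothetical:

```python
# Each candidate model is scored on two objectives for a query compound
# (higher is better on both): (estimated accuracy, applicability-domain proximity).
models = {
    "QSAR-A": (0.81, 0.40),
    "QSAR-B": (0.78, 0.90),
    "QSAR-C": (0.70, 0.60),   # dominated by QSAR-B
    "QSAR-D": (0.85, 0.30),
}

def dominates(a, b):
    """a dominates b if it is >= on all objectives and > on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto = [m for m, s in models.items()
          if not any(dominates(t, s) for n, t in models.items() if n != m)]
print("Pareto-optimal candidates:", pareto)  # -> ['QSAR-A', 'QSAR-B', 'QSAR-D']
```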

  16. Integrating geophysics and hydrology for reducing the uncertainty of groundwater model predictions and improved prediction performance

    DEFF Research Database (Denmark)

    Christensen, Nikolaj Kruse; Christensen, Steen; Ferre, Ty

    constructed from geological and hydrological data. However, geophysical data are increasingly used to inform hydrogeologic models because they are collected at lower cost and much higher density than geological and hydrological data. Despite increased use of geophysics, it is still unclear whether...... the integration of geophysical data in the construction of a groundwater model increases the prediction performance. We suggest that modelers should perform a hydrogeophysical “test-bench” analysis of the likely value of geophysics data for improving groundwater model prediction performance before actually...... collecting geophysical data. At a minimum, an analysis should be conducted assuming settings that are favorable for the chosen geophysical method. If the analysis suggests that data collected by the geophysical method is unlikely to improve model prediction performance under these favorable settings...

  17. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.

  18. Preoperative prediction model of outcome after cholecystectomy for symptomatic gallstones

    DEFF Research Database (Denmark)

    Borly, L; Anderson, I B; Bardram, Linda

    1999-01-01

    and sonography evaluated gallbladder motility, gallstones, and gallbladder volume. Preoperative variables in patients with or without postcholecystectomy pain were compared statistically, and significant variables were combined in a logistic regression model to predict the postoperative outcome. RESULTS: Eighty...... and by the absence of 'agonizing' pain and of symptoms coinciding with pain. In the model, 15 of 18 patients predicted to have pain had postoperative pain (PVpos = 0.83). Of 62 patients predicted as having no pain postoperatively, 56 were pain-free (PVneg = 0.90). Overall accuracy...... was 89%. CONCLUSION: From this prospective study a model based on preoperative symptoms was developed to predict postcholecystectomy pain. Since intrastudy reclassification may give too optimistic results, the model should be validated in future studies....

  19. Prediction of Chemical Function: Model Development and Application

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (...

  20. Linear regression crash prediction models : issues and proposed solutions.

    Science.gov (United States)

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  1. FPGA implementation of predictive degradation model for engine oil lifetime

    Science.gov (United States)

    Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.

    2018-03-01

    This paper presents the implementation of a linear regression model for degradation prediction on Register Transfer Logic (RTL) using Quartus II. A stationary model had been identified in the degradation trend for the engine oil in a vehicle using time series methods. For the RTL implementation, the degradation model is written in Verilog HDL and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. At every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slope values are considered for prediction purposes, which reduces the number of logic gates. The least-squares method is adopted to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result shows the predicted time to change the engine oil.
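
    A software equivalent of the described computation, a least-squares slope over every five samples with only negative slopes triggering a prediction, might look like the sketch below (the oil-quality data and end-of-life threshold are hypothetical):

```python
import numpy as np

def window_slope(y):
    """Least-squares slope of len(y) equally spaced samples."""
    t = np.arange(len(y), dtype=float)
    t_c, y_c = t - t.mean(), y - np.mean(y)
    return np.dot(t_c, y_c) / np.dot(t_c, t_c)

# Hypothetical oil-quality index sampled at fixed intervals (declining trend)
quality = np.array([0.98, 0.97, 0.95, 0.92, 0.90])
threshold = 0.60                      # assumed end-of-life level

slope = window_slope(quality)
if slope < 0:                         # only negative slopes trigger a prediction
    samples_left = (threshold - quality[-1]) / slope
    print(f"predicted samples until oil change: {samples_left:.0f}")
```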

  2. Predictive Modeling: A New Paradigm for Managing Endometrial Cancer.

    Science.gov (United States)

    Bendifallah, Sofiane; Daraï, Emile; Ballester, Marcos

    2016-03-01

    With the abundance of new options in diagnostic and treatment modalities, a shift in the medical decision process for endometrial cancer (EC) has been observed. The emergence of individualized medicine and the increasing complexity of available medical data have led to the development of several prediction models. In EC, clinical models (algorithms, nomograms, and risk scoring systems) have been reported, especially for stratifying and subgrouping patients, with various unanswered questions regarding such things as the optimal surgical staging for lymph node metastasis as well as the assessment of recurrence and survival outcomes. In this review, we highlight existing prognostic and predictive models in EC, with a specific focus on their clinical applicability. We also discuss the methodologic aspects of the development of such predictive models and the steps that are required to integrate these tools into clinical decision making. In the future, the emerging field of molecular or biochemical marker research may substantially improve predictive and treatment approaches.

  3. On the Predictiveness of Single-Field Inflationary Models

    CERN Document Server

    Burgess, C.P.; Trott, Michael

    2014-01-01

    We re-examine the predictiveness of single-field inflationary models and discuss how an unknown UV completion can complicate determining inflationary model parameters from observations, even from precision measurements. Besides the usual naturalness issues associated with having a shallow inflationary potential, we describe another issue for inflation, namely, unknown UV physics modifies the running of Standard Model (SM) parameters and thereby introduces uncertainty into the potential inflationary predictions. We illustrate this point using the minimal Higgs Inflationary scenario, which is arguably the most predictive single-field model on the market, because its predictions for $A_s$, $r$ and $n_s$ are made using only one new free parameter beyond those measured in particle physics experiments, and run up to the inflationary regime. We find that this issue can already have observable effects. At the same time, this UV-parameter dependence in the Renormalization Group allows Higgs Inflation to occur (in prin...

  4. Predictive modeling in catalysis - from dream to reality

    NARCIS (Netherlands)

    Maldonado, A.G.; Rothenberg, G.

    2009-01-01

    In silico catalyst optimization is the ultimate application of computers in catalysis. This article provides an overview of the basic concepts of predictive modeling and describes how this technique can be used in catalyst and reaction design.

  5. Fuzzy model predictive control algorithm applied in nuclear power plant

    International Nuclear Information System (INIS)

    Zuheir, Ahmad

    2006-01-01

    The aim of this paper is to design a predictive controller based on a fuzzy model. The Takagi-Sugeno fuzzy model with an adaptive B-splines neuro-fuzzy implementation is used and incorporated as a predictor in a predictive controller. An optimization approach with a simplified gradient technique is used to calculate predictions of the future control actions. In this approach, adaptation of the fuzzy model using dynamic process information is carried out to build the predictive controller. The easy description of the fuzzy model and the easy computation of the gradient vector during the optimization procedure are the main advantages of the computation algorithm. The algorithm is applied to the control of a U-tube steam generation unit (UTSG) used for electricity generation. (author)

  6. Compensatory versus noncompensatory models for predicting consumer preferences

    Directory of Open Access Journals (Sweden)

    Anja Dieckmann

    2009-04-01

    Full Text Available Standard preference models in consumer research assume that people weigh and add all attributes of the available options to derive a decision, while there is growing evidence for the use of simplifying heuristics. Recently, a greedoid algorithm has been developed (Yee, Dahan, Hauser and Orlin, 2007; Kohli and Jedidi, 2007) to model lexicographic heuristics from preference data. We compare predictive accuracies of the greedoid approach and standard conjoint analysis in an online study with a rating and a ranking task. The lexicographic model derived from the greedoid algorithm was better at predicting ranking compared to rating data, but overall, it achieved lower predictive accuracy for hold-out data than the compensatory model estimated by conjoint analysis. However, a considerable minority of participants was better predicted by lexicographic strategies. We conclude that the new algorithm will not replace standard tools for analyzing preferences, but can boost the study of situational and individual differences in preferential choice processes.
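
    The contrast between the two model families can be sketched with a toy choice problem; the attribute weights and the lexicographic aspect order below are assumptions for illustration, not estimates from the study:

```python
# Toy options described by attributes (higher is better on each attribute).
options = {
    "A": {"price": 3, "brand": 9, "battery": 5},
    "B": {"price": 7, "brand": 4, "battery": 8},
}

# Compensatory (weighted additive) rule: weigh and add all attributes.
weights = {"price": 0.5, "brand": 0.3, "battery": 0.2}   # assumed weights
additive = {o: sum(weights[a] * v for a, v in attrs.items())
            for o, attrs in options.items()}

# Lexicographic rule: compare on the most important aspect first,
# moving to the next aspect only in case of a tie.
aspect_order = ["brand", "price", "battery"]             # assumed ordering
def lexicographic_choice(options, order):
    best = list(options)
    for aspect in order:
        top = max(options[o][aspect] for o in best)
        best = [o for o in best if options[o][aspect] == top]
        if len(best) == 1:
            break
    return best[0]

print("additive scores:", additive)                      # B wins: 6.3 vs 5.2
print("lexicographic pick:", lexicographic_choice(options, aspect_order))  # A
```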

  7. Predictive Modeling of Partitioned Systems: Implementation and Applications

    OpenAIRE

    Latten, Christine

    2014-01-01

    A general mathematical methodology for predictive modeling of coupled multi-physics systems is implemented and has been applied without change to an illustrative heat conduction example and reactor physics benchmarks.

  8. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....

  9. Model Predictive Control for Ethanol Steam Reformers

    OpenAIRE

    Li, Mingming

    2014-01-01

    This thesis first proposes a new approach to modelling an ethanol steam reformer (ESR) for producing pure hydrogen. Hydrogen has obvious benefits as an alternative for feeding proton exchange membrane fuel cells (PEMFCs) to produce electricity. However, an important drawback is that hydrogen distribution and storage have a high cost, so the ESR is regarded as a way to overcome these difficulties. Ethanol is currently considered as a promising energy source under the res...

  10. Haskell financial data modeling and predictive analytics

    CERN Document Server

    Ryzhov, Pavel

    2013-01-01

    This book is a hands-on guide that teaches readers how to use Haskell's tools and libraries to analyze data from real-world sources in an easy-to-understand manner.This book is great for developers who are new to financial data modeling using Haskell. A basic knowledge of functional programming is not required but will be useful. An interest in high frequency finance is essential.

  11. Wireless model predictive control: Application to water-level system

    Directory of Open Access Journals (Sweden)

    Ramdane Hedjar

    2016-04-01

    Full Text Available This article deals with wireless model predictive control of a water-level control system. The objective of the model predictive control algorithm is to constrain the control signal inside saturation limits and maintain the water level around the desired level. Linear modeling of any nonlinear plant leads to parameter uncertainties and non-modeled dynamics in the linearized mathematical model. These uncertainties induce a steady-state error in the output response of the water level. To eliminate this steady-state error and increase the robustness of the control algorithm, an integral action is included in the closed loop. To control the water-level system remotely, the communication between the controller and the process is performed over a radio channel. To validate the proposed scheme, simulation and real-time implementation of the algorithm have been conducted, and the results show the effectiveness of wireless model predictive control with integral action.
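
    A minimal simulation sketch of receding-horizon control with saturation limits and an integral-error penalty, assuming simple first-order tank dynamics (all model numbers, weights, and limits are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed first-order water-level dynamics: h[k+1] = a*h[k] + b*u[k].
a, b = 0.95, 0.12
u_min, u_max = 0.0, 1.0              # saturation limits on the control signal
ref, horizon = 1.0, 10

def horizon_cost(h, e_int, u):
    """Predicted cost of holding input u over the horizon, with the
    integral of the tracking error penalized to remove steady-state offset."""
    cost = 0.0
    for _ in range(horizon):
        h = a * h + b * u
        e_int += ref - h
        cost += (ref - h) ** 2 + 0.01 * e_int ** 2
    return cost

h, e_int = 0.0, 0.0
candidates = np.linspace(u_min, u_max, 101)   # only inputs inside the limits
for k in range(80):
    u = min(candidates, key=lambda c: horizon_cost(h, e_int, c))
    h = a * h + b * u + rng.normal(0, 0.003)  # plant step with small noise
    e_int += ref - h                          # accumulated (integral) error
print(f"water level after 80 steps: {h:.3f} (reference {ref})")
```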

  12. Robust human body model injury prediction in simulated side impact crashes.

    Science.gov (United States)

    Golman, Adam J; Danelson, Kerry A; Stitzel, Joel D

    2016-01-01

    This study developed a parametric methodology to robustly predict occupant injuries sustained in real-world crashes using a finite element (FE) human body model (HBM). One hundred and twenty near-side impact motor vehicle crashes were simulated over a range of parameters using FE models of a Toyota RAV4 (bullet vehicle) and a Ford Taurus (struck vehicle), together with a validated HBM, the Total HUman Model for Safety (THUMS). Three bullet vehicle crash parameters (speed, location and angle) and two occupant parameters (seat position and age) were varied using a Latin hypercube design of experiments. Four injury metrics (head injury criterion, half deflection, thoracic trauma index and pelvic force) were used to calculate injury risk. Rib fracture prediction and lung strain metrics were also analysed. As hypothesized, bullet speed had the greatest effect on each injury measure. Injury risk was reduced when the bullet location was further from the B-pillar or when the bullet angle was more oblique. Age correlated strongly with rib fracture frequency and lung strain severity. The injuries from a real-world crash were predicted using two different methods: (1) subsampling the injury predictors from the 12 simulations best matching the crush profile and (2) using regression models. Both injury prediction methods successfully predicted the case occupant's low risk for pelvic injury, high risk for thoracic injury, rib fractures and high lung strains with tight confidence intervals. This parametric methodology was successfully used to explore crash parameter interactions and to robustly predict real-world injuries.

  13. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.

  14. Approximating prediction uncertainty for random forest regression models

    Science.gov (United States)

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...

  15. Prediction of cloud droplet number in a general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.

  16. The Next Page Access Prediction Using Markov Model

    OpenAIRE

    Deepti Razdan

    2011-01-01

    Predicting the next page to be accessed by Web users has attracted a large amount of research. In this paper, a new web usage mining approach is proposed to predict next page access. It is proposed to identify similar access patterns from web logs using K-means clustering, and then a Markov model is used for prediction of next page accesses. The tightness of clusters is improved by setting a similarity threshold while forming clusters. In traditional recommendation models, clustering by nonsequential d...

  17. Body image dissatisfaction in pregnant and non-pregnant females is strongly predicted by immune activation and mucosa-derived activation of the tryptophan catabolite (TRYCAT) pathway.

    Science.gov (United States)

    Roomruangwong, Chutima; Kanchanatawan, Buranee; Carvalho, André F; Sirivichayakul, Sunee; Duleu, Sebastien; Geffard, Michel; Maes, Michael

    2018-04-01

    The aim of the present study is to delineate the associations between body image dissatisfaction in pregnant women and immune-inflammatory biomarkers, i.e., C-reactive protein (CRP), zinc and IgA/IgM responses to tryptophan and tryptophan catabolites (TRYCATs). We examined 49 pregnant and 24 non-pregnant females and assessed Body Image Satisfaction (BIS) scores at the end of term (T1), and 2-4 days (T2) and 4-6 weeks (T3) after delivery. Subjects were divided into those with a lowered BIS score (≤ 3) versus those with a higher score. Logistic regression analysis showed that a lowered T1 BIS score was predicted by CRP levels and IgA responses to tryptophan (negative) and TRYCATs (positive), perinatal depression, body mass index (BMI) and age. The sum of quinolinic acid, kynurenine, 3-OH-kynurenine and 3-OH-anthranilic acid (reflecting brain quinolinic acid contents) was the single best predictor. In addition, a large part of the variance in the T1, T2 and T3 BIS scores was explained by IgA responses to tryptophan and TRYCATs, especially quinolinic acid. Body image dissatisfaction is strongly associated with inflammation and mucosa-derived IDO activation independently of depression, pregnancy, BMI and age. IgA responses to peripheral TRYCATs, which determine brain quinolinic acid concentrations, also predict body image dissatisfaction.

  18. Midregional-proAtrial Natriuretic Peptide and High Sensitive Troponin T Strongly Predict Adverse Outcome in Patients Undergoing Percutaneous Repair of Mitral Valve Regurgitation.

    Directory of Open Access Journals (Sweden)

    Jochen Wöhrle

    Full Text Available It is not known whether biomarkers of hemodynamic stress, myocardial necrosis, and renal function might predict adverse outcome in patients undergoing percutaneous repair of severe mitral valve insufficiency. Thus, we aimed to assess the predictive value of various established and emerging biomarkers for major adverse cardiovascular events (MACE) in these patients. Thirty-four patients with symptomatic severe mitral valve insufficiency with a mean STS-Score for mortality of 12.6% and a mean logistic EuroSCORE of 19.7% undergoing MitraClip therapy were prospectively included in this study. Plasma concentrations of midregional proatrial natriuretic peptide (MR-proANP), cystatin C, high-sensitive C-reactive protein (hsCRP), high-sensitive troponin T (hsTnT), N-terminal B-type natriuretic peptide (NT-proBNP), galectin-3, and soluble ST-2 (interleukin 1 receptor-like 1) were measured directly before the procedure. MACE was defined as cardiovascular death and hospitalization for heart failure (HF). During a median follow-up of 211 days (interquartile range 133 to 333 days), 9 patients (26.5%) experienced MACE (death: 7 patients, rehospitalization for HF: 2 patients). The thirty-day MACE rate was 5.9% (death: 2 patients, no rehospitalization for HF). Baseline concentrations of hsTnT (median 92.6 vs 25.2 ng/L), NT-proBNP (median 11251 vs 1974 pg/mL) and MR-proANP (median 755.6 vs 318.3 pmol/L; all p<0.001) were clearly higher in those experiencing an event vs event-free patients, while other clinical variables including STS-Score and logistic EuroSCORE did not differ significantly. In Kaplan-Meier analyses, NT-proBNP and in particular hsTnT and MR-proANP above the median discriminated between those experiencing an event vs event-free patients. This was further corroborated by C-statistics, where areas under the ROC curve for prediction of MACE using the respective median values were 0.960 for MR-proANP, 0.907 for NT-proBNP, and 0.822 for hsTnT. MR-proANP and hsTnT strongly

  19. Working Towards a Risk Prediction Model for Neural Tube Defects

    Science.gov (United States)

    Agopian, A.J.; Lupo, Philip J.; Tinker, Sarah C.; Canfield, Mark A.; Mitchell, Laura E.

    2015-01-01

    BACKGROUND Several risk factors have been consistently associated with neural tube defects (NTDs). However, the predictive ability of these risk factors in combination has not been evaluated. METHODS To assess the predictive ability of established risk factors for NTDs, we built predictive models using data from the National Birth Defects Prevention Study, which is a large, population-based study of nonsyndromic birth defects. Cases with spina bifida or anencephaly, or both (n = 1239), and controls (n = 8494) were randomly divided into separate training (75% of cases and controls) and validation (remaining 25%) samples. Multivariable logistic regression models were constructed with the training samples. The predictive ability of these models was evaluated in the validation samples by assessing the area under the receiver operator characteristic curves. An ordinal predictive risk index was also constructed and evaluated. In addition, the ability of classification and regression tree (CART) analysis to identify subgroups of women at increased risk for NTDs in offspring was evaluated. RESULTS The predictive ability of the multivariable models was poor (area under the receiver operating curve: 0.55 for spina bifida only, 0.59 for anencephaly only, and 0.56 for anencephaly and spina bifida combined). The predictive abilities of the ordinal risk indexes and CART models were also low. CONCLUSION Current established risk factors for NTDs are insufficient for population-level prediction of a woman's risk for having affected offspring. Identification of genetic risk factors and novel nongenetic risk factors will be critical to establishing models, with good predictive ability, for NTDs. PMID:22253139

  20. Predictive QSAR Models for the Toxicity of Disinfection Byproducts.

    Science.gov (United States)

    Qin, Litang; Zhang, Xin; Chen, Yuhan; Mo, Lingyun; Zeng, Honghu; Liang, Yanpeng

    2017-10-09

    Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure-activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH-, DNA+ and DNA-. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.
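
    The internal validation statistics named here are straightforward to reproduce. Below is an illustrative numpy sketch of R² and Q²LOO for a multiple linear regression on synthetic descriptors (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 3                       # 50 compounds, 3 descriptors (synthetic)
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 0.3]) + rng.normal(0, 0.3, n)

def ols_predict(X_tr, y_tr, X_te):
    A = np.hstack([X_tr, np.ones((len(X_tr), 1))])      # add intercept column
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.hstack([X_te, np.ones((len(X_te), 1))]) @ coef

# R² on the training fit
resid = y - ols_predict(X, y, X)
r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

# Q²_LOO: each compound predicted by a model fit without it
press = 0.0
for i in range(n):
    mask = np.arange(n) != i
    pred = ols_predict(X[mask], y[mask], X[i:i + 1])[0]
    press += (y[i] - pred) ** 2
q2_loo = 1 - press / np.sum((y - y.mean())**2)

print(f"R2 = {r2:.3f}, Q2_LOO = {q2_loo:.3f}")   # acceptance: R2 > 0.7, Q2 > 0.6
```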

  1. Predictive QSAR Models for the Toxicity of Disinfection Byproducts

    Directory of Open Access Journals (Sweden)

    Litang Qin

    2017-10-01

    Full Text Available Several hundred disinfection byproducts (DBPs) in drinking water have been identified, and are known to have potentially adverse health effects. There are toxicological data gaps for most DBPs, and the predictive method may provide an effective way to address this. The development of an in-silico model of toxicology endpoints of DBPs is rarely studied. The main aim of the present study is to develop predictive quantitative structure–activity relationship (QSAR) models for the reactive toxicities of 50 DBPs in the five bioassays of X-Microtox, GSH+, GSH−, DNA+ and DNA−. All-subset regression was used to select the optimal descriptors, and multiple linear-regression models were built. The developed QSAR models for five endpoints satisfied the internal and external validation criteria: coefficient of determination (R²) > 0.7, explained variance in leave-one-out prediction (Q²LOO) and in leave-many-out prediction (Q²LMO) > 0.6, variance explained in external prediction (Q²F1, Q²F2, and Q²F3) > 0.7, and concordance correlation coefficient (CCC) > 0.85. The application domains and the meaning of the selective descriptors for the QSAR models were discussed. The obtained QSAR models can be used in predicting the toxicities of the 50 DBPs.

  2. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

    function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimization method, which typically converges in fewer than 5 or so iterations. We employ a fast convex quadratic programming solver to carry out...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...... capacity associated with large penetration of intermittent renewable energy sources in a future smart grid....

  3. FAST VARIABILITY AND MILLIMETER/IR FLARES IN GRMHD MODELS OF Sgr A* FROM STRONG-FIELD GRAVITATIONAL LENSING

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Chi-kwan; Psaltis, Dimitrios; Özel, Feryal; Marrone, Daniel [Steward Observatory and Department of Astronomy, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Medeiros, Lia [Department of Physics, Broida Hall, University of California, Santa Barbara, Santa Barbara, CA 93106 (United States); Sadowski, Aleksander [MIT Kavli Institute for Astrophysics and Space Research, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Narayan, Ramesh, E-mail: chanc@email.arizona.edu [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2015-10-20

    We explore the variability properties of long, high-cadence general relativistic magnetohydrodynamic (GRMHD) simulations across the electromagnetic spectrum using an efficient, GPU-based radiative transfer algorithm. We focus on both standard and normal evolution (SANE) and magnetically arrested disk (MAD) simulations with parameters that successfully reproduce the time-averaged spectral properties of Sgr A* and the size of its image at 1.3 mm. We find that the SANE models produce short-timescale variability with amplitudes and power spectra that closely resemble those inferred observationally. In contrast, MAD models generate only slow variability at lower flux levels. Neither set of models shows any X-ray flares, which most likely indicates that additional physics, such as particle acceleration mechanisms, need to be incorporated into the GRMHD simulations to account for them. The SANE models show strong, short-lived millimeter/infrared (IR) flares, with short (≲1 hr) time lags between the millimeter and IR wavelengths, that arise from the combination of short-lived magnetic flux tubes and strong-field gravitational lensing near the horizon. Such events provide a natural explanation for the observed IR flares with no X-ray counterparts.

  4. Application of the nuclear liquid drop model to a negative hydrogen ion in the strong electric field of a laser

    Energy Technology Data Exchange (ETDEWEB)

    Amusia, M.Ya.; Kornyushin, Y. [Racah Institute of Physics, Hebrew University, Jerusalem (Israel)]. E-mail: yurik@vms.huji.ac.il

    2000-09-01

    The nuclear liquid drop model is applied to describe some basic properties of a negative hydrogen ion in the strong electric field of a laser. The equilibrium ionic size, energy and polarizability of the ion are calculated. Collective modes of the dipole oscillations are considered. A barrier which arises in a strong electric field is studied. The barrier vanishes at some large value of the electric field, which is defined as a critical value. The dependence of the critical field on frequency is studied. At frequencies ω ≥ ω_d/√2 (where ω_d is the frequency of the dipole oscillations of the electronic cloud relative to the nucleus) the barrier remains for any field. At high frequencies a 'stripping' mechanism for instability arises. At the resonant frequency a rather low amplitude of the electric field causes the 'stripping' instability. (author)

  5. Maxent modelling for predicting the potential distribution of Thai Palms

    DEFF Research Database (Denmark)

    Tovaranonte, Jantrararuk; Barfod, Anders S.; Overgaard, Anne Blach

    2011-01-01

    Increasingly species distribution models are being used to address questions related to ecology, biogeography and species conservation on global and regional scales. We used the maximum entropy approach implemented in the MAXENT programme to build a habitat suitability model for Thai palms based...... overprediction of species distribution ranges. The models with the best predictive power were found by calculating the area under the curve (AUC) of receiver-operating characteristic (ROC). Here, we provide examples of contrasting predicted species distribution ranges as well as a map of modeled palm diversity...

  6. Validation of Fatigue Modeling Predictions in Aviation Operations

    Science.gov (United States)

    Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin

    2017-01-01

    Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet, most models have not been rigorously evaluated and independently validated for the operations to which they are being applied and many users are not fully aware of the limitations in which model results should be interpreted and applied.

  7. Aero-acoustic noise of wind turbines. Noise prediction models

    Energy Technology Data Exchange (ETDEWEB)

    Maribo Pedersen, B. [ed.

    1997-12-31

    Semi-empirical and CAA (Computational AeroAcoustics) noise prediction techniques are the subject of this expert meeting. The meeting presents and discusses models and methods. The meeting may provide answers to the following questions: Which noise sources are the most important? How are the sources best modeled? What needs to be done to make better predictions? Does it boil down to correct prediction of the unsteady aerodynamics around the rotor? Or is the difficult part converting the aerodynamics into acoustics? (LN)

  8. Using a Prediction Model to Manage Cyber Security Threats.

    Science.gov (United States)

    Jaganathan, Venkatesh; Cherurveettil, Priyesh; Muthu Sivashanmugam, Premapriya

    2015-01-01

    Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  9. Using a Prediction Model to Manage Cyber Security Threats

    Directory of Open Access Journals (Sweden)

    Venkatesh Jaganathan

    2015-01-01

    Full Text Available Cyber-attacks are an important issue faced by all organizations. Securing information systems is critical. Organizations should be able to understand the ecosystem and predict attacks. Predicting attacks quantitatively should be part of risk management. The cost impact due to worms, viruses, or other malicious software is significant. This paper proposes a mathematical model to predict the impact of an attack based on significant factors that influence cyber security. This model also considers the environmental information required. It is generalized and can be customized to the needs of the individual organization.

  10. Predictions for mt and MW in minimal supersymmetric models

    International Nuclear Information System (INIS)

    Buchmueller, O.; Ellis, J.R.; Flaecher, H.; Isidori, G.

    2009-12-01

    Using a frequentist analysis of experimental constraints within two versions of the minimal supersymmetric extension of the Standard Model, we derive the predictions for the top quark mass, m_t, and the W boson mass, m_W. We find that the supersymmetric predictions for both m_t and m_W, obtained by incorporating all the relevant experimental information and state-of-the-art theoretical predictions, are highly compatible with the experimental values with small remaining uncertainties, yielding an improvement compared to the case of the Standard Model. (orig.)

  11. Webinar of paper 2013, Which method predicts recidivism best? A comparison of statistical, machine learning and data mining predictive models

    NARCIS (Netherlands)

    Tollenaar, N.; Van der Heijden, P.G.M.

    2013-01-01

    Using conviction history information for the criminal population, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining

  12. Preprocedural Prediction Model for Contrast-Induced Nephropathy Patients.

    Science.gov (United States)

    Yin, Wen-Jun; Yi, Yi-Hu; Guan, Xiao-Feng; Zhou, Ling-Yun; Wang, Jiang-Lin; Li, Dai-Yang; Zuo, Xiao-Cong

    2017-02-03

    Several models have been developed for prediction of contrast-induced nephropathy (CIN); however, they only contain patients receiving intra-arterial contrast media for coronary angiographic procedures, which represent a small proportion of all contrast procedures. In addition, most of them evaluate radiological interventional procedure-related variables. So it is necessary for us to develop a model for prediction of CIN before radiological procedures among patients administered contrast media. A total of 8800 patients undergoing contrast administration were randomly assigned in a 4:1 ratio to development and validation data sets. CIN was defined as an increase of 25% and/or 0.5 mg/dL in serum creatinine within 72 hours above the baseline value. Preprocedural clinical variables were used to develop the prediction model from the training data set by the machine learning method of random forest, and 5-fold cross-validation was used to evaluate the prediction accuracies of the model. Finally we tested this model in the validation data set. The incidence of CIN was 13.38%. We built a prediction model with 13 preprocedural variables selected from 83 variables. The model obtained an area under the receiver-operating characteristic (ROC) curve (AUC) of 0.907 and gave prediction accuracy of 80.8%, sensitivity of 82.7%, specificity of 78.8%, and Matthews correlation coefficient of 61.5%. For the first time, 3 new factors are included in the model: the decreased sodium concentration, the INR value, and the preprocedural glucose level. The newly established model shows excellent predictive ability of CIN development and thereby provides preventative measures for CIN. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
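
    A sketch of the modeling recipe described here, a random forest on preprocedural variables evaluated by 5-fold cross-validated AUC, on synthetic stand-in data (scikit-learn based; variable meanings are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 2000

# Synthetic stand-ins for 13 preprocedural variables
# (e.g. sodium concentration, INR, preprocedural glucose).
X = rng.normal(size=(n, 13))
logit = -0.8 * X[:, 0] + 0.7 * X[:, 1] + 0.5 * X[:, 2] - 1.9
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
aucs = cross_val_score(rf, X, y, cv=5, scoring="roc_auc")
print("5-fold AUCs:", np.round(aucs, 3), "mean:", aucs.mean().round(3))
```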

  13. Risk Prediction Models for Oral Clefts Allowing for Phenotypic Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yalu eWen

    2015-08-01

    Full Text Available Oral clefts are common birth defects that have a major impact on the affected individual, their family and society. World-wide, the incidence of oral clefts is 1/700 live births, making them the most common craniofacial birth defects. The successful prediction of oral clefts may help identify sub-population at high risk, and promote new diagnostic and therapeutic strategies. Nevertheless, developing a clinically useful oral clefts risk prediction model remains a great challenge. Compelling evidences suggest the etiologies of oral clefts are highly heterogeneous, and the development of a risk prediction model with consideration of phenotypic heterogeneity may potentially improve the accuracy of a risk prediction model. In this study, we applied a previously developed statistical method to investigate the risk prediction on sub-phenotypes of oral clefts. Our results suggested subtypes of cleft lip and palate have similar genetic etiologies (AUC=0.572 with subtypes of cleft lip only (AUC=0.589, while the subtypes of cleft palate only (CPO have heterogeneous underlying mechanisms (AUCs for soft CPO and hard CPO are 0.617 and 0.623, respectively. This highlighted the potential that the hard and soft forms of CPO have their own mechanisms despite sharing some of the genetic risk factors. Comparing with conventional methods for risk prediction modeling, our method considers phenotypic heterogeneity of a disease, which potentially improves the accuracy for predicting each sub-phenotype of oral clefts.

  14. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: extended Kalman filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
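
    Recursive least squares, one of the three estimators compared, can be sketched as an online MOS correction of NWP forecasts. The forecast/observation data and the bias they embody below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic NWP wind-speed forecasts and local observations with a
# systematic bias (illustrative only): obs ≈ a * nwp + b.
nwp = rng.uniform(3, 15, 500)
obs = 0.8 * nwp + 1.5 + rng.normal(0, 0.5, 500)

# Recursive least squares with a forgetting factor to track slow changes.
lam = 0.99
theta = np.zeros(2)              # [a, b]
P = np.eye(2) * 1000.0           # large initial covariance

for f, o in zip(nwp, obs):
    x = np.array([f, 1.0])
    k = P @ x / (lam + x @ P @ x)        # gain
    theta += k * (o - x @ theta)         # update with the prediction error
    P = (P - np.outer(k, x @ P)) / lam

print("estimated a, b:", np.round(theta, 2))   # should be near [0.8, 1.5]
```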

  15. Survival prediction model for postoperative hepatocellular carcinoma patients.

    Science.gov (United States)

    Ren, Zhihui; He, Shasha; Fan, Xiaotang; He, Fangping; Sang, Wei; Bao, Yongxing; Ren, Weixin; Zhao, Jinming; Ji, Xuewen; Wen, Hao

    2017-09-01

    This study aimed to establish a predictive index (PI) model of the 5-year survival rate for patients with hepatocellular carcinoma (HCC) after radical resection and to evaluate its prediction sensitivity, specificity, and accuracy. Patients who underwent HCC surgical resection were enrolled and randomly divided into a prediction model group (101 patients) and a model evaluation group (100 patients). A Cox regression model was used for univariate and multivariate survival analysis. A PI model was established based on the multivariate analysis, and a receiver operating characteristic (ROC) curve was drawn accordingly. The area under the ROC curve (AUROC) and the PI cutoff value were identified. Multivariate Cox regression analysis of the prediction model group showed that neutrophil-to-lymphocyte ratio (NLR), histological grade (HG), microvascular invasion (MVI), positive resection margin (PRM), number of tumors (NT), and postoperative transcatheter arterial chemoembolization (TACE) treatment were independent predictors of the 5-year survival rate for HCC patients. The model was PI = 0.377 × NLR + 0.554 × HG + 0.927 × PRM + 0.778 × MVI + 0.740 × NT - 0.831 × TACE. In the prediction model group, the AUROC was 0.832 and the PI cutoff value was 3.38. The sensitivity, specificity, and accuracy were 78.0%, 80%, and 79.2%, respectively. In the model evaluation group, the AUROC was 0.822, and the PI cutoff value corresponded well to that of the prediction model group, with sensitivity, specificity, and accuracy of 85.0%, 83.3%, and 84.0%, respectively. The PI model can quantify the mortality risk of hepatitis B-related HCC with high sensitivity, specificity, and accuracy.
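    The published PI can be applied directly; the sketch below encodes the coefficients and ROC cutoff reported in the abstract, while the covariate coding for the illustrative patient is an assumption:

    ```python
    # Prognostic index from the abstract, with the reported cutoff of 3.38.
    def hcc_pi(nlr, hg, prm, mvi, nt, tace):
        """PI = 0.377*NLR + 0.554*HG + 0.927*PRM + 0.778*MVI + 0.740*NT - 0.831*TACE."""
        return 0.377*nlr + 0.554*hg + 0.927*prm + 0.778*mvi + 0.740*nt - 0.831*tace

    CUTOFF = 3.38   # ROC-derived threshold for the development cohort
    pi = hcc_pi(nlr=3.2, hg=2, prm=0, mvi=1, nt=1, tace=0)   # illustrative coding
    print(f"PI = {pi:.2f} ->", "high risk" if pi >= CUTOFF else "low risk")
    ```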

  16. A prediction model for assessing residential radon concentration in Switzerland

    International Nuclear Information System (INIS)

    Hauri, Dimitri D.; Huss, Anke; Zimmermann, Frank; Kuehni, Claudia E.; Röösli, Martin

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, 80% of randomly selected measurements were used for model development and the remaining 20% for an independent model validation. A multivariable log-linear regression model was fitted and relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (<50th, 50th–90th, and >90th percentile) and compared with the measured categories using a weighted kappa statistic. The most relevant predictors of indoor radon levels were tectonic units and year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken, and housing type (P-values <0.001 for all). Mean predicted radon values (geometric mean) were 66 Bq/m³ (interquartile range 40–111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69–215 Bq/m³) in the medium category, and 219 Bq/m³ (108–427 Bq/m³) in the highest category. The Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development and 0.30 for the validation dataset, respectively. The model explained 20% of the overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was demonstrated to be
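    A hedged sketch of the evaluation step described above, assuming scipy/scikit-learn; the measured and predicted values are hypothetical, and the three exposure categories follow the stated percentile cuts:

    ```python
    # Spearman correlation plus weighted kappa on three radon exposure categories.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import cohen_kappa_score

    measured = np.array([55., 140., 90., 300., 70., 210.])    # Bq/m3, toy values
    predicted = np.array([66., 126., 80., 219., 75., 180.])

    rho, _ = spearmanr(measured, predicted)
    cuts = np.percentile(measured, [50, 90])                  # <50th, 50th-90th, >90th
    cat = lambda v: np.digitize(v, cuts)                      # 0/1/2 = low/medium/high
    kappa = cohen_kappa_score(cat(measured), cat(predicted), weights="linear")
    print(f"Spearman rho = {rho:.2f}, weighted kappa = {kappa:.2f}")
    ```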

  17. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes available in-house within Golder Associates and Lawrence Livermore Laboratories, as well as those generally available within industry and universities. The first listing covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be used for that purpose.

  18. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    In this thesis, we consider control strategies for flexible distributed energy resources in the future intelligent energy system – the Smart Grid. The energy system is a large-scale complex network with many actors and objectives in different hierarchical layers. Specifically the power system must...... significantly. A Smart Grid calls for flexible consumers that can adjust their consumption based on the amount of green energy in the grid. This requires coordination through new large-scale control and optimization algorithms. Trading of flexibility is key to drive power consumption in a sustainable direction....... In Denmark, we expect that distributed energy resources such as heat pumps, and batteries in electric vehicles will mobilize part of the needed flexibility. Our primary objectives in the thesis were threefold: 1.Simulate the components in the power system based on simple models from literature (e.g. heat...

  19. Model Predictive Control of Wind Turbines

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian

    Wind turbines play a major role in the transformation from a fossil fuel based energy production to a more sustainable production of energy. Total-cost-of-ownership is an important parameter when investors decide in which energy technology they should place their capital. Modern wind turbines...... are controlled by pitching the blades and by controlling the electro-magnetic torque of the generator, thus slowing the rotation of the blades. Improved control of wind turbines, leading to reduced fatigue loads, can be exploited by using less materials in the construction of the wind turbine or by reducing...... the need for maintenance of the wind turbine. Either way, better total-cost-of-ownership for wind turbine operators can be achieved by improved control of the wind turbines. Wind turbine control can be improved in two ways, by improving the model on which the controller bases its design or by improving...

  20. Comparison of Linear Prediction Models for Audio Signals

    Directory of Open Access Journals (Sweden)

    2009-03-01

    While linear prediction (LP) has become immensely popular in speech modeling, it does not seem to provide a good approach for modeling audio signals. This is somewhat surprising, since a tonal signal consisting of a number of sinusoids can be perfectly predicted based on an all-pole LP model with a model order that is twice the number of sinusoids. We provide an explanation of why this result cannot simply be extrapolated to LP of audio signals. If noise is taken into account in the tonal signal model, a low-order all-pole model appears to be appropriate only when the tonal components are uniformly distributed in the Nyquist interval. Based on this observation, different alternatives to the conventional LP model can be suggested. Either the model should be changed to a pole-zero, a high-order all-pole, or a pitch prediction model, or the conventional LP model should be preceded by an appropriate frequency transform, such as a frequency warping or downsampling. By comparing these alternative LP models to the conventional LP model in terms of frequency estimation accuracy, residual spectral flatness, and perceptual frequency resolution, we obtain several new and promising approaches to LP-based audio modeling.
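    The claim that a noise-free tonal signal with K sinusoids is perfectly predicted by an order-2K all-pole model is easy to verify numerically; the least-squares (covariance-method) formulation below is a sketch, not the paper's algorithm:

    ```python
    # A sum of K = 2 sinusoids satisfies an exact order-2K linear prediction recursion.
    import numpy as np

    n = np.arange(200)
    x = np.sin(0.31*n) + 0.7*np.sin(1.17*n + 0.4)      # K = 2 sinusoids
    p = 4                                              # model order 2K

    # Build the LP problem x[t] ~ sum_k a[k] * x[t-k] and solve by least squares.
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    residual = x[p:] - X @ a
    print("max |prediction error| =", np.abs(residual).max())   # near machine precision
    ```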

  1. Model Predictive Control of a Wave Energy Converter

    DEFF Research Database (Denmark)

    Andersen, Palle; Pedersen, Tom Søndergård; Nielsen, Kirsten Mølgaard

    2015-01-01

    In this paper reactive control and Model Predictive Control (MPC) for a Wave Energy Converter (WEC) are compared. The analysis is based on a WEC from Wave Star A/S designed as a point absorber. The model predictive controller uses wave models based on the dominating sea states combined with a model......'s are designed for each sea state using a model assuming a linear loss torque. The mean power results from two controllers are compared using both loss models. Simulation results show that MPC can outperform a reactive controller if a good model of the conversion losses is available....... connecting undisturbed wave sequences to sequences of torque. Losses in the conversion from mechanical to electrical power are taken into account in two ways. Conventional reactive controllers are tuned for each sea state with the assumption that the converter has the same efficiency back and forth. MPC...

  2. Review of Model Predictions for Extensive Air Showers

    Science.gov (United States)

    Pierog, Tanguy

    In detailed air shower simulations, the uncertainty in the prediction of shower observables for different primary particles and energies is currently dominated by differences between hadronic interaction models. With the results of the first run of the LHC, the difference between post-LHC model predictions has been reduced to the same level as the experimental uncertainties of cosmic ray experiments. At the same time, new types of air shower observables, such as the muon production depth, have been measured, adding new constraints on hadronic models. Currently no model is able to consistently reproduce all mass composition measurements possible with, for instance, the Pierre Auger Observatory. We review the current model predictions for various particle production observables and their link with air shower observables, and discuss possible future improvements.

  3. Integrating predictive frameworks and cognitive models of face perception.

    Science.gov (United States)

    Trapp, Sabrina; Schweinberger, Stefan R; Hayward, William G; Kovács, Gyula

    2018-02-08

    The idea of a "predictive brain" - that is, the interpretation of internal and external information based on prior expectations - has been elaborated intensely over the past decade. Several domains in cognitive neuroscience have embraced this idea, including studies in perception, motor control, language, and affective, social, and clinical neuroscience. Despite the various studies that have used face stimuli to address questions related to predictive processing, there has been surprisingly little connection between this work and established cognitive models of face recognition. Here we suggest that the predictive framework can serve as an important complement to established cognitive face models. Conversely, the link to cognitive face models has the potential to shed light on issues that remain open in predictive frameworks.

  4. A model for predicting lung cancer response to therapy

    International Nuclear Information System (INIS)

    Seibert, Rebecca M.; Ramsey, Chester R.; Hines, J. Wesley; Kupelian, Patrick A.; Langen, Katja M.; Meeks, Sanford L.; Scaperoth, Daniel D.

    2007-01-01

    Purpose: Volumetric computed tomography (CT) images acquired by image-guided radiation therapy (IGRT) systems can be used to measure tumor response over the course of treatment. Predictive adaptive therapy is a novel treatment technique that uses volumetric IGRT data to actively predict the future tumor response to therapy during the first few weeks of IGRT treatment. The goal of this study was to develop and test a model for predicting lung tumor response during IGRT treatment using serial megavoltage CT (MVCT). Methods and Materials: Tumor responses were measured for 20 lung cancer lesions in 17 patients that were imaged and treated with helical tomotherapy with doses ranging from 2.0 to 2.5 Gy per fraction. Five patients were treated with concurrent chemotherapy, and 1 patient was treated with neoadjuvant chemotherapy. Tumor response to treatment was retrospectively measured by contouring 480 serial MVCT images acquired before treatment. A nonparametric, memory-based locally weighted regression (LWR) model was developed for predicting tumor response using the retrospective tumor response data. This model predicts future tumor volumes and the associated confidence intervals based on limited observations during the first 2 weeks of treatment. The predictive accuracy of the model was tested using a leave-one-out cross-validation technique with the measured tumor responses. Results: The predictive algorithm was used to compare predicted versus measured tumor volume response for all 20 lesions. The average error for the predictions of the final tumor volume was 12%, with the true volumes always bounded by the 95% confidence interval. The greatest model uncertainty occurred near the middle of the course of treatment, in which the tumor response relationships were more complex, the model has less information, and the predictors were more varied. The optimal days for measuring the tumor response on the MVCT images were on elapsed Days 1, 2, 5, 9, 11, 12, 17, and 18 during
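    A minimal sketch of locally weighted regression of the kind named above: a Gaussian kernel weights observations near the query day, and a weighted linear fit yields the prediction. The kernel bandwidth and the toy volume series are assumptions, not the study's data:

    ```python
    # Locally weighted regression (LWR) prediction of relative tumor volume.
    import numpy as np

    def lwr_predict(t_query, t_obs, v_obs, bandwidth=5.0):
        w = np.exp(-0.5 * ((t_obs - t_query) / bandwidth) ** 2)   # Gaussian kernel
        A = np.column_stack([np.ones_like(t_obs), t_obs])
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ v_obs)      # weighted least squares
        return beta[0] + beta[1] * t_query

    days = np.array([0., 3., 7., 10., 14.])                       # first 2 weeks
    vols = np.array([1.00, 0.95, 0.86, 0.80, 0.71])               # relative volume
    print("predicted relative volume at day 20:", lwr_predict(20.0, days, vols))
    ```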

  5. A model for strong attenuation and dispersion of seismic P-waves in a partially saturated fractured reservoir

    Science.gov (United States)

    Brajanovski, Miroslav; Müller, Tobias M.; Parra, Jorge O.

    2010-08-01

    In this work we interpret data showing unusually strong velocity dispersion of P-waves (up to 30%) and attenuation in a relatively narrow frequency range. The cross-hole and VSP data were measured in a reservoir in the porous zone of the Silurian Kankakee Limestone Formation, formed by vertical fractures within a porous matrix saturated by oil and gas patches. Such a medium exhibits significant attenuation due to wave-induced fluid flow across the interfaces between different types of inclusions (fractures, fluid patches) and the background. Other models of intrinsic attenuation (in particular, squirt flow models) cannot explain the amount of observed dispersion when realistic rock properties are used. In order to interpret the data satisfactorily, we develop a superposition model for fractured porous rocks that also accounts for the patchy saturation effect.

  6. Construction method and application of 3D velocity model for evaluation of strong seismic motion and its cost performance

    International Nuclear Information System (INIS)

    Matsuyama, Hisanori; Fujiwara, Hiroyuki

    2014-01-01

    Based on experience in making subsurface structure models for strong seismic motion evaluation, the advantages and disadvantages, in terms of convenience and cost, of several methods used to make such models were reported. Gravity and micro-tremor surveys were considered highly effective in terms of convenience and cost. However, stratigraphy and seismic velocity structure are required to make accurate 3-D subsurface structure models. To obtain these, methods that directly examine the subsurface ground or that use controlled tremor sources (at high cost) are needed. In summary, when modeling subsurface structures, a plan that includes both types of methods is desirable, and several methods must be combined to match the intended purpose and budget. (authors)

  7. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces uncertainties in predicted model responses and parameters. • PMCMPS efficiently treats very large coupled systems. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology fully takes into account the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses of both multi-physics models. This “maximum entropy” approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values of the multi-physics models' parameters and responses, along with correspondingly reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.
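    A generic rendering of the combination step (not the paper's notation): given known prior means z⁰ and covariances C for the stacked parameters and responses z of both models, the maximum-entropy prior is Gaussian, and Bayes' theorem attaches the simulation likelihood L:

    ```latex
    \[
      p_{\text{prior}}(z) \;\propto\; \exp\!\Big[-\tfrac{1}{2}\,(z - z^{0})^{\mathsf T} C^{-1} (z - z^{0})\Big],
      \qquad
      p(z \mid \text{data}) \;\propto\; L(\text{data} \mid z)\, p_{\text{prior}}(z).
    \]
    ```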

  8. Embryo quality predictive models based on cumulus cells gene expression

    Directory of Open Access Journals (Sweden)

    Devjak R

    2016-06-01

    Since the introduction of in vitro fertilization (IVF) into the clinical practice of infertility treatment, indicators of high-quality embryos have been investigated. Cumulus cells (CC) have a specific gene expression profile according to the developmental potential of the oocyte they surround; therefore, specific gene expression could be used as a biomarker. The aim of our study was to combine more than one biomarker to observe improvement in the predictive value for embryo development. In this study, 58 CC samples from 17 IVF patients were analyzed. This study was approved by the Republic of Slovenia National Medical Ethics Committee. Gene expression analysis [quantitative real-time polymerase chain reaction (qPCR)] of five genes, analyzed according to embryo quality level, was performed. Two prediction models were tested for embryo quality prediction: a binary logistic model and a decision tree model. Gene expression levels for the five genes were taken as the main outcome, and the area under the curve (AUC) was calculated for the two prediction models. Among the tested genes, AMHR2 and LIF showed significant expression differences between high-quality and low-quality embryos. These two genes were used to construct the two prediction models: the binary logistic model yielded an AUC of 0.72 ± 0.08 and the decision tree model yielded an AUC of 0.73 ± 0.03. The two prediction models yielded similar predictive power to differentiate high- and low-quality embryos. In terms of eventual clinical decision making, the decision tree model resulted in easy-to-interpret rules that are highly applicable in clinical practice.
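    A hedged sketch of the comparison described above: binary logistic regression versus a decision tree, each using the two gene-expression predictors (AMHR2, LIF), with AUC as the endpoint. The data file and column names are hypothetical:

    ```python
    # Logistic regression vs. decision tree on cumulus-cell expression levels.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("cc_expression.csv")          # hypothetical qPCR levels per CC sample
    X, y = df[["AMHR2", "LIF"]], df["high_quality_embryo"]

    for name, model in [("binary logistic", LogisticRegression()),
                        ("decision tree", DecisionTreeClassifier(max_depth=3))]:
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: AUC = {auc:.2f}")
    ```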

  9. Comparison of Predictive Modeling Methods of Aircraft Landing Speed

    Science.gov (United States)

    Diallo, Ousmane H.

    2012-01-01

    Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions needed to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model with acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from the operations of a major airline for a passenger transport aircraft type at Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural network model's errors represents an over 5% reduction compared to the RSE model errors, and at least a 10% reduction over the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
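    A hedged sketch of the model comparison described above: an RSE-style regression versus a small neural network for final approach speed, scored by the standard deviation of the prediction error. The feature names and data file are hypothetical:

    ```python
    # Compare a linear (RSE-style) regression with a neural network regressor.
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("dfw_approaches.csv")       # hypothetical flight records
    X = df[["gross_weight", "wind_gust", "flap_setting", "descent_rate"]]
    y = df["landing_speed_kts"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    for name, model in [("RSE-style regression", LinearRegression()),
                        ("neural network", MLPRegressor(hidden_layer_sizes=(32,),
                                                        max_iter=2000, random_state=0))]:
        err = y_te - model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: error std = {err.std():.2f} kts")
    ```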

  10. Model Predictive Control of Three Phase Inverter for PV Systems

    OpenAIRE

    Irtaza M. Syed; Kaamran Raahemifar

    2015-01-01

    This paper presents model predictive control (MPC) of a utility-interactive three-phase inverter (TPI) for a photovoltaic (PV) system at the commercial level. The proposed model uses a phase-locked loop (PLL) to synchronize the TPI with the power electric grid (PEG) and performs MPC in a dq reference frame. The TPI model consists of a boost converter (BC), maximum power point tracking (MPPT) control, and a three-leg voltage source inverter (VSI). The operational model of ...

  11. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  12. Individualized prediction of perineural invasion in colorectal cancer: development and validation of a radiomics prediction model.

    Science.gov (United States)

    Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi

    2018-02-01

    To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
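    A minimal sketch of the final integration step described above: a logistic model combining the radiomics signature score with the CEA level to predict PNI status; for a binary outcome, Harrell's c-index equals the ROC AUC. The file and column names are hypothetical:

    ```python
    # Radiomics signature + CEA level -> logistic model for PNI; c-index on validation.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    dev = pd.read_csv("crc_derivation.csv")      # hypothetical: rad_score, cea, pni
    val = pd.read_csv("crc_validation.csv")      # temporally separate cohort

    model = LogisticRegression().fit(dev[["rad_score", "cea"]], dev["pni"])
    c_index = roc_auc_score(val["pni"],
                            model.predict_proba(val[["rad_score", "cea"]])[:, 1])
    print(f"validation c-index: {c_index:.3f}")
    ```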

  13. Fournier's gangrene: a model for early prediction.

    Science.gov (United States)

    Palvolgyi, Roland; Kaji, Amy H; Valeriano, Javier; Plurad, David; Rajfer, Jacob; de Virgilio, Christian

    2014-10-01

    Early diagnosis remains the cornerstone of management of Fournier's gangrene. As a result of the variable progression of the disease, identifying early predictors of necrosis becomes a diagnostic challenge. We present a scoring system based on objective admission criteria that can help distinguish Fournier's gangrene from nonnecrotizing scrotal infections. Ninety-six patients were identified, 38 diagnosed with Fournier's gangrene and 58 diagnosed with scrotal cellulitis or abscess. Statistical analyses comparing admission vital signs, laboratory values, and imaging studies were performed, and Classification and Regression Tree analysis was used to construct a scoring system. Admission heart rate greater than 110 beats/minute, serum sodium less than 135 mmol/L, blood urea nitrogen greater than 15 mg/dL, and white blood cell count greater than 15 × 10³/μL were significant predictors of Fournier's gangrene. Using a threshold score of two or greater, our model differentiates patients with Fournier's gangrene from those with nonnecrotizing infections with a sensitivity of 84.2 per cent. Only 34.2 per cent of patients with Fournier's gangrene had hard signs of necrotizing infection on admission, which were not observed in patients with nonnecrotizing infections. Objective admission criteria assist in distinguishing Fournier's gangrene from scrotal cellulitis or abscess. In situations in which the results of the physical examination are ambiguous, this scoring system can heighten the index of suspicion for Fournier's gangrene and prompt rapid surgical intervention.
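    The four admission criteria translate directly into a transparent scoring function: one point per criterion met, with a score of two or greater flagging suspected Fournier's gangrene. This is a direct sketch of the stated thresholds, not the authors' code:

    ```python
    # Admission scoring per the thresholds reported in the abstract.
    def fournier_score(hr_bpm, sodium_mmol_l, bun_mg_dl, wbc_k_per_ul):
        return sum([hr_bpm > 110,          # heart rate > 110 beats/min
                    sodium_mmol_l < 135,   # serum sodium < 135 mmol/L
                    bun_mg_dl > 15,        # blood urea nitrogen > 15 mg/dL
                    wbc_k_per_ul > 15])    # WBC > 15 x 10^3/uL

    score = fournier_score(hr_bpm=118, sodium_mmol_l=131, bun_mg_dl=22, wbc_k_per_ul=17.4)
    print(score, "-> suspect Fournier's gangrene" if score >= 2 else "-> low suspicion")
    ```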

  14. Colloid facilitated transport of strongly sorbing contaminants in natural porous media: mathematical modeling and laboratory column experiments.

    Science.gov (United States)

    Grolimund, Daniel; Borkovec, Michal

    2005-09-01

    Mobile colloidal particles may act as carriers of strongly sorbing contaminants in subsurface materials. Such colloid-facilitated transport can be induced by changes in salinity, similar to freshwater intrusion into a contaminated aquifer saturated with saltwater, or groundwater penetration into a contaminated site saturated with a dumpsite leachate. This process is studied for noncalcareous soil material with laboratory column experiments with sodium and calcium as the major cations and with lead as a strongly sorbing model contaminant. The measured breakthrough curves of these elements were described with a mathematical transport model, which invokes release and deposition kinetics of the colloids, together with adsorption and desorption of the relevant ions to the solid matrix as well as to the suspended colloids. In particular, the specific coupling between colloid and solute transport is considered. The crux of a successful description of such colloidal transport processes is to properly capture the inhibition of particle release by adsorbed divalent ions and to explicitly consider the dependence of colloid release on the solution chemistry and the chemical conditions at the solid-liquid interface. Experiments and modeling address colloid-facilitated transport of lead out of a contaminated zone and through a noncontaminated zone, including the effects of flow velocity and the length of the noncontaminated zone. We finally show that colloid-facilitated transport can be suppressed by the injection of a suitably chosen solution of a calcium salt.
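    A heavily simplified 1-D sketch of the kind of coupled kinetics the transport model invokes (not the authors' model, which also carries solute sorption and chemistry-dependent rates): mobile colloids are advected by pore water while being released from and re-deposited onto the matrix at first-order rates. All parameters and the grid are illustrative assumptions:

    ```python
    # Explicit upwind scheme for colloid release/deposition during advection.
    import numpy as np

    nx, dx, dt, v = 200, 0.005, 0.5, 1e-4        # cells, m, s, pore velocity (m/s)
    k_rel, k_dep = 1e-4, 5e-5                    # release / deposition rates (1/s)
    c = np.zeros(nx)                             # mobile colloids (arbitrary units)
    s = np.ones(nx)                              # releasable colloids on the matrix

    for _ in range(20000):
        adv = -v * np.diff(np.concatenate(([0.0], c))) / dx   # zero-inflow upwind advection
        rel, dep = k_rel * s, k_dep * c
        c += dt * (adv + rel - dep)
        s += dt * (dep - rel)
    print("colloid concentration at column outlet:", c[-1])
    ```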

  15. A deep auto-encoder model for gene expression prediction.

    Science.gov (United States)

    Xie, Rui; Wen, Jia; Quitadamo, Andrew; Cheng, Jianlin; Shi, Xinghua

    2017-11-17

    Gene expression is a key intermediate level through which genotypes lead to a particular trait. Gene expression is affected by various factors, including the genotypes of genetic variants. With the aim of delineating the genetic impact on gene expression, we built a deep auto-encoder model to assess how genetic variants contribute to gene expression changes. This new deep learning model is a regression-based predictive model based on the MultiLayer Perceptron and Stacked Denoising Auto-encoder (MLP-SAE). The model is trained using a stacked denoising auto-encoder for feature selection and a multilayer perceptron framework for backpropagation. We further improve the model by introducing dropout to prevent overfitting and improve performance. To demonstrate the usage of this model, we apply MLP-SAE to a real genomic dataset with genotypes and gene expression profiles measured in yeast. Our results show that the MLP-SAE model with dropout outperforms other models, including Lasso, Random Forests, and the MLP-SAE model without dropout. Using the MLP-SAE model with dropout, we show that gene expression quantifications predicted by the model solely from genotypes align well with true gene expression patterns. We provide a deep auto-encoder model for predicting gene expression from SNP genotypes. This study demonstrates that deep learning is appropriate for tackling another genomic problem, i.e., building predictive models to understand genotypes' contribution to gene expression. With the emerging availability of richer genomic data, we anticipate that deep learning models will play a bigger role in modeling and interpreting genomics data.
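    A compact PyTorch sketch of the MLP-SAE idea: a denoising auto-encoder is pretrained on noise-corrupted genotype vectors, then its encoder feeds an MLP regression head with dropout that predicts expression levels. The layer sizes, noise level, toy data, and training schedule are illustrative assumptions, not the paper's configuration:

    ```python
    # Two-stage training: denoising auto-encoder pretraining, then supervised head.
    import torch
    import torch.nn as nn

    n_snps, n_genes, hidden = 500, 20, 64
    encoder = nn.Sequential(nn.Linear(n_snps, hidden), nn.ReLU())
    decoder = nn.Linear(hidden, n_snps)
    head = nn.Sequential(nn.Dropout(0.5), nn.Linear(hidden, n_genes))

    X = torch.randint(0, 3, (256, n_snps)).float()   # toy genotypes coded 0/1/2
    Y = torch.randn(256, n_genes)                    # toy expression levels

    # Stage 1: denoising auto-encoder (reconstruct clean X from masked input).
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(200):
        noisy = X * (torch.rand_like(X) > 0.2)       # randomly mask 20% of inputs
        loss = nn.functional.mse_loss(decoder(encoder(noisy)), X)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: fine-tune encoder + dropout regression head on expression targets.
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(200):
        loss = nn.functional.mse_loss(head(encoder(X)), Y)
        opt.zero_grad(); loss.backward(); opt.step()
    print("final training MSE:", loss.item())
    ```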

  16. Analytical solutions by squeezing to the anisotropic Rabi model in the nonperturbative deep-strong-coupling regime

    Science.gov (United States)

    Zhang, Yu-Yu; Chen, Xiang-You

    2017-12-01

    The unexplored nonperturbative deep-strong-coupling (npDSC) regime achieved in superconducting circuits is studied in the anisotropic Rabi model using the generalized squeezing rotating-wave approximation. Energy levels are evaluated analytically from the reformulated Hamiltonian and agree well with numerical results over a wide range of coupling strengths. This improvement is ascribed to deformation effects in the displaced-squeezed state, captured by the squeezed momentum variance, which are omitted in previous displaced states. The atom population dynamics confirms the validity of our approach at npDSC strengths. Our approach offers the possibility to explore interesting phenomena analytically in the npDSC regime in qubit-oscillator experiments.

  17. Global well-posedness and decay estimates of strong solutions to a two-phase model with magnetic field

    Science.gov (United States)

    Wen, Huanyao; Zhu, Limei

    2018-02-01

    In this paper, we consider the Cauchy problem for a two-phase model with magnetic field in three dimensions. The global existence and uniqueness of the strong solution, as well as time decay estimates in H^2(R^3), are obtained by introducing a new linearized system with respect to (n^γ - ñ^γ, n - ñ, P - P̃, u, H) for constants ñ ≥ 0 and P̃ > 0, and by deriving new a priori estimates in Sobolev spaces to obtain a uniform upper bound on (n - ñ, n^γ - ñ^γ) in the H^2(R^3) norm.

  18. Strong coupling expansion for scattering phases in hamiltonian lattice field theories. Pt. 1. The (d+1)-dimensional Ising model

    International Nuclear Information System (INIS)

    Dahmen, Bernd

    1994-01-01

    A systematic method for obtaining strong coupling expansions of scattering quantities in hamiltonian lattice field theories is presented. I develop the conceptual ideas for the case of the hamiltonian field theory analogue of the Ising model, in d space and one time dimension. The main result is a convergent series representation for the scattering states and the transition matrix. To be explicit, the special cases of d=1 and d=3 spatial dimensions are discussed in detail. I compute the next-to-leading-order approximation for the phase shifts. The application of the method to investigate low-energy scattering phenomena in lattice gauge theory and QCD is proposed. ((orig.))

  19. High-latitude dayside electric fields and currents during strong northward interplanetary magnetic field: Observations and model simulation

    International Nuclear Information System (INIS)

    Clauer, C.R.; Friis-Christensen, E.

    1988-01-01

    On July 23, 1983, the interplanetary magnetic field (IMF) turned strongly northward, reaching about 22 nT for several hours. Using a combined data set of ionospheric convection measurements made by the Sondre Stromfjord incoherent scatter radar and convection inferred from Greenland magnetometer measurements, we observe the onset of the reconfiguration of the high-latitude ionospheric currents about 3 min after the northward IMF encountered the magnetopause. The large-scale reconfiguration of currents, however, appears to evolve over a period of about 22 min. Using a computer model in which the distribution of field-aligned current in the polar cleft is directly determined by the strength and orientation of the interplanetary electric field, we are able to simulate the time-varying pattern of ionospheric convection, including the onset of the high-latitude "reversed convection" cells observed to form during the interval of strong northward IMF. These observations and the simulation results indicate that the dayside polar cap electric field observed during strong northward IMF is produced by direct electrical current coupling with the solar wind. copyright American Geophysical Union 1988

  20. From Near-Neutral to Strongly Stratified: Adequately Modelling the Clear-Sky Nocturnal Boundary Layer at Cabauw

    Science.gov (United States)

    Baas, P.; van de Wiel, B. J. H.; van der Linden, S. J. A.; Bosveld, F. C.

    2018-02-01

    The performance of an atmospheric single-column model (SCM) is studied systematically for stably stratified conditions. To this end, 11 years (2005-2015) of daily SCM simulations were compared to observations from the Cabauw observatory, The Netherlands. Each individual clear-sky night was classified in terms of the ambient geostrophic wind speed with a 1 m s⁻¹ bin width. Nights with overcast conditions were filtered out by selecting only those nights with an average net radiation of less than -30 W m⁻². A similar procedure was applied to the observational dataset. A comparison of observed and modelled ensemble-averaged profiles of wind speed and potential temperature, and of time series of turbulent fluxes, showed that the model represents the dynamics of the nocturnal boundary layer (NBL) at Cabauw very well for a broad range of mechanical forcing conditions. No obvious difference in model performance was found between near-neutral and strongly stratified conditions. Furthermore, observed NBL regime transitions are represented in a natural way. The reference model version performs much better than a model version that applies excessive vertical mixing, as is done in several (global) operational models. Model sensitivity runs showed that for weak-wind conditions the inversion strength depends much more on details of the land-atmosphere coupling than on the turbulent mixing. The presented results indicate that the physical parametrizations of large-scale atmospheric models are, in principle, sufficiently equipped for modelling stably stratified conditions over a wide range of forcing conditions.
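    A hedged sketch of the compositing procedure described above: keep clear-sky nights (mean net radiation below -30 W m⁻²), classify them into 1 m s⁻¹ geostrophic wind-speed bins, and average a quantity of interest per bin. The file and column names are assumptions:

    ```python
    # Night classification and ensemble averaging by geostrophic wind bin.
    import numpy as np
    import pandas as pd

    nights = pd.read_csv("cabauw_nights.csv")    # hypothetical nightly aggregates

    clear = nights[nights["net_radiation_wm2"] < -30.0]       # clear-sky filter
    bins = np.arange(0, 16, 1.0)                              # 1 m/s wind bins
    clear = clear.assign(wind_bin=pd.cut(clear["geostrophic_wind_ms"], bins))

    ensemble = clear.groupby("wind_bin", observed=True)["inversion_strength_k"].mean()
    print(ensemble)
    ```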