Nonlinear turbulence models for predicting strong curvature effects
Institute of Scientific and Technical Information of China (English)
XU Jing-lei; MA Hui-yang; HUANG Yu-ning
2008-01-01
Prediction of the characteristics of turbulent flows with strong streamline curvature, such as flows in turbomachines, curved channel flows, and flows around airfoils and buildings, is of great importance in engineering applications and poses a very practical challenge for turbulence modeling. In this paper, we analyze qualitatively the curvature effects on the structure of turbulence and conduct numerical simulations of a turbulent U-duct flow with a number of turbulence models in order to assess their overall performance. The models evaluated in this work are some typical linear eddy viscosity turbulence models, nonlinear eddy viscosity turbulence models (NLEVM) (quadratic and cubic), a quadratic explicit algebraic stress model (EASM), and a Reynolds stress model (RSM) developed based on second-moment closure. Our numerical results show that a cubic NLEVM that performs considerably well in other benchmark turbulent flows, such as the Craft, Launder and Suga model and the Huang and Ma model, is able to capture the major features of the highly curved turbulent U-duct flow, including the damping of turbulence near the convex wall, the enhancement of turbulence near the concave wall, and the subsequent turbulent flow separation. The predictions of the cubic models are quite close to those of the RSM and in relatively good agreement with the experimental data, which suggests that these models may be employed to simulate turbulent curved flows in engineering applications.
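As a rough illustration of the distinction the abstract draws between linear and nonlinear eddy-viscosity closures, the sketch below adds quadratic products of the strain and rotation tensors to the linear Boussinesq term. The coefficients and scales are placeholders for illustration only, not the calibrated constants of the Craft–Launder–Suga or Huang–Ma models.

```python
import numpy as np

# Illustrative quadratic nonlinear eddy-viscosity model (NLEVM) sketch.
# The linear (Boussinesq) part is -2*nu_t*S; quadratic terms add products
# of the strain tensor S and rotation tensor W, which is what gives such
# models sensitivity to streamline curvature. Coefficients c1..c3 are
# placeholders, not values from any published model.

def reynolds_stress(grad_u, k, eps, c_mu=0.09, c1=0.1, c2=0.1, c3=0.2):
    """Deviatoric Reynolds stress for a given velocity-gradient tensor."""
    S = 0.5 * (grad_u + grad_u.T)          # strain-rate tensor
    W = 0.5 * (grad_u - grad_u.T)          # rotation-rate tensor
    nu_t = c_mu * k**2 / eps               # eddy viscosity
    t = k / eps                            # turbulence time scale
    lin = -2.0 * nu_t * S                  # linear Boussinesq contribution
    SS = S @ S - np.eye(3) * np.trace(S @ S) / 3.0   # deviatoric S.S
    SW = S @ W - W @ S                               # strain-rotation coupling
    WW = W @ W - np.eye(3) * np.trace(W @ W) / 3.0   # deviatoric W.W
    quad = nu_t * t * (c1 * SS + c2 * SW + c3 * WW)  # curvature-sensitive terms
    return lin + quad

# Simple shear, du/dy = 1, as a minimal test flow
grad_u = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
a = reynolds_stress(grad_u, k=1.0, eps=1.0)
```

The returned tensor stays symmetric and trace-free, as a deviatoric Reynolds stress must; in simple shear the quadratic terms contribute only to the normal-stress anisotropy, which is exactly the effect linear models miss.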
Strong ground-motion prediction from Stochastic-dynamic source models
Guatteri, Mariagiovanna; Mai, P.M.; Beroza, G.C.; Boatwright, J.
2003-01-01
In the absence of sufficient data in the very near source, predictions of the intensity and variability of ground motions from future large earthquakes depend strongly on our ability to develop realistic models of the earthquake source. In this article we simulate near-fault strong ground motion using dynamic source models. We use a boundary integral method to simulate dynamic rupture of earthquakes by specifying dynamic source parameters (fracture energy and stress drop) as spatial random fields. We choose these quantities such that they are consistent with the statistical properties of slip heterogeneity found in finite-source models of past earthquakes. From these rupture models we compute theoretical strong-motion seismograms up to a frequency of 2 Hz for several realizations of a scenario strike-slip Mw 7.0 earthquake and compare empirical response spectra, spectra obtained from our dynamic models, and spectra determined from corresponding kinematic simulations. We find that spatial and temporal variations in slip, slip rise time, and rupture propagation consistent with dynamic rupture models exert a strong influence on near-source ground motion. Our results lead to a feasible approach to specify the variability in the rupture time distribution in kinematic models through a generalization of Andrews' (1976) result relating rupture speed to apparent fracture energy, stress drop, and crack length to 3D dynamic models. This suggests that a simplified representation of dynamic rupture may be obtained to approximate the effects of dynamic rupture without having to do full dynamic simulations.
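The idea of specifying dynamic source parameters as spatial random fields can be sketched with a standard FFT-based generator. The power-law spectral exponent used here is an assumed stand-in for the slip-heterogeneity statistics the authors derive from finite-source inversions.

```python
import numpy as np

# Minimal sketch: generate a 2-D random field with an approximately
# self-similar (power-law) spectrum, as a stand-in for heterogeneous
# stress drop on a fault plane. The spectral exponent and amplitudes
# are illustrative assumptions, not values fitted to past earthquakes.

def random_field(n, exponent=2.0, seed=0):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)
    ky = np.fft.fftfreq(n)
    k = np.sqrt(kx[:, None]**2 + ky[None, :]**2)
    k[0, 0] = 1.0                      # avoid division by zero at DC
    amp = k ** (-exponent / 2.0)       # power spectrum ~ k^-exponent
    amp[0, 0] = 0.0                    # zero-mean field
    phase = np.exp(2j * np.pi * rng.random((n, n)))  # random phases
    field = np.fft.ifft2(amp * phase).real
    return field / field.std()         # normalize to unit variance

# Stress-drop field in MPa: mean 3 MPa with 1 MPa heterogeneity (assumed)
stress_drop = 3.0 + 1.0 * random_field(64)
```

Each realization of such a field, fed to a dynamic rupture solver, yields one member of the suite of scenario ruptures described in the abstract.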
Directory of Open Access Journals (Sweden)
John M. Hanesiak
2013-07-01
Full Text Available Strong northeasterly wind events are infrequent over Baffin Island, but are potentially hazardous for aviation and the local community of Iqaluit (the capital of Nunavut, Canada). Three strong northeasterly wind events in this region are examined in this study, using the Canadian Global Environmental Multiscale-Limited Area Model (GEM-LAM) with a horizontal grid spacing of 2.5 km; in-situ observations; and reanalysis data. The skill of the GEM-LAM in simulating these events is examined. With the exception of one event, the GEM-LAM was successful at predicting the large-scale flow in terms of the circulation pattern, timing of the synoptic set-up and the low-level flow over the Hall Peninsula. The onset and cessation of strong winds and the timing of major wind shifts were typically well handled by the model to within ~3 h, but with a tendency to underestimate the peak wind speed. The skill of the surface wind forecasts at Iqaluit is critically dependent on the predicted timing and location of the hydraulic jump and the grid point selected to represent Iqaluit. Examination of the observed and modelled data suggests that the strong northeasterly wind events have several features in common: (1) strong gradient-driven flow across the Hall Peninsula, (2) a mean-state critical layer (or reverse shear) over the Hall Peninsula, (3) a low-level inversion, typically above the maximum barrier height, immediately upstream of the Hall Peninsula, (4) subcritical flow, typically present upstream of the Hall Peninsula, and (5) a hydraulic jump in the vicinity of Frobisher Bay. The modelled atmospheric conditions upwind of the Hall Peninsula immediately prior to the formation of the hydraulic jump (and acceleration of winds over the lee slope) are largely consistent with the prediction of propagating hydraulic jumps presented in the literature.
Strong Scaling for Numerical Weather Prediction at Petascale with the Atmospheric Model NUMA
Müller, Andreas; Marras, Simone; Wilcox, Lucas C; Isaac, Tobin; Giraldo, Francis X
2015-01-01
Numerical weather prediction (NWP) has proven to be computationally challenging due to its inherent multiscale nature. Currently, the highest resolution NWP models use a horizontal resolution of approximately 15 km. At this resolution many important processes in the atmosphere are not resolved. Needless to say, this introduces errors. In order to increase the resolution of NWP models, highly scalable atmospheric models are needed. The Non-hydrostatic Unified Model of the Atmosphere (NUMA), developed by the authors at the Naval Postgraduate School, was designed to achieve this purpose. NUMA is used by the Naval Research Laboratory, Monterey as the engine inside its next generation weather prediction system NEPTUNE. NUMA solves the fully compressible Navier-Stokes equations by means of high-order Galerkin methods (both spectral element as well as discontinuous Galerkin methods can be used). Mesh generation is done using the p4est library. NUMA is capable of running middle and upper atmosphere simulations since it ...
Kellum, John A.
2016-01-01
Understanding acid-base regulation is often reduced to pigeonholing clinical states into categories of disorders based on arterial blood sampling. An earlier ambition to quantitatively explain disorders by measuring production and elimination of acid has not become standard clinical practice. Returning to classical physical chemistry, we propose that in any compartment the requirement of electroneutrality leads to a strong relationship between charged moieties. This relationship is derived in the form of a general equation stating charge balance, making it possible to calculate [H+] and pH based on all other charged moieties. To validate this construct, we investigated a large number of blood samples from intensive care patients, where both data and pathology are plentiful, by comparing the measured pH to the modeled pH. We were able to predict both the mean pattern and the individual fluctuation in pH based on all other measured charges, with a correlation of approximately 90% in individual patient series. However, there was a shift in pH such that fitted pH is in general overestimated (95% confidence interval -0.072–0.210), and we examine some explanations for this shift. Having confirmed the relationship between charged species, we then examine some of the classical and recent literature concerning the importance of charge balance. We conclude that focusing on the charges which are predictable, such as strong ions and total concentrations of weak acids, leads to new insights with important implications for medicine and physiology. Importantly, this construct should pave the way for quantitative acid-base models looking into the underlying mechanisms of disorders rather than just classifying them. PMID:27631369
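The charge-balance construct described above can be sketched numerically: given the strong ion difference, total weak acid, and pCO2, electroneutrality fixes [H+]. The equilibrium constants below are approximate textbook-style values, not the full plasma model validated in the paper.

```python
import math

# Hedged sketch of a Stewart-style charge balance: electroneutrality
# requires SID + [H+] = [OH-] + [HCO3-] + [A-], which has a unique root
# in [H+]. Constants are approximate illustrative values at 37 C.

KW = 4.4e-14   # water dissociation constant (approximate)
KA = 3.0e-7    # effective weak-acid dissociation constant (assumed)
KC = 2.46e-11  # combined CO2 equilibrium constant, (Eq/L)^2 per mmHg (approx.)

def charge_balance(h, sid, atot, pco2):
    """Net charge at [H+] = h; the root in h is the equilibrium value."""
    oh = KW / h
    hco3 = KC * pco2 / h               # bicarbonate from dissolved CO2
    a_minus = atot * KA / (KA + h)     # dissociated weak acid
    return sid + h - oh - hco3 - a_minus

def solve_ph(sid, atot, pco2, lo=1e-12, hi=1e-1):
    # bisection in log space; charge_balance is strictly increasing in h
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if charge_balance(mid, sid, atot, pco2) > 0:
            hi = mid
        else:
            lo = mid
    return -math.log10(math.sqrt(lo * hi))

# Plasma-like illustrative inputs: SID 40 mEq/L, Atot 17 mmol/L, pCO2 40 mmHg
ph = solve_ph(sid=0.040, atot=0.017, pco2=40.0)
```

With these inputs the root lands near physiological pH, which is the sense in which "all other charged moieties" determine [H+] in the paper's construct.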
Directory of Open Access Journals (Sweden)
D. W. Hardekopf
2007-09-01
Full Text Available Two branches forming the headwaters of a stream in the Czech Republic were studied. Both streams have similar catchment characteristics and historical deposition; however one is rain-fed and strongly affected by acid atmospheric deposition, the other spring-fed and only moderately acidified. The MAGIC model was used to reconstruct past stream water and soil chemistry of the rain-fed branch, and predict future recovery up to 2050 under current proposed emissions levels. A future increase in air temperature calculated by a regional climate model was then used to derive climate-related scenarios to test possible factors affecting chemical recovery up to 2100. Macroinvertebrates were sampled from both branches, and differences in stream chemistry were reflected in the community structures. According to modelled forecasts, recovery of the rain-fed branch will be gradual and limited, and continued high levels of sulphate release from the soils will continue to dominate stream water chemistry, while scenarios related to a predicted increase in temperature will have little impact. The likelihood of colonization of species from the spring-fed branch was evaluated considering the predicted extent of chemical recovery. The results suggest that the possibility of colonization of species from the spring-fed branch to the rain-fed will be limited to only the acid-tolerant stonefly, caddisfly and dipteran taxa in the modelled period.
Directory of Open Access Journals (Sweden)
D. W. Hardekopf
2008-03-01
Full Text Available Two branches forming the headwaters of a stream in the Czech Republic were studied. Both streams have similar catchment characteristics and historical deposition; however one is rain-fed and strongly affected by acid atmospheric deposition, the other spring-fed and only moderately acidified. The MAGIC model was used to reconstruct past stream water and soil chemistry of the rain-fed branch, and predict future recovery up to 2050 under current proposed emissions levels. A future increase in air temperature calculated by a regional climate model was then used to derive climate-related scenarios to test possible factors affecting chemical recovery up to 2100. Macroinvertebrates were sampled from both branches, and differences in stream chemistry were reflected in the community structures. According to modelled forecasts, recovery of the rain-fed branch will be gradual and limited, and continued high levels of sulphate release from the soils will continue to dominate stream water chemistry, while scenarios related to a predicted increase in temperature will have little impact. The likelihood of colonization of species from the spring-fed branch was evaluated considering the predicted extent of chemical recovery. The results suggest that the possibility of colonization of species from the spring-fed branch to the rain-fed will be limited to only the acid-tolerant stonefly, caddisfly and dipteran taxa in the modelled period.
DEFF Research Database (Denmark)
Ring, Troels; Kellum, John A
2016-01-01
Understanding acid-base regulation is often reduced to pigeonholing clinical states into categories of disorders based on arterial blood sampling. An earlier ambition to quantitatively explain disorders by measuring production and elimination of acid has not become standard clinical practice. Seeking back to classical physical chemistry we propose that in any compartment, the requirement of electroneutrality leads to a strong relationship between charged moieties. This relationship is derived in the form of a general equation stating charge balance, making it possible to calculate [H+] and pH... Having confirmed the relationship between charged species we then examine some of the classical and recent literature concerning the importance of charge balance. We conclude that focusing on the charges which are predictable such as strong ions and total concentrations of weak acids leads to new insights...
Amsallem, Myriam; Sweatt, Andrew J; Aymami, Marie C; Kuznetsova, Tatiana; Selej, Mona; Lu, HongQuan; Mercier, Olaf; Fadel, Elie; Schnittger, Ingela; McConnell, Michael V; Rabinovitch, Marlene; Zamanian, Roham T; Haddad, Francois
2017-06-01
Right ventricular (RV) end-systolic dimensions provide information on both size and function. We investigated whether an internally scaled index of end-systolic dimension is incremental to well-validated prognostic scores in pulmonary arterial hypertension. From 2005 to 2014, 228 patients with pulmonary arterial hypertension were prospectively enrolled. The RV end-systolic remodeling index (RVESRI) was defined as lateral length divided by septal height. The incremental values of RV free wall longitudinal strain and RVESRI to risk scores were determined. Mean age was 49±14 years, 78% were female, 33% had connective tissue disease, 52% were in New York Heart Association class ≥III, and mean pulmonary vascular resistance was 11.2±6.4 WU. RVESRI and right atrial area were strongly connected to the other right heart metrics. Three zones of adaptation (adapted, maladapted, and severely maladapted) were identified based on the RVESRI to RV systolic pressure relationship. During a mean follow-up of 3.9±2.4 years, the primary end point of death, transplant, or admission for heart failure was reached in 88 patients. RVESRI was incremental to risk prediction scores in pulmonary arterial hypertension, including the Registry to Evaluate Early and Long-Term PAH Disease Management score, the Pulmonary Hypertension Connection equation, and the Mayo Clinic model. Using multivariable analysis, New York Heart Association class III/IV, RVESRI, and log NT-proBNP (N-Terminal Pro-B-Type Natriuretic Peptide) were retained (χ2 = 62.2). Among right heart metrics, RVESRI demonstrated the best test-retest characteristics. RVESRI is a simple reproducible prognostic marker in patients with pulmonary arterial hypertension. © 2017 American Heart Association, Inc.
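The index itself is a simple ratio of two echocardiographic dimensions, which a minimal helper makes explicit; the example measurements below are illustrative, not patient data, and the units are assumed.

```python
# Hedged sketch of the RV end-systolic remodeling index (RVESRI) as
# defined in the abstract: lateral wall length divided by septal height,
# both measured at end-systole. Example values are illustrative only.

def rvesri(lateral_length_mm: float, septal_height_mm: float) -> float:
    """RVESRI = RV lateral wall length / septal height (dimensionless)."""
    if septal_height_mm <= 0:
        raise ValueError("septal height must be positive")
    return lateral_length_mm / septal_height_mm

# Illustrative measurement: lateral wall 85 mm, septal height 50 mm
index = rvesri(85.0, 50.0)
```

Because it is internally scaled (one dimension divided by another from the same heart), the index needs no indexing to body size, which is part of its appeal as a bedside metric.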
Strong ground motion prediction using virtual earthquakes.
Denolle, M A; Dunham, E M; Prieto, G A; Beroza, G C
2014-01-24
Sedimentary basins increase the damaging effects of earthquakes by trapping and amplifying seismic waves. Simulations of seismic wave propagation in sedimentary basins capture this effect; however, there exists no method to validate these results for earthquakes that have not yet occurred. We present a new approach for ground motion prediction that uses the ambient seismic field. We apply our method to a suite of magnitude 7 scenario earthquakes on the southern San Andreas fault and compare our ground motion predictions with simulations. Both methods find strong amplification and coupling of source and structure effects, but they predict substantially different shaking patterns across the Los Angeles Basin. The virtual earthquake approach provides a new approach for predicting long-period strong ground motion.
Institute of Scientific and Technical Information of China (English)
林爱兰
2002-01-01
Reanalysis data from NCEP/NCAR are used to systematically study preceding signals of monthly precipitation anomalies in the early rainy season of Guangdong province, from the viewpoints of the 500-hPa geopotential height field, the outgoing longwave radiation (OLR) field, sea surface temperature (SST), and fourteen general circulation indexes depicting atmospheric activity at high, middle and low latitudes. Drawing on these multiple sources of information, a number of conceptual models are formulated that are useful for predicting the magnitude of monthly precipitation (drought, flood and normal conditions).
Is It Possible to Predict Strong Earthquakes?
Polyakov, Y. S.; Ryabinin, G. V.; Solovyeva, A. B.; Timashev, S. F.
2015-07-01
The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using the groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on 28 February 2013) recorded at two different sites in the southeastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to the local reorganizations of the underlying geophysical system that are believed to precede strong earthquakes. The rodent brain activity signals are selected as potential "immediate" (up to 2 weeks) deterministic precursors because of the recent scientific reports confirming that rodents sense imminent earthquakes and the population-genetic model of Kirschvink (Bull Seismol Soc Am 90, 312-323, 2000) showing how a reliable genetic seismic escape response system may have developed over the period of several hundred million years in certain animals. The use of brain activity signals, such as electroencephalograms, in contrast to conventional abnormal animal behavior observations, enables one to apply the standard "input-sensor-response" approach to determine what input signals trigger specific seismic escape brain activity responses.
Johnson, Traci L
2016-01-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but never fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with $>15$ image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops substantially for models with $>10$ image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved por...
Is It Possible to Predict Strong Earthquakes?
Polyakov, Yuriy S; Solovyeva, Anna B; Timashev, Serge F
2015-01-01
The possibility of earthquake prediction is one of the key open questions in modern geophysics. We propose an approach based on the analysis of common short-term candidate precursors (2 weeks to 3 months prior to a strong earthquake) with the subsequent processing of brain activity signals generated in specific types of rats (kept in laboratory settings) that reportedly sense an impending earthquake a few days prior to the event. We illustrate the identification of short-term precursors using the groundwater sodium-ion concentration data in the time frame from 2010 to 2014 (a major earthquake occurred on February 28, 2013), recorded at two different sites in the south-eastern part of the Kamchatka Peninsula, Russia. The candidate precursors are observed as synchronized peaks in the nonstationarity factors, introduced within the flicker-noise spectroscopy framework for signal processing, for the high-frequency component of both time series. These peaks correspond to the local reorganizations of the underlying geoph...
Casalderrey-Solana, Jorge; Milhano, José Guilherme; Pablos, Daniel; Rajagopal, Krishna
2016-01-01
We have previously introduced a hybrid strong/weak coupling model for jet quenching in heavy ion collisions that describes the production and fragmentation of jets at weak coupling, using PYTHIA, and describes the rate at which each parton in the jet shower loses energy as it propagates through the strongly coupled plasma, dE/dx, using an expression computed holographically at strong coupling. The model has a single free parameter that we fit to a single experimental measurement. We then confront our model with experimental data on many other jet observables, focusing here on boson-jet observables, finding that it provides a good description of present jet data. Next, we provide the predictions of our hybrid model for many measurements to come, including those for inclusive jet, dijet, photon-jet and Z-jet observables in heavy ion collisions with energy $\\sqrt{s}=5.02$ ATeV coming soon at the LHC. As the statistical uncertainties on near-future measurements of photon-jet observables are expected to be much sm...
Sato, T.; Dan, K.; Irikura, K.; Furumura, M.
2001-12-01
Based on existing ideas on characterizing complex fault rupture processes, we constructed four different characterized fault models for predicting strong motions from the most likely scenario earthquake along the active fault zone of the Itoigawa-Shizuoka Tectonic Line in central Japan. The Headquarters for Earthquake Research Promotion of the Japanese government (2001) estimated that the earthquake (magnitude 8 ± 0.5) has a total fault length of 112 km with four segments. We assumed that the characterized fault model consisted of two regions: asperity and background (Somerville et al., 1999; Irikura, 2000; Dan et al., 2000). The main differences among the four fault models were 1) how to determine the seismic moment Mo from the fault rupture area S, 2) the number of asperities N, 3) how to determine the stress parameter σ, and 4) fmax. We calculated broadband strong motions at three stations near the fault by a hybrid method combining semi-empirical and theoretical approaches. A comparison between the results from the hybrid method and those from empirical attenuation relations showed that the hybrid method using the characterized fault model could evaluate near-fault rupture directivity effects more reliably than the empirical attenuation relations. We also discussed the characterized fault models and the strong motion characteristics. The Mo extrapolated from the empirical Mo-S relation of Somerville et al. (1999) was half of that determined from the mean value of the Wells and Coppersmith (1994) data. The latter Mo was consistent with that for the 1891 Nobi, Japan, earthquake, whose fault length was almost the same as the length of the target earthquake. In addition, the fault model using the latter Mo produced a slip of about 6 m on the largest asperity, which was consistent with the displacement of 6 m to 9 m per event obtained from a trench survey. High-frequency strong motions were greatly influenced by the σ for the asperities (188 bars, 246 bars, 108 bars, and 134 bars ...
Cosmogenic photons strongly constrain UHECR source models
van Vliet, Arjen
2016-01-01
With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.
Cosmogenic photons strongly constrain UHECR source models
van Vliet, Arjen
2017-03-01
With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.
Cosmogenic photons strongly constrain UHECR source models
Directory of Open Access Journals (Sweden)
van Vliet Arjen
2017-01-01
Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.
Energy Technology Data Exchange (ETDEWEB)
Berge-Thierry, C
2007-05-15
The defence of the 'Habilitation a Diriger des Recherches' is a synthesis of the research work performed since the end of my Ph.D. thesis in 1997. This synthesis covers the two years as a post-doctoral researcher at the Bureau d'Evaluation des Risques Sismiques at the Institut de Protection (BERSSIN), and the seven consecutive years as seismologist and head of the BERSSIN team. This work and the research project are presented in the framework of the seismic risk topic, and particularly with respect to seismic hazard assessment. Seismic risk combines seismic hazard and vulnerability. Vulnerability combines the strength of building structures and the human and economic consequences in case of structural failure. Seismic hazard is usually defined in terms of plausible seismic motion (soil acceleration or velocity) at a site for a given time period. Whether for the regulatory context or for specific structures (conventional structures or high-risk installations), seismic hazard assessment needs: to identify and locate the seismic sources (zones or faults); to characterize their activity; and to evaluate the seismic motion to which the structure has to resist (including site effects). I specialized in the field of numerical strong-motion prediction using high-frequency seismic source modelling, and joining IRSN allowed me to work rapidly on the different tasks of seismic hazard assessment. Thanks to expert assessment practice and participation in the evolution of regulations (nuclear power plants, conventional and chemical structures), I have also been able to work on empirical strong-motion prediction, including site effects. Specific questions related to the interface between seismologists and structural engineers are also presented, especially the quantification of uncertainties. This is part of the research work initiated to improve the selection of input ground motion for designing or verifying the stability of structures. (author)
Model reduction of strong-weak neurons
Steven James Cox; Bosen eDu; Danny eSorensen
2014-01-01
We consider neurons with large dendritic trees that are weakly excitable in the sense that back propagating action potentials are severely attenuated as they travel from the small, strongly excitable, spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio–tem...
Modeling and synthesis of strong ground motion
Indian Academy of Sciences (India)
S T G Raghu Kanth
2008-11-01
Success of earthquake resistant design practices critically depends on how accurately the future ground motion can be determined at a desired site. But very limited recorded data are available about ground motion in India for engineers to rely upon. To identify the needs of engineers, under such circumstances, in estimating ground motion time histories, this article presents a detailed review of literature on modeling and synthesis of strong ground motion data. In particular, modeling of seismic sources and earth medium, analytical and empirical Green’s functions approaches for ground motion simulation, stochastic models for strong motion and ground motion relations are covered. These models can be used to generate realistic near-field and far-field ground motion in regions lacking strong motion data. Numerical examples are shown for illustration by taking Kutch earthquake-2001 as a case study.
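A common ingredient of the stochastic strong-motion models reviewed above is an omega-squared point-source acceleration spectrum with a high-frequency fmax cutoff. The sketch below assumes Brune-style corner-frequency scaling with illustrative parameter values (stress drop, shear-wave speed, fmax), not values calibrated for any specific region.

```python
import numpy as np

# Hedged sketch of the stochastic-model source spectrum: omega-squared
# shape below the corner frequency fc, flat acceleration above it, with
# a simple fmax filter truncating the highest frequencies. All constants
# are illustrative assumptions.

def corner_frequency(m0_dyne_cm, stress_drop_bars=100.0, beta_km_s=3.5):
    """Brune-style corner frequency in Hz for seismic moment in dyne-cm."""
    return 4.9e6 * beta_km_s * (stress_drop_bars / m0_dyne_cm) ** (1.0 / 3.0)

def source_acc_spectrum(f, m0_dyne_cm, fmax=10.0):
    """Far-field acceleration source spectrum (unnormalized amplitude)."""
    fc = corner_frequency(m0_dyne_cm)
    omega2 = (2 * np.pi * f) ** 2 * m0_dyne_cm / (1 + (f / fc) ** 2)
    site = 1.0 / np.sqrt(1 + (f / fmax) ** 8)   # simple fmax cutoff
    return omega2 * site

# Spectrum for a moderate (~Mw 6) event
f = np.logspace(-1, 1.5, 200)
spec = source_acc_spectrum(f, m0_dyne_cm=1e25)
```

In a full stochastic simulation this spectrum would shape band-limited Gaussian noise, which is then windowed in time to produce a synthetic accelerogram for regions lacking recorded strong motion.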
Simple supersymmetric strongly coupled preon model
Fajfer, S.; Tadić, D.
1988-08-01
This supersymmetric-SU(5) composite model is a natural generalization of the usual strong-coupling models. Preon superfields are in representations 5* and 10. The product representations 5*×10, 5×10, 5×5, and 5*×5 contain only those strongly hypercolor bound states which are needed in the standard electroweak theory. There are no superfluous quarklike states. The neutrino is massless. Only one strongly hypercolor bound singlet (10×10*) can exist as a free particle. At higher energies one should expect to see a plethora of new particles. Grand unification happens at the scale M ~ 10^14 GeV. Cabibbo mixing can be incorporated by using a transposed Kobayashi-Maskawa mixing matrix.
Model reduction of strong-weak neurons.
Du, Bosen; Sorensen, Danny; Cox, Steven J
2014-01-01
We consider neurons with large dendritic trees that are weakly excitable in the sense that back propagating action potentials are severely attenuated as they travel from the small, strongly excitable, spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio-temporal nature of its inputs (in the sense we reproduce the cell's precise mapping of inputs to outputs). We combine the best of these two strategies via a predictor-corrector decomposition scheme and achieve a drastically reduced highly accurate model of a caricature of the neuron responsible for collision detection in the locust.
Electroweak and Strong Interactions Phenomenology, Concepts, Models
Scheck, Florian
2012-01-01
Electroweak and Strong Interactions: Phenomenology, Concepts, Models begins with relativistic quantum mechanics and some quantum field theory which lay the foundation for the rest of the text. The phenomenology and the physics of the fundamental interactions are emphasized through a detailed discussion of the empirical fundamentals of unified theories of strong, electromagnetic, and weak interactions. The principles of local gauge theories are described both in a heuristic and a geometric framework. The minimal standard model of the fundamental interactions is developed in detail and characteristic applications are worked out. Possible signals of physics beyond that model, notably in the physics of neutrinos, are also discussed. Among the applications, scattering on nucleons and on nuclei provide salient examples. Numerous exercises with solutions make the text suitable for advanced courses or individual study. This completely updated revised new edition contains an enlarged chapter on quantum chromodynamics an...
Exactly solvable models of strongly correlated electrons
Korepin, Vladimir E
1994-01-01
Systems of strongly correlated electrons are at the heart of recent developments in condensed matter theory. They have applications to phenomena like high-Tc superconductivity and the fractional quantum Hall effect. Analytical solutions to such models, though mainly limited to one spatial dimension, provide a complete and unambiguous picture of the dynamics involved. This volume is devoted to such solutions obtained using the Bethe Ansatz, and concentrates on the most important of such models, the Hubbard model. The reprints are complemented by reviews at the start of each chapter and an exte
Quantitative prediction of strong motion for a potential earthquake fault
Directory of Open Access Journals (Sweden)
Shamita Das
2010-02-01
This paper describes a new method for calculating strong motion records for a given seismic region on the basis of the laws of physics, using information on the tectonics and physical properties of the earthquake fault. Our method is based on an earthquake model, called a «barrier model», which is characterized by five source parameters: fault length, width, maximum slip, rupture velocity, and barrier interval. The first three parameters may be constrained from plate tectonics, and the fourth parameter is roughly a constant. The most important parameter controlling the earthquake strong motion is the last parameter, the «barrier interval». There are three methods to estimate the barrier interval for a given seismic region: (1) surface measurement of slip across fault breaks, (2) model fitting with observed near- and far-field seismograms, and (3) scaling law data for small earthquakes in the region. The barrier intervals were estimated for a dozen earthquakes and four seismic regions by the above three methods. Our preliminary results for California suggest that the barrier interval may be determined if the maximum slip is given. The relation between the barrier interval and maximum slip varies from one seismic region to another. For example, the interval appears to be unusually long for Kilauea, Hawaii, which may explain why only scattered evidence of strong ground shaking was observed in the epicentral area of the Island of Hawaii earthquake of November 29, 1975. The stress drop associated with an individual fault segment estimated from the barrier interval and maximum slip lies between 100 and 1000 bars. These values are about one order of magnitude greater than those estimated earlier by the use of crack models without barriers. Thus, the barrier model can resolve, at least partially, the well known discrepancy between the stress drops measured in the laboratory and those estimated for earthquakes.
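The stress-drop range quoted above can be reproduced with a standard circular-crack formula. The sketch below is a back-of-the-envelope illustration, assuming (in the spirit of the barrier model) that the barrier interval sets the dimension of an individual fault segment; the rigidity, slip, and interval values are placeholder inputs, not the paper's data.

```python
import math

def circular_crack_stress_drop(max_slip_m, barrier_interval_m, rigidity_pa=3.0e10):
    """Static stress drop of a circular crack, treating the barrier
    interval as the crack diameter. Illustrative values only."""
    radius = barrier_interval_m / 2.0
    return (7.0 * math.pi / 16.0) * rigidity_pa * max_slip_m / radius

# Hypothetical segment: 1 m of maximum slip over a 2 km barrier interval
dsigma_pa = circular_crack_stress_drop(1.0, 2000.0)
dsigma_bar = dsigma_pa / 1.0e5   # 1 bar = 1e5 Pa
```

With these illustrative inputs the estimate lands inside the 100-1000 bar range reported in the abstract, an order of magnitude above typical crack-model estimates without barriers.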
Heat treatment modelling using strongly continuous semigroups.
Malek, Alaeddin; Abbasi, Ghasem
2015-07-01
In this paper, mathematical simulation of bioheat transfer phenomenon within the living tissue is studied using the thermal wave model. Three different sources that have therapeutic applications in laser surgery, cornea laser heating and cancer hyperthermia are used. Spatial and transient heating source, on the skin surface and inside biological body, are considered by using step heating, sinusoidal and constant heating. Mathematical simulations describe a non-Fourier process. Exact solution for the corresponding non-Fourier bioheat transfer model that has time lag in its heat flux is proposed using strongly continuous semigroup theory in conjunction with variational methods. The abstract differential equation, infinitesimal generator and corresponding strongly continuous semigroup are proposed. It is proved that related semigroup is a contraction semigroup and is exponentially stable. Mathematical simulations are done for skin burning and thermal therapy in 10 different models and the related solutions are depicted. Unlike numerical solutions, which suffer from uncertain physical results, proposed analytical solutions do not have unwanted numerical oscillations. Copyright © 2015 Elsevier Ltd. All rights reserved.
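For readers unfamiliar with the semigroup formulation, the setting described above can be summarized as follows; this is the standard abstract sketch, not the paper's exact operator:

```latex
% Abstract Cauchy problem generated by the (unspecified) bioheat operator A:
\frac{du}{dt} = Au(t), \qquad u(0) = u_0 .
% Solution via the strongly continuous semigroup (T(t))_{t \ge 0} generated by A:
u(t) = T(t)u_0 .
% The properties proved in the paper, contraction and exponential stability:
\|T(t)\| \le 1, \qquad \|T(t)\| \le M e^{-\omega t} \quad (M \ge 1,\ \omega > 0).
```

The contraction property guarantees that the temperature field never grows in norm, and exponential stability gives the decay of transients that the analytical solutions exhibit.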
Strong solutions of semilinear matched microstructure models
Escher, Joachim
2011-01-01
The subject of this article is a matched microstructure model for Newtonian fluid flows in fractured porous media. This is a homogenized model which takes the form of two coupled parabolic differential equations with boundary conditions in a given (two-scale) domain in Euclidean space. The main objective is to establish the local well-posedness in the strong sense of the flow. Two main settings are investigated: semi-linear systems with linear boundary conditions and semi-linear systems with nonlinear boundary conditions. With the help of analytic semigroups we establish local well-posedness and investigate the long-time behaviour of the solutions in the first case: we establish global existence and show that solutions converge to zero at an exponential rate.
Model Reduction of Strong-Weak Neurons
Directory of Open Access Journals (Sweden)
Steven James Cox
2014-12-01
We consider neurons with large dendritic trees that are weakly excitable in the sense that back-propagating action potentials are severely attenuated as they travel from the small, strongly excitable spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio-temporal nature of their inputs (in the sense that we reproduce the cell's precise mapping of inputs to outputs). We combine the best of these two strategies via a predictor-corrector decomposition scheme and achieve a drastically reduced, highly accurate model of a caricature of the neuron responsible for collision detection in the locust.
Convex Modeling of Interactions with Strong Heredity
Haris, Asad; Witten, Daniela; Simon, Noah
2015-01-01
We consider the task of fitting a regression model involving interactions among a potentially large set of covariates, in which we wish to enforce strong heredity. We propose FAMILY, a very general framework for this task. Our proposal is a generalization of several existing methods, such as VANISH [Radchenko and James, 2010], hierNet [Bien et al., 2013], the all-pairs lasso, and the lasso using only main effects. It can be formulated as the solution to a convex optimization problem, which we solve using an efficient alternating directions method of multipliers (ADMM) algorithm. This algorithm has guaranteed convergence to the global optimum, can be easily specialized to any convex penalty function of interest, and allows for a straightforward extension to the setting of generalized linear models. We derive an unbiased estimator of the degrees of freedom of FAMILY, and explore its performance in a simulation study and on an HIV sequence data set. PMID:28316461
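As a concrete illustration of the strong heredity constraint itself (not of the FAMILY estimator or its ADMM solver), the sketch below zeroes out any interaction coefficient unless both of its main effects are active; all variable names and coefficients are hypothetical.

```python
def enforce_strong_heredity(main, inter):
    """Strong heredity: an interaction (j, k) may be nonzero only if BOTH
    main effects j and k are nonzero. `main` maps name -> coefficient;
    `inter` maps (name, name) -> coefficient. Post-hoc filter for illustration."""
    active = {j for j, beta in main.items() if beta != 0.0}
    return {jk: (beta if jk[0] in active and jk[1] in active else 0.0)
            for jk, beta in inter.items()}

main = {"x1": 0.8, "x2": 0.0, "x3": -1.2}
inter = {("x1", "x2"): 0.5, ("x1", "x3"): 0.3}
pruned = enforce_strong_heredity(main, inter)
```

Here the x1:x2 interaction is removed because x2's main effect is zero, while x1:x3 survives; FAMILY builds this requirement into the convex penalty rather than applying it after the fit.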
Linear Sigma Models With Strongly Coupled Phases -- One Parameter Models
Hori, Kentaro
2013-01-01
We systematically construct a class of two-dimensional $(2,2)$ supersymmetric gauged linear sigma models with phases in which a continuous subgroup of the gauge group is totally unbroken. We study some of their properties by employing a recently developed technique. The focus of the present work is on models with one Kähler parameter. The models include those corresponding to Calabi-Yau threefolds, extending three examples found earlier by a few more, as well as Calabi-Yau manifolds of other dimensions and non-Calabi-Yau manifolds. The construction leads to predictions of equivalences of D-brane categories, systematically extending earlier examples. There is another type of surprise. Two distinct superconformal field theories corresponding to Calabi-Yau threefolds with different Hodge numbers, $h^{2,1}=23$ versus $h^{2,1}=59$, have exactly the same quantum Kähler moduli space. The strong-weak duality plays a crucial rôle in confirming this, and also is useful in the actual computation of the metric on t...
Jacques-Coper, Martín; Falvey, Mark; Muñoz, Ricardo C.
2015-07-01
Crucial aspects of a strong thermally-driven wind system in the Atacama Desert in northern Chile during the extended austral winter season (May-September) are studied using 2 years of measurement data from the Sierra Gorda 80-m meteorological mast (SGO, 22° 56' 24″ S; 69° 7' 58″ W, 2,069 m above sea level (a.s.l.)). Daily cycles of atmospheric variables reveal a diurnal (nocturnal) regime, with northwesterly (easterly) flow and maximum mean wind speed of 8 m/s (13 m/s) on average. These distinct regimes are caused by pronounced topographic conditions and the diurnal cycle of the local radiative balance. Wind speed extreme events of each regime are negatively correlated at the inter-daily time scale: High diurnal wind speed values are usually observed together with low nocturnal wind speed values and vice versa. The associated synoptic conditions indicate that upper-level troughs at the coastline of southwestern South America reinforce the diurnal northwesterly wind, whereas mean undisturbed upper-level conditions favor the development of the nocturnal easterly flow. We analyze the skill of the numerical weather model Global Forecast System (GFS) in predicting wind speed at SGO. Although forecasted wind speeds at 800 hPa do show the diurnal and nocturnal phases, observations at 80 m are strongly underestimated by the model. This causes a pronounced daily cycle of root-mean-squared error (RMSE) and bias in the forecasts. After applying a simple Model Output Statistics (MOS) post-processing, we achieve a good representation of the wind speed intra-daily and inter-daily variability, a first step toward reducing the uncertainties related to potential wind energy projects in the region.
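A minimal version of the MOS step described above is a linear regression of observations on the raw forecasts, applied as a correction. The sketch below uses synthetic data standing in for the 80-m mast observations and the biased GFS 800 hPa forecasts; the coefficients and noise levels are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "observed" 80-m wind with a pronounced daily cycle (m/s)
obs = 8.0 + 4.0 * np.sin(np.linspace(0.0, 20.0 * np.pi, 500))
# Synthetic model forecast: underestimates amplitude and carries a bias
fcst = 0.4 * obs + 1.0 + rng.normal(0.0, 0.5, obs.size)

# MOS: regress observations on the raw forecast, then apply the fit
slope, intercept = np.polyfit(fcst, obs, 1)
corrected = slope * fcst + intercept

def rmse(x):
    return float(np.sqrt(np.mean((x - obs) ** 2)))
```

Because the raw forecast systematically damps the daily cycle, the linear MOS correction removes most of the bias and amplitude error, which is exactly the RMSE/bias reduction the authors report.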
Institute of Scientific and Technical Information of China (English)
单志龙; 刘兰辉; 张迎胜; 黄广雄
2014-01-01
Localization is a key technology in wireless sensor networks (WSNs), and localization of mobile nodes is one of its main difficulties. To address this problem, a strongly self-adaptive localization algorithm for mobile nodes based on a grey prediction model (GPLA) is proposed. Building on the Monte Carlo localization framework, the algorithm uses a grey prediction model to predict node motion and narrow the sampling region, filters particles using estimated distances to improve the validity of the sampled particles, and generates new particles through a restricted linear crossover operation, which accelerates sample generation, reduces the number of sampling iterations, and improves the efficiency of the algorithm. In simulations, the algorithm shows good performance and strong self-adaptivity under varying communication radius, anchor node density, sample size, and other conditions.
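The grey prediction component can be illustrated with the standard one-variable GM(1,1) model, which GPLA uses to anticipate node motion. The sketch below implements the generic predictor under the usual accumulated-generating formulation; it is not the authors' implementation, and the input series is hypothetical.

```python
import numpy as np

def gm11_predict(x0, steps=1):
    """Grey model GM(1,1): fit an exponential trend on the cumulated series
    and forecast `steps` values ahead of the original series."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # cumulated forecast
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # restore by differencing
    return x0_hat[-steps:]

# Hypothetical sequence of node displacements per sampling interval
nxt = gm11_predict([2.0, 2.2, 2.4, 2.7, 3.0], steps=1)
```

In GPLA the predicted motion bounds the Monte Carlo sampling region, so fewer particles are wasted on implausible positions.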
Levy, R.; Mcginness, H.
1976-01-01
Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
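A common minimal stand-in for such a stochastic simulation model is a Gaussian AR(1) process matched to a target mean, standard deviation, and hourly lag-1 autocorrelation, then clipped at zero. The parameter values below are illustrative placeholders, not the Goldstone wind statistics.

```python
import numpy as np

def simulate_wind(n, mean=6.5, std=2.5, rho=0.8, seed=1):
    """Hourly wind-speed samples reproducing a target mean, standard
    deviation and lag-1 autocorrelation via a clipped Gaussian AR(1) process."""
    rng = np.random.default_rng(seed)
    g = np.empty(n)
    g[0] = rng.normal()
    for t in range(1, n):
        # AR(1) update keeps the marginal standard normal while correlating samples
        g[t] = rho * g[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    return np.clip(mean + std * g, 0.0, None)

speeds = simulate_wind(5000)
```

The uncorrelated interim model in the abstract corresponds to `rho = 0`; the stochastic model adds the hour-to-hour correlation that matters for sizing energy storage.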
The hadronic standard model for strong and electroweak interactions
Energy Technology Data Exchange (ETDEWEB)
Raczka, R. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1993-12-31
We propose a new model for strong and electro-weak interactions. First, we review various QCD predictions for hadron-hadron and lepton-hadron processes. We indicate that the present formulation of strong interactions in the framework of Quantum Chromodynamics encounters serious conceptual and numerical difficulties in a reliable description of hadron-hadron and lepton-hadron interactions. Next we propose to replace the strong sector of the Standard Model, based on unobserved quarks and gluons, by a strong sector based on the set of observed baryons and mesons determined by the spontaneously broken SU(6) gauge field theory model. We analyse various properties of this model such as asymptotic freedom, Reggeization of gauge bosons and fundamental fermions, baryon-baryon and meson-baryon high energy scattering, generation of Λ-polarization in inclusive processes, and others. Finally we extend this model by an electro-weak sector. We demonstrate a remarkable lepton and hadron anomaly cancellation and we analyse a series of important lepton-hadron and hadron-hadron processes such as e+ + e- → hadrons, e+ + e- → W+ + W-, e+ + e- → p + anti-p, e + p → e + p and p + anti-p → p + anti-p. We obtained a series of interesting new predictions in this model, especially for processes with polarized particles. We estimated the value of the strong coupling constant α(M_Z) and we predicted the top baryon mass M_Λt ≈ 240 GeV. Since in our model the proton, neutron, Λ-particles, vector mesons like ρ, ω, φ, J/ψ etc. and leptons are elementary, most of the experimentally analysed lepton-hadron and hadron-hadron processes in the LEP1, LEP2, LEAR, HERA, HERMES, LHC and SSC experiments may be relatively easily analysed in our model. (author). 252 refs, 65 figs, 1 tab.
Melanoma risk prediction models
Directory of Open Access Journals (Sweden)
Nikolić Jelena
2014-01-01
only present in melanoma patients and thus were strongly associated with melanoma. The percentage of correctly classified subjects in the LR model was 74.9%, sensitivity 71%, specificity 78.7% and AUC 0.805. For the ADT, the percentage of correctly classified instances was 71.9%, sensitivity 71.9%, specificity 79.4% and AUC 0.808. Conclusion. Application of different models for risk assessment and prediction of melanoma should provide an efficient and standardized tool in the hands of clinicians. The presented models offer effective discrimination of individuals at high risk, transparent decision making and real-time implementation suitable for clinical practice. Continued growth of the melanoma database would allow further adjustments and enhancements in model accuracy, as well as the possibility of successfully applying more advanced data mining algorithms.
A strong diffusive ion mode in dense ionized matter predicted by Langevin dynamics.
Mabey, P; Richardson, S; White, T G; Fletcher, L B; Glenzer, S H; Hartley, N J; Vorberger, J; Gericke, D O; Gregori, G
2017-01-30
The state and evolution of planets, brown dwarfs and neutron star crusts is determined by the properties of dense and compressed matter. Due to the inherent difficulties in modelling strongly coupled plasmas, however, current predictions of transport coefficients differ by orders of magnitude. Collective modes are a prominent feature, whose spectra may serve as an important tool to validate theoretical predictions for dense matter. With recent advances in free electron laser technology, X-rays with small enough bandwidth have become available, allowing the investigation of the low-frequency ion modes in dense matter. Here, we present numerical predictions for these ion modes and demonstrate significant changes to their strength and dispersion if dissipative processes are included by Langevin dynamics. Notably, a strong diffusive mode around zero frequency arises, which is not present, or much weaker, in standard simulations. Our results have profound consequences in the interpretation of transport coefficients in dense plasmas.
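The Langevin approach described above amounts to adding drag and stochastic kicks to the equations of motion. The toy below integrates overdamped Langevin dynamics for a single particle in a harmonic well with the Euler-Maruyama scheme; it illustrates only the dissipative dynamics, and carries none of the paper's dense-plasma physics. All parameter values are in arbitrary units.

```python
import numpy as np

def langevin_trajectory(n_steps, dt=1e-3, gamma=1.0, kT=1.0, k_spring=1.0, seed=2):
    """Euler-Maruyama integration of overdamped Langevin dynamics:
    dx = -(k/gamma) x dt + sqrt(2 kT / gamma) dW, one particle in a harmonic well."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    kick = np.sqrt(2.0 * kT * dt / gamma)   # amplitude of the stochastic force
    for t in range(1, n_steps):
        x[t] = x[t - 1] - (k_spring / gamma) * x[t - 1] * dt + kick * rng.normal()
    return x

traj = langevin_trajectory(200_000)
```

The fluctuation-dissipation balance fixes the stationary variance at kT/k, which the long trajectory reproduces; in the paper it is precisely this coupling of dissipation and fluctuations that modifies the ion-mode spectrum.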
Strong screening in the plum pudding model
Chepelianskii, A. D.; Closa, F.; Raphaël, E.; Trizac, E.
2011-06-01
We study a generalized Thomson problem that appears in several condensed matter settings: identical point-charge particles can penetrate inside a homogeneously charged sphere, with global electro-neutrality. The emphasis is on scaling laws at large Coulombic couplings, and deviations from mean-field behaviour, by a combination of Monte Carlo simulations and an analytical treatment within a quasi-localized charge approximation, which provides reliable predictions. We also uncover a local overcharging phenomenon driven by ionic correlations alone.
Cestari, Andrea
2013-01-01
Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are gaining progressive popularity not only for academic and scientific purposes but also into the clinical practice with the introduction of several nomograms dealing with the main fields of onco-urology.
Strongly interacting matter from holographic QCD model
Chen, Yidian; Huang, Mei
2016-01-01
We introduce the 5-dimension dynamical holographic QCD model, which is constructed in the graviton-dilaton-scalar framework with the dilaton background field $\\Phi$ and the scalar field $X$ responsible for the gluodynamics and chiral dynamics, respectively. We review our results on the hadron spectra including the glueball and light meson spectra, QCD phase transitions and transport properties in the framework of the dynamical holographic QCD model.
MODEL PREDICTIVE CONTROL FUNDAMENTALS
African Journals Online (AJOL)
2012-07-02
Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.
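The open-loop, receding-horizon idea in the snippet above can be sketched in a few lines (shown here in Python rather than the article's Matlab): at each step an N-step quadratic problem is solved for the whole input sequence, but only the first input is applied. The double-integrator system, weights, and horizon below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Discrete-time double integrator x+ = A x + B u, sampled at 0.1 s
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R, N = np.eye(2), 0.1 * np.eye(1), 20   # stage weights and horizon

def mpc_input(x0):
    """Solve the unconstrained N-step problem in batch least-squares form
    and return only the first input (the receding-horizon principle)."""
    # Prediction matrices: stacked future states X = F x0 + G U
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    G = np.zeros((2 * N, N))
    for i in range(N):
        for j in range(i + 1):
            G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    H = G.T @ Qbar @ G + Rbar          # Hessian of the quadratic cost
    f = G.T @ Qbar @ F @ x0
    U = np.linalg.solve(H, -f)          # optimal open-loop input sequence
    return float(U[0])                  # apply only the first input

x = np.array([1.0, 0.0])
for _ in range(100):                    # closed loop: re-solve at every step
    x = A @ x + B.flatten() * mpc_input(x)
```

With constraints, the same structure holds but `np.linalg.solve` is replaced by a quadratic-programming solver, which is where the "compute very rapidly" remark in the snippet applies.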
Prediction of near-field strong ground motions for scenario earthquakes on active fault
Institute of Scientific and Technical Information of China (English)
Wang Haiyun; Xie Lili; Tao Xiaxin; Li Jie
2006-01-01
A method to predict near-field strong ground motions for scenario earthquakes on active faults is proposed. First, macro-source parameters characterizing the entire source area, i.e., global source parameters, including fault length, fault width, rupture area, average slip on the fault plane, etc., are estimated by seismogeology survey, seismicity and seismic scaling laws. Second, slip distributions characterizing heterogeneity or roughness on the fault plane, i.e., local source parameters, are reproduced/evaluated by the hybrid slip model. Finally, the finite fault source model, developed from both the global and local source parameters, is combined with the stochastic synthesis technique of ground motion using the dynamic corner frequency based on seismology. The proposed method is applied to simulate the acceleration time histories on three base-rock stations during the 1994 Northridge earthquake. Comparisons between the predicted and recorded acceleration time histories show that the method is feasible and practicable.
Does a Strong El Niño Imply a Higher Predictability of Extreme Drought?
Wang, Shanshan; Yuan, Xing; Li, Yaohui
2017-01-01
The devastating North China drought in the summer of 2015 was roughly captured by a dynamical seasonal climate forecast model with a good prediction of the 2015/16 big El Niño. This raises a question of whether strong El Niños imply higher predictability of extreme droughts. Here we show that a strong El Niño does not necessarily result in an extreme drought, but it depends on whether the El Niño evolves synergistically with Eurasian spring snow cover reduction to trigger a positive summer Eurasian teleconnection (EU) pattern that favors anomalous northerly and air sinking over North China. The dynamical forecast model that only well represents the El Niño underpredicts the drought severity, while a dynamical-statistical forecasting approach that combines both the low- and high-latitudes precursors is more skillful at long lead. In a warming future, the vanishing cryosphere should be better understood to improve predictability of extreme droughts.
Nominal model predictive control
Grüne, Lars
2013-01-01
5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...... the possibilities w.r.t. different numerical weather predictions actually available to the project....
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO_{2} and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Central Charge of the Parallelogram Lattice Strong Coupling Schwinger Model
Yee, K
1993-01-01
We put forth a Fierzed hopping expansion for strong coupling Wilson fermions. As an application, we show that the strong coupling Schwinger model on parallelogram lattices with nonbacktracking Wilson fermions spans, as a function of the lattice skewness angle, the $\Delta = -1$ critical line of $6$-vertex models. This Fierzed formulation also applies to backtracking Wilson fermions, which as we describe apparently correspond to richer systems. However, we have not been able to identify them with exactly solved models.
Algebraic Stress Model with RNG ε-Equation for Simulating Confined Strongly Swirling Turbulent Flows
Institute of Scientific and Technical Information of China (English)
Xu Jiangrong; Yao Qiang; Cao Xingyu; Cen Kefa
2001-01-01
Simulation of strongly swirling flows is still under development. In this paper, an ε equation based on Renormalization Group (RNG) theory is incorporated into an algebraic stress model. The standard k-ε model, the algebraic stress model of Jiang Zhang [5] and the present model (RNG-ASM) are applied simultaneously to simulate confined strongly swirling flow. The results of the RNG-ASM model are compared with those of the other two models; it is shown that the predictions of this model display reasonable agreement with experimental data and yield a greater improvement than Zhang's ASM turbulence model [5].
Robinson, P. A.; Newman, D. L.
1990-01-01
A simple two-component model of strong turbulence that makes clear predictions for the scalings, spectra, and statistics of Langmuir waves is developed. Scalings of quantities such as energy density, power input, dissipation power, wave collapse, and number density of collapsing objects are investigated in detail and found to agree well with model predictions. The nucleation model of wave-packet formation is strongly supported by the results. Nucleation proceeds with energy flowing from background to localized states even in the absence of a driver. Modulational instabilities play little or no role in maintaining the turbulent state when significant density nonuniformities are present.
Triad pattern algorithm for predicting strong promoter candidates in bacterial genomes
Directory of Open Access Journals (Sweden)
Sakanyan Vehary
2008-05-01
Background: Bacterial promoters, which increase the efficiency of gene expression, differ from other promoters by several characteristics. This difference, not yet widely exploited in bioinformatics, looks promising for the development of relevant computational tools to search for strong promoters in bacterial genomes. Results: We describe a new triad pattern algorithm that predicts strong promoter candidates in annotated bacterial genomes by matching specific patterns for the group I σ70 factors of Escherichia coli RNA polymerase. It detects promoter-specific motifs by consecutively matching three patterns, consisting of an UP-element, required for interaction with the α subunit, followed by optimally-separated patterns of the -35 and -10 boxes, required for interaction with the σ70 subunit of RNA polymerase. Analysis of 43 bacterial genomes revealed that the frequency of candidate sequences depends on the A+T content of the DNA under examination. The accuracy of the in silico prediction was experimentally validated for the genome of a hyperthermophilic bacterium, Thermotoga maritima, by applying a cell-free expression assay using the predicted strong promoters. In this organism, the strong promoters govern genes for translation, energy metabolism, transport, cell movement, and other as-yet unidentified functions. Conclusion: The triad pattern algorithm developed for predicting strong bacterial promoters is well suited for analyzing bacterial genomes with an A+T content of less than 62%. This computational tool opens new prospects for investigating global gene expression, and individual strong promoters in bacteria of medical and/or economic significance.
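The consecutive three-pattern matching can be caricatured with a single regular expression. The consensus strings, the AT-rich UP-element proxy, and the 15-19 bp spacer range below are illustrative simplifications, not the exact patterns used by the algorithm.

```python
import re

# Simplified sigma-70 promoter search: an AT-rich UP-element-like tract,
# then the -35 box and the -10 box separated by a bounded spacer.
PROMOTER = re.compile(
    r"[AT]{6,}"        # crude UP-element proxy (AT-rich tract)
    r"[ACGT]{0,5}"     # short gap before the -35 box
    r"TTGACA"          # -35 box consensus
    r"[ACGT]{15,19}"   # spacer between the boxes
    r"TATAAT"          # -10 box consensus
)

# Constructed test sequence with all three elements in order
seq = "GGC" + "ATATATAT" + "CC" + "TTGACA" + "ACGT" * 4 + "TATAAT" + "GGA"
hit = PROMOTER.search(seq)
```

The real algorithm scores degenerate matches rather than requiring exact consensus strings, which is why it can rank candidates by predicted strength instead of returning a yes/no answer.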
The strong coupling Kondo lattice model as a Fermi gas
Östlund, S
2007-01-01
The strong coupling half-filled Kondo lattice model is an important example of a strongly interacting dense Fermi system for which conventional Fermi gas analysis has thus far failed. We remedy this by deriving an exact transformation that maps the model to a dilute gas of weakly interacting electron and hole quasiparticles that can then be analyzed by conventional dilute Fermi gas methods. The quasiparticle vacuum is a singlet Mott insulator for which the quasiparticle dynamics are simple. Since the transformation is exact, the electron spectral weight sum rules are obeyed exactly. Subtleties in understanding the behavior of electrons in the singlet Mott insulator can be reduced to a fairly complicated but precise relation between quasiparticles and bare electrons. The theory of free quasiparticles can be interpreted as an exactly solvable model for a singlet Mott insulator, providing an exact model in which to explore the strong coupling regime of a singlet Kondo insulator.
STRONG NORMALIZATION IN TYPE SYSTEMS - A MODEL THEORETICAL APPROACH
TERLOUW, J
1995-01-01
Tait's proof of strong normalization for the simply typed lambda-calculus is interpreted in a general model theoretical framework by means of the specification of a certain theory T and a certain model U of T. The argumentation is partly reduced to formal predicate logic by the application of
Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy
2008-01-01
Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...
A spine-sheath model for strong-line blazars
Sikora, Marek; Rutkowski, Mieszko; Begelman, Mitchell C.
2016-04-01
We have developed a quasi-analytical model for the production of radiation in strong-line blazars, assuming a spine-sheath jet structure. The model allows us to study how the spine and sheath spectral components depend on parameters describing the geometrical and physical structure of `the blazar zone'. We show that typical broad-band spectra of strong-line blazars can be reproduced by assuming the magnetization parameter to be of order unity and reconnection to be the dominant dissipation mechanism. Furthermore, we demonstrate that the spine-sheath model can explain why γ-ray variations are often observed to have much larger amplitudes than the corresponding optical variations. The model is also less demanding of jet power than one-zone models, and can reproduce the basic features of extreme γ-ray events.
Validation and modeling of earthquake strong ground motion using a composite source model
Zeng, Y.
2001-12-01
Zeng et al. (1994) proposed a composite source model for synthetic strong ground motion prediction. In that model, the source is taken as a superposition of circular subevents with a constant stress drop. The number of subevents and their radii follow a power-law distribution equivalent to the Gutenberg-Richter magnitude-frequency relation for seismicity. The heterogeneous nature of the composite source model is characterized by its maximum subevent size and subevent stress drop. As rupture propagates through each subevent, it radiates a Brune pulse or a Sato-Hirasawa circular crack pulse. The method has proved successful in generating realistic strong-motion seismograms, as judged by comparison with observations from earthquakes in California, the eastern US, Guerrero (Mexico), Turkey and India. The model has since been improved by including waves scattered from small-scale heterogeneity in the earth's structure, site-specific ground motion prediction using weak-motion site amplification, and nonlinear soil response using geotechnical engineering models. Last year, I introduced an asymmetric circular rupture to improve the subevent source radiation and to provide a rupture model that is consistent between the overall fault rupture process and its subevents. In this study, I revisit the Landers, Loma Prieta, Northridge, Imperial Valley and Kobe earthquakes using the improved source model. The results show that the improved subevent ruptures capture rupture directivity better than our previous studies did. Additional validation includes comparison of synthetic strong ground motions with the observed ground accelerations from the Chi-Chi (Taiwan) and Izmit (Turkey) earthquakes. Since the method has evolved considerably since it was first proposed, I will also compare results between each major modification of the model and demonstrate its backward compatibility with any of its earlier simulation procedures.
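The power-law subevent size distribution described above can be sketched with inverse-transform sampling. The function below is only an illustration of that one ingredient; the exponent, radius bounds, and sample size are assumptions, not the actual implementation of Zeng et al.

```python
import numpy as np

def sample_subevent_radii(n, r_min, r_max, d=2.0, rng=None):
    """Draw subevent radii from a truncated power-law number distribution
    N(>r) ~ r^{-d}, the size-frequency analogue of a Gutenberg-Richter law.

    For the density p(r) ~ r^{-(d+1)} on [r_min, r_max], the CDF is
    F(r) = (r_min^{-d} - r^{-d}) / (r_min^{-d} - r_max^{-d}),
    which can be inverted in closed form for sampling.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    a = r_min ** -d
    b = r_max ** -d
    return (a - u * (a - b)) ** (-1.0 / d)  # invert the truncated CDF

# illustrative numbers: 10,000 subevents between 0.5 and 10 km radius
radii = sample_subevent_radii(10_000, r_min=0.5, r_max=10.0, d=2.0, rng=42)
```

Small subevents dominate the count, as in observed seismicity, while the largest subevent size caps the heterogeneity scale.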
Zephyr - the prediction models
DEFF Research Database (Denmark)
Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg
2001-01-01
This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the Department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...
Strongly Correlated Transport in the Falicov Kimball Model
Boyd, Greg; Freericks, Jim; Zlatic, Veljko
2013-03-01
Many materials like the cuprates, heavy fermions, and strongly correlated oxides, are non-Fermi liquid ``bad metals'', with linear or quasi-linear resistivity as a function of temperature. The low-energy excitations are quasiparticle-like near the Fermi surface, but their lifetimes are short, so they are not coherent or free-particle-like, as in conventional Fermi-liquids (whose quasi-particle lifetimes diverge at the Fermi energy). It turns out that this kind of behavior is ubiquitous in a wide range of different strongly correlated models, as long as the temperature is above the Fermi-liquid scale. To illustrate this, we investigate the strongly correlated transport in the Falicov-Kimball model using dynamical mean-field theory (DMFT) - which is exactly solvable in the limit of infinite coordination number. We show results for the resistivity as a function of temperature, the quasiparticle lifetime, and the spectral function. These results are quite similar to those recently found for the Hubbard model, illustrating that this high temperature behavior is seen in many different models of strong electron correlations.
Energy Technology Data Exchange (ETDEWEB)
Yao, Yongxin [Iowa State Univ., Ames, IA (United States)
2009-01-01
Solidification of liquid is a very rich and complicated field, although there is always a famous homogeneous nucleation theory in a standard physics or materials science textbook. Depending on the material and processing conditions, a liquid may solidify to a single crystal, a polycrystal with a particular texture, a quasicrystal, or an amorphous solid or glass (a glass is, in general, a kind of amorphous solid with short-range and medium-range order). Traditional oxide glass may easily be formed since the covalent, directionally bonded network is apt to be disturbed. In other words, the energy landscape of an oxide glass is so complicated that the system needs an extremely long time to explore the whole configuration space. On the other hand, metallic liquids usually crystallize upon cooling because of the nature of metallic bonding. However, Klement et al. (1960) reported that an Au-Si liquid underwent an amorphous or "glassy" phase transformation on rapid quenching. In the past two decades, bulk metallic glasses have also been found in several multicomponent alloys [Inoue et al. (2002)]. Both thermodynamic factors (e.g., the free energies of the various competing phases, interfacial free energy, free energies of local clusters) and kinetic factors (e.g., long-range mass transport, local rearrangement of atomic positions) play important roles in the metallic glass formation process. Metallic glass is fundamentally different from nanocrystalline alloys: metallic glasses have to undergo a nucleation process upon heating in order to crystallize, so their short-range and medium-range order must be completely different from that of a crystal. Hence a method to calculate the energetics of different local clusters in the undercooled liquid or glass becomes important for setting up a statistical model of metallic glass formation. Scattering techniques such as x-ray and neutron diffraction have been widely used to study the structures of metallic glasses. Meanwhile, computer simulation
A simple model for strong ground motions and response spectra
Safak, Erdal; Mueller, Charles; Boatwright, John
1988-01-01
A simple model for the description of strong ground motions is introduced. The model shows that response spectra can be estimated using only four parameters of the ground motion: the RMS acceleration, the effective duration, and two corner frequencies that characterize the effective frequency band of the motion. The model is windowed band-limited white noise, and is developed by studying the properties of two functions: the cumulative squared acceleration in the time domain, and the cumulative squared amplitude spectrum in the frequency domain. Applying the methods of random vibration theory, the model leads to a simple analytical expression for the response spectra. The accuracy of the model is checked using ground motion recordings from the aftershock sequences of two different earthquakes, and simulated accelerograms. The results show that the model gives a satisfactory estimate of the response spectra.
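The four-parameter idea (RMS acceleration, effective duration, two corner frequencies) can be sketched as windowed band-limited white noise. The function below is a minimal illustration of that construction, not the authors' implementation; the parameter values and sampling interval are made-up examples.

```python
import numpy as np

def band_limited_accelerogram(a_rms, duration, f_low, f_high,
                              dt=0.005, rng=None):
    """Synthesize an accelerogram as band-limited Gaussian white noise.

    Gaussian noise is generated for the effective duration (the finite
    record length itself acts as a boxcar window), Fourier components
    outside the effective band [f_low, f_high] are zeroed, and the
    result is rescaled to the target RMS acceleration.
    """
    rng = np.random.default_rng(rng)
    n = int(round(duration / dt))
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, d=dt)
    spec[(freqs < f_low) | (freqs > f_high)] = 0.0   # band-limit
    acc = np.fft.irfft(spec, n)
    acc *= a_rms / np.sqrt(np.mean(acc ** 2))        # enforce the RMS level
    return acc

# e.g. 0.1 g RMS, 10 s duration, 0.5-10 Hz effective band (hypothetical)
acc = band_limited_accelerogram(a_rms=0.1, duration=10.0,
                                f_low=0.5, f_high=10.0, rng=0)
```

Response spectra could then be estimated from such realizations, or directly via the random-vibration-theory expression the abstract mentions.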
DEFF Research Database (Denmark)
Hibino, Y.; Ichinose, T.; Costa, J.L.D.
2009-01-01
A procedure is presented to predict the storey where plastic drift dominates in two-storey buildings under strong ground motion. The procedure utilizes the yield strength and the mass of each storey as well as the peak ground acceleration. The procedure is based on two different assumptions: (1)... The efficiency of the procedure is verified by dynamic response analyses using an elasto-plastic model.
Modelling laser-atom interactions in the strong field regime
Galstyan, A; Mota-Furtado, F; O'Mahony, P F; Janssens, N; Jenkins, S D; Chuluunbaatar, O; Piraux, B
2016-01-01
We consider the ionisation of atomic hydrogen by a strong infrared field. We extend and study in more depth an existing semi-analytical model. Starting from the time-dependent Schrödinger equation in momentum space and in the velocity gauge, we substitute the kernel of the non-local Coulomb potential by a sum of N separable potentials, each of them supporting one hydrogen bound state. This leads to a set of N coupled one-dimensional linear Volterra integral equations to solve. We analyze the gauge problem for the model and the different ways of generating the separable potentials, and establish a clear link with the strong field approximation, which turns out to be a limiting case of the present model. We calculate electron energy spectra as well as the time evolution of electron wave packets in momentum space. We compare and discuss the results obtained with the model and with the strong field approximation, and examine, in this context, the role of excited states.
Confidence scores for prediction models
DEFF Research Database (Denmark)
Gerds, Thomas Alexander; van de Wiel, MA
2011-01-01
modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as estimates of population-average confidence scores. The latter can be used... to distinguish rival prediction models with similar prediction performance. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...
Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion
Hirata, N.; Sato, H.; Koketsu, K.; Umeda, Y.; Iwata, T.; Kasahara, K.
2003-12-01
Introduction: After the 1995 Kobe earthquake, the Japanese government increased its focus on, and funding of, earthquake hazard evaluation, studies of the integrity of man-made structures, and emergency response planning in the major urban centers. A new agency, the Ministry of Education, Science, Sports and Culture (MEXT), started a five-year program titled the Special Project for Earthquake Disaster Mitigation in Urban Areas (abbreviated Dai-dai-toku in Japanese) in 2002. The project includes four programs: I. Regional characterization of the crust in metropolitan areas for prediction of strong ground motion. II. Significant improvement of the seismic performance of structures. III. Advanced disaster management systems. IV. Investigation of earthquake disaster mitigation research results. We will present results from the first program, conducted in 2002 and 2003. Regional Characterization of the Crust in Metropolitan Areas for Prediction of Strong Ground Motion: A long-term goal is to produce maps of reliable estimates of strong ground motion. This requires accurate determination of the ground motion response, which includes the source process, the effect of the propagation path, and the near-surface response. The new five-year project is aimed at characterizing the "source" and "propagation path" in the Kanto (Tokyo) and Kinki (Osaka) regions. The 1923 Kanto earthquake is one of the important targets addressed in the project. The proximity of the subducting Pacific and Philippine Sea plates requires study of the relationship between earthquakes and regional tectonics. This project focuses on the identification and geometry of: 1) source faults, 2) subducting plates and mega-thrust faults, 3) crustal structure, 4) the seismogenic zone, 5) sedimentary basins, and 6) 3D velocity properties. We have conducted a series of seismic reflection and refraction experiments in the Kanto region. In 2002 we completed deployment of seismic profiling lines in the Boso peninsula (112 km) and the
Modelling, controlling, predicting blackouts
Wang, Chengwei; Baptista, Murilo S
2016-01-01
The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc with human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another for smart grids. The control strategie...
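A phase-oscillator description of a grid, of the general kind the abstract refers to, can be sketched with the second-order Kuramoto ("swing equation") model. The network, parameters, and explicit-Euler stepping below are hypothetical illustrations of that class of model, not the paper's specific formulation.

```python
import numpy as np

def swing_step(theta, omega, P, K, A, dt=0.01, alpha=0.1):
    """One explicit-Euler step of a second-order Kuramoto grid model:
        dtheta_i/dt = omega_i
        domega_i/dt = P_i - alpha*omega_i + K*sum_j A_ij*sin(theta_j - theta_i)
    where P_i > 0 marks net generation and P_i < 0 net consumption,
    and A is the (symmetric) network adjacency matrix."""
    phase_diff = theta[None, :] - theta[:, None]      # matrix of theta_j - theta_i
    coupling = K * (A * np.sin(phase_diff)).sum(axis=1)
    return theta + dt * omega, omega + dt * (P - alpha * omega + coupling)

# hypothetical 4-node ring: two generators alternating with two loads
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = np.array([1.0, -1.0, 1.0, -1.0])   # balanced generation and load
theta, omega = np.zeros(4), np.zeros(4)
for _ in range(5000):                  # 50 time units
    theta, omega = swing_step(theta, omega, P, K=2.0, A=A)
```

With sufficient coupling K the grid settles into a synchronous state (frequency deviations damp out); a blackout cascade in such models corresponds to losing that synchrony when lines or nodes are removed.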
Hybrid modeling and prediction of dynamical systems
Lloyd, Alun L.; Flores, Kevin B.
2017-01-01
Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
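The Lorenz-63 system used as the article's first test bed takes only a few lines to reproduce; the sketch below integrates it with a standard fourth-order Runge-Kutta scheme. The step size and initial condition are arbitrary illustrative choices.

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 1.0, 1.0])
traj = [state]
for _ in range(2000):              # 20 time units at dt = 0.01
    state = rk4_step(lorenz63, state, 0.01)
    traj.append(state)
traj = np.asarray(traj)            # shape (2001, 3)
```

A trajectory like this provides the "data" against which a hybrid scheme can be tested, e.g. by replacing one of the three mechanistic equations with a nonparametric surrogate fitted to `traj`.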
Melanoma Risk Prediction Models
Developing statistical models that estimate the probability of developing melanoma cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Strong to fragile transition in a model of liquid silica
Barrat, Jean-Louis; Badro, James; Gillet, Philippe
1996-01-01
The transport properties of an ionic model for liquid silica at high temperatures and pressures are investigated using molecular dynamics simulations. With increasing pressure, a clear change from "strong" to "fragile" behaviour (according to Angell's classification of glass-forming liquids) is observed, albeit only over the small viscosity range that can be explored in MD simulations. This change is related to structural changes, from an almost perfect four-fold coordination to an imperfect fi...
Weak versus strong wave turbulence in the MMT model
Chibbaro, Sergio; Onorato, Miguel
2016-01-01
Within the spirit of fluid turbulence, we consider the one-dimensional Majda-McLaughlin-Tabak (MMT) model that describes the interactions of nonlinear dispersive waves. We perform a detailed numerical study of the direct energy cascade in the defocusing regime. In particular, we consider a configuration with large-scale forcing and small scale dissipation, and we introduce three non- dimensional parameters: the ratio between nonlinearity and dispersion, {\\epsilon}, and the analogues of the Reynolds number, Re, i.e. the ratio between the nonlinear and dissipative time-scales, both at large and small scales. Our numerical experiments show that (i) in the limit of small {\\epsilon} the spectral slope observed in the statistical steady regime corresponds to the one predicted by the Weak Wave Turbulence (WWT) theory. (ii) As the nonlinearity is increased, the WWT theory breaks down and deviations from its predictions are observed. (iii) It is shown that such departures from the WWT theoretical predictions are accom...
Model of strong stationary vortex turbulence in space plasmas
Directory of Open Access Journals (Sweden)
G. D. Aburjania
2009-01-01
This paper investigates the macroscopic consequences of nonlinear solitary vortex structures in magnetized space plasmas by developing a theoretical model of plasma turbulence. Strongly localized vortex patterns contain trapped particles and, propagating in a medium, excite substantial density fluctuations and thus intensify energy, heat and mass transport processes; such vortices can form strong vortex turbulence. Turbulence is represented as an ensemble of strongly localized (and therefore weakly interacting) vortices. Vortices with various amplitudes are randomly distributed in space (due to collisions). For their description, a statistical approach is applied. It is supposed that a stationary turbulent state is formed by balancing competing effects: spontaneous development of vortices due to nonlinear twisting of the perturbations' fronts, cascading of perturbations into short scales (direct spectral cascade) and collisional or collisionless damping of the perturbations in the short-wave domain. In the inertial range, the direct spectral cascade occurs through the merging of structures via collisions. It is shown that in magneto-active plasmas, strong turbulence is generally anisotropic: turbulent modes mainly develop in the direction perpendicular to the local magnetic field. It is found that it is the compressibility of the local medium which primarily determines the character of the turbulent spectra: the strong vortex turbulence forms a power spectrum in wave-number space. For example, a new spectrum of turbulent fluctuations scaling as k^{−8/3} is derived, which agrees with available experimental data. Within the framework of the developed model, particle diffusion processes are also investigated. It is found that the interaction of structures with each other and with particles causes anomalous diffusion in the medium. The effective diffusion coefficient has a square-root dependence on the stationary noise level.
Prediction models in complex terrain
DEFF Research Database (Denmark)
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence... Predictions are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation present in the power production at shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system, e.g. caused by annual variations...
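The local polynomial regression underlying the conditional parametric models can be sketched as a kernel-weighted least-squares fit at each fitting point. The Gaussian kernel, bandwidth, and synthetic test function below are illustrative assumptions; the estimator in the paper is additionally adaptive in time.

```python
import numpy as np

def local_linear_fit(x0, x, y, bandwidth):
    """Local linear regression at point x0: fit an intercept and slope by
    weighted least squares, with Gaussian weights centred on x0, and
    return the fitted value (the intercept of the local fit)."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    # solve the normal equations (X^T W X) beta = X^T W y
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta[0]

# synthetic example: recover a smooth curve from noisy observations
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + 0.1 * rng.standard_normal(200)
y_hat = np.array([local_linear_fit(xi, x, y, bandwidth=0.5) for xi in x])
```

In the forecasting setting, `x` would be an explanatory variable such as the HIRLAM-predicted wind speed and the locally fitted coefficients would vary with it, which is what makes the model "conditional parametric".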
A strong test of a maximum entropy model of trait-based community assembly.
Shipley, Bill; Laughlin, Daniel C; Sonnier, Grégory; Otfinowski, Rafael
2011-02-01
We evaluate the predictive power and generality of Shipley's maximum entropy (maxent) model of community assembly in the context of 96 quadrats over a 120-km² area having a large species pool (79 species) and strong gradients. Quadrats were sampled in the herbaceous understory of ponderosa pine forests in the Coconino National Forest, Arizona, U.S.A. The maxent model accurately predicted species relative abundances when observed community-weighted mean trait values were used as model constraints. Although only 53% of the variation in observed relative abundances was associated with a combination of 12 environmental variables, the maxent model based only on the environmental variables provided highly significant predictive ability, accounting for 72% of the variation that was possible given these environmental variables. This predictive ability largely surpassed that of nonmetric multidimensional scaling (NMDS) or detrended correspondence analysis (DCA) ordinations. Using cross-validation with 1000 independent runs, the median correlation between observed and predicted relative abundances was 0.560 (the 2.5% and 97.5% quantiles were 0.045 and 0.825). The qualitative predictions of the model were also noteworthy: dominant species were correctly identified in 53% of the quadrats, 83% of rare species were correctly predicted to have a relative abundance of < 0.05, and the median predicted relative abundance of species actually absent from a quadrat was 5 × 10⁻⁵.
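For a single trait, the maxent solution under a community-weighted mean (CWM) constraint has the exponential-family form p_i ∝ exp(λ t_i), with λ chosen so the constraint holds. The sketch below reduces the multi-constraint model to this one-trait case; the trait values, bracket, and iteration count are illustrative assumptions, not Shipley's implementation.

```python
import numpy as np

def maxent_abundances(traits, cwm, n_iter=100):
    """Maximum-entropy relative abundances under one CWM trait constraint:
        maximize  -sum_i p_i log p_i
        s.t.      sum_i p_i = 1  and  sum_i p_i t_i = cwm.
    The solution is p_i ~ exp(lam * t_i); lam is found by bisection on
    the constraint residual, which is monotonic in lam."""
    t = np.asarray(traits, dtype=float)

    def residual(lam):
        w = np.exp(lam * (t - t.mean()))   # centred for numerical stability
        return (w / w.sum()) @ t - cwm

    lo, hi = -50.0, 50.0                   # bracket assumed wide enough here
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * (t - t.mean()))
    return w / w.sum()

# hypothetical trait values for a five-species pool, target CWM of 2.2
p = maxent_abundances(traits=[1.0, 2.0, 3.0, 4.0, 5.0], cwm=2.2)
```

A CWM below the unweighted trait mean yields a negative λ, concentrating abundance on low-trait species, which mirrors how the full model shifts abundance along environmental gradients.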
LHC Limits on the Top-Higgs in Models with Strong Top-Quark Dynamics
Chivukula, R Sekhar; Logan, Heather E; Martin, Adam; Simmons, Elizabeth H
2011-01-01
LHC searches for the standard model Higgs Boson in WW or ZZ decay modes place strong constraints on the top-Higgs state predicted in many models with new dynamics preferentially affecting top quarks. Such a state couples strongly to top-quarks, and is therefore produced through gluon fusion at a rate enhanced relative to the rate for the standard model Higgs boson. A top-Higgs state with mass less than 300 GeV is excluded at 95% CL if the associated top-pion has a mass of 150 GeV, and the constraint is even stronger if the mass of the top-pion state exceeds the top-quark mass or if the top-pion decay constant is a substantial fraction of the weak scale. These results have significant implications for theories with strong top dynamics, such as topcolor-assisted technicolor, top-seesaw models, and certain Higgsless models.
String Theory Based Predictions for Novel Collective Modes in Strongly Interacting Fermi Gases
Bantilan, H; Ishii, T; Lewis, W E; Romatschke, P
2016-01-01
Very different strongly interacting quantum systems such as Fermi gases, quark-gluon plasmas formed in high energy ion collisions and black holes studied theoretically in string theory are known to exhibit quantitatively similar damping of hydrodynamic modes. It is not known if such similarities extend beyond the hydrodynamic limit. Do non-hydrodynamic collective modes in Fermi gases with strong interactions also match those from string theory calculations? In order to answer this question, we use calculations based on string theory to make predictions for novel types of modes outside the hydrodynamic regime in trapped Fermi gases. These predictions are amenable to direct testing with current state-of-the-art cold atom experiments.
Orbifolds and Exact Solutions of Strongly-Coupled Matrix Models
Cordova, Clay; Popolitov, Alexandr; Shakirov, Shamil
2016-01-01
We find an exact solution to strongly-coupled matrix models with a single-trace monomial potential. Our solution yields closed-form expressions for the partition function as well as averages of Schur functions. The results are fully factorized into a product of terms linear in the rank of the matrix and the parameters of the model. We extend our formulas to include both logarithmic and finite-difference deformations, thereby generalizing the celebrated Selberg and Kadell integrals. We conjecture a formula for correlators of two Schur functions in these models, and explain how our results follow from a general orbifold-like procedure that can be applied to any one-matrix model with a single-trace potential.
Strongly Interacting Matter at Finite Chemical Potential: Hybrid Model Approach
Srivastava, P. K.; Singh, C. P.
2013-06-01
Search for a proper and realistic equation of state (EOS) for strongly interacting matter used in the study of the QCD phase diagram still appears as a challenging problem. Recently, we constructed a hybrid model description for the quark-gluon plasma (QGP) as well as hadron gas (HG) phases where we used an excluded volume model for HG and a thermodynamically consistent quasiparticle model for the QGP phase. The hybrid model suitably describes the recent lattice results of various thermodynamical as well as transport properties of the QCD matter at zero baryon chemical potential (μB). In this paper, we extend our investigations further in obtaining the properties of QCD matter at finite value of μB and compare our results with the most recent results of lattice QCD calculation.
Thermodynamics of the BMN matrix model at strong coupling
Costa, Miguel S.; Greenspan, Lauren; Penedones, João; Santos, Jorge E.
2015-03-01
We construct the black hole geometry dual to the deconfined phase of the BMN matrix model at strong 't Hooft coupling. We approach this solution from the limit of large temperature where it is approximately that of the non-extremal D0-brane geometry with a spherical S 8 horizon. This geometry preserves the SO(9) symmetry of the matrix model trivial vacuum. As the temperature decreases the horizon becomes deformed and breaks the SO(9) to the SO(6) × SO(3) symmetry of the matrix model. When the black hole free energy crosses zero the system undergoes a phase transition to the confined phase described by a Lin-Maldacena geometry. We determine this critical temperature, whose computation is also within reach of Monte Carlo simulations of the matrix model.
Semiclassical two-step model for strong-field ionization
Shvetsov-Shilovski, N. I.; Lein, M.; Madsen, L. B.; Räsänen, E.; Lemell, C.; Burgdörfer, J.; Arbó, D. G.; Tőkési, K.
2016-07-01
We present a semiclassical two-step model for strong-field ionization that accounts for path interferences of tunnel-ionized electrons in the ionic potential beyond perturbation theory. Within the framework of a classical trajectory Monte Carlo representation of the phase-space dynamics, the model employs the semiclassical approximation to the phase of the full quantum propagator in the exit channel. By comparison with the exact numerical solution of the time-dependent Schrödinger equation for strong-field ionization of hydrogen, we show that for suitable choices of the momentum distribution after the first tunneling step, the model yields good quantitative agreement with the full quantum simulation. The two-dimensional photoelectron momentum distributions, the energy spectra, and the angular distributions are found to be in good agreement with the corresponding quantum results. Specifically, the model quantitatively reproduces the fanlike interference patterns in the low-energy part of the two-dimensional momentum distributions, as well as the modulations in the photoelectron angular distributions.
Second Order Model for Strongly Sheared Compressible Turbulence
Directory of Open Access Journals (Sweden)
Marzougui Hamed
2015-01-01
In this paper, we propose a model designed to describe strongly sheared compressible homogeneous turbulent flows. Such flows are far from equilibrium and are well represented by the A3 and A4 cases of the DNS of Sarkar. Speziale and Xu developed a relaxation model for incompressible turbulence able to take into account significant departures from equilibrium. In a previous paper, Radhia et al. proposed a relaxation model similar to that of Speziale and Xu. This model is based on an algebraic representation of the Reynolds stress tensor, much simpler than that of Speziale and Xu, and it gave good results for rapid axisymmetric contraction. In this work, we propose to extend the Radhia et al. model to compressible homogeneous turbulence. The model is based on the pressure-strain model of Launder et al., into which we incorporate the turbulent Mach number in order to account for compressibility effects. To assess the model, two numerical simulations were performed, corresponding to the A3 and A4 cases of the DNS of Sarkar.
Constraints on cosmological models from strong gravitational lensing systems
Cao, Shuo; Biesiada, Marek; Godlowski, Wlodzimierz; Zhu, Zong-Hong
2011-01-01
Using gravitational lensing theory and a cluster mass distribution model, we collect a relatively complete set of observational data on the Hubble-constant-independent ratio between two angular diameter distances $D_{ds}/D_s$ from various large systematic gravitational lens surveys and from lensing by galaxy clusters combined with X-ray observations. On the one hand, strongly gravitationally lensed quasar-galaxy systems create such a new opportunity by combining stellar kinematics (central velocity dispersion measurements) with lensing geometry (Einstein radius determination from the positions of images). We apply this method to a combined gravitational lens data set including 27 data points from the Sloan Lens ACS (SLACS), the Lens Structure and Dynamics survey (LSD), and the Sloan Bright Arcs Survey (SBAS). On the other hand, a new sample of 10 lensing galaxy clusters with redshifts ranging from 0.1 to 0.6 is also used, selected carefully from strong gravitational lensing systems with both X-ray satellite observa...
Strong parameter renormalization from optimum lattice model orbitals
Brosco, Valentina; Ying, Zu-Jian; Lorenzana, José
2017-01-01
What is the best single-particle basis in which to express a Hubbard-like lattice model? A rigorous variational answer to this question leads to equations whose solution depends in a self-consistent manner on the lattice ground state. Contrary to naive expectations, for arbitrarily small interactions, the optimized orbitals differ from the noninteracting ones, leading also to substantial changes in the model parameters, as shown analytically and in an explicit numerical solution for a simple double-well one-dimensional case. At strong coupling, we obtain the direct exchange interaction with a very large renormalization, with important consequences for the explanation of ferromagnetism with model Hamiltonians. Moreover, in the case of two atoms and two fermions we show that the optimization equations are closely related to reduced density-matrix functional theory, thus establishing an unsuspected correspondence between continuum and lattice approaches.
Prediction models in complex terrain
DEFF Research Database (Denmark)
Marti, I.; Nielsen, Torben Skov; Madsen, Henrik
2001-01-01
The objective of this work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM predictions to the wind farm. The features of the terrain, especially the topography, influence...
Strongly Coupled Models with a Higgs-like Boson*
Directory of Open Access Journals (Sweden)
Pich Antonio
2013-11-01
Considering the one-loop calculation of the oblique S and T parameters, we present a study of the viability of strongly coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation is performed using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we demonstrate that strongly coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule.
Strongly Coupled Models with a Higgs-like Boson
Pich, Antonio; Rosell, Ignasi; José Sanz-Cillero, Juan
2013-11-01
Considering the one-loop calculation of the oblique S and T parameters, we present a study of the viability of strongly coupled scenarios of electroweak symmetry breaking with a light Higgs-like boson. The calculation is performed using an effective Lagrangian, with short-distance constraints and dispersive relations as the main ingredients of the estimate. Contrary to a widespread belief, we demonstrate that strongly coupled electroweak models with massive resonances are not in conflict with experimental constraints on these parameters and the recently observed Higgs-like resonance. So there is room for these models, but they are stringently constrained. The vector and axial-vector states should be heavy enough (with masses above the TeV scale), the mass splitting between them is highly preferred to be small, and the Higgs-like scalar should have a WW coupling close to the Standard Model one. It is important to stress that these conclusions do not depend critically on the inclusion of the second Weinberg sum rule. We wish to thank the organizers of LHCP 2013 for the pleasant conference. This work has been supported in part by the Spanish Government and the European Commission [FPA2010-17747, FPA2011-23778, AIC-D-2011-0818, SEV-2012-0249 (Severo Ochoa Program), CSD2007-00042 (Consolider Project CPAN)], the Generalitat Valenciana [PrometeoII/2013/007] and the Comunidad de Madrid [HEPHACOS S2009/ESP-1473].
Predictive models of forest dynamics.
Purves, Drew; Pacala, Stephen
2008-06-13
Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.
Strongly nonlinear models for internal waves: an application for the dam-break problem
Chen, Shengqian
2016-01-01
Strongly nonlinear models of internal wave propagation for incompressible stratified Euler fluids are investigated numerically and analytically to determine the evolution of a class of initial conditions of interest in laboratory experiments. This class of step-like initial data severely tests the robustness of the models beyond their strict long-wave asymptotic validity, and model fidelity is assessed by direct numerical simulations (DNS) of the parent Euler system. It is found that the primary dynamics of near-solitary wave formation is remarkably well predicted by the models for both wave and fluid properties, at a fraction of the computational costs of the DNS code.
Hirshfeld atom refinement for modelling strong hydrogen bonds.
Woińska, Magdalena; Jayatilaka, Dylan; Spackman, Mark A; Edwards, Alison J; Dominiak, Paulina M; Woźniak, Krzysztof; Nishibori, Eiji; Sugimoto, Kunihisa; Grabowsky, Simon
2014-09-01
High-resolution low-temperature synchrotron X-ray diffraction data of the salt L-phenylalaninium hydrogen maleate are used to test the new automated iterative Hirshfeld atom refinement (HAR) procedure for the modelling of strong hydrogen bonds. The HAR models used present the first examples of Z' > 1 treatments in the framework of wavefunction-based refinement methods. L-Phenylalaninium hydrogen maleate exhibits several hydrogen bonds in its crystal structure, of which the shortest and the most challenging to model is the O-H...O intramolecular hydrogen bond present in the hydrogen maleate anion (O...O distance is about 2.41 Å). In particular, the reconstruction of the electron density in the hydrogen maleate moiety and the determination of hydrogen-atom properties [positions, bond distances and anisotropic displacement parameters (ADPs)] are the focus of the study. For comparison to the HAR results, different spherical (independent atom model, IAM) and aspherical (free multipole model, MM; transferable aspherical atom model, TAAM) X-ray refinement techniques as well as results from a low-temperature neutron-diffraction experiment are employed. Hydrogen-atom ADPs are furthermore compared to those derived from a TLS/rigid-body (SHADE) treatment of the X-ray structures. The reference neutron-diffraction experiment reveals a truly symmetric hydrogen bond in the hydrogen maleate anion. Only with HAR is it possible to freely refine hydrogen-atom positions and ADPs from the X-ray data, which leads to the best electron-density model and the closest agreement with the structural parameters derived from the neutron-diffraction experiment, e.g. the symmetric hydrogen position can be reproduced. The multipole-based refinement techniques (MM and TAAM) yield slightly asymmetric positions, whereas the IAM yields a significantly asymmetric position.
A multifluid model extended for strong temperature nonequilibrium
Energy Technology Data Exchange (ETDEWEB)
Chang, Chong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-08
We present a multifluid model in which the material temperature is strongly affected by the degree of segregation of each material. In order to track temperatures of segregated form and mixed form of the same material, they are defined as different materials with their own energy. This extension makes it necessary to extend multifluid models to the case in which each form is defined as a separate material. Statistical variations associated with the morphology of the mixture have to be simplified. Simplifications introduced include combining all molecularly mixed species into a single composite material, which is treated as another segregated material. Relative motion within the composite material, diffusion, is represented by material velocity of each component in the composite material. Compression work, momentum and energy exchange, virtual mass forces, and dissipation of the unresolved kinetic energy have been generalized to the heterogeneous mixture in temperature nonequilibrium. The present model can be further simplified by combining all mixed forms of materials into a composite material. Molecular diffusion in this case is modeled by the Stefan-Maxwell equations.
Description of Strongly Interacting Matter in A Hybrid Model
Srivastava, P K
2014-01-01
The search for a proper and realistic equation of state (EOS) for the strongly interacting matter used in the study of the QCD phase diagram remains a challenging problem. Recently, we constructed a hybrid model description for the quark gluon plasma (QGP) as well as hadron gas (HG) phases, in which we used an excluded volume model for the HG and a thermodynamically consistent quasiparticle model for the QGP phase. The hybrid model suitably describes the recent lattice results for various thermodynamical as well as transport properties of QCD matter at zero baryon chemical potential ($\\mu_{B}$). In this paper, we extend our investigations to the properties of QCD matter at finite values of $\\mu_{B}$ and compare our results with the most recent results of lattice QCD calculations. Finally we demonstrate the existence of two different limiting energy regimes and propose that the connection point of these two limiting regimes would foretell the existence of the critical point (CP) of the deconfining phas...
Strong Local-Nonlocal Coupling for Integrated Fracture Modeling
Energy Technology Data Exchange (ETDEWEB)
Littlewood, David John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silling, Stewart A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, John A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Seleson, Pablo D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bond, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Parks, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Burnett, Damon J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)
2015-09-01
Peridynamics, a nonlocal extension of continuum mechanics, is unique in its ability to capture pervasive material failure. Its use in the majority of system-level analyses carried out at Sandia, however, is severely limited, due in large part to computational expense and the challenge posed by the imposition of nonlocal boundary conditions. Combined analyses in which peridynamics is employed only in regions susceptible to material failure are therefore highly desirable, yet available coupling strategies have remained severely limited. This report is a summary of the Laboratory Directed Research and Development (LDRD) project "Strong Local-Nonlocal Coupling for Integrated Fracture Modeling," completed within the Computing and Information Sciences (CIS) Investment Area at Sandia National Laboratories. A number of challenges inherent to coupling local and nonlocal models are addressed. A primary result is the extension of peridynamics to facilitate a variable nonlocal length scale. This approach, termed the peridynamic partial stress, can greatly reduce the mathematical incompatibility between local and nonlocal equations through reduction of the peridynamic horizon in the vicinity of a model interface. A second result is the formulation of a blending-based coupling approach that may be applied either as the primary coupling strategy, or in combination with the peridynamic partial stress. This blending-based approach is distinct from general blending methods, such as the Arlequin approach, in that it is specific to the coupling of peridynamics and classical continuum mechanics. Facilitating the coupling of peridynamics and classical continuum mechanics has also required innovations aimed directly at peridynamic models. Specifically, the properties of peridynamic constitutive models near domain boundaries and shortcomings in available discretization strategies have been addressed. The results are a class of position-aware peridynamic constitutive laws for
$\\Xi$ baryon strong decays in a chiral quark model
Xiao, Li-Ye
2013-01-01
The strong decays of $\\Xi$ baryons up to N=2 shell were studied in a chiral quark model. The strong decay properties of these well-established ground decuplet baryons were reasonably described. We found that (i) $\\Xi(1690)$ and $\\Xi(1820)$ could be assigned to the spin-parity $J^P=1/2^-$ state $|70,^{2}{8},1,1,1/2^->$ and the spin-parity $J^P=3/2^-$ state $|70,^{2}{8},1,1,3/2^->$, respectively. Slight configuration mixing might exist in these two negative parity states. (ii) $\\Xi(1950)$ might correspond to several different $\\Xi$ resonances. The broad states ($\\Gamma\\sim 100$ MeV) observed in the $\\Xi\\pi$ channel could be classified as the pure $J^P=5/2^-$ octet state $\\Xi^0|70,^{4}8,1,1,5/2^->$ or the mixed state $|\\Xi 1/2^->_3 $ with $J^P=1/2^-$. The $\\Xi$ resonances with moderate width ($\\Gamma\\sim 60$ MeV) observed in the $\\Xi\\pi$ channel might correspond to the $J^P=1/2^+$ excitation $|56,^{4}10,2,2,1/2^+>$. The second orbital excitation $|56,^{4}10,2,2,3/2^+>$ and the mixed state $|\\Xi 1/2^->_1$ might b...
A Comparison of Cosmological Models Using Strong Gravitational Lensing Galaxies
Melia, Fulvio; Wu, Xue-Feng
2014-01-01
Strongly gravitationally lensed quasar-galaxy systems allow us to compare competing cosmologies as long as one can be reasonably sure of the mass distribution within the intervening lens. In this paper, we assemble a catalog of 69 such systems, and carry out a one-on-one comparison between the standard model, LCDM, and the R_h=ct Universe. We find that both models account for the lens observations quite well, though the precision of these measurements does not appear to be good enough to favor one model over the other. Part of the reason is the so-called bulge-halo conspiracy that, on average, results in a baryonic velocity dispersion within a fraction of the optical effective radius virtually identical to that expected for the whole luminous-dark matter distribution. Given the limitations of doing precision cosmological testing using the current sample, we also carry out Monte Carlo simulations based on the current lens measurements to estimate how large the source catalog would have to be in order to rule o...
Multiparametric bifurcations of an epidemiological model with strong Allee effect.
Cai, Linlin; Chen, Guoting; Xiao, Dongmei
2013-08-01
In this paper we comprehensively study the bifurcations of an epidemic model with five parameters introduced by Hilker et al. (Am Nat 173:72-88, 2009), which describes the joint interplay of a strong Allee effect and infectious disease in a single population. The existence of multiple positive equilibria and all kinds of bifurcations are examined, as well as the related dynamical behavior. It is shown that the model undergoes a series of bifurcations such as saddle-node bifurcation, pitchfork bifurcation, Bogdanov-Takens bifurcation, degenerate Hopf bifurcation of codimension two and degenerate elliptic-type Bogdanov-Takens bifurcation of codimension three. The respective bifurcation surfaces in the five-dimensional parameter space and the related dynamical behavior are obtained. These theoretical conclusions confirm the numerical simulations and conjectures of Hilker et al., and reveal some new bifurcation phenomena not observed in Hilker et al. (Am Nat 173:72-88, 2009). The rich and complicated dynamics show that the model is very sensitive to parameter perturbations, which has important implications for disease control in endangered species.
Model with Strong $\\gamma_4$ $T$-violation
Friedberg, R
2008-01-01
We extend the $T$-violating model of the paper on "Hidden symmetry of the CKM and neutrino-mapping matrices" by assuming its $T$-violating phases $\\chi_\\uparrow$ and $\\chi_\\downarrow$ to be large and the same, with $\\chi=\\chi_\\uparrow=\\chi_\\downarrow$. In this case, the model has 9 real parameters: $\\alpha_\\uparrow, \\beta_\\uparrow, \\xi_\\uparrow, \\eta_\\uparrow$ for the $\\uparrow$-quark sector, $\\alpha_\\downarrow, \\beta_\\downarrow, \\xi_\\downarrow, \\eta_\\downarrow$ for the $\\downarrow$ sector and a common $\\chi$. We examine whether these nine parameters are compatible with ten observables: the six quark masses and the four real parameters that characterize the CKM matrix (i.e., the Jarlskog invariant ${\\cal J}$ and three Eulerian angles). We find that this is possible only if the $T$-violating phase $\\chi$ is large, between $-120^\\circ$ and $-135^\\circ$. In this strong $T$-violating model, the smallness of the Jarlskog invariant ${\\cal J}\\cong 3\\times 10^{-5}$ is mainly accounted for by the large heavy quark masses, with ...
Strongly interacting matter at high densities with a soliton model
Johnson, Charles Webster
1998-12-01
One of the major goals of modern nuclear physics is to explore the phase diagram of strongly interacting matter. The study of these 'extreme' conditions is the primary motivation for the construction of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory which will accelerate nuclei to a center of mass (c.m.) energy of about 200 GeV/nucleon. From a theoretical perspective, a test of quantum chromodynamics (QCD) requires the expansion of the conditions examined from one phase point to the entire phase diagram of strongly-interacting matter. In the present work we focus attention on what happens when the density is increased, at low excitation energies. Experimental results from the Brookhaven Alternating Gradient Synchrotron (AGS) indicate that this regime may be tested in the 'full stopping' (maximum energy deposition) scenario achieved at the AGS having a c.m. collision energy of about 2.5 GeV/nucleon for two equal-mass heavy nuclei. Since the solution of QCD on nuclear length-scales is computationally prohibitive even on today's most powerful computers, progress in the theoretical description of high densities has come through the application of models incorporating some of the essential features of the full theory. The simplest such model is the MIT bag model. We use a significantly more sophisticated model, a nonlocal confining soliton model developed in part at Kent. This model has proven its value in the calculation of the properties of individual mesons and nucleons. In the present application, the many-soliton problem is addressed with the same model. We describe nuclear matter as a lattice of solitons and apply the Wigner-Seitz approximation to the lattice. This means that we consider spherical cells with one soliton centered in each, corresponding to the average properties of the lattice. The average density is then varied by changing the size of the Wigner-Seitz cell. To arrive at a solution, we need to solve a coupled set of
Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.
Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe
2013-04-01
Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or over long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solving of the linearized Euler equations, to quantitatively reproduce the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study in which weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to the quantitative theoretical and numerical predictions available for the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound fluctuation strengths. Hence, this model captures many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.
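The saturation statistic mentioned in this abstract, the log-amplitude variance, can be illustrated with a toy computation. The lognormal amplitude ensemble and the convention chi = ln(A/&lt;A&gt;) below are assumptions made for this sketch, not details taken from the study, where the statistics would come from the FDTD simulations.

```python
import numpy as np

# Toy computation of the log-amplitude variance used to diagnose
# saturation; the amplitude ensemble is synthetic (lognormal), standing
# in for amplitudes that would be extracted from FDTD runs.
rng = np.random.default_rng(3)
amp = rng.lognormal(mean=0.0, sigma=0.4, size=10_000)  # received amplitudes
chi = np.log(amp / amp.mean())                         # log-amplitude
log_amp_var = np.var(chi)
print(log_amp_var)  # close to sigma**2 = 0.16 by construction
```

In a saturated-scintillation analysis one would track how this variance grows with propagation distance and turbulence strength until it levels off.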
An adaptive correspondence algorithm for modeling scenes with strong interreflections.
Xu, Yi; Aliaga, Daniel G
2009-01-01
Modeling real-world scenes, beyond diffuse objects, plays an important role in computer graphics, virtual reality, and other commercial applications. One active approach is projecting binary patterns in order to obtain correspondence and reconstruct a densely sampled 3D model. In such structured-light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. When a scene has abundant indirect light, this process is especially difficult. In this paper, we present a robust pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on the two intervals, our method classifies a pixel by determining whether its intensity is within one interval but not in the other. Our method performs better than the standard method because it avoids gross errors during the decoding process caused by strong interreflections. For the remaining uncertain pixels, we apply an iterative algorithm to reduce the interreflection within the scene. Thus, more points can be decoded and reconstructed after each iteration. Moreover, the iterative algorithm is carried out in an adaptive fashion for fast convergence.
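The interval test described in this abstract can be sketched as a small classifier. The function name and the interval bounds below are hypothetical; in the actual system the bounds are estimated per pixel from the projected patterns.

```python
def classify_pixel(intensity, lit_lo, lit_hi, unlit_lo, unlit_hi):
    """Classify a pixel as directly lit, not lit, or uncertain, given
    estimated intensity intervals for the two cases (hypothetical
    bounds; the paper derives them per pixel from the patterns)."""
    in_lit = lit_lo <= intensity <= lit_hi
    in_unlit = unlit_lo <= intensity <= unlit_hi
    if in_lit and not in_unlit:
        return "lit"
    if in_unlit and not in_lit:
        return "unlit"
    return "uncertain"  # overlapping evidence: defer to the iterative step

print(classify_pixel(200, 150, 255, 0, 100))   # lit
print(classify_pixel(50, 150, 255, 0, 100))    # unlit
print(classify_pixel(120, 100, 255, 0, 130))   # uncertain
```

Pixels returned as "uncertain" are the ones the paper's iterative interreflection-reduction step would revisit.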
Modeling of Strong Ground Motion in "The Geysers" Geothermal Area
Sharma, N.; Convertito, V.; Maercklin, N.; Zollo, A.
2012-04-01
The Geysers is a vapor-dominated geothermal field located about 120 km north of San Francisco, California. The field has been actively exploited since the 1960s and is now perhaps the most important and most productive geothermal field in the USA. The continuous injection of fluids and the stress perturbations in this area have resulted in induced seismicity which is clearly felt in the surrounding villages. Based on these considerations, in the present work Ground Motion Prediction Equations (GMPEs) are derived, as they play a key role in seismic hazard analysis and in monitoring the effects of seismicity rate levels. The GMPEs are derived through a mixed nonlinear regression technique for both Peak Ground Velocity (PGV) and Peak Ground Acceleration (PGA). This technique includes both fixed effects and random effects and accounts for both inter-event and intra-event dependencies in the data. In order to account for site/station effects, a two-step approach has been used. In the first step, regression analysis is performed without station corrections, providing a reference model. In the second step, based on the residual distribution at each station and the results of a Z-test, station correction coefficients are introduced to obtain the final corrected model. The data from earthquakes recorded at 29 stations for the period September 2007 through November 2010 have been used. The magnitude range is (1.0 geothermal fields with respect to those obtained from natural seismic events. The residual analysis is performed at individual stations to check the reliability of the station corrections and to evaluate the fitting reliability of the retrieved model. The best model has been chosen on the basis of the inter-event standard error and an R-square test. After the introduction of the site/station correction factor, an improvement in the fit is observed, which results in a reduction of the total standard error and increased R-square values.
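The two-step station-correction procedure described above can be sketched with ordinary least squares standing in for the mixed nonlinear regression of the study. The functional form, coefficients, and synthetic data below are illustrative assumptions, not the Geysers GMPE.

```python
import numpy as np

# Two-step sketch: fit a reference model log(PGA) = c0 + c1*M + c2*log10(R)
# by OLS, then take the mean residual per station as its correction term.
# Functional form, coefficients, and data are illustrative assumptions.
rng = np.random.default_rng(0)
n = 200
M = rng.uniform(1.0, 3.0, n)             # magnitudes
R = rng.uniform(1.0, 20.0, n)            # distances (km)
station = rng.integers(0, 5, n)          # recording station per record
true_sc = np.array([0.3, -0.2, 0.0, 0.1, -0.1])   # hidden station effects
log_pga = -1.0 + 0.8 * M - 1.5 * np.log10(R) \
          + true_sc[station] + rng.normal(0, 0.05, n)

# Step 1: reference model without station terms.
X = np.column_stack([np.ones(n), M, np.log10(R)])
coef, *_ = np.linalg.lstsq(X, log_pga, rcond=None)
resid = log_pga - X @ coef

# Step 2: station corrections from the residual distribution per station.
corrections = np.array([resid[station == s].mean() for s in range(5)])
resid_corr = resid - corrections[station]
print(np.std(resid_corr) < np.std(resid))  # True: corrections improve fit
```

The reduction in residual scatter after subtracting the station terms mirrors the standard-error reduction reported in the abstract; the study additionally screens each correction with a Z-test before keeping it.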
Institute of Scientific and Technical Information of China (English)
WANG Li-feng(王丽凤); MA Li(马丽); David Vere-Jones; CHEN Shi-jun(陈时军)
2004-01-01
Based on the stochastic AMR model, this paper constructs synthetic earthquake catalogues to investigate the parameter estimation properties of the model. The stochastic AMR model is then applied to the study of several strong earthquakes in China and New Zealand. Akaike's AIC criterion is used to discriminate whether an accelerating mode of earthquake activity precedes those events or not. Finally, regional accelerating seismic activity and possible prediction approaches for future strong earthquakes are discussed.
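The AIC discrimination described above can be sketched with the standard AMR power-law form for cumulative Benioff strain, eps(t) = A + B(tf - t)^m. Fixing tf and m and using synthetic data are simplifying assumptions made here for brevity; the stochastic AMR model estimates these parameters as well.

```python
import numpy as np

# Synthetic cumulative Benioff strain following the AMR power-law form
# eps(t) = A + B*(tf - t)**m, with small noise; tf and m are fixed here
# for simplicity (the stochastic AMR model estimates them too).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 9.0, 50)
tf, m = 10.0, 0.3
eps = 5.0 - 2.0 * (tf - t) ** m + rng.normal(0, 0.02, 50)

def aic(y, yhat, k):
    """AIC for Gaussian residuals: n*log(RSS/n) + 2k."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Competing fits, both with two free parameters.
lin = np.polyval(np.polyfit(t, eps, 1), t)             # steady seismicity
X = np.column_stack([np.ones_like(t), (tf - t) ** m])  # accelerating mode
coef, *_ = np.linalg.lstsq(X, eps, rcond=None)
amr = X @ coef

print(aic(eps, amr, 2) < aic(eps, lin, 2))  # True: acceleration preferred
```

A lower AIC for the power-law fit than for the linear fit is the kind of evidence used to flag an accelerating mode before a strong event.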
Computing Strongly Connected Components in the Streaming Model
Laura, Luigi; Santaroni, Federico
In this paper we present the first algorithm to compute the strongly connected components of a graph in the datastream model (W-Stream), where the graph is represented by a stream of edges and we are allowed to produce intermediate output streams. The algorithm is simple, effective, and can be implemented with a few lines of code: it looks at each edge in the stream and selects the appropriate action with respect to a tree T representing the graph connectivity seen so far. We analyze the theoretical properties of the algorithm: correctness, memory occupation (O(n log n)), per-item processing time (bounded by the current height of T), and number of passes (bounded by the maximal height of T). We conclude by presenting a brief experimental evaluation of the algorithm against massive synthetic and real graphs that confirms its effectiveness: on graphs with up to 100M nodes and 4G edges, only a few passes are needed, and millions of edges per second are processed.
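For contrast with the streaming approach, the classical in-memory baseline can be sketched as follows. This is the standard Kosaraju algorithm, not the paper's W-Stream algorithm; it assumes the whole edge list fits in memory, which is exactly the assumption W-Stream removes.

```python
from collections import defaultdict

def sccs(edges):
    """Strongly connected components of a directed graph given as an
    edge list. Standard in-memory Kosaraju: DFS finish order on the
    graph, then DFS on the transpose in reverse finish order."""
    g, gr, nodes = defaultdict(list), defaultdict(list), set()
    for u, v in edges:
        g[u].append(v)
        gr[v].append(u)
        nodes |= {u, v}
    order, seen = [], set()
    for s in nodes:                      # pass 1: compute finish order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(g[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(g[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    comps, seen = [], set()
    for s in reversed(order):            # pass 2: DFS on the transpose
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in gr[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        comps.append(sorted(comp))
    return sorted(comps)

print(sccs([(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 4)]))
# [[1, 2, 3], [4, 5]]
```

The W-Stream algorithm replaces the random-access adjacency structure used here with a tree T of O(n log n) memory and repeated passes over the edge stream.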
Response Regimes in Equivalent Mechanical Model of Strongly Nonlinear Liquid Sloshing
Farid, M
2016-01-01
We consider an equivalent mechanical model of liquid sloshing in a partially filled cylindrical vessel; the model treats both the linear sloshing regime and the strongly nonlinear sloshing regime. The latter is related to hydraulic impacts applied to the vessel walls. These hydraulic impacts are commonly simulated with the help of high-power potential and dissipation functions. For the sake of analytic exploration, we substitute this traditional approach by treating an idealized vibro-impact system with a velocity-dependent restitution coefficient. The obtained reduced model is similar to a recently explored system of a linear primary oscillator with an attached vibro-impact energy sink. The ratio of the modal mass of the first sloshing mode to the total mass of the liquid and the tank serves as a natural small parameter for multiple-scale analysis. In the case of external ground forcing, steady-state responses and chaotic strongly modulated responses are revealed. All analytical predictions of the reduced vibro-impact mod...
Zare, Mehdi
2016-12-01
This study aims to develop a new earthquake strong motion-intensity catalog as well as intensity prediction equations for Iran based on the available data. For this purpose, all the sites which had both recorded strong motion and intensity values throughout the region were first identified. Then, the data belonging to the 306 identified sites were processed, and the results were compiled as a new strong motion-intensity catalog. Based on this new catalog, two empirical equations between the values of intensity and the ground motion parameters (GMPs) for Iranian earthquakes were calculated. In the first step, earthquake intensity was considered as a function of five independent GMPs, including Log (PHA), moment magnitude (MW), distance to epicenter, site type, and duration, and a multiple stepwise regression was calculated. Regarding the correlations between the parameters and the effectiveness coefficients of the predictors, Log (PHA) was recognized as the most effective parameter for earthquake intensity, while the parameter site type was removed from the equations since it was determined to be the least significant variable. In the second step, a simple ordinary least squares (OLS) regression was fitted only between intensity and Log (PHA), which resulted in more over/underestimated intensity values compared to the results of the multiple intensity-GMPs regression. However, for rapid response purposes, the simple OLS regression may be more useful than the multiple regression due to its data availability and simplicity. In addition, based on 50 selected earthquakes, an empirical relation between the macroseismic intensity (I0) and MW was developed.
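The second-step fit described above, a one-predictor OLS of intensity on Log (PHA), can be sketched as follows. The synthetic data and the generating coefficients below are illustrative assumptions, not the published Iranian relations.

```python
import numpy as np

# One-predictor OLS sketch: intensity regressed on log10 peak horizontal
# acceleration. The data are synthetic; the generating coefficients
# (1.5 intercept, 2.0 slope) are illustrative, not the published values.
rng = np.random.default_rng(2)
log_pha = rng.uniform(1.0, 3.0, 100)
intensity = 1.5 + 2.0 * log_pha + rng.normal(0, 0.3, 100)

X = np.column_stack([np.ones_like(log_pha), log_pha])
(a, b), *_ = np.linalg.lstsq(X, intensity, rcond=None)
print(f"intercept={a:.2f}, slope={b:.2f}")  # near the generating values
```

Such a single-predictor relation trades some accuracy for the data availability and simplicity that make it attractive for rapid response, as the abstract notes.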
Computational studies of model disordered and strongly correlated electronic systems
Johri, Sonika
cannot be achieved perfectly in experiments. A chapter of this thesis is devoted to studying signatures of incomplete localization in a disordered system with interacting particles which is coupled to a bath. Strongly interacting particles can also give rise to topological phases of matter that have exotic emergent properties, such as quasiparticles with fractional charges and anyonic, or perhaps even non-Abelian, statistics. In addition to their intrinsic novelty, these particles (e.g. Majorana fermions) may be the building blocks of future quantum computers. The third part of my thesis focuses on the best experimentally known realizations of such systems - the fractional quantum Hall effect (FQHE), which occurs in two-dimensional electron gases in a strong perpendicular magnetic field. It has been observed in systems such as semiconductor heterostructures and, more recently, graphene. I have developed software for exact diagonalization of the many-body FQHE problem on the surface of a cylinder, a hitherto unstudied type of geometry. This geometry turns out to be optimal for the DMRG algorithm. Using this new geometry, I have studied properties of various fractionally-filled states, computing the overlap between exact ground states and model wavefunctions, their edge excitations, and entanglement spectra. I have calculated the sizes and tunneling amplitudes of quasiparticles, information which is needed to design the interferometers used to experimentally measure their Aharonov-Bohm phase. I have also designed numerical probes of the recently discovered geometric degree of freedom of FQHE states.
Model equation for strongly focused finite-amplitude sound beams
Kamakura; Ishiwata; Matsuda
2000-06-01
A model equation that describes the propagation of sound beams in a fluid is developed using the oblate spheroidal coordinate system. This spheroidal beam equation (SBE) is a parabolic equation and has a specific application to a theoretical prediction on focused, high-frequency beams from a circular aperture. The aperture angle does not have to be small. The theoretical background is basically along the same analytical lines as the composite method (CM) reported previously [B. Ystad and J. Berntsen, Acustica 82, 698-706 (1996)]. Numerical examples are displayed for the amplitudes of sound pressure along and across the beam axis when sinusoidal waves are radiated from the source with uniform amplitude distribution. The primitive approach to linear field analysis is readily extended to the case where harmonic generation in finite-amplitude sound beams becomes significant due to the inherent nonlinearity of the medium. The theory provides the propagation and beam pattern profiles that differ from the CM solution for each harmonic component.
Strong Ground-Motion Prediction in Seismic Hazard Analysis: PEGASOS and Beyond
Scherbaum, F.; Bommer, J. J.; Cotton, F.; Bungum, H.; Sabetta, F.
2005-12-01
The SSHAC Level 4 approach to probabilistic seismic hazard analysis (PSHA), which could be considered to define the state-of-the-art in PSHA using multiple expert opinions, has been fully applied only twice, firstly in the multi-year Yucca Mountain study and subsequently (2002-2004) in the PEGASOS project. The authors of this paper participated as ground-motion experts in this latter project, the objective of which was comprehensive seismic hazard analysis for four nuclear power plant sites in Switzerland, considering annual exceedance frequencies down to 1/10000000. Following SSHAC procedure, particular emphasis was put on capturing both the aleatory and epistemic uncertainties. As a consequence, ground motion prediction was performed by combining several empirical ground motion models within a logic tree framework with the weights on each logic tree branch expressing the personal degree-of-belief of each ground-motion expert. In the present paper, we critically review the current state of ground motion prediction methodology in PSHA in particular for regions of low seismicity. One of the toughest lessons from PEGASOS was that in systematically and rigorously applying the laws of uncertainty propagation to all of the required conversions and adjustments of ground motion models, a huge price has to be paid in an ever-growing aleatory variability. Once this path has been followed, these large sigma values will drive the hazard, particularly for low annual frequencies of exceedance. Therefore, from a post-PEGASOS perspective, the key issues in the context of ground-motion prediction for PSHA for the near future are to better understand the aleatory variability of ground motion and to develop suites of ground-motion prediction equations that employ the same parameter definitions. The latter is a global rather than a regional challenge which might be a desirable long-term goal for projects similar to the PEER NGA (Pacific Earthquake Engineering Research Center, Next
Strong interaction of hadrons in quark cluster model
Directory of Open Access Journals (Sweden)
Arezu Jahanshir
2015-09-01
Full Text Available Theoretical information on hadron interactions based on the theory of multiple scattering processes is described. Multi-particle reactions on hadron targets are attracting great attention nowadays. To survey the strong interaction of jet particles with quarks inside hadrons (baryons, mesons, exotic baryons (penta-quarks), exotic mesons (tetra-quarks)), we can use the high energy approximation (the eikonal or Glauber approximation theory), which is well known in nuclear physics. This approximation describes the collision and interaction of jet particles with quarks and the scattering from multi-focus hadrons, like the diffraction phenomenon in optics. Glauber multiple scattering theory may be applied to analyze elastic and inelastic collisions of hadrons at high energies. In an elastic collision, the scattering amplitude equals the total sum of the multiple collisions inside the hadrons. It is possible to express the Glauber multiple scattering factor as a mathematical series, in which each element gives the number of scatterings occurring inside the hadron. Determination of the scattering amplitude by the high energy approximation depends on the chosen incoming wave function of the projectile particle and the outgoing wave function from the target nucleus; it is therefore not so hard to determine the scattering amplitude. The main purpose of this paper is to show how to determine a mathematical formula for the differential cross section of jet particles at high energies on hadrons in the cluster model (qq, qq: quarkonium-quarkonium cluster).
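The multiple-scattering series sketched in this abstract has a standard closed form in Glauber theory. The following display is the generic textbook form (the symbols are not taken from the paper itself): the profile function Γ of the composite target is built from the profile functions γ_j of its constituents at transverse positions s_j, and expanding the product generates the single-, double-, ... scattering terms of the series.

```latex
F(\mathbf{q}) \;=\; \frac{ik}{2\pi} \int d^2b \; e^{i\mathbf{q}\cdot\mathbf{b}} \, \Gamma(\mathbf{b}),
\qquad
\Gamma(\mathbf{b}) \;=\; 1 - \prod_{j=1}^{N} \Bigl[\, 1 - \gamma_j(\mathbf{b} - \mathbf{s}_j) \,\Bigr],
\qquad
\frac{d\sigma}{d\Omega} \;=\; \bigl| F(\mathbf{q}) \bigr|^2 .
```

Each term obtained by expanding the product corresponds to a fixed number of scatterings inside the hadron, which is the series of "elements" the abstract refers to.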
Cosmic Constraints to wCDM Model from Strong Gravitational Lensing
An, Jie; Xu, Lixin
2016-01-01
In this paper, we study the cosmic constraint to the $w$CDM model via $118$ strong gravitational lensing systems which are compiled from the SLACS, BELLS, LSD and SL2S surveys, where the ratio between two angular diameter distances $D^{obs} = D_A(z_l,z_s)/D_A(0,z_s)$ is taken as a cosmic observable. To obtain this ratio, we adopt two strong lensing models: one is the singular isothermal sphere (SIS) model, the other is the power-law density profile (PLP) model. Via the Markov Chain Monte Carlo method, the posterior distribution of the cosmological model parameter space is obtained. The results show that the cosmological model parameters are not sensitive to the parameterized forms of the power-law index $\gamma$. Furthermore, the PLP model gives a relatively tighter constraint on the cosmological parameters than the SIS model. The predicted value of $\Omega_m=0.31^{+0.44}_{-0.24}$ from the SIS model is compatible with that obtained by {\it Planck}2015: $\Omega_{m}=0.313\pm0.013$. However, the value of $\Omega_m=0...
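The distance ratio used as the observable above can be computed directly from the Friedmann equation. The sketch below evaluates $D_A(z_l,z_s)/D_A(0,z_s)$ for a flat $w$CDM cosmology; the redshifts and parameter values are illustrative assumptions, not the paper's data (note that $H_0$ cancels in the ratio, which is what makes it a useful lensing observable).

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s

def E(z, om, w):
    # Dimensionless Hubble rate for a flat wCDM cosmology
    return np.sqrt(om * (1 + z)**3 + (1 - om) * (1 + z)**(3 * (1 + w)))

def comoving_distance(z1, z2, om, w, h0=70.0):
    # Comoving distance between z1 < z2 in a flat universe, in Mpc
    integral, _ = quad(lambda z: 1.0 / E(z, om, w), z1, z2)
    return C_KM_S / h0 * integral

def distance_ratio(zl, zs, om, w):
    # D_A(z_l, z_s) / D_A(0, z_s); the (1+z_s) factors and H0 cancel
    d_ls = comoving_distance(zl, zs, om, w) / (1 + zs)
    d_s = comoving_distance(0.0, zs, om, w) / (1 + zs)
    return d_ls / d_s

# Example: a lens at z=0.3 with a source at z=1.0, Omega_m = 0.31, w = -1
ratio = distance_ratio(0.3, 1.0, 0.31, -1.0)
```

For the SIS lens model this ratio is tied to observables through the Einstein radius, $\theta_E = 4\pi(\sigma/c)^2 D_{ls}/D_s$, which is how the surveys' velocity dispersions enter the constraint.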
Long-term predictability of regions and dates of strong earthquakes
Kubyshen, Alexander; Doda, Leonid; Shopin, Sergey
2016-04-01
Results on the long-term predictability of strong earthquakes are discussed. It is shown that the dates of earthquakes with M>5.5 could be determined several months in advance of the event. The magnitude and region of an approaching earthquake could be specified within a month before the event. The number of M6+ earthquakes expected to occur during the analyzed year is determined using a special sequence diagram of seismic activity for the century time frame. This analysis can be performed 15-20 years in advance. The data are verified by a monthly sequence diagram of seismic activity. The number of strong earthquakes expected to occur in the analyzed month is determined by several methods having different prediction horizons. Days of potential earthquakes with M5.5+ are determined using astronomical data. Earthquakes occur on days of oppositions of Solar System planets (arranged in a single line), and the strongest earthquakes occur when the vector "Sun-Solar System barycenter" lies in the ecliptic plane. Details of this astronomical multivariate indicator still require further research, but its practical significance is confirmed by practice. Another empirical indicator of an approaching M6+ earthquake is a synchronous variation of meteorological parameters: an abrupt decrease of the minimal daily temperature, an increase of relative humidity, and an abrupt change of atmospheric pressure (RAMES method). The time difference between the predicted and actual date is no more than one day. This indicator is registered 104 days before the earthquake, so it was called Harmonic 104, or H-104. This fact looks paradoxical, but the works of A. Sytinskiy and V. Bokov on the correlation of global atmospheric circulation and seismic events give a physical basis for this empirical fact. Also, 104 days is a quarter of a Chandler period, so this fact gives insight on the correlation between the anomalies of Earth orientation
Energy Technology Data Exchange (ETDEWEB)
Kaneda, Y.; Ejiri, J. [Obayashi Corp., Tokyo (Japan)
1996-10-01
This paper describes simulation results of strong acceleration motion with varying uncertain fault parameters, mainly for a fault model of the Hyogo-ken Nanbu earthquake. Based on the fault parameters, the strong acceleration motion was simulated using the radiation patterns and the breaking time difference of composite faults as parameters. A statistical waveform composition method was used for the simulation. For the theoretical radiation patterns, directivity depending on the strike of the faults was emphasized, and the maximum acceleration was more than 220 gal. In contrast, for the homogeneous radiation patterns, the maximum accelerations were isotropically distributed around the fault as a center. For variations in the maximum acceleration and the predominant frequency due to the breaking time difference of three faults, the maximum/minimum ratio of the response spectral value was about 1.7. From the viewpoint of seismic disaster prevention, underground structures including potential faults and inhomogeneous properties can be grasped using this simulation. The significance of predicting strong acceleration motion was also demonstrated through this simulation using uncertain factors, such as the breaking time of composite faults, as parameters. 4 refs., 4 figs., 1 tab.
Seasonal Predictability in a Model Atmosphere.
Lin, Hai
2001-07-01
The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.
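The climate-noise comparison described above can be illustrated with a small synthetic test in the spirit of Madden's method: compare the observed variance of 90-day means against the variance expected purely from the day-to-day autocorrelation. The data here are an AR(1) stand-in, so the excess ratio should come out near one; all numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_seasons = 90, 200
phi = 0.7                                  # assumed daily lag-1 autocorrelation

# Generate a daily AR(1) series spanning all seasons
x = np.zeros(n_days * n_seasons)
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Observed variance of the 90-day (seasonal) means
seasonal_means = x.reshape(n_seasons, n_days).mean(axis=1)
var_means = seasonal_means.var(ddof=1)

# Climate-noise estimate: variance of an n-day mean implied by the
# daily variance and the AR(1) autocorrelation function
var_daily = x.var(ddof=1)
lags = np.arange(1, n_days)
r = phi ** lags
noise_var = var_daily / n_days * (1 + 2 * np.sum((1 - lags / n_days) * r))

# Ratio ~1 means the seasonal-mean variance is pure climate noise;
# values well above 1 would suggest potential predictability
excess = var_means / noise_var
```

In the paper's setting, `excess` significantly greater than one in the polar regions is what motivates the ensemble forecast experiments.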
Moermond, C.T.A.; Traas, T.P.; Roessink, I.; Veltman, K.; Hendriks, A.J.; Koelmans, A.A.
2007-01-01
The predictive power of bioaccumulation models may be limited when they do not account for strong sorption of organic contaminants to carbonaceous materials (CM) such as black carbon, and when they do not include metabolic transformation. We tested a food web accumulation model, including sorption t
Predictive In Vivo Models for Oncology.
Behrens, Diana; Rolff, Jana; Hoffmann, Jens
2016-01-01
Experimental oncology research and preclinical drug development both substantially require specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To be able to cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome current limitations posed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.
Statistical Seasonal Sea Surface based Prediction Model
Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima
2014-05-01
The interannual variability of sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric and health-related parameters influenced by sea surface temperature as a defining factor of variability.
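Maximum Covariance Analysis, one of the two techniques named above, reduces to a singular value decomposition of the cross-covariance matrix between predictor and predictand anomaly fields. The sketch below shows the core computation on synthetic data; the field sizes and variable names are illustrative assumptions, not S4CAST internals.

```python
import numpy as np

def mca(X, Y, n_modes=2):
    """Maximum Covariance Analysis via SVD of the cross-covariance matrix.

    X: (time, space_x) predictor anomalies (e.g. SST)
    Y: (time, space_y) predictand anomalies (e.g. rainfall)
    Returns the leading spatial patterns and their expansion coefficients.
    """
    nt = X.shape[0]
    Xa = X - X.mean(axis=0)            # remove the time mean
    Ya = Y - Y.mean(axis=0)
    C = Xa.T @ Ya / (nt - 1)           # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    U, V = U[:, :n_modes], Vt[:n_modes].T
    # Expansion coefficients: project the anomalies onto each pattern
    return U, V, Xa @ U, Ya @ V

# Synthetic example: 120 months, 50 SST grid points, 30 rainfall stations
rng = np.random.default_rng(0)
sst = rng.standard_normal((120, 50))
rain = rng.standard_normal((120, 30))
Ux, Vy, a, b = mca(sst, rain)
```

The leading pair of expansion coefficients (`a[:, 0]`, `b[:, 0]`) maximizes the covariance between the two fields, which is what makes the SST mode usable as a statistical predictor of rainfall.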
Study of Offset Collisions and Beam Adjustment in the LHC Using a Strong-Strong Simulation Model
Muratori, B
2002-01-01
The bunches of the two opposing beams in the LHC do not always collide head-on. The beam-beam effects cause a small, unavoidable separation under nominal operational conditions. During the beam adjustment, and when the beams are brought into collision, the beams are separated by a significant fraction of the beam size. A result of small beam separation can be the excitation of coherent dipole oscillations or an emittance increase. These two effects are studied using a strong-strong multi-particle simulation model. The aim is to identify possible limitations and to find procedures which minimise possible detrimental effects.
Deane, R P; Heywood, I
2015-01-01
Strong gravitational lensing provides some of the deepest views of the Universe, enabling studies of high-redshift galaxies that, without the lensing phenomenon, would only be possible with next-generation facilities. To date, 21 cm radio emission from neutral hydrogen has only been detected directly out to z~0.2, limited by the sensitivity and instantaneous bandwidth of current radio telescopes. We discuss how current and future radio interferometers such as the Square Kilometre Array (SKA) will detect lensed HI emission in individual galaxies at high redshift. Our calculations rely on a semi-analytic galaxy simulation with realistic HI disks (by size, density profile and rotation), in a cosmological context, combined with general relativistic ray tracing. Wide-field, blind HI surveys with the SKA are predicted to be efficient at discovering lensed HI systems, increasingly so at z > 2. This will be enabled by the combination of the magnification boosts, the steepness of the HI luminosity function at the high-mass end, and t...
STRAP Is a Strong Predictive Marker of Adjuvant Chemotherapy Benefit in Colorectal Cancer
Directory of Open Access Journals (Sweden)
Martin Buess
2004-11-01
Full Text Available BACKGROUND: Molecular predictors for the effectiveness of adjuvant chemotherapy in colorectal cancer are of considerable clinical interest. To this aim, we analyzed the serine threonine receptor-associated protein (STRAP), an inhibitor of TGF-β signaling, with regard to prognosis and prediction of adjuvant 5-FU chemotherapy benefit. METHODS: The gene copy status of STRAP was determined using quantitative real-time polymerase chain reaction in 166 colorectal tumor biopsies, which had been collected from a randomized multicenter trial of 5-fluorouracil (5-FU)/mitomycin C (MMC) adjuvant chemotherapy of the Swiss Group for Clinical Cancer Research (SAKK). RESULTS: Amplification of STRAP was found in 22.8% of the tumors. When left without adjuvant chemotherapy, patients bearing tumors with a STRAP amplification had a significantly better prognosis (hazard ratio for death: 0.26; P = .004). Interestingly, these patients, when receiving adjuvant treatment, had a worse survival (hazard ratio for death: 3.48; P = .019) than without chemotherapy, whereas patients carrying tumors with diploidy or deletion of STRAP benefited from the treatment (hazard ratio for death: 0.44; P = .052). This suggests the amplification of STRAP as a strong predictor of an unfavorable effect of 5-FU-based adjuvant chemotherapy. CONCLUSION: If confirmed, the STRAP gene copy status might provide a parameter to decide about the use of 5-FU-based adjuvant chemotherapy.
PREDICT : model for prediction of survival in localized prostate cancer
Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco
2016-01-01
Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I
Effective quantum Monte Carlo algorithm for modeling strongly correlated systems
Kashurnikov, V. A.; Krasavin, A. V.
2007-01-01
A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description
Exponential Decay of Correlations for the Strongly Coupled Toom Model
de Maere, Augustin
2011-01-01
We prove that, for the two-dimensional probabilistic cellular automaton of Toom in the low-noise regime, there are two classes of initial measures, each of which converges exponentially fast toward one of the two natural invariant measures. We also show that these two invariant measures have exponential decay of correlations in space and in time and are strongly mixing.
WHY WE CANNOT PREDICT STRONG EARTHQUAKES IN THE EARTH’S CRUST
Directory of Open Access Journals (Sweden)
Iosif L. Gufeld
2015-09-01
Full Text Available In the past decade, earthquake disasters caused multiple fatalities and significant economic losses and challenged modern civilization. The well-known achievements and growing power of civilization fall short when facing Nature. The question arises: what hinders solving the problem of earthquake prediction, while long-term and continuous seismic monitoring systems are in place in many regions of the world? For instance, there was no forecast of the Great Japan Earthquake of March 11, 2011, despite the fact that monitoring conditions for its prediction were unique. Its focal zone was 100-200 km away from the monitoring network installed in the area of permanent seismic hazard, which is subject to non-stop and long-term seismic monitoring. A lesson should be learned from our common fiasco in forecasting, taking into account research results obtained during the past 50-60 years. It is now evident that we failed to identify precursors of the earthquakes. Prior to the earthquake occurrence, the observed local anomalies of various fields reflected other processes that were mistakenly viewed as processes of preparation for large-scale faulting. For many years, geotectonic situations were analyzed on the basis of the physics of destruction of laboratory specimens, which was applied to lithospheric conditions. Many researchers realize that such an approach is inaccurate. Nonetheless, persistent attempts are being undertaken with the application of modern computation to detect anomalies of various fields which may be interpreted as earthquake precursors. In our opinion, such illusory intentions were smashed by the Great Japan Earthquake (Figure 6). It is also obvious that sufficient attention has not yet been given to fundamental studies of seismic processes. This review presents the authors' opinion concerning the origin of the seismic process and strong earthquakes, being part of the process. The authors realize that a wide discussion is
Institute of Scientific and Technical Information of China (English)
陆耀军; 周力行; 沈熊
2000-01-01
The Reynolds stress transport equation model (DSM) is used to predict the strongly swirling turbulent flows in a liquid-liquid hydrocyclone, and the predictions are compared with LDV measurements. Predictions properly give the flow behavior observed in experiments, such as the Rankine-vortex structure and double peaks near the inlet region in tangential velocity profile, the downward flow near the wall and upward flow near the core in axial velocity profiles. In the inlet or upstream region of the hydrocyclone, the reverse flow near the axis is well predicted, but in the region with smaller cone angle and cylindrical section, there are some discrepancies between the model predictions and the LDV measurements. Predictions show that the pressure is small in the near-axis region and increases to the maximum near the wall. Both predictions and measurements indicate that the turbulence in hydrocyclones is inhomogeneous and anisotropic.
Artificial Neural Network Model for Predicting Compressive
Directory of Open Access Journals (Sweden)
Salim T. Yousif
2013-05-01
Full Text Available Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
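A back-propagation network of the kind described above can be written in a few lines of NumPy. The sketch below trains a one-hidden-layer regressor on synthetic stand-in data (the inputs mimic three mix parameters; the target is an arbitrary smooth function, not the paper's data set, and the architecture is an illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the data set: three mix parameters
# (e.g. w/c ratio, MAS, slump) mapped to a smooth "strength" target
X = rng.uniform(0.3, 0.7, size=(200, 3))
y = 10.0 / X[:, :1] + 2.0 * X[:, 1:2]
y = (y - y.mean()) / y.std()              # standardize the target

# One hidden layer of 8 tanh units, linear output
W1 = 0.5 * rng.standard_normal((3, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(2000):                      # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)               # forward pass: hidden layer
    err = h @ W2 + b2 - y                  # prediction error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)       # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

A parametric study like the paper's then amounts to perturbing one input column at a time and observing the change in the trained network's output.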
On the Standard Model and Strongly-Correlated Electron Systems
Bashford, J D
2003-01-01
Highlighting certain similarities between the two-dimensional Luttinger liquid model and the effective fermionic theory obtained from the hypercharge Lagrangian, we argue the case for a new type of Standard Model extension.
Predictive Modeling of Cardiac Ischemia
Anderson, Gary T.
1996-01-01
The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.
Model predictive control of MSMPR crystallizers
Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc
2005-02-01
A multi-input-multi-output (MIMO) control problem of isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers that forms a dynamical system is used, the state of which is represented by the vector of six variables: the first four leading moments of the crystal size, solute concentration and solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean size of the grain; the crystal size-distribution and the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability as well as the coupling between the inputs and the outputs was analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of the model reduction, a third-order model was found quite adequate for the model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be nearly separately controlled by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and the oscillations become smaller. The results of the controlling study have shown that the linear MPC is an adaptable and feasible controller of continuous crystallizers.
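The receding-horizon idea behind the MPC controller described above can be sketched with a reduced linear model: at each step, solve for the input sequence that best tracks a reference over a finite horizon, apply only the first input, and repeat. The matrices below are an illustrative two-state stand-in, not the third-order crystallizer model, and the cost is unconstrained least squares for brevity.

```python
import numpy as np

# Illustrative reduced linear model x_{k+1} = A x_k + B u_k
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.5]])
H = 10                                     # prediction horizon

def mpc_step(x0, x_ref):
    """One receding-horizon step: least-squares tracking over the horizon."""
    n, m = A.shape[0], B.shape[1]
    # Stacked prediction: X = F x0 + G U over the horizon
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
    G = np.zeros((H * n, H * m))
    for i in range(H):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    ref = np.tile(x_ref, H)
    U, *_ = np.linalg.lstsq(G, ref - F @ x0, rcond=None)
    return U[:m]                           # apply only the first input

# Drive the state toward a reachable setpoint [1, 1]
x = np.array([0.0, 0.0])
for _ in range(30):
    u = mpc_step(x, np.array([1.0, 1.0]))
    x = A @ x + B @ u
```

In the paper's setting, the two manipulated variables (residence time and inlet solute concentration) take the place of `u`, and input/state constraints would turn the least-squares step into a quadratic program.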
State resolved vibrational relaxation modeling for strongly nonequilibrium flows
Boyd, Iain D.; Josyula, Eswar
2011-05-01
Vibrational relaxation is an important physical process in hypersonic flows. Activation of the vibrational mode affects the fundamental thermodynamic properties and finite rate relaxation can reduce the degree of dissociation of a gas. Low fidelity models of vibrational activation employ a relaxation time to capture the process at a macroscopic level. High fidelity, state-resolved models have been developed for use in continuum gas dynamics simulations based on computational fluid dynamics (CFD). By comparison, such models are not as common for use with the direct simulation Monte Carlo (DSMC) method. In this study, a high fidelity, state-resolved vibrational relaxation model is developed for the DSMC technique. The model is based on the forced harmonic oscillator approach in which multi-quantum transitions may become dominant at high temperature. Results obtained for integrated rate coefficients from the DSMC model are consistent with the corresponding CFD model. Comparison of relaxation results obtained with the high-fidelity DSMC model shows significantly less excitation of upper vibrational levels in comparison to the standard, lower fidelity DSMC vibrational relaxation model. Application of the new DSMC model to a Mach 7 normal shock wave in carbon monoxide provides better agreement with experimental measurements than the standard DSMC relaxation model.
A Strongly Grounded Stable Model Semantics for Full Propositional Language
2012-01-01
Answer set programming is one of the most praised frameworks for declarative programming in general and non-monotonic reasoning in particular. There have been many efforts to extend stable model semantics so that answer set programs can use a more extensive syntax. To this end, the community of non-monotonic reasoning has introduced extensions such as equilibrium models and FLP semantics. However, both of these extensions suffer from two problems: intended models according to such extensi...
Numerical weather prediction model tuning via ensemble prediction system
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
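The two-step loop described in (i) and (ii) can be caricatured in a few lines: draw ensemble parameters from a proposal, weight them by forecast likelihood, and update the proposal. The "model" below is a trivial scalar stand-in (not Lorenz-95), and all numbers are illustrative assumptions, so this shows only the shape of the algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)
true_param = 1.7                          # hypothetical "true" tunable parameter
OBS_SIGMA = 0.1

def model(theta, forcing):
    # Trivial stand-in forecast model: output scales linearly with the parameter
    return theta * forcing

mean, var = 0.0, 4.0                      # initial proposal distribution N(mean, var)
for cycle in range(50):                   # forecast/verification cycles
    forcing = rng.standard_normal(20)
    obs = model(true_param, forcing) + OBS_SIGMA * rng.standard_normal(20)

    # (i) each of 16 ensemble members runs with its own parameter draw
    thetas = mean + np.sqrt(var) * rng.standard_normal(16)

    # (ii) weight members by Gaussian log-likelihood against the observations
    ll = np.array([-0.5 * np.sum((model(t, forcing) - obs) ** 2) / OBS_SIGMA**2
                   for t in thetas])
    w = np.exp(ll - ll.max())
    w /= w.sum()

    # Importance-weighted update of the proposal (variance floored to keep
    # exploring, a common trick to avoid premature collapse)
    mean = float(np.sum(w * thetas))
    var = max(float(np.sum(w * (thetas - mean) ** 2)), 1e-2)
```

After the loop, `mean` should sit near the true parameter, which is the sense in which EPPES "detects the unknown and wrongly specified parameter values".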
Strongly correlated electron systems: Photoemission and the single-impurity model
Energy Technology Data Exchange (ETDEWEB)
Arko, A.J.; Joyce, J.J.; Andrews, A.B.; Thompson, J.D.; Smith, J.L.; Mandrus, D.; Hundley, M.F.; Cornelius, A.L. [Los Alamos National Laboratories, Los Alamos, New Mexico 87545 (United States); Moshopoulou, E.; Fisk, Z. [NHMFL, Florida State University, Tallahassee, Florida 32306-4005 (United States); Canfield, P.C. [Iowa State University/Ames Laboratory, Ames, Iowa 50011 (United States); Menovsky, A. [Natuurkundig Laboratorium, University of Amsterdam, Amsterdam (The Netherlands)
1997-09-01
We present high-resolution, angle-resolved photoemission spectra for Ce-based and U-based strongly correlated electron systems. The experimental results are irreconcilable with the long-accepted single-impurity model, which predicts a narrow singlet state, in close proximity to the Fermi energy, whose linewidth and binding energy are constants determined by a characteristic temperature T_K for the material. We report that both 4f and 5f photoemission features disperse with crystal momentum at temperatures both above and below T_K; these characteristics are consistent with narrow bands but not with the single-impurity model. Inclusion of the lattice must be considered at all temperatures. Variants of the periodic Anderson model are consistent with this approach. © 1997 The American Physical Society
Flipped version of the supersymmetric strongly coupled preon model
Fajfer, S.; Mileković, M.; Tadić, D.
1989-12-01
In the supersymmetric SU(5) [SUSY SU(5)] composite model (which was described in an earlier paper) the fermion mass terms can be easily constructed. The SUSY SU(5)⊗U(1), i.e., flipped, composite model possesses a completely analogous composite-particle spectrum. However, in that model one cannot construct a renormalizable superpotential which would generate fermion mass terms. This contrasts with the standard noncomposite grand unified theories (GUTs), in which both the Georgi-Glashow electrical charge embedding and its flipped counterpart lead to renormalizable theories.
Return Predictability, Model Uncertainty, and Robust Investment
DEFF Research Database (Denmark)
Lukas, Manuel
Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...
Lim, Eun-Pa; Hendon, Harry H.
2015-05-01
In 2010 eastern Australia experienced its wettest spring on record, which has been largely attributed to a strong La Niña event in conjunction with an extraordinary positive excursion of the Southern Annular Mode (SAM). La Niña impacts would be expected to have been predictable months in advance, but the predictability of the occurrence of the strong SAM is less well known. We explore the predictability of the strong SAM in austral spring 2010 and its contribution to the extreme wet conditions in eastern Australia, using the Australian Bureau of Meteorology's dynamical seasonal forecast system (POAMA2). Seasonal forecasts from POAMA2 were skilful in predicting the wet conditions over eastern Australia at up to 2 month lead time as a result of a good prediction of the impacts of the ongoing La Niña and the development of a strong positive excursion of the SAM. Forecast sensitivity experiments on initial conditions demonstrate that (1) the strong La Niña was a necessary condition for promoting the positive phase of the SAM (high SAM) and the anomalous wet conditions over eastern Australia during October to November 2010; but (2) internal atmospheric processes were important for producing the moderate strength of the high SAM in September 2010 and for amplifying the strength of the high SAM forced by La Niña in October to November 2010; and (3) the strong high SAM was an important factor in the extremity of the Australian rainfall in late spring 2010. Therefore, high quality atmosphere and ocean initial conditions were both essential for the successful prediction of the extreme climate during austral spring 2010.
Strong scale dependent bispectrum in the Starobinsky model of inflation
Arroja, Frederico
2012-01-01
We compute analytically the dominant contribution to the tree-level bispectrum in the Starobinsky model of inflation. In this model, the potential is vacuum energy dominated but contains a subdominant linear term which changes the slope abruptly at a point. We show that on large scales compared with the transition scale $k_0$ and in the equilateral limit, the analogue of the non-linearity parameter scales as $(k/k_0)^2$; that is, its amplitude decays for larger and larger scales until it becomes subdominant with respect to the usual slow-roll suppressed corrections. On small scales we show that the non-linearity parameter oscillates with angular frequency given by $3/k_0$ and its amplitude grows linearly towards smaller scales and can be large depending on the model parameters. We also compare our results with previous results in the literature.
A strong viscous–inviscid interaction model for rotating airfoils
DEFF Research Database (Denmark)
Ramos García, Néstor; Sørensen, Jens Nørkær; Shen, Wen Zhong
2014-01-01
…the viscous and inviscid parts. The inviscid part is modeled by a 2D panel method, and the viscous part is modeled by solving the integral form of the laminar and turbulent boundary-layer equations with extension for 3D rotational effects. Laminar-to-turbulent transition is either forced by employing… …version, a parametric study on rotational effects induced by the Coriolis and centrifugal forces in the boundary-layer equations shows that the effects of rotation are to decrease the growth of the boundary layer and delay the onset of separation, hence increasing the lift coefficient slightly while…
Morandi, Alessandro; Davis, Daniel; Fick, Donna M.; Turco, Renato; Boustani, Malaz; Lucchi, Elena; Guerini, Fabio; Morghen, Sara; Torpilliesi, Tiziana; Gentile, Simona; MacLullich, Alasdair M.; Trabucchi, Marco; Bellelli, Giuseppe
2014-01-01
Objective Delirium superimposed on dementia (DSD) is common in many settings. Nonetheless, little is known about the association between DSD and clinical outcomes. The study aim was to evaluate the association between DSD and related adverse outcomes at discharge from rehabilitation and at 1-year follow-up in older inpatients undergoing rehabilitation. Design Prospective cohort study. Setting Hospital rehabilitation unit. Participants A total of 2642 patients aged 65 years or older admitted between January 2002 and December 2006. Measurements Dementia predating rehabilitation admission was detected by DSM-III-R criteria. Delirium was diagnosed with the DSM-IV-TR. The primary outcome was that of walking dependence (Barthel Index mobility subitem score of <15) captured as a trajectory from discharge to 1-year follow-up. A mixed-effects multivariate logistic regression model was used to analyze the association between DSD and outcome, after adjusting for relevant covariates. Secondary outcomes were institutionalization and mortality at 1-year follow-up, and logistic regression models were used to analyze these associations. Results The median age was 77 years (interquartile range: 71–83). The prevalence of DSD was 8%, and the prevalence of delirium and dementia alone were 4% and 22%, respectively. DSD at admission was found to be significantly associated with almost a 15-fold increase in the odds of walking dependence (odds ratio [OR] 15.5; 95% Confidence Interval [CI] 5.6–42.7; P < .01). DSD was also significantly associated with a fivefold increase in the risk of institutionalization (OR 5.0; 95% CI 2.8–8.9; P < .01) and an almost twofold increase in the risk of mortality (OR 1.8; 95% CI 1.1–2.8; P = .01). Conclusions DSD is a strong predictor of functional dependence, institutionalization, and mortality in older patients admitted to a rehabilitation setting, suggesting that strategies to detect DSD routinely in practice should be developed and DSD should
Strong Sector in non-minimal SUSY model
Costantini, Antonio
2016-01-01
We investigate the squark sector of a supersymmetric theory with an extended Higgs sector. We give the mass matrices of the stop and sbottom, comparing the Minimal Supersymmetric Standard Model (MSSM) case and the non-minimal case. We discuss the impact of the extra superfields on the decay channels of the stop searched for at the LHC.
Self-consistent Models of Strong Interaction with Chiral Symmetry
Nambu, Y.; Pascual, P.
1963-04-01
Some simple models of (renormalizable) meson-nucleon interaction are examined in which the nucleon mass is entirely due to interaction and the chiral (γ_5) symmetry is "broken" to become a hidden symmetry. It is found that such a scheme is possible provided that a vector meson is introduced as an elementary field. (auth)
Coexistence of two species in a strongly coupled cooperating model
DEFF Research Database (Denmark)
Pedersen, Michael
In this paper, the cooperating two-species Lotka-Volterra model is discussed. We study the existence of solutions to an elliptic system with homogeneous Dirichlet boundary conditions. Our results show that this problem possesses at least one coexistence state if the birth rates are big and self...
Note on the hydrodynamic description of thin nematic films: Strong anchoring model
Lin, Te-Sheng
2013-01-01
We discuss the long-wave hydrodynamic model for a thin film of nematic liquid crystal in the limit of strong anchoring at the free surface and at the substrate. We rigorously clarify how the elastic energy enters the evolution equation for the film thickness in order to provide a solid basis for further investigation: several conflicting models exist in the literature that predict qualitatively different behaviour. We consolidate the various approaches and show that the long-wave model derived through an asymptotic expansion of the full nemato-hydrodynamic equations with consistent boundary conditions agrees with the model one obtains by employing a thermodynamically motivated gradient dynamics formulation based on an underlying free energy functional. As a result, we find that in the case of strong anchoring the elastic distortion energy is always stabilising. To support the discussion in the main part of the paper, an appendix gives the full derivation of the evolution equation for the film thickness via asymptotic expansion. © 2013 AIP Publishing LLC.
Predictive Model Assessment for Count Data
2007-09-05
…critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany, 1998–2002. We consider a recent suggestion by Baker and…
From strong to weak coupling in holographic models of thermalization
Energy Technology Data Exchange (ETDEWEB)
Grozdanov, Sašo; Kaplis, Nikolaos [Instituut-Lorentz for Theoretical Physics, Leiden University,Niels Bohrweg 2, Leiden 2333 CA (Netherlands); Starinets, Andrei O. [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom)
2016-07-29
We investigate the analytic structure of thermal energy-momentum tensor correlators at large but finite coupling in quantum field theories with gravity duals. We compute corrections to the quasinormal spectra of black branes due to the presence of higher derivative R² and R⁴ terms in the action, focusing on the dual to N=4 SYM theory and Gauss-Bonnet gravity. We observe the appearance of new poles in the complex frequency plane at finite coupling. The new poles interfere with hydrodynamic poles of the correlators, leading to the breakdown of the hydrodynamic description at a coupling-dependent critical value of the wave vector. The dependence of the critical wave vector on the coupling implies that the range of validity of the hydrodynamic description increases monotonically with the coupling. The behavior of the quasinormal spectrum at large but finite coupling may be contrasted with the known properties of the hierarchy of relaxation times determined by the spectrum of a linearized kinetic operator at weak coupling. We find that the ratio of a transport coefficient such as viscosity to the relaxation time determined by the fundamental non-hydrodynamic quasinormal frequency changes rapidly in the vicinity of infinite coupling but flattens out for weaker coupling, suggesting an extrapolation from strong coupling to the kinetic theory result. We note that the behavior of the quasinormal spectrum is qualitatively different depending on whether the ratio of shear viscosity to entropy density is greater or less than the universal, infinite coupling value of ℏ/4πk_B. In the former case, the density of poles increases, indicating a formation of branch cuts in the weak coupling limit, and the spectral function shows the appearance of narrow peaks. We also discuss the relation of the viscosity-entropy ratio to conjectured bounds on relaxation time in quantum systems.
Strong field coherent control of molecular torsions—Analytical models
Energy Technology Data Exchange (ETDEWEB)
Ashwell, Benjamin A.; Ramakrishna, S.; Seideman, Tamar, E-mail: t-seideman@northwestern.edu [Department of Chemistry, Northwestern University, Evanston, Illinois 60208 (United States)
2015-08-14
We introduce analytical models of torsional alignment by moderately intense laser pulses that are applicable to the limiting cases of the torsional barrier heights. Using these models, we explore in detail the role that the laser intensity and pulse duration play in coherent torsional dynamics, addressing both experimental and theoretical concerns. Our results suggest strategies for minimizing the risk of off-resonant ionization, noting the qualitative differences between the case of torsional alignment subject to a field-free torsional barrier and that of torsional alignment of a barrier-less system (equivalent to a 2D rigid rotor). We also investigate several interesting torsional phenomena, including the onset of impulsive alignment of torsions, field-driven oscillations in quantum number space, and the disappearance of an alignment upper bound observed for a rigid rotor in the impulsive torsional alignment limit.
The strong interactions beyond the standard model of particle physics
Energy Technology Data Exchange (ETDEWEB)
Bergner, Georg [Muenster Univ. (Germany). Inst. for Theoretical Physics
2016-11-01
SuperMUC is one of the most suitable high-performance machines for our project, since it offers high performance and flexibility across different applications. This is of particular importance for investigations of new theories, where on the one hand the parameters and systematic uncertainties have to be estimated in smaller simulations, and on the other hand a large computational performance is needed for the estimation of the scale at zero temperature. Our project is only a first investigation of new physics beyond the standard model of particle physics, and we hope to proceed with our studies towards more involved Technicolour candidates, supersymmetric QCD, and extended supersymmetry.
Modelling and prediction of non-stationary optical turbulence behaviour
Doelman, Niek; Osborn, James
2016-07-01
There is a strong need to model the temporal fluctuations in turbulence parameters, for instance for scheduling, simulation and prediction purposes. This paper aims at modelling the dynamic behaviour of the turbulence coherence length r0, utilising measurement data from the Stereo-SCIDAR instrument installed at the Isaac Newton Telescope at La Palma. Based on an estimate of the power spectral density function, a low order stochastic model to capture the temporal variability of r0 is proposed. The impact of this type of stochastic model on the prediction of the coherence length behaviour is shown.
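A minimal sketch of the kind of low-order stochastic model referred to in this abstract: this is not the authors' fitted model, but an illustrative first-order autoregressive (AR(1)) process for the fluctuations of log r0, with all coefficients assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed AR(1) parameters for fluctuations of log(r0); values are illustrative.
phi, sigma, mean_log_r0 = 0.95, 0.05, np.log(0.12)   # r0 in metres

n = 5000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
r0 = np.exp(mean_log_r0 + x)                          # simulated coherence length

# Refitting the AR coefficient from the series gives the one-step predictor
# r0_hat[t+1] = exp(mean_log_r0 + phi_hat * (log(r0[t]) - mean_log_r0)).
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
```

An AR(1) fit of this kind corresponds to a power spectral density that is flat at low frequencies and rolls off at high frequencies, which is why a low-order stochastic model can be estimated directly from an empirical PSD and then used for prediction.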
Paraficz, D; Richard, J; Morandi, A; Limousin, M; Jullo, E
2012-01-01
We present a new detailed parametric strong lensing mass reconstruction of the Bullet Cluster (1E 0657-56) at z=0.296, based on new WFC3 and ACS HST imaging and VLT/FORS2 spectroscopy. The strong lensing constraints have undergone deep revision: there are 14 (6 new and 8 previously known) multiply imaged systems, of which 3 have spectroscopically confirmed redshifts (including 2 newly measured). The reconstructed mass distribution explicitly includes, for the first time, the combination of 3 mass components: i) the intra-cluster gas mass derived from X-ray observation, ii) the cluster galaxies modeled by their Fundamental Plane (elliptical) and Tully-Fisher (spiral) scaling relations, and iii) dark matter. The best model has an average rms value of 0.158" between the predicted and measured image positions for the 14 multiple images considered. The derived mass model confirms the spatial offset between the X-ray gas and dark matter peaks. The galaxy halos to total mass fraction is found to be f_s=11+/-5% for a total m...
Nonlinear chaotic model for predicting storm surges
Directory of Open Access Journals (Sweden)
M. Siek
2010-09-01
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-steps prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
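The core of the approach in this abstract, delay-embedding the observable and predicting from dynamical neighbours in the reconstructed phase space, can be sketched in a few lines. The zeroth-order local model (average of the neighbours' successors) and the noiseless sine standing in for a surge record are illustrative simplifications of the paper's adaptive local models:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct the phase space of a scalar series by time-delay embedding."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

def local_predict(series, dim=3, tau=1, k=5):
    """Predict the next value from the k dynamical neighbours of the current
    state in the reconstructed phase space (zeroth-order local model)."""
    X = delay_embed(series, dim, tau)
    targets = series[(dim - 1) * tau + 1:]   # value following each embedded state
    states, query = X[:-1], X[-1]
    d = np.linalg.norm(states - query, axis=1)
    idx = np.argsort(d)[:k]
    return targets[idx].mean()

# Toy periodic signal standing in for a surge time series.
t = np.arange(500)
s = np.sin(0.2 * t)
pred = local_predict(s, dim=3, tau=1, k=5)
```

Multi-step prediction, as compared in the paper, would simply iterate this one-step predictor by appending each prediction to the series before embedding again.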
Nonlinear chaotic model for predicting storm surges
Siek, M.; Solomatine, D.P.
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.
EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH
Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.
2014-01-01
The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...
South African seasonal rainfall prediction performance by a coupled ocean-atmosphere model
CSIR Research Space (South Africa)
Landman, WA
2010-12-01
Evidence is presented that coupled ocean-atmosphere models can already outscore computationally less expensive atmospheric models. However, if the atmospheric models are forced with highly skillful SST predictions, they may still be a very strong...
How to Establish Clinical Prediction Models
Directory of Open Access Journals (Sweden)
Yong-ho Lee
2016-03-01
A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
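A minimal end-to-end sketch of the generation and validation steps listed in this review, using a hypothetical two-predictor cohort and a hand-rolled logistic model; the predictors, effect sizes, and split are illustrative assumptions, not from any real clinical dataset:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Model generation: fit a logistic prediction model by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def c_statistic(score, y):
    """Model evaluation: discrimination as the c-statistic (AUC)."""
    pos, neg = score[y == 1], score[y == 0]
    return float(np.mean(pos[:, None] > neg[None, :]))

# Hypothetical cohort: an intercept column plus two continuous risk factors.
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n), rng.standard_normal(n)])
eta = -1.0 + 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(float)

# Dataset selection: split into a development set and a validation set.
Xd, yd, Xv, yv = X[:1500], y[:1500], X[1500:], y[1500:]
w = fit_logistic(Xd, yd)
auc = c_statistic(Xv @ w, yv)
```

Evaluating the c-statistic on held-out data rather than the development set mirrors the review's emphasis on vigorous validation before a model is used in practice.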
Comparison of Prediction-Error-Modelling Criteria
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which… is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest… …computational resources. The identification method is suitable for predictive control…
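The Kalman-predictor innovations that such prediction-error criteria are built on can be sketched for a scalar state-space model; the noise variances, candidate grid, and least-squares criterion below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def innovations(y, a, c=1.0, q=0.1, r=0.1):
    """One-step prediction errors (innovations) of the Kalman predictor for
        x[t+1] = a*x[t] + w[t],   y[t] = c*x[t] + v[t]."""
    xhat, p, errs = 0.0, 1.0, []
    for yt in y:
        e = yt - c * xhat                   # one-step prediction error
        s = c * p * c + r
        k = p * c / s                       # Kalman gain
        xhat, p = xhat + k * e, (1.0 - k * c) * p
        xhat, p = a * xhat, a * p * a + q   # time update to the next step
        errs.append(e)
    return np.array(errs)

# Simulate data from the model with a = 0.8.
n, a_true = 2000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true * x[t - 1] + np.sqrt(0.1) * rng.standard_normal()
y = x + np.sqrt(0.1) * rng.standard_normal(n)

# Least-squares prediction-error criterion evaluated over candidate models.
grid = np.linspace(0.5, 0.95, 10)
V = [float(np.mean(innovations(y, a) ** 2)) for a in grid]
a_hat = grid[int(np.argmin(V))]
```

Minimizing the mean squared innovation over the candidate parameter recovers the data-generating dynamics, which is the essence of a least-squares prediction-error method; the maximum-likelihood variant additionally weights the innovations by their predicted variances.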
Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; Welty, Thomas K
2011-02-01
To compare fasting plasma glucose (FPG) and HbA(1c) in identifying and predicting type 2 diabetes in a population with high rates of diabetes. Diabetes was defined as an FPG level ≥ 126 mg/dL or an HbA(1c) level ≥ 6.5%. Data collected from the baseline and second exams (1989-1995) of the Strong Heart Study were used. RESULTS: For cases of diabetes identified by FPG ≥ 126 mg/dL, using HbA(1c) ≥ 6.5% at the initial and 4-year follow-up diabetes screenings (or in identifying incident cases in 4 years) among undiagnosed participants left 46% and 59% of cases of diabetes undetected, respectively, whereas for cases identified by HbA(1c) ≥ 6.5%, using FPG ≥ 126 mg/dL left 11% and 59% unidentified, respectively. Age, waist circumference, urinary albumin-to-creatinine ratio, and baseline FPG and HbA(1c) levels were common significant risk factors for incident diabetes defined by either FPG or HbA(1c); triglyceride levels were significant for diabetes defined by HbA(1c) alone, and blood pressure and sibling history of diabetes were significant for diabetes defined by FPG alone. Using both the baseline FPG and HbA(1c) in diabetes prediction identified more people at risk than using either measure alone. CONCLUSIONS: Among undiagnosed participants, using HbA(1c) alone in initial diabetes screening identifies fewer cases of diabetes than FPG, and using either FPG or HbA(1c) alone cannot effectively identify diabetes in a 4-year periodic successive diabetes screening or incident cases of diabetes in 4 years. Using both criteria may identify more people at risk. The proposed models using the commonly available clinical measures can be applied to assessing the risk of incident diabetes using either criterion.
Dekker, Alain D.; Coppus, Antonia M. W.; Vermeiren, Yannick; Aerts, Tony; van Duijn, Cornelia M.; Kremer, Berry P.; Naude, Pieter J. W.; Van Dam, Debby; De Deyn, Peter P.
2015-01-01
Background: Down syndrome (DS) is the most prevalent genetic cause of intellectual disability. Early-onset Alzheimer's disease (AD) frequently develops in DS and is characterized by progressive memory loss and behavioral and psychological signs and symptoms of dementia (BPSD). Predicting and monitor
Catalytic cracking models developed for predictive control purposes
Directory of Open Access Journals (Sweden)
Dag Ljungqvist
1993-04-01
The paper deals with state-space modeling issues in the context of model-predictive control, with application to catalytic cracking. Emphasis is placed on model establishment, verification and online adjustment. Both the Fluid Catalytic Cracking (FCC) and the Residual Catalytic Cracking (RCC) units are discussed. Catalytic cracking units involve complex interactive processes which are difficult to operate and control in an economically optimal way. The strong nonlinearities of the FCC process mean that the control calculation should be based on a nonlinear model with the relevant constraints included. However, the model can be simple compared to the complexity of the catalytic cracking plant. Model validity is ensured by a robust online model adjustment strategy. Model-predictive control schemes based on linear convolution models have been successfully applied to the supervisory dynamic control of catalytic cracking units, and the control can be further improved by the SSPC scheme.
Case studies in archaeological predictive modelling
Verhagen, Jacobus Wilhelmus Hermanus Philippus
2007-01-01
In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p
Directory of Open Access Journals (Sweden)
B. Croft
2012-01-01
…show that aerosol concentrations and wet deposition predicted in a global model are strongly sensitive to the assumptions made regarding the wet scavenging of aerosols in convective clouds.
Childhood asthma prediction models: a systematic review.
Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup
2015-12-01
Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.
Durach, Maxim; Rusina, Anastasia; Kling, Matthias F.; Stockman, Mark I.
2011-08-01
We predict a dynamic metallization effect where an ultrafast (single-cycle) optical pulse with a ≲1 V/Å field causes plasmonic metal-like behavior of a dielectric film with a few-nm thickness. This manifests itself in plasmonic oscillations of polarization and a significant population of the conduction band evolving on a ∼1 fs time scale. These phenomena are due to a combination of both adiabatic (reversible) and diabatic (for practical purposes irreversible) pathways.
A Predictive Model of Geosynchronous Magnetopause Crossings
Dmitriev, A; Chao, J -K
2013-01-01
We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF) and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series are located in the magnetosheath (so-called MSh intervals) in 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows us to describe both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...
Lefor, Alsn T
2015-01-01
The behavior of strong gravitational lens model software in the analysis of lens models is not necessarily consistent among the various packages available, suggesting that the use of several models may enhance the understanding of the system being studied. Among the publicly available codes, the model input files are heterogeneous, making the creation of multiple models tedious. An enhanced method of creating model files, and a method to easily create multiple models, may increase the number of comparison studies. HydraLens simplifies the creation of model files for four strong gravitational lens model software packages, including Lenstool, Gravlens/Lensmodel, glafic and PixeLens, using a custom designed GUI for each of the four codes that simplifies the entry of the model for each of these codes, obviating the need to consult user manuals to set the values of the many flags and data fields. HydraLens is designed in a modular fashion, which simplifies the addition of other strong gravitational lens codes in th...
Model predictive control classical, robust and stochastic
Kouvaritakis, Basil
2016-01-01
For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...
Models for short term malaria prediction in Sri Lanka
Directory of Open Access Journals (Sweden)
Galappaththy Gawrie NL
2008-05-01
Background: Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods: Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall were assessed for their ability to improve prediction of selected (S)ARIMA models. Results: The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (S)ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion: Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
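The seasonal-differencing idea behind the (S)ARIMA models compared in this study can be sketched with a minimal stand-in: seasonally difference the monthly series, fit an AR(1) to the differences, and undo the differencing, evaluated against the seasonal-naive benchmark. The synthetic case counts and all coefficients are illustrative, not Sri Lankan data:

```python
import numpy as np

rng = np.random.default_rng(5)

def sarima_like_forecast(y, s=12):
    """Minimal stand-in for a seasonal ARIMA forecast: seasonally difference
    the series, fit an AR(1) to the differences, then undo the differencing."""
    d = y[s:] - y[:-s]
    phi = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])
    return y[-s] + phi * d[-1]              # one-month-ahead forecast

# Synthetic monthly case counts: an annual cycle plus autocorrelated noise.
n, s = 240, 12
season = 50.0 + 30.0 * np.sin(2.0 * np.pi * np.arange(n) / s)
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.standard_normal()
cases = season + 5.0 * noise

# Rolling one-month-ahead evaluation over the last 60 months.
err_model, err_naive = [], []
for t in range(n - 60, n):
    err_model.append(abs(sarima_like_forecast(cases[:t], s) - cases[t]))
    err_naive.append(abs(cases[t - s] - cases[t]))
mae_model, mae_naive = float(np.mean(err_model)), float(np.mean(err_naive))
```

On a series like this, the differenced-AR forecast typically has a lower mean absolute error than simply repeating last year's value for the same month, which is the kind of district-by-district comparison the study reports; adding a rainfall covariate would correspond to regressing the differences on lagged rainfall as well.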
Youhanna, Sonia; Platt, Daniel E; Rebeiz, Abdallah; Lauridsen, Michael; Deeb, Mary E; Nasrallah, Antoine; Alam, Samir; Puzantian, Houry; Kabbani, Samer; Ghoul, Melanie; Zreik, Tony G; el Bayeh, Hamid; Abchee, Antoine; Zalloua, Pierre
2010-10-01
Coronary artery disease (CAD) is a multifactorial disease with acquired and inherited components. We investigated the roles of family history and consanguinity on CAD risk and age at diagnosis in 4284 patients. The compounded impact of diabetes, hyperlipidemia, hypertension, smoking, and BMI, which are known CAD risk factors, on CAD risk and age at diagnosis was also explored. CAD was determined by cardiac catheterization. Logistic regression and stratification were performed to determine the impact of family history and consanguinity on risk and onset of CAD, controlling for diabetes, hyperlipidemia, hypertension, smoking, and BMI. Family history of CAD and gender significantly increased the risk for young age at diagnosis of CAD (p<0.001). Consanguinity did not promote risk of CAD (p=0.38), but did affect age of disease diagnosis (p<0.001). The mean age at disease diagnosis was lowest, 54.8 years, when both family history of CAD and consanguinity were considered as unique risk factors for CAD, compared to 62.8 years for the no-risk-factor patient category (p<0.001). Family history of CAD and smoking are strongly associated with young age at diagnosis. Furthermore, parental consanguinity in the presence of family history lowers the age of disease diagnosis significantly for CAD, emphasizing the role of strong genetic and cultural CAD modifiers. These findings highlight the increased role of genetic determinants of CAD in some population subgroups, and suggest that populations and family structure influence genetic heterogeneity between patients with CAD. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
A single HII region model of the strong interstellar scattering towards Sgr A*
Sicheneder, Egid; Dexter, Jason
2017-01-01
Until recently, the strong interstellar scattering observed towards the Galactic center (GC) black hole, Sgr A*, was thought to come from dense gas within the GC region. The pulse broadening towards the transient magnetar SGR J1745-2900 near Sgr A* has shown that the source of the scattering is instead located much closer to Earth, possibly in a nearby spiral arm. We show that a single HII region along the line of sight, 1.5-4.8 kpc away from Earth with density ne of a few ≃ 100 cm^{-3} and radius R ≃ 1.8-3.2 pc, can explain the observed angular broadening of Sgr A*. Clouds closer to the GC overproduce the observed dispersion measure (DM), providing an independent location constraint that agrees with that from the magnetar pulse broadening. Our model predicts that sources within ≲ 10 pc should show the same scattering origin as the magnetar and Sgr A*, while the nearest known pulsars with separations >20 pc should not. The radio spectrum of Sgr A* should show a cutoff from free-free absorption at 0.2 ≲ ν ≲ 1 GHz. For a magnetic field strength B ≃ 15-70 μG, the HII region could produce the rotation measure of the magnetar, the largest of any known pulsar, without requiring the gas near Sgr A* to be strongly magnetised.
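The dispersion- and rotation-measure constraints above follow from the standard integrals for a uniform cloud; the sketch below plugs in representative numbers from the abstract (the chord length and field value chosen here are assumptions for illustration):

```python
# DM = integral of n_e dl; RM = 0.81 * integral of n_e * B_parallel dl
# for a uniform cloud, with the usual pulsar-astronomy units.
n_e = 100.0          # electron density, cm^-3 (abstract's "few 100")
path = 2 * 2.5       # chord length through the cloud, pc (R ~ 2.5 pc assumed)
B_par = 50.0         # line-of-sight magnetic field, microgauss (within 15-70)

DM = n_e * path                  # pc cm^-3
RM = 0.81 * n_e * B_par * path   # rad m^-2
print(DM, RM)  # 500.0 pc cm^-3 and 20250.0 rad m^-2
```

Such back-of-the-envelope numbers are how a single cloud's DM contribution can over- or under-shoot an observed value and thereby constrain its distance.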
Energy Technology Data Exchange (ETDEWEB)
Yang, Li-Ming, E-mail: lmyang.uio@gmail.com, E-mail: ganzx001@umn.edu; Frauenheim, Thomas [Bremen Center for Computational Materials Science, University of Bremen, Am Falturm 1, 28359 Bremen (Germany); Dornfeld, Matthew; Hui, Pik-Mai; Ganz, Eric, E-mail: lmyang.uio@gmail.com, E-mail: ganzx001@umn.edu [Department of Physics, University of Minnesota, 116 Church St., SE, Minneapolis, Minnesota 55416 (United States)
2015-06-28
We use density functional theory to predict and evaluate 10 novel covalent organic frameworks (COFs), labeled (X4Y)(BDC)3 (X = C/Si; Y = C, Si, Ge, Sn, and Pb), with topology based on the isoreticular metal-organic framework IRMOF-1, but with new elements substituted for the corner atoms. We show that these new materials are stable structures using frequency calculations. For two structures (C4C and Si4C), molecular dynamics simulations were performed to demonstrate stability of the systems up to 600 K for 10 ps. This demonstrates the remarkable stability of these systems, some of which may be experimentally accessible. For the C4C material, we also explored the stability of isolated corners and linkers in vacuum and started to build the structure from these pieces. We discuss the equilibrium lattice parameters, formation enthalpies, electronic structures, chemical bonding, and mechanical and optical properties. The predicted bulk moduli of these COFs range from 18.9 to 23.9 GPa, larger than that of IRMOF-1 (ca. 15.4 GPa) and larger than many existing 3D COF materials. The band gaps range from 1.5 to 2.1 eV, corresponding to 600-830 nm wavelength (orange through near infrared). The negative values of the formation enthalpy suggest that they are stable and should be experimentally accessible under suitable conditions. Seven materials distort the crystal structure to the lower space group symmetry Fm-3, while three materials maintain the original Fm-3m space group symmetry. All of the new materials are highly luminescent. We hope that this work will inspire efforts for experimental synthesis of these new materials.
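The quoted gap-to-wavelength correspondence follows from the standard photon-energy relation λ(nm) ≈ 1239.84 / E(eV); a quick check (the function name is ours):

```python
# Convert a band gap in eV to the corresponding photon wavelength in nm.
def gap_to_nm(e_ev):
    return 1239.84 / e_ev

print(gap_to_nm(2.1), gap_to_nm(1.5))  # ~590 nm and ~827 nm
```

This reproduces, to rounding, the 600-830 nm (orange through near-infrared) range stated in the abstract.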
Energy based prediction models for building acoustics
DEFF Research Database (Denmark)
Brunskog, Jonas
2012-01-01
In order to reach robust and simplified yet accurate prediction models, energy-based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate...... principles, e.g., wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy-based prediction models are discussed and critically reviewed. Special attention is placed......
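A minimal sketch of the SEA power-balance idea the abstract refers to, for two coupled subsystems; all loss factors and the injected power are invented for illustration, not taken from EN 12354:

```python
import numpy as np

# Two-subsystem SEA power balance:
#   P1 = omega * ((eta1 + eta12) * E1 - eta21 * E2)
#   P2 = omega * (-eta12 * E1 + (eta2 + eta21) * E2)
# Solving for the subsystem energies E given the injected powers P.
omega = 2 * np.pi * 1000.0   # band centre frequency, rad/s (assumed)
eta1, eta2 = 0.01, 0.02      # internal loss factors (assumed)
eta12, eta21 = 0.005, 0.003  # coupling loss factors (assumed)
P = np.array([1.0, 0.0])     # 1 W injected into subsystem 1 only

A = omega * np.array([[eta1 + eta12, -eta21],
                      [-eta12,        eta2 + eta21]])
E = np.linalg.solve(A, P)
print(E)  # subsystem energies, J; E[0] > E[1] since power enters subsystem 1
```

Larger flanking-path networks in building acoustics lead to the same kind of linear system, just with more subsystems.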
Massive Predictive Modeling using Oracle R Enterprise
CERN. Geneva
2014-01-01
R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...
Atmospheric CO2 observations and models suggest strong carbon uptake by forests in New Zealand
Steinkamp, Kay; Mikaloff Fletcher, Sara E.; Brailsford, Gordon; Smale, Dan; Moore, Stuart; Keller, Elizabeth D.; Baisden, W. Troy; Mukai, Hitoshi; Stephens, Britton B.
2017-01-01
A regional atmospheric inversion method has been developed to determine the spatial and temporal distribution of CO2 sinks and sources across New Zealand for 2011-2013. This approach infers net air-sea and air-land CO2 fluxes from measurement records, using back-trajectory simulations from the Numerical Atmospheric dispersion Modelling Environment (NAME) Lagrangian dispersion model, driven by meteorology from the New Zealand Limited Area Model (NZLAM) weather prediction model. The inversion uses in situ measurements from two fixed sites, Baring Head on the southern tip of New Zealand's North Island (41.408° S, 174.871° E) and Lauder from the central South Island (45.038° S, 169.684° E), and ship board data from monthly cruises between Japan, New Zealand, and Australia. A range of scenarios is used to assess the sensitivity of the inversion method to underlying assumptions and to ensure robustness of the results. The results indicate a strong seasonal cycle in terrestrial land fluxes from the South Island of New Zealand, especially in western regions covered by indigenous forest, suggesting higher photosynthetic and respiratory activity than is evident in the current a priori land process model. On the annual scale, the terrestrial biosphere in New Zealand is estimated to be a net CO2 sink, removing 98 (±37) Tg CO2 yr-1 from the atmosphere on average during 2011-2013. This sink is much larger than the reported 27 Tg CO2 yr-1 from the national inventory for the same time period. The difference can be partially reconciled when factors related to forest and agricultural management and exports, fossil fuel emission estimates, hydrologic fluxes, and soil carbon change are considered, but some differences are likely to remain. Baseline uncertainty, model transport uncertainty, and limited sensitivity to the northern half of the North Island are the main contributors to flux uncertainty.
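The flux-inversion step can be illustrated with the standard linear-Gaussian (Bayesian synthesis) estimator; everything here, from the toy transport operator to the covariances, is an assumption for the sketch, not the NAME/NZLAM setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inversion: observations y = H x + noise, prior (x_a, S_a).
n_flux, n_obs = 3, 50
x_true = np.array([2.0, -1.0, 0.5])        # "true" regional fluxes (arbitrary)
H = rng.normal(size=(n_obs, n_flux))       # transport (footprint) operator
y = H @ x_true + rng.normal(0, 0.1, n_obs)

x_a = np.zeros(n_flux)        # prior flux estimate
S_a = np.eye(n_flux) * 4.0    # prior covariance (weak prior)
S_e = np.eye(n_obs) * 0.01    # observation-error covariance

K = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_e)  # gain matrix
x_hat = x_a + K @ (y - H @ x_a)
print(x_hat)  # close to x_true
```

The scenario sensitivity tests described in the abstract amount to re-running such an estimator under different choices of prior, baseline, and error covariances.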
Rastaetter, Lutz; Kuznetsova, Maria; Hesse, Michael; Chulaki, Anna; Pulkkinen, Antti; Ridley, Aaron J.; Gombosi, Tamas; Vapirev, Alexander; Raeder, Joachim; Wiltberger, Michael James; Mays, M. L.; Fok, Mei-Ching H.; Weigel, Robert S.; Welling, Daniel T.
2010-01-01
The GEM 2008 modeling challenge efforts are expanding beyond comparing in-situ measurements in the magnetosphere and ionosphere to include the computation of indices to be compared. The Dst index measures the deviations of the horizontal magnetic field at four equatorial magnetometers from the quiet-time background field and is commonly used to track the strength of the magnetic disturbance of the magnetosphere during storms. Models can calculate a proxy Dst index in various ways, including using the Dessler-Parker-Sckopke relation and the energy of the ring current, and Biot-Savart integration of electric currents in the magnetosphere. The GEM modeling challenge investigates 4 space weather events, and we compare models available at CCMC against each other and the observed values of Dst. Models used include SWMF/BATSRUS, OpenGGCM, LFM, GUMICS (3D magnetosphere MHD models), Fok-RC, CRCM, RAM-SCB (kinetic drift models of the ring current), WINDMI (magnetosphere-ionosphere electric circuit model), and predictions based on an impulse response function (IRF) model and analytic coupling functions with inputs of solar wind data. In addition to the analysis of model-observation comparisons, we look at the way Dst is computed in global magnetosphere models. The default value of Dst computed by the SWMF model is based on Bz at the Earth's center. In addition to this, we present results obtained at different locations on the Earth's surface. We choose equatorial locations at local noon, dusk (18:00 hours), midnight and dawn (6:00 hours). The different virtual observatory locations reveal the variation around the Earth-centered Dst value resulting from the distribution of electric currents in the magnetosphere during different phases of a storm.
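The Dessler-Parker-Sckopke proxy mentioned above can be sketched directly; the relation below is the standard one, with nominal values for the surface field and Earth radius:

```python
import numpy as np

# Dessler-Parker-Sckopke relation: the Dst depression is proportional to the
# total ring-current particle energy E_rc,
#   Dst / B0 = -2 * E_rc / (3 * E_mag),
# where E_mag = 4*pi*B0^2*R_E^3 / (3*mu0) is the dipole field energy scale.
mu0 = 4e-7 * np.pi
B0 = 3.1e-5    # equatorial surface field, T (nominal)
R_E = 6.371e6  # Earth radius, m

E_mag = 4 * np.pi * B0**2 * R_E**3 / (3 * mu0)

def dst_from_energy(E_rc):
    """Proxy Dst (nT) from ring-current energy E_rc (J)."""
    return -B0 * 2 * E_rc / (3 * E_mag) * 1e9

print(dst_from_energy(4e13))  # ~ -1 nT per 4e13 J of ring-current energy
```

A global MHD or drift model supplies E_rc (or the currents for a Biot-Savart integration); the relation then yields the model's proxy Dst.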
Wu, Jin-Long; Xiao, Heng; Ling, Julia
2016-01-01
Although Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows, standard RANS models are known to be unreliable in many flows of engineering relevance, including flows with separation, strong pressure gradients or mean flow curvature. With increasing amounts of 3-dimensional experimental data and high fidelity simulation data from Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), data-driven turbulence modeling has become a promising approach to increase the predictive capability of RANS simulations. Recently, a data-driven turbulence modeling approach via machine learning has been proposed to predict the Reynolds stress anisotropy of a given flow based on high fidelity data from closely related flows. In this work, the closeness of different flows is investigated to assess the prediction confidence a priori. Specifically, the Mahalanobis distance and the kernel density estimation (KDE) technique...
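The Mahalanobis distance used above to quantify the closeness of flows can be sketched in a few lines; the example distribution is invented:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Distance of a query point x from a distribution (mean, cov)."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

mean = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0],
                [0.0, 1.0]])
# Same Euclidean distance from the mean, very different Mahalanobis distances:
da = mahalanobis(np.array([2.0, 0.0]), mean, cov)
db = mahalanobis(np.array([0.0, 2.0]), mean, cov)
print(da, db)  # 1.0 and 2.0
```

In the data-driven setting, `mean` and `cov` would summarize the training flows' feature distribution, and a large distance for a test flow's features would flag low prediction confidence.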
Liver Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Cervical Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Prostate Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Pancreatic Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Colorectal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Bladder Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Esophageal Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Lung Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Breast Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Ovarian Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Testicular Cancer Risk Prediction Models
Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.
Institute of Scientific and Technical Information of China (English)
Miyi Li; Tao Fang
2015-01-01
A rigorous approach is proposed to model the mean ion activity coefficient for strong electrolyte systems using the Poisson–Boltzmann equation. An effective screening radius similar to the Debye decay length is introduced to define the local composition and new boundary conditions for the central ion. The crystallographic ion size is also considered in the activity coefficient expressions derived and non-electrostatic contributions are neglected. The model is presented for aqueous strong electrolytes and compared with the classical Debye–Hückel (DH) limiting law for dilute solutions. The radial distribution function is compared with the DH and Monte Carlo studies. The mean ion activity coefficients are calculated for 1:1 aqueous solutions containing strong electrolytes composed of alkali halides. The individual ion activity coefficients and mean ion activity coefficients in mixed solvents are predicted with the new equations.
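The classical Debye–Hückel limiting law that the model is compared against can be evaluated directly; the constant 0.509 is the usual 25 °C value in log10 form:

```python
import numpy as np

# Debye-Hueckel limiting law at 25 C:
#   log10(gamma_pm) = -A * |z+ * z-| * sqrt(I),  A = 0.509
def gamma_pm(z_plus, z_minus, ionic_strength):
    return 10 ** (-0.509 * abs(z_plus * z_minus) * np.sqrt(ionic_strength))

# 0.01 molal NaCl (1:1 electrolyte, I = 0.01):
print(gamma_pm(1, -1, 0.01))  # ~0.889
```

The limiting law is reliable only at high dilution, which is precisely the regime where the Poisson–Boltzmann model of the abstract should reduce to it.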
Posterior Predictive Model Checking in Bayesian Networks
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
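A toy posterior predictive check, illustrating the discrepancy-measure idea rather than the Bayesian network models investigated in the study; the normal model, the discrepancy, and the sample sizes are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# PPMC for a normal model with known sigma = 1 (a deliberately simple example).
y_obs = rng.normal(0.0, 1.0, size=40)                                  # observed data
post_means = rng.normal(y_obs.mean(), 1 / np.sqrt(40), size=2000)      # posterior draws

def discrepancy(y):
    return np.var(y)   # one possible discrepancy measure

t_obs = discrepancy(y_obs)
t_rep = np.array([discrepancy(rng.normal(m, 1.0, size=40)) for m in post_means])
ppp = float(np.mean(t_rep >= t_obs))   # posterior predictive p-value
print(ppp)  # values near 0 or 1 would flag data-model misfit
```

The simulation study's question is essentially which choice of `discrepancy` is sensitive to which kind of misfit in a multidimensional BN.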
A Course in... Model Predictive Control.
Arkun, Yaman; And Others
1988-01-01
Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)
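The receding-horizon idea at the core of model predictive control can be shown with a scalar plant; the system, horizon, and weights below are invented for the sketch:

```python
import numpy as np

# Plant x+ = a*x + b*u; minimize sum(x_k^2) + r*sum(u_k^2) over horizon N.
# Unconstrained, so the optimal input sequence has a closed linear form.
a, b, r, N = 0.9, 0.5, 0.1, 5

# Prediction matrices: x_pred = F*x0 + G*u
F = np.array([a**i for i in range(1, N + 1)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = a**(i - j) * b

x = 5.0
for _ in range(20):
    u = -np.linalg.solve(G.T @ G + r * np.eye(N), G.T @ F) * x  # optimal sequence
    x = a * x + b * u[0]   # apply only the first input, then re-plan
print(x)  # driven close to the origin
```

Real MPC adds input/state constraints, turning each step into a quadratic program; the receding-horizon structure is unchanged.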
Testing strong factorial invariance using three-level structural equation modeling
Jak, Suzanne
2014-01-01
Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias) across groups. Although this approach is
Equivalency and unbiasedness of grey prediction models
Institute of Scientific and Technical Information of China (English)
Bo Zeng; Chuan Li; Guo Chen; Xianjun Long
2015-01-01
In order to deeply research the structure discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x^(0)(k) + a z^(1)(k) = b have the identical model structure and simulation precision. Moreover, the unbiased simulation for the homogeneous exponential sequence can be accomplished. However, the models derived from dx^(1)/dt + a x^(1) = b are only close to those derived from x^(0)(k) + a z^(1)(k) = b provided that |a| < 0.1; neither could the unbiased simulation for the homogeneous exponential sequence be achieved. The above conclusions are proved and verified through some theorems and examples.
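The discrete equation x^(0)(k) + a z^(1)(k) = b can be fitted and simulated directly; as the abstract states, it reproduces a homogeneous exponential sequence exactly (the helper name is ours):

```python
import numpy as np

def gm11_discrete(x0):
    """Fit x^(0)(k) + a*z^(1)(k) = b by least squares, then simulate recursively."""
    x1 = np.cumsum(x0)                     # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])          # background (mean) sequence
    A = np.column_stack([-z1, np.ones_like(z1)])
    (a, b), *_ = np.linalg.lstsq(A, x0[1:], rcond=None)
    # x1(k)*(1 + a/2) = x1(k-1)*(1 - a/2) + b, from the difference equation
    x1_hat = [x0[0]]
    for _ in range(1, len(x0)):
        x1_hat.append(((1 - 0.5 * a) * x1_hat[-1] + b) / (1 + 0.5 * a))
    x0_hat = np.concatenate([[x1_hat[0]], np.diff(x1_hat)])
    return a, b, x0_hat

x0 = 2.0 * 1.2 ** np.arange(6)   # homogeneous exponential sequence, ratio 1.2
a, b, x0_hat = gm11_discrete(x0)
print(np.max(np.abs(x0_hat - x0)))  # reproduced up to floating point
```

For a geometric sequence with ratio r the fit is exact with a = -2(r-1)/(r+1), whereas the whitened ODE form dx^(1)/dt + a x^(1) = b introduces a discretization bias unless |a| is small, which is the discrepancy the abstract analyzes.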
Predictability of extreme values in geophysical models
Directory of Open Access Journals (Sweden)
A. E. Sterk
2012-09-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
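Finite-time Lyapunov exponents of the kind compared above can be illustrated on a one-dimensional map; the logistic map at r = 4 is a stand-in for the geophysical models, with infinite-time exponent ln 2:

```python
import numpy as np

# Finite-time Lyapunov exponent of the logistic map x -> r*x*(1-x):
#   lambda_T(x0) = (1/T) * sum_k log|f'(x_k)|, f'(x) = r*(1 - 2*x)
r, T = 4.0, 100_000
x = 0.123456
s = 0.0
for _ in range(T):
    s += np.log(abs(r * (1 - 2 * x)))
    x = r * x * (1 - x)
ftle = s / T
print(ftle)  # ~ ln 2 ~ 0.693 for large T
```

For short T, lambda_T depends strongly on the initial condition, which is exactly the quantity the paper conditions on extreme-producing initial states.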
Risk terrain modeling predicts child maltreatment.
Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye
2016-12-01
As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.
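The layered overlay behind risk terrain modeling can be sketched on a toy grid; the layers and weights are invented, not the Fort Worth risk factors:

```python
import numpy as np

# Two binary environmental risk layers on a common 3x3 grid (1 = factor present),
# combined with weights reflecting each factor's relative importance.
layer_a = np.array([[1, 0, 0],
                    [0, 1, 0],
                    [0, 0, 0]])
layer_b = np.array([[1, 1, 0],
                    [0, 1, 0],
                    [0, 0, 1]])
weights = {"a": 1.5, "b": 1.0}   # assumed relative importances

risk = weights["a"] * layer_a + weights["b"] * layer_b
hotspot = np.unravel_index(np.argmax(risk), risk.shape)
print(risk)
print(hotspot)
```

A real RTM fits the weights (e.g., by regression against past outcomes) and recomputes the surface as the environmental layers change, which is how the model adapts without retrospective case data.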
Harvey, David; Jauzac, Mathilde
2016-01-01
We explore how assuming that mass traces light in strong gravitational lensing models can lead to systematic errors in the predicted position of multiple images. Using a model based on the galaxy cluster MACSJ0416 (z = 0.397) from the Hubble Frontier Fields, we split each galactic halo into a baryonic and dark matter component. We then shift the dark matter halo such that it no longer aligns with the baryonic halo and investigate how this affects the resulting position of multiple images. We find for physically motivated misalignments in dark halo position, ellipticity, position angle and density profile, that multiple images can move on average by more than 0.2" with individual images moving greater than 1". We finally estimate the full error induced by assuming that light traces mass and find that this assumption leads to an expected RMS error of 0.5", almost the entire error budget observed in the Frontier Fields. Given the large potential contribution from the assumption that light traces mass to the erro...
Property predictions using microstructural modeling
Energy Technology Data Exchange (ETDEWEB)
Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom); Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen' s University of Belfast, Belfast BT7 1NN (United Kingdom); Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States); Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)
2005-07-15
Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during the aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.
Spatial Economics Model Predicting Transport Volume
Directory of Open Access Journals (Sweden)
Lu Bo
2016-10-01
It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement in prediction methods has not been significant, and the traditional statistical prediction method has the defects of low precision and poor interpretability: it can neither guarantee the generalization ability of the prediction model theoretically nor explain the model effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect the regional logistics requirements, this study established a logistics requirements potential model based on spatial economic principles, and expanded logistics requirements prediction from purely statistical principles to the new area of spatial and regional economics.
Modeling and Prediction Using Stochastic Differential Equations
DEFF Research Database (Denmark)
Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp
2016-01-01
Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup...... that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and hence assumed...... deterministic and can predict the future perfectly. A more realistic approach would be to allow for randomness in the model due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs...
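A minimal Euler-Maruyama sketch of the SDE idea, using an Ornstein-Uhlenbeck process as a stand-in for a PK/PD state equation; all parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW.
# The sigma dW term represents structural/system noise that a deterministic
# ODE cannot capture, so the state prediction carries genuine uncertainty.
theta, mu, sigma = 1.0, 2.0, 0.3
dt, n_steps, n_paths = 0.01, 1000, 2000

X = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += theta * (mu - X) * dt + sigma * dW

print(X.mean(), X.std())  # near the stationary mean 2 and std sigma/sqrt(2*theta)
```

With sigma = 0 the ensemble collapses to the single deterministic ODE solution, which is exactly the "predicts the future perfectly" assumption the abstract criticizes.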
Precision Plate Plan View Pattern Predictive Model
Institute of Scientific and Technical Information of China (English)
ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun
2011-01-01
According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head and tail end predictive models was found and modified. According to the numerical simulation results of 120 different kinds of conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing the rolled plates with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.
Strong and weak interactions in a simple field-theoretical model
Hove, Léon van
2006-01-01
An exactly renormalizable model of quantum fields, introduced earlier by Th. W. Ruijgrok and the present author, is considered for large but finite cut-off. It gives rise to strong and weak interaction effects. In the limit of infinite cut-off the weak interactions vanish and the strong interactions
Distinguishing among models of strong WL WL scattering at the LHC
Energy Technology Data Exchange (ETDEWEB)
Kilgore, W.B.
1997-01-01
Using a multi-channel analysis of strong W_L W_L scattering signals, I study the LHC's ability to distinguish among various models of strongly interacting electroweak symmetry-breaking sectors. 9 refs., 1 fig., 3 tabs.
NBC Hazard Prediction Model Capability Analysis
1999-09-01
Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers ... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS ... HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model ... SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method - an arbitrary time-dependent concentration field is represented
Testing strong factorial invariance using three-level structural equation modeling
Directory of Open Access Journals (Sweden)
Suzanne eJak
2014-07-01
Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias) across groups. Although this approach is possible in principle, it is hardly practical when the number of groups is large or when the group size is relatively small. Jak, Oort and Dolan (2013) showed how strong factorial invariance across large numbers of groups can be tested in a multilevel structural equation modeling framework, by treating group as a random instead of a fixed variable. In the present study, this model is extended for use with three-level data. The proposed method is illustrated with an investigation of strong factorial invariance across 156 school classes and 50 schools in a Dutch dyscalculia test, using three-level structural equation modeling.
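In standard factor-model notation (not reproduced from the paper itself), strong factorial invariance requires that both loadings and intercepts are equal across groups:

```latex
y_{ij} = \tau_j + \Lambda_j\,\eta_{ij} + \varepsilon_{ij},
\qquad \text{strong invariance: } \Lambda_j = \Lambda,\;\; \tau_j = \tau \;\; \text{for all groups } j,
```

so that group differences in observed means are carried by the factor means alone.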
Testing strong factorial invariance using three-level structural equation modeling.
Jak, Suzanne
2014-01-01
Within structural equation modeling, the most prevalent model to investigate measurement bias is the multigroup model. Equal factor loadings and intercepts across groups in a multigroup model represent strong factorial invariance (absence of measurement bias) across groups. Although this approach is possible in principle, it is hardly practical when the number of groups is large or when the group size is relatively small. Jak et al. (2013) showed how strong factorial invariance across large numbers of groups can be tested in a multilevel structural equation modeling framework, by treating group as a random instead of a fixed variable. In the present study, this model is extended for use with three-level data. The proposed method is illustrated with an investigation of strong factorial invariance across 156 school classes and 50 schools in a Dutch dyscalculia test, using three-level structural equation modeling.
Directory of Open Access Journals (Sweden)
Yun Wang
2016-01-01
The Gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is widely used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian (BFG) approximation method and the strong tracking filter (STF) is proposed, addressing the defect that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method is used to implement the fusion of multiple models, with the strong tracking filter correcting the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are deduced to update the probabilities of the multiple tracking models. The simulation results show that the proposed MM-GGIW-CPHD algorithm can effectively deal with the combination/spawning of groups, and that the tracking error for group targets in the maneuvering stage is decreased.
Corporate prediction models, ratios or regression analysis?
Bijnen, E.J.; Wijn, M.F.C.M.
1994-01-01
The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in
Comparison of Vibrational Relaxation Modeling for Strongly Non-Equilibrium Flows
2014-01-01
V. Summary and Conclusions: The form of two vibrational relaxation models that are commonly used in DSMC and CFD simulations have been ... Comparison of Vibrational Relaxation Modeling for Strongly Non- ... including experimental gas measurement techniques, shock layer vibration-dissociation coupling, and vibrational energy freezing in strong expansions
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H S
2016-01-01
The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...
Energy Technology Data Exchange (ETDEWEB)
Hutchings, L; Ioannidou, E; Voulgaris, N; Kalogeras, I; Savy, J; Foxall, W; Stavrakakis, G
2004-08-06
We test a methodology to predict the range of ground-motion hazard for a fixed magnitude earthquake along a specific fault or within a specific source volume, and we demonstrate how to incorporate this into probabilistic seismic hazard analyses (PSHA). We modeled ground motion with empirical Green's functions. We tested our methodology with the 7 September 1999, Mw=6.0 Athens earthquake; we: (1) developed constraints on rupture parameters based on prior knowledge of earthquake rupture processes and sources in the region; (2) generated impulsive point shear source empirical Green's functions by deconvolving out the source contribution of M < 4.0 aftershocks; (3) used aftershocks that occurred throughout the area and not necessarily along the fault to be modeled; (4) ran a sufficient number of scenario earthquakes to span the full variability of ground motion possible; (5) found that our distribution of synthesized ground motions spans what actually occurred and that their distribution is realistically narrow; (6) determined that one of our source models generates records that match observed time histories well; (7) found that certain combinations of rupture parameters produced "extreme" ground motions at some stations; (8) identified that the "best fitting" rupture models occurred in the vicinity of 38.05° N 23.60° W with center of rupture near 12 km, and near-unilateral rupture towards the areas of high damage, which is consistent with independent investigations; and (9) synthesized strong motion records in high damage areas for which records from the earthquake were not recorded. We then developed a demonstration PSHA for a source region near Athens utilizing synthesized ground motion rather than traditional attenuation. We synthesized 500 earthquakes distributed throughout the source zone likely to have Mw=6.0 earthquakes near Athens. We assumed an average return period of 1000 years for this
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Genetic models of homosexuality: generating testable predictions
Gavrilets, Sergey; Rice, William R.
2006-01-01
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...
Wind farm production prediction - The Zephyr model
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark); Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark); Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark); Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark); Toefting, J. [Elsam, Fredericia (DK); Christensen, H.S. [Eltra, Fredericia (Denmark); Bjerge, C. [SEAS, Haslev (Denmark)
2002-06-01
This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next generation prediction system called Zephyr. The Zephyr system is a merging between two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)
Classical trajectory perspective of atomic ionization in strong laser fields semiclassical modeling
Liu, Jie
2014-01-01
The ionization of atoms and molecules in strong laser fields is an active field in modern physics and has versatile applications in areas such as attosecond physics, X-ray generation, inertial confinement fusion (ICF), and medical science. Classical Trajectory Perspective of Atomic Ionization in Strong Laser Fields covers the basic concepts in this field and discusses many interesting topics using the semiclassical model of classical trajectory ensemble simulation, which is one of the most successful ionization models and has the advantages of a clear picture, feasible computing, and accounting for many exquisite experiments quantitatively. The book also presents many applications of the model to such topics as single ionization, double ionization, neutral atom acceleration and other timely issues in strong field physics, and delivers useful messages to readers by presenting the classical trajectory perspective on strong field atomic ionization. The book is intended for graduate students and researchers...
Predictive model for segmented poly(urea)
Directory of Open Access Journals (Sweden)
Frankl P.
2012-08-01
Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is by mechanical activation of the glass transition. In order to enable the design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the equation of state and constitutive model predict response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. Mechanical response in tensile tests has also been predicted and validated.
Large eddy simulation subgrid model for soot prediction
El-Asrag, Hossam Abd El-Raouf Mostafa
Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported. The new model is designed to be used within the context of the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, transition, and continuum regimes. The soot dynamics is predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames are simulated, where the effects of turbulence, binary diffusivity and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed jet sooting flame. The effect of the flame structure on the different soot formation stages, as well as on the particle size distribution, is described. Good results are predicted with
An Importance Sampling Scheme for Models in a Strong External Field
Molkaraie, Mehdi
2015-01-01
We propose Monte Carlo methods to estimate the partition function of the two-dimensional Ising model in the presence of an external magnetic field. The estimation is done in the dual of the Forney factor graph representing the model. The proposed methods can efficiently compute an estimate of the partition function in a wide range of model parameters. As an example, we consider models that are in a strong external field.
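The dual-graph construction of the paper is not reproduced here, but the underlying idea, importance sampling a partition function with a proposal built from the strong external field term, can be sketched in plain Python on a small Ising lattice. The lattice size, parameter values, and function names below are illustrative, not taken from the paper:

```python
import itertools
import math
import random

def exact_Z(L, J, h, beta):
    """Brute-force partition function of an L x L Ising model (free boundaries)."""
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=L * L):
        s = [spins[i * L:(i + 1) * L] for i in range(L)]
        E = 0.0
        for i in range(L):
            for j in range(L):
                if i + 1 < L:
                    E -= J * s[i][j] * s[i + 1][j]
                if j + 1 < L:
                    E -= J * s[i][j] * s[i][j + 1]
                E -= h * s[i][j]
        Z += math.exp(-beta * E)
    return Z

def is_Z(L, J, h, beta, n=20000, seed=1):
    """Importance-sampling estimate of Z: draw spins i.i.d. from the
    single-site field distribution, reweight by the interaction term."""
    rng = random.Random(seed)
    p_up = math.exp(beta * h) / (2.0 * math.cosh(beta * h))
    const = (2.0 * math.cosh(beta * h)) ** (L * L)  # proposal normalization
    acc = 0.0
    for _ in range(n):
        s = [[1 if rng.random() < p_up else -1 for _ in range(L)]
             for _ in range(L)]
        inter = 0.0
        for i in range(L):
            for j in range(L):
                if i + 1 < L:
                    inter += J * s[i][j] * s[i + 1][j]
                if j + 1 < L:
                    inter += J * s[i][j] * s[i][j + 1]
        acc += math.exp(beta * inter)  # importance weight
    return const * acc / n
```

The stronger the field, the more the proposal concentrates where the Boltzmann weight does, so the estimator variance stays small, which is the regime the abstract highlights.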
PREDICTIVE CAPACITY OF ARCH FAMILY MODELS
Directory of Open Access Journals (Sweden)
Raphael Silveira Amaro
2016-03-01
In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-return series of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, competing models have a great homogeneity in making predictions, whether for the stock market of a developed country or for the stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
Predictive QSAR modeling of phosphodiesterase 4 inhibitors.
Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr
2012-02-01
A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
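The leave-one-out cross-validated Q² quoted above is a standard quantity, Q² = 1 - PRESS / SS_tot. A minimal sketch follows; the one-descriptor least-squares model and toy data are illustrative stand-ins, not the paper's random forests or associative neural networks:

```python
def loo_q2(xs, ys, fit_predict):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS / SS_tot."""
    n = len(xs)
    press = 0.0
    for i in range(n):
        xtr = [x for k, x in enumerate(xs) if k != i]
        ytr = [y for k, y in enumerate(ys) if k != i]
        press += (ys[i] - fit_predict(xtr, ytr, xs[i])) ** 2
    ybar = sum(ys) / n
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1.0 - press / ss_tot

def linreg_predict(xtr, ytr, xq):
    """Ordinary least squares on a single descriptor, then predict at xq."""
    n = len(xtr)
    xbar = sum(xtr) / n
    ybar = sum(ytr) / n
    sxx = sum((x - xbar) ** 2 for x in xtr)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xtr, ytr))
    slope = sxy / sxx
    return ybar + slope * (xq - xbar)
```

With exactly linear toy data each held-out point is predicted perfectly, so PRESS is zero and Q² equals one; noisy activity data pull Q² down toward the 0.62-0.78 range the abstract reports.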
Testing the Predictions of the Universal Structured GRB Jet Model
Nakar, E; Guetta, D; Nakar, Ehud; Granot, Jonathan; Guetta, Dafne
2004-01-01
The two leading models for the structure of GRB jets are (1) the uniform jet model, where the energy per solid angle, $\\epsilon$, is roughly constant within some finite half-opening angle, $\\theta_j$, and sharply drops outside of $\\theta_j$, and (2) the universal structured jet (USJ) model, where all GRB jets are intrinsically identical, and $\\epsilon$ drops as the inverse square of the angle from the jet axis. The simplicity of the USJ model gives it a strong predictive power, including a specific prediction for the observed GRB distribution as a function of both the redshift $z$ and the viewing angle $\\theta$ from the jet axis. We show that the current sample of GRBs with known $z$ and estimated $\\theta$ does not agree with the predictions of the USJ model. This can be best seen for a relatively narrow range in $z$, in which the USJ model predicts that most GRBs should be near the upper end of the observed range in $\\theta$, while in the observed sample most GRBs are near the lower end of that range. Since ...
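The two jet structures being contrasted can be written compactly (here ε₀ and the proportionality constant are model parameters):

```latex
\epsilon(\theta) =
\begin{cases}
  \epsilon_0, & \theta \le \theta_j,\\
  0,          & \theta > \theta_j,
\end{cases}
\quad \text{(uniform jet)},
\qquad
\epsilon(\theta) \propto \theta^{-2}
\quad \text{(universal structured jet)}.
```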
Institute of Scientific and Technical Information of China (English)
Jinglan Wu; Pengfei Jiao; Wei Zhuang; Jingwei Zhou; Hanjie Ying
2016-01-01
L-phenylalanine, one of the nine essential amino acids for the human body, is extensively used as an ingredient in the food, pharmaceutical and nutrition industries. A suitable equilibrium model is required for purification of L-phenylalanine based on ion-exchange chromatography. In this work, the equilibrium uptake of L-phenylalanine on a strong acid-cation exchanger SH11 was investigated experimentally and theoretically. A modified Donnan ion-exchange (DIX) model, which takes the activity into account, was established to predict the uptake of L-phenylalanine at various solution pH values. The model parameters, including selectivity and the mean activity coefficient in the resin phase, are presented. The modified DIX model is in good agreement with the experimental data. The optimum operating pH value of 2.0, with the highest L-phenylalanine uptake on the resin, is predicted by the model. This basic information, combined with the general mass transfer model, will lay the foundation for the prediction of the dynamic behavior of the fixed bed separation process.
Strong Coupling Limits and Quantum Isomorphisms of the Gauged Thirring Model
Bufalo, R.; Casana, R.; Pimentel, B. M.
We have studied the quantum equivalence, in the respective strong coupling limits, of the bidimensional gauged Thirring model with both the Schwinger and Thirring models. It is achieved following a nonperturbative quantization of the gauged Thirring model in the path-integral approach. First, we established the constraint structure via Dirac's formalism for constrained systems and defined the correct vacuum-vacuum transition amplitude by using the Faddeev-Senjanovic method. Next, we computed exactly the relevant Green's functions and showed the Ward-Takahashi identities. Afterwards, we established the quantum isomorphisms between the gauged Thirring model and both the Schwinger and Thirring models by analyzing the respective Green's functions in the strong coupling limits. Special attention is necessary to establish the quantum isomorphism between the gauged Thirring model and the Thirring model.
Development of 3D ferromagnetic model of tokamak core with strong toroidal asymmetry
DEFF Research Database (Denmark)
Markovič, Tomáš; Gryaznevich, Mikhail; Ďuran, Ivan;
2015-01-01
A fully 3D model of a strongly asymmetric tokamak core, based on the boundary integral method approach (i.e., characterization of the ferromagnet by its surface), is presented. The model is benchmarked against measurements on the tokamak GOLEM, as well as compared to a 2D axisymmetric core equivalent for this tokamak...
Ehrenfest's theorem and the validity of the two-step model for strong-field ionization
DEFF Research Database (Denmark)
Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer
By comparison with the solution of the time-dependent Schrödinger equation, we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates...
Can the Axions of Standard--like Superstring Models Solve the Strong CP Problem?
Halyo, E
1993-01-01
We find that there are three axions in standard--like superstring models in the four dimensional free fermionic formulation. These axions are either harmful or very heavy. Therefore, they cannot solve the strong CP problem. We show that this is a general result in superstring models with chiral generations from the $Z_2$ twisted sectors which use a $Z_4$ twist.
Institute of Scientific and Technical Information of China (English)
Yee LEUNG; WU Kefa; DONG Tianxin
2001-01-01
In this paper, a multivariate linear functional relationship model, where the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed. The estimators are shown to be a strongly consistent estimation under some mild conditions on the incidental parameters.
Modelling the predictive performance of credit scoring
Directory of Open Access Journals (Sweden)
Shi-Wei Shen
2013-02-01
Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A test of goodness of fit demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) as well as micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
Calibrated predictions for multivariate competing risks models.
Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni
2014-04-01
Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.
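The overestimation from treating competing risks as independent censoring can be seen already in a two-exponential toy example; the rates and closed-form expressions below are a textbook illustration, not the paper's frailty model:

```python
import math

def naive_cif(t, lam1):
    """'Net risk' of cause 1: competing events treated as independent censoring."""
    return 1.0 - math.exp(-lam1 * t)

def true_cif(t, lam1, lam2):
    """Cause-1 cumulative incidence with a competing exponential risk at rate lam2."""
    lam = lam1 + lam2
    return (lam1 / lam) * (1.0 - math.exp(-lam * t))
```

With a disease hazard of 0.02/year and competing mortality of 0.05/year, the naive 10-year risk is about 0.18 while the true cumulative incidence is about 0.14: the naive estimate overstates the expected number of events, exactly the miscalibration the simulation study demonstrates.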
Modelling language evolution: Examples and predictions.
Gong, Tao; Shuai, Lan; Zhang, Menghan
2014-06-01
We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.
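The survey does not spell out the competition equations; a well-known example of an equation-based language competition model is the Abrams-Strogatz system, sketched here with illustrative parameter values (x is the fraction speaking language X, s its perceived status, a the volatility exponent):

```python
def simulate_language_shift(s=0.6, a=1.31, x0=0.5, dt=0.01, steps=50000):
    """Euler-integrate dx/dt = (1-x)*s*x**a - x*(1-s)*(1-x)**a."""
    x = x0
    for _ in range(steps):
        dx = (1 - x) * s * x ** a - x * (1 - s) * (1 - x) ** a
        x = min(1.0, max(0.0, x + dt * dx))  # clamp for numerical safety
    return x
```

Starting from an even split, the higher-status language absorbs nearly all speakers, the monoculture outcome this class of models predicts.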
The effects of model and data complexity on predictions from species distributions models
DEFF Research Database (Denmark)
García-Callejas, David; Bastos, Miguel
2016-01-01
study contradicts the widely held view that the complexity of species distribution models has significant effects on their predictive ability, while the findings support previous observations that the properties of species distribution data and their relationship with the environment are strong...
Directory of Open Access Journals (Sweden)
Kathrin Wapler
2015-04-01
Two severe summer-time convective events in Germany are investigated, which can be classified by the prevailing synoptic conditions into a strong and a weak forcing case. The strong forcing case exhibits a larger-scale precipitation pattern caused by frontal ascent, whereas scattered convection dominates the convective activity in the weak forcing case. Other distinct differences between the cases are the faster movement of convective cells and larger regions with significant loss, mainly due to severe gusts, in the strong forcing case. A comprehensive set of observations is used to characterise the two different events. The observations include measurements from a lightning detection network, precipitation radar, geostationary satellite and weather stations, as well as information from an automated cell detection algorithm based on radar reflectivity, which is combined with severe weather reports and damage data from insurances. Forecast performance at various time scales is analysed, ranging from nowcasting and warning to short-range forecasting. Various methods and models are examined, including human warnings, observation-based nowcasting algorithms and high-resolution ensemble prediction systems. The analysis shows the advantages of a multi-sensor and multi-source approach in characterising convective events and their impacts. Using data from various sources allows the different strengths of the observational data sets to be combined, especially in terms of spatial coverage or data accuracy; e.g., damage data from insurances provide good spatial coverage with little meteorological information, while measurements at weather stations provide accurate but pointwise observations. Furthermore, using data from multiple sources allows for a better understanding of the convective life cycle. Several parameters from different instruments are shown to have predictive skill for convective development; these include satellite-based cloud-top cooling
Global Solar Dynamo Models: Simulations and Predictions
Indian Academy of Sciences (India)
Mausumi Dikpati; Peter A. Gilman
2008-03-01
Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun's memory of its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.
Model Predictive Control of Sewer Networks
Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.
2017-01-01
The development of solutions for management of urban drainage is of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world's population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
Model for Thermal Relic Dark Matter of Strongly Interacting Massive Particles.
Hochberg, Yonit; Kuflik, Eric; Murayama, Hitoshi; Volansky, Tomer; Wacker, Jay G
2015-07-10
A recent proposal is that dark matter could be a thermal relic of 3→2 scatterings in a strongly coupled hidden sector. We present explicit classes of strongly coupled gauge theories that admit this behavior. These are QCD-like theories of dynamical chiral symmetry breaking, where the pions play the role of dark matter. The number-changing 3→2 process, which sets the dark matter relic abundance, arises from the Wess-Zumino-Witten term. The theories give an explicit relationship between the 3→2 annihilation rate and the 2→2 self-scattering rate, which alters predictions for structure formation. This is a simple calculable realization of the strongly interacting massive-particle mechanism.
DKIST Polarization Modeling and Performance Predictions
Harrington, David
2016-05-01
Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K. Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS within the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration
Modelling Chemical Reasoning to Predict Reactions
Segler, Marwin H. S.; Waller, Mark P.
2016-01-01
The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...
Predictive Modeling of the CDRA 4BMS
Coker, Robert; Knox, James
2016-01-01
Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.
Raman Model Predicting Hardness of Covalent Crystals
Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian
2009-01-01
Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...
Predictive Modelling of Mycotoxins in Cereals
Fels, van der H.J.; Liu, C.
2015-01-01
In this article the summaries of the presentations given during the 30th meeting of the Werkgroep Fusarium are presented. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts
Unreachable Setpoints in Model Predictive Control
DEFF Research Database (Denmark)
Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp
2008-01-01
steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...
Prediction modelling for population conviction data
Tollenaar, N.
2017-01-01
In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk assessment scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.
A Predictive Model for MSSW Student Success
Napier, Angela Michele
2011-01-01
This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…
Predictability of extreme values in geophysical models
Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.
2012-01-01
Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model
A revised prediction model for natural conception
Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,
2017-01-01
One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis
Distributed Model Predictive Control via Dual Decomposition
DEFF Research Database (Denmark)
Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle
2014-01-01
This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...
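The price-coordination idea behind dual decomposition can be illustrated with a minimal sketch (not the chapter's actual controllers): two subsystems with quadratic local objectives share an input budget, and a central coordinator iterates a subgradient update on the price of the coupling constraint. All numbers are hypothetical.

```python
import numpy as np

# Two subsystems each track a local reference r_i, coupled by a shared
# input budget u1 + u2 <= U. The coordinator adjusts the price lam by a
# subgradient step on the dual; each subsystem then solves its own
# quadratic problem given that price.
r = np.array([3.0, 2.0])   # local references (hypothetical)
U = 4.0                    # shared capacity (hypothetical)
lam, alpha = 0.0, 0.1      # initial price and subgradient step size

for _ in range(500):
    # local solves: min_u (u - r_i)^2 + lam*u  =>  u_i = r_i - lam/2
    u = r - lam / 2.0
    # price update along the coupling-constraint violation
    lam = max(0.0, lam + alpha * (u.sum() - U))

print(u, lam)  # approaches [2.5, 1.5] and lam = 1.0
```

At the dual optimum the subsystems jointly satisfy the budget without ever exchanging their local models, only the price.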
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F
2015-01-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\
Optimal model-free prediction from multivariate time series.
Runge, Jakob; Donner, Reik V; Kurths, Jürgen
2015-05-01
Forecasting a time series from multivariate predictors constitutes a challenging problem, especially using model-free approaches. Most techniques, such as nearest-neighbor prediction, quickly suffer from the curse of dimensionality and overfitting for more than a few predictors, which has limited their application mostly to the univariate case. Therefore, selection strategies are needed that harness the available information as efficiently as possible. Since often the right combination of predictors matters, ideally all subsets of possible predictors should be tested for their predictive power, but the exponentially growing number of combinations makes such an approach computationally prohibitive. Here a prediction scheme that overcomes this strong limitation is introduced, utilizing a causal preselection step which drastically reduces the number of possible predictors to the most predictive set of causal drivers, making a globally optimal search scheme tractable. The information-theoretic optimality is derived and practical selection criteria are discussed. As demonstrated for multivariate nonlinear stochastic delay processes, the optimal scheme can even be less computationally expensive than commonly used suboptimal schemes like forward selection. The method suggests a general framework to apply the optimal model-free approach to select variables and subsequently fit a model to further improve a prediction or learn statistical dependencies. The performance of this framework is illustrated on a climatological index of the El Niño Southern Oscillation.
Specialized Language Models using Dialogue Predictions
Popovici, Cosmin; Baggia, Paolo
1996-01-01
This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...
Caries risk assessment models in caries prediction
Directory of Open Access Journals (Sweden)
Amila Zukanović
2013-11-01
Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, PreViser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decay Missing Filled Tooth (DMFT) increment = difference between the Decay Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] allowed examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p = 0.000). The Cariogram is the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. PreViser and CAT gave the same results in 63% of cases; the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.
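The two statistical tests the study relies on can be reproduced on synthetic paired scores. The data below are invented stand-ins for the three models' risk ratings; only the test machinery is real.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Synthetic risk scores from three hypothetical assessment models for the
# same 30 patients (paired data). Two of the models are deliberately
# shifted to mimic "more radical" risk profiles.
rng = np.random.default_rng(0)
base = rng.normal(50.0, 10.0, 30)
m1 = base + rng.normal(0.0, 2.0, 30)
m2 = base + 5.0 + rng.normal(0.0, 2.0, 30)
m3 = base + 5.0 + rng.normal(0.0, 2.0, 30)

# Friedman test: do the three paired ratings differ overall?
stat_all, p_all = friedmanchisquare(m1, m2, m3)

# Wilcoxon signed-rank test: pairwise comparison of two of the models
stat_pair, p_pair = wilcoxon(m2, m3)
print(p_all, p_pair)
```

A small Friedman p-value indicates the models disagree overall, while a large pairwise Wilcoxon p-value (as between the two shifted models here) indicates no detectable difference between that pair.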
Disease prediction models and operational readiness.
Directory of Open Access Journals (Sweden)
Courtney D Corley
The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or none. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...
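A minimal unconstrained version of such a regularized FIR predictive controller can be sketched as follows. The impulse response, horizon and weights are invented, and the paper's input and input-rate constraints and output disturbance filter are omitted:

```python
import numpy as np

# Regularized FIR predictive control, unconstrained sketch: predict the
# output over a horizon from an assumed impulse response, then pick the
# input sequence minimizing tracking error plus an input-rate penalty.
h = 0.4 * 0.6 ** np.arange(10)         # assumed FIR impulse response
N = 20                                 # prediction horizon
ysp = np.ones(N)                       # setpoint trajectory

# Prediction matrix: y = Phi @ u (zero initial conditions for simplicity)
Phi = np.zeros((N, N))
for k in range(N):
    for i in range(len(h)):
        if k - i >= 0:
            Phi[k, k - i] = h[i]

# Input-rate penalty ||D u||^2, D = first-difference matrix
D = np.eye(N) - np.eye(N, k=-1)
lam = 0.1
u = np.linalg.solve(Phi.T @ Phi + lam * D.T @ D, Phi.T @ ysp)
y = Phi @ u
print(y[-1])  # close to the setpoint 1.0
```

Adding the constraints of the paper would turn this linear solve into a quadratic program, but the cost structure is the same.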
A generating functional approach to the sd-model with strong correlations
Directory of Open Access Journals (Sweden)
Yu.A.Izyumov
2005-01-01
A Kadanoff-Baym-type generating functional approach, earlier developed by the authors for strongly correlated systems, is applied to the sd-model with strong sd-coupling. The formalism of the Hubbard X-operators is used, and an equation for the electron Green's function is derived with functional derivatives over external fluctuating fields. Iterations in this equation generate a perturbation theory near the atomic limit. A Hartree-Fock-type approximation is developed within the framework of this theory, and the problem of a metal-insulator phase transition in the sd-model is discussed.
ENSO Prediction using Vector Autoregressive Models
Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.
2013-12-01
A recent comparison (Barnston et al., 2012, BAMS) shows that the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e., 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows a model advancing one month at a time to perform at least as well for a 6 month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993, J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150 year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
Electrostatic ion thrusters - towards predictive modeling
Energy Technology Data Exchange (ETDEWEB)
Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)
2014-02-15
The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (orig.)
Gas explosion prediction using CFD models
Energy Technology Data Exchange (ETDEWEB)
Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)
2006-07-15
A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)
Genetic models of homosexuality: generating testable predictions.
Gavrilets, Sergey; Rice, William R
2006-12-22
Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.
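The overdominance case can be made concrete with the textbook one-locus selection recursion (a standard model, not the authors' full analysis, which also treats chromosomal location and sexual antagonism): heterozygote advantage maintains a stable interior polymorphism at p* = t/(s+t).

```python
# Deterministic one-locus recursion under overdominance: heterozygotes
# are fittest (w_AA = 1-s, w_Aa = 1, w_aa = 1-t), so selection holds
# both alleles at an interior equilibrium. Fitness values illustrative.
s, t = 0.2, 0.1          # selection against the AA and aa homozygotes
p = 0.9                  # initial frequency of allele A
for _ in range(2000):
    q = 1.0 - p
    w_mean = p*p*(1-s) + 2*p*q*1.0 + q*q*(1-t)
    p = (p*p*(1-s) + p*q*1.0) / w_mean   # frequency of A after selection
print(round(p, 4))  # approaches t/(s+t) = 0.3333
```

This is exactly the kind of testable equilibrium prediction the abstract refers to: the form of selection pins down where the polymorphism sits.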
Characterizing Attention with Predictive Network Models.
Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M
2017-04-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction.
A Study On Distributed Model Predictive Consensus
Keviczky, Tamas
2008-01-01
We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.
Different Interaction Models in Strong Decays of Negative Parity N* Resonances Under 2 GeV
Institute of Scientific and Technical Information of China (English)
HE Jun; DONG Yu-Bing
2004-01-01
In this paper, by using harmonic-oscillator wave functions of different interaction models, i.e. OPE (one-pion-exchange model), OPsE (only pseudoscalar meson exchange model), the extended GBE (Goldstone-boson-exchange model including vector and scalar mesons), and OGE (one-gluon-exchange model), we calculate and compare the strong decays of negative parity N* resonances under 2 GeV. We find that the conventional mixing angles are correct, and that GBE and OGE are clearly superior to OPE and OPsE.
NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES
Directory of Open Access Journals (Sweden)
R. G. SILVA
1999-03-01
A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
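The simultaneous strategy can be sketched on a toy problem: the scalar dynamics x' = u are discretized on an equidistant grid, the discretized dynamics are imposed as NLP equality constraints, and states and inputs are optimized together. Grid size, weights and the implicit-Euler discretization below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

# Simultaneous (all-at-once) NLP: decision vector z stacks the state
# trajectory x[0..N] and the input trajectory u[0..N-1].
N, dt = 20, 0.1
x0 = 0.0

def cost(z):
    x, u = z[:N+1], z[N+1:]
    # track x = 1 with a small input penalty
    return dt * np.sum((x[1:] - 1.0)**2 + 0.1 * u**2)

def dynamics(z):
    x, u = z[:N+1], z[N+1:]
    # implicit Euler on equidistant nodes: x[k+1] - x[k] - dt*u[k] = 0
    return x[1:] - x[:-1] - dt * u

cons = [{'type': 'eq', 'fun': dynamics},
        {'type': 'eq', 'fun': lambda z: z[0] - x0}]
z0 = np.zeros(2*N + 1)
sol = minimize(cost, z0, constraints=cons, method='SLSQP',
               options={'maxiter': 300})
x = sol.x[:N+1]
print(sol.success, round(x[-1], 2))
```

Because the dynamics enter as algebraic constraints rather than being integrated forward, model equations and optimization are solved in one pass, which is the essence of the simultaneous approach.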
Performance model to predict overall defect density
Directory of Open Access Journals (Sweden)
J Venkatesh
2012-08-01
Management by metrics is what IT service providers are expected to practise in order to remain differentiated. Given a project, its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive, too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.
Neuro-fuzzy modeling in bankruptcy prediction
Directory of Open Access Journals (Sweden)
Vlachos D.
2003-01-01
For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers of the '90s, progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
Estimating Predictive Variance for Statistical Gas Distribution Modelling
Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo
2009-05-01
Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
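The mean/variance idea can be sketched with Gaussian kernel weighting, a simplified hypothetical variant in the spirit of the kernel-based algorithms the paper discusses; the measurement locations, concentrations and kernel width below are all invented.

```python
import numpy as np

# Kernel-weighted estimates of the mean and the predictive variance of a
# 1D gas concentration field from noisy point measurements.
rng = np.random.default_rng(2)
xs = rng.uniform(0.0, 10.0, 200)                      # sensor locations
c = np.exp(-0.5 * ((xs - 4.0) / 1.5)**2) \
    + 0.05 * rng.normal(size=200)                     # concentrations

def predict(xq, sigma=0.5):
    """Weighted mean and variance of concentration at query point xq."""
    w = np.exp(-0.5 * ((xs - xq) / sigma)**2)         # Gaussian weights
    w /= w.sum()
    mean = np.sum(w * c)
    var = np.sum(w * (c - mean)**2)                   # predictive variance
    return mean, var

m, v = predict(4.0)
print(round(m, 2), v)
```

The variance map produced this way reflects the local fluctuation of the readings, which is exactly the quantity the paper argues should be modelled alongside the mean.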
Pressure prediction model for compression garment design.
Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q
2010-01-01
Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of fabric, applied strain (as a function of reduction factor), and its corresponding Young's modulus, is developed. Design procedures are presented to predict garment pressure using the aforementioned parameters for clinical applications. Compression garments have been widely used in treating burning scars. Fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garments' manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by the compression garments before fabrication; and 2) to propose more design procedures in clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed by the proposed analytical pressure prediction model.
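The prediction equation can be sketched via Laplace's law, P = T / r, with an assumed linear tension model; the symbols and numbers below are illustrative assumptions, not the paper's calibrated formulation.

```python
import math

def garment_pressure(E, strain, area, width, circumference):
    """Laplace's-law sketch of compression-garment pressure.

    Assumed tension model: T = E * strain * area / width, i.e. fabric
    tension per unit garment length from Young's modulus E, applied
    strain, and cross-sectional area; the limb is a cylinder of the
    given circumference.
    """
    radius = circumference / (2.0 * math.pi)   # limb radius, m
    tension = E * strain * area / width        # N per metre of length
    return tension / radius                    # pressure, Pa

# hypothetical values: E = 0.5 MPa, 20% strain (reduction factor),
# fabric area 1 mm^2 over 1 cm width, forearm circumference 0.25 m
p = garment_pressure(E=0.5e6, strain=0.20, area=1e-6, width=0.01,
                     circumference=0.25)
print(round(p))  # about 251 Pa
```

Because tension is linear in strain here, halving the circumference or doubling the reduction factor doubles the predicted pressure, which matches the qualitative design trade-offs discussed in the abstract.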
Statistical assessment of predictive modeling uncertainty
Barzaghi, Riccardo; Marotta, Anna Maria
2017-04-01
When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.
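The combined chi-square test described above, in which the model covariance is added to the data covariance before weighting the residuals, can be sketched for a toy two-component residual. The matrices and residual values are illustrative assumptions, not from the paper.

```python
def mat_inv_2x2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def chi_square(residual, cov_data, cov_model):
    """chi^2 = r^T (C_d + C_m)^{-1} r, combining data and model covariances."""
    c = [[cov_data[i][j] + cov_model[i][j] for j in range(2)] for i in range(2)]
    c_inv = mat_inv_2x2(c)
    tmp = [sum(c_inv[i][j] * residual[j] for j in range(2)) for i in range(2)]
    return sum(residual[i] * tmp[i] for i in range(2))

r = [0.3, -0.1]                            # model-minus-GPS strain residual
Cd = [[0.04, 0.0], [0.0, 0.04]]            # data (GPS) covariance
Cm = [[0.05, 0.0], [0.0, 0.05]]            # estimated model covariance
chi2_with = chi_square(r, Cd, Cm)
chi2_without = chi_square(r, Cd, [[0.0, 0.0], [0.0, 0.0]])
```

As the abstract notes, accounting for the model covariance lowers the observed chi-square: here `chi2_without` exceeds `chi2_with` because ignoring model uncertainty over-weights the residuals.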
Electromagnetic and Strong Decays in a Collective Model of the Nucleon
Leviatan, A
1997-01-01
We present an analysis of electromagnetic elastic form factors, helicity amplitudes and strong decay widths of non-strange baryon resonances, within a collective model of the nucleon. Flavor-breaking and stretching effects are considered. Deviations from the naive three-constituents description are pointed out.
Flache, Andreas; Mäs, Michael
2008-01-01
Lau and Murnighan (LM) suggested that strong demographic faultlines threaten team cohesion and reduce consensus. However, it remains unclear which assumptions are exactly needed to derive faultline effects. We propose a formal computational model of the effects of faultlines that uses four elementar
Strong absorption model and radial parameters of medium-weight nuclei
Kuterbekov, K A
2002-01-01
The results of an analysis of the angular distributions of differential cross sections for elastic scattering on medium-weight nuclei in the framework of the strong absorption model are given. The A-dependences of the interaction radius are presented, and radial parameters of even isotopes with mass numbers A=64-124 are obtained. (author)
STRONG CONSISTENCY OF M ESTIMATOR IN LINEAR MODEL FOR NEGATIVELY ASSOCIATED SAMPLES
Institute of Scientific and Technical Information of China (English)
Qunying WU
2006-01-01
This paper discusses the strong consistency of M estimator of regression parameter in linear model for negatively associated samples. As a result, the author extends Theorem 1 and Theorem 2 of Shanchao YANG (2002) to the NA errors without necessarily imposing any extra condition.
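The M estimator whose consistency is studied above is the minimizer of a robust loss over the regression residuals. As a concrete, purely illustrative sketch (unrelated to the NA-sample theory itself), a Huber M-estimate of the slope in a no-intercept linear model can be computed by iteratively reweighted least squares; no residual-scale estimate is used here, which is a simplification.

```python
def huber_weight(r, k=1.345):
    """IRLS weight for the Huber psi-function: 1 for small residuals,
    k/|r| for large ones (linear rather than quadratic penalty)."""
    a = abs(r)
    return 1.0 if a <= k else k / a

def m_estimate_slope(xs, ys, iters=50):
    """M-estimate of beta in y = beta*x + e via iteratively
    reweighted least squares with Huber weights."""
    beta = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)  # OLS start
    for _ in range(iters):
        w = [huber_weight(y - beta * x) for x, y in zip(xs, ys)]
        beta = (sum(wi * x * y for wi, x, y in zip(w, xs, ys))
                / sum(wi * x * x for wi, x in zip(w, xs)))
    return beta

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.0, 30.0]    # last point is a gross outlier (true beta = 2)
beta_m = m_estimate_slope(xs, ys)
beta_ols = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

The M-estimate stays near the true slope 2 while ordinary least squares is dragged toward the outlier, which is the robustness property the consistency results above concern.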
Engineering the Dynamics of Effective Spin-Chain Models for Strongly Interacting Atomic Gases
DEFF Research Database (Denmark)
Volosniev, A. G.; Petrosyan, D.; Valiente, M.
2015-01-01
We consider a one-dimensional gas of cold atoms with strong contact interactions and construct an effective spin-chain Hamiltonian for a two-component system. The resulting Heisenberg spin model can be engineered by manipulating the shape of the external confining potential of the atomic gas. We...
Strong Lensing Probabilities in a Cosmological Model with a Running Primordial Power Spectrum
Zhang, T J; Yang, Z L; He, X T; Zhang, Tong-Jie; Chen, Da-Ming; Yang, Zhi-Liang; He, Xiang-Tao
2004-01-01
The combination of the first-year Wilkinson Microwave Anisotropy Probe (WMAP) data with other finer scale cosmic microwave background (CMB) experiments (CBI and ACBAR) and two structure formation measurements (2dFGRS and Lyman $\\alpha$ forest) suggests a $\\Lambda$CDM cosmological model with a running spectral index of primordial density fluctuations. Motivated by this new result on the index of the primordial power spectrum, we present the first study of the predicted lensing probabilities of image separation in a spatially flat $\\Lambda$CDM model with a running spectral index (RSI-$\\Lambda$CDM model). It is shown that the RSI-$\\Lambda$CDM model suppresses the predicted lensing probabilities at small splitting angles of less than about 4$^{''}$ compared with those of the standard power-law $\\Lambda$CDM (PL-$\\Lambda$CDM) model.
Predictive modelling of contagious deforestation in the Brazilian Amazon.
Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M
2013-01-01
Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the future magnitude and geographical distribution of future tropical deforestation is uncertain. Here, we introduce a dynamic and spatially-explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated-pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia")-the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve between 2050 compared to 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which is currently
A kinetic model for predicting biodegradation.
Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O
2007-01-01
Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
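Leaving the pathway-probability machinery of CATABOL aside, the first-order kinetics assumption adopted above already fixes the relation between the rate constant, the half-life, and the extent of degradation within the regulatory '10-day window'. The rate constant below is an invented example, not a value from the paper.

```python
import math

def extent_of_biodegradation(k, t):
    """Fraction degraded by time t (days) under first-order kinetics:
    1 - exp(-k*t)."""
    return 1.0 - math.exp(-k * t)

def half_life(k):
    """Half-life implied by the first-order rate constant k [1/day]."""
    return math.log(2.0) / k

k = 0.1                                         # invented rate constant [1/day]
ten_day_extent = extent_of_biodegradation(k, 10.0)
t_half = half_life(k)
```

With k = 0.1/day the sketch gives a half-life of about 6.9 days and roughly 63% degradation after 10 days, illustrating how a fitted rate constant maps onto the pass/fail style criteria mentioned in the abstract.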
Disease Prediction Models and Operational Readiness
Energy Technology Data Exchange (ETDEWEB)
Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.
2014-03-19
INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established; these publications, along with their abstracts, are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers and the
Describing the strongly interacting quark-gluon plasma through the Friedberg-Lee model
Shu, Song; Li, Jia-Rong
2010-10-01
The Friedberg-Lee (FL) model is studied at finite temperature and density. The soliton solutions of the FL model in the deconfinement phase transition are solved and thoroughly discussed for certain boundary conditions. We indicate that the solitons before and after the deconfinement have different physical meanings: the soliton before deconfinement represents hadrons, while the soliton after the deconfinement represents the bound state of quarks which leads to a strongly interacting quark-gluon plasma phase. The corresponding phase diagram is given.
Nonlinear model predictive control theory and algorithms
Grüne, Lars
2017-01-01
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC. T...
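The receding-horizon idea at the heart of NMPC can be demonstrated with a deliberately tiny sketch: at each step, search all control sequences over a coarse grid for a short horizon, apply only the first input, and repeat from the new state. A linear scalar plant is used here for clarity; the same loop applies unchanged to a nonlinear `f`, and a real NMPC controller replaces the grid search with the nonlinear optimization routine the book describes.

```python
from itertools import product

def simulate_cost(x0, controls, f, stage_cost):
    """Accumulate stage costs along a predicted open-loop trajectory."""
    x, cost = x0, 0.0
    for u in controls:
        cost += stage_cost(x, u)
        x = f(x, u)
    return cost

def nmpc_step(x, f, stage_cost, u_grid, horizon):
    """One receding-horizon step: exhaustively search control sequences of
    length `horizon` on the grid and return the first input of the best."""
    best = min(product(u_grid, repeat=horizon),
               key=lambda seq: simulate_cost(x, seq, f, stage_cost))
    return best[0]

f = lambda x, u: x + 0.5 * u                   # toy scalar plant
stage_cost = lambda x, u: x * x + 0.01 * u * u # state penalty + control penalty
u_grid = [-2.0, -1.0, 0.0, 1.0, 2.0]

x = 1.0
trajectory = [x]
for _ in range(5):                             # closed receding-horizon loop
    u = nmpc_step(x, f, stage_cost, u_grid, horizon=3)
    x = f(x, u)
    trajectory.append(x)
```

From x = 1 the controller immediately selects u = -2, driving the state to the origin in one step and then holding it there, a miniature of the closed-loop stability property the book analyzes.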
Predictive Modeling in Actinide Chemistry and Catalysis
Energy Technology Data Exchange (ETDEWEB)
Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-05-16
These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.
Anisotropic spin model of strong spin-orbit-coupled triangular antiferromagnets
Li, Yao-Dong; Wang, Xiaoqun; Chen, Gang
2016-07-01
Motivated by the recent experimental progress on strong spin-orbit-coupled rare-earth triangular antiferromagnets, we analyze the highly anisotropic spin model that describes the interaction between the spin-orbit-entangled Kramers doublet local moments on the triangular lattice. We apply the Luttinger-Tisza method, classical Monte Carlo simulation, and self-consistent spin wave theory to analyze the anisotropic spin Hamiltonian. The classical phase diagram includes the 120° state and two distinct stripe-ordered phases. The frustration is very strong and significantly suppresses the ordering temperature in the regimes close to the phase boundary between the two ordered phases. Going beyond the semiclassical analysis, we include the quantum fluctuations of the spin moments within a self-consistent Dyson-Maleev spin-wave treatment. We find that the strong quantum fluctuations melt the magnetic order in the frustrated regions. We explore the magnetic excitations in the three different ordered phases as well as in strong magnetic fields. Our results provide guidance for future theoretical studies of the generic model and are broadly relevant for strong spin-orbit-coupled triangular antiferromagnets such as YbMgGaO4, RCd3P3, RZn3P3, RCd3As3, RZn3As3, and R2O2CO3.
García-Tecocoatzi, H.; Bijker, R.; Ferretti, J.; Galatà, G.; Santopinto, E.
2016-10-01
In this contribution, we discuss the results of a quark model (QM) calculation of the open-flavor strong decays of light nucleon resonances. These are the results of a recent calculation, in which we used a modified ^3P_0 model for the amplitudes and the U(7) algebraic model and the hypercentral quark model to predict the baryon spectrum. The decay amplitudes are compared with the existing experimental data.
Probabilistic prediction models for aggregate quarry siting
Robinson, G.R.; Larkins, P.M.
2007-01-01
Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, the Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be good predictors of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.
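The weights-of-evidence combination used above can be sketched in log-odds space: each binary evidence layer contributes a positive weight W+ where the evidence is present and a negative weight W- where it is absent, added to the prior logit under a conditional-independence assumption. The two layers and all probabilities below are invented for illustration.

```python
import math

def weight_pair(p_e_given_site, p_e_given_nosite):
    """W+ (evidence present) and W- (evidence absent) for one binary layer."""
    w_plus = math.log(p_e_given_site / p_e_given_nosite)
    w_minus = math.log((1.0 - p_e_given_site) / (1.0 - p_e_given_nosite))
    return w_plus, w_minus

def posterior_probability(prior, layers, present):
    """Combine conditionally independent evidence layers in log-odds space."""
    logit = math.log(prior / (1.0 - prior))
    for (w_plus, w_minus), on in zip(layers, present):
        logit += w_plus if on else w_minus
    odds = math.exp(logit)
    return odds / (1.0 + odds)

# Two invented layers: "near the transportation network", "high population density".
layers = [weight_pair(0.8, 0.3), weight_pair(0.6, 0.2)]
p_favorable = posterior_probability(0.01, layers, [True, True])
p_unfavorable = posterior_probability(0.01, layers, [False, False])
```

Cells where both evidence patterns occur end up well above the 1% prior, while cells lacking both fall below it, which is exactly how the prospectivity maps described above rank candidate quarry sites.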
Predicting Footbridge Response using Stochastic Load Models
DEFF Research Database (Denmark)
Pedersen, Lars; Frier, Christian
2013-01-01
Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing s...... as it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte-Carlo simulations and focus is on estimating vertical structural response to single person loading....
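Treating the walking parameters as random variables rather than deterministic constants can be sketched with a simple Monte Carlo loop over a single-degree-of-freedom bridge mode: sample step frequency, pedestrian mass, and dynamic load factor, compute the steady-state modal response for each draw, and read off a response percentile. The modal properties and parameter distributions below are invented for illustration, not taken from the paper.

```python
import math
import random

def steady_state_amplitude(m_p, dlf, f_step, f_n, m_modal, zeta):
    """Steady-state displacement amplitude of one bridge mode (natural
    frequency f_n, modal mass m_modal, damping ratio zeta) under a
    harmonic walking force of amplitude dlf * m_p * g."""
    g = 9.81
    force = dlf * m_p * g
    r = f_step / f_n                            # frequency ratio
    static = force / (m_modal * (2.0 * math.pi * f_n) ** 2)
    return static / math.sqrt((1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2)

random.seed(1)
f_n, m_modal, zeta = 2.0, 5000.0, 0.005         # assumed modal properties
samples = []
for _ in range(2000):
    f_step = random.gauss(1.9, 0.1)             # stochastic step frequency [Hz]
    m_p = abs(random.gauss(75.0, 15.0))         # stochastic pedestrian mass [kg]
    dlf = abs(random.gauss(0.4, 0.1))           # stochastic dynamic load factor
    samples.append(steady_state_amplitude(m_p, dlf, f_step, f_n, m_modal, zeta))
samples.sort()
p95 = samples[int(0.95 * len(samples))]         # 95th-percentile response
```

The output is a distribution of vertical responses rather than a single number, so design decisions can be tied to an exceedance probability, which is the point of the stochastic load modelling discussed above.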
Nonconvex Model Predictive Control for Commercial Refrigeration
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp
2013-01-01
is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear, and the constraints are convex. The cost...... the iterations, which is more than fast enough to run in real-time. We demonstrate our method on a realistic model, with a full year simulation and 15 minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost...
Sari, Hanife; Yetilmezsoy, Kaan; Ilhan, Fatih; Yazici, Senem; Kurt, Ugur; Apaydin, Omer
2013-06-01
Three multiple-input, multiple-output fuzzy-logic-based models were developed as an artificial-intelligence-based approach to model a novel integrated process (UF-IER-EDBM-FO) consisting of ultrafiltration (UF), ion exchange resins (IER), electrodialysis with bipolar membrane (EDBM), and Fenton's oxidation (FO) units treating young, middle-aged, and stabilized landfill leachates. The FO unit was considered the key process for implementation of the proposed modeling scheme. Four input components, the H2O2/chemical oxygen demand ratio, the H2O2/Fe2+ ratio, reaction pH, and reaction time, were fuzzified in a Mamdani-type fuzzy inference system to predict the removal efficiencies of chemical oxygen demand, total organic carbon, color, and ammonia nitrogen. A total of 200 rules in IF-THEN format were established within the framework of a graphical user interface for each fuzzy-logic model. The product (prod) and the center of gravity (centroid) methods were used as the inference operator and defuzzification method, respectively, for the proposed prognostic models. Fuzzy-logic predicted results were compared to the outputs of multiple regression models by means of various descriptive statistical indicators, and the proposed methodology was tested against the experimental data. The testing results clearly revealed that the proposed prognostic models showed a superior predictive performance, with very high determination coefficients (R²) between 0.930 and 0.991. This study demonstrated a simple means of modeling and the potential of a knowledge-based approach for capturing complicated inter-relationships in a highly non-linear problem. Clearly, it was shown that the proposed prognostic models provide a well-suited and cost-effective method to predict removal efficiencies of wastewater parameters prior to discharge to receiving streams.
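A Mamdani-type inference of the kind used above can be sketched end-to-end with triangular membership functions, min implication, max aggregation, and centroid defuzzification. The two rules, the pH optimum, and all membership parameters below are invented for illustration and are far simpler than the 200-rule models of the paper (which also uses the prod operator rather than min).

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(ph):
    """Two hypothetical rules predicting COD removal (%) from reaction pH:
      IF pH is optimal (around 3) THEN removal is high
      IF pH is poor  (toward 7)   THEN removal is low
    using min implication, max aggregation, centroid defuzzification."""
    fire_high = tri(ph, 2.0, 3.0, 4.0)
    fire_low = tri(ph, 3.0, 7.0, 11.0)
    num = den = 0.0
    for y in range(0, 101):                      # discretized output universe
        mu = max(min(fire_high, tri(y, 60, 80, 100)),
                 min(fire_low, tri(y, 0, 20, 40)))
        num += y * mu
        den += mu
    return num / den if den else None

good = mamdani(3.0)    # pH at the assumed optimum
bad = mamdani(6.5)     # pH far from it
```

At the assumed optimum only the "high removal" rule fires and the crisp output sits at its peak (80%), while a poor pH fires only the "low removal" rule (20%), showing how the rule base maps operating conditions to predicted removal efficiency.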
Solvable model of dissipative dynamics in the deep strong coupling regime
Bina, M; Casanova, J; Garcia-Ripoll, J J; Lulli, A; Casagrande, F; Solano, E
2011-01-01
We describe the dynamics of a qubit interacting with a bosonic mode coupled to a zero-temperature bath in the deep strong coupling (DSC) regime. We provide an analytical solution for this open system dynamics in the off-resonance case of the qubit-mode interaction. Collapses and revivals of parity chain populations and the oscillatory behavior of the mean photon number are predicted. At the same time, photon number wave packets, propagating back and forth along parity chains, become incoherently mixed. Finally, we investigate numerically the effect of detuning on the validity of the analytical solution.
Constructing predictive models of human running.
Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre
2015-02-06
Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
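The data-driven Floquet analysis mentioned above reduces, in its simplest form, to fitting a linear return map on a once-per-stride Poincaré section and checking that its eigenvalues (Floquet multipliers) lie inside the unit circle. A one-dimensional sketch with synthetic section data follows; the multiplier value 0.5 is invented, and real gait data would of course be multi-dimensional and noisy.

```python
def fit_return_map(states):
    """Least-squares fit of a linear once-per-stride return map
    x_{k+1} ~ a * x_k from successive Poincare-section deviations."""
    num = sum(x0 * x1 for x0, x1 in zip(states, states[1:]))
    den = sum(x0 * x0 for x0 in states[:-1])
    return num / den

# Synthetic section data from a stable rhythmic system with multiplier 0.5:
xs = [1.0]
for _ in range(20):
    xs.append(0.5 * xs[-1])

a_hat = fit_return_map(xs)
stable = abs(a_hat) < 1.0      # Floquet multiplier inside the unit circle
```

Recovering a multiplier with magnitude below one is what certifies that deviations from the periodic gait decay stride to stride, the stability property SLIP alone fails to predict.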
Angular Structure of Jet Quenching Within a Hybrid Strong/Weak Coupling Model
Casalderrey-Solana, Jorge; Milhano, Guilherme; Pablos, Daniel; Rajagopal, Krishna
2017-01-01
Within the context of a hybrid strong/weak coupling model of jet quenching, we study the modification of the angular distribution of the energy within jets in heavy ion collisions, as partons within jet showers lose energy and get kicked as they traverse the strongly coupled plasma produced in the collision. To describe the dynamics transverse to the jet axis, we add the effects of transverse momentum broadening into our hybrid construction, introducing a parameter $K\\equiv \\hat q/T^3$ that governs its magnitude. We show that, because of the quenching of the energy of partons within a jet, even when $K\
Angular Structure of Jet Quenching Within a Hybrid Strong/Weak Coupling Model
Casalderrey-Solana, Jorge; Milhano, Guilherme; Pablos, Daniel; Rajagopal, Krishna
2016-01-01
Within the context of a hybrid strong/weak coupling model of jet quenching, we study the modification of the angular distribution of the energy within jets in heavy ion collisions, as partons within jet showers lose energy and get kicked as they traverse the strongly coupled plasma produced in the collision. To describe the dynamics transverse to the jet axis, we add the effects of transverse momentum broadening into our hybrid construction, introducing a parameter $K\\equiv \\hat q/T^3$ that governs its magnitude. We show that, because of the quenching of the energy of partons within a jet, even when $K\
Energy Technology Data Exchange (ETDEWEB)
Chong, S-H [Institute for Molecular Science, Okazaki 444-8585 (Japan); Chen, S-H [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Mallamace, F, E-mail: chong@ims.ac.j [Dipartimento di Fisica, Università di Messina and IRCCS Neurolesi 'Bonino-Pulejo', I-98166 Messina (Italy)
2009-12-16
It is argued that the extended mode-coupling theory for glass transition predicts a dynamic crossover in the alpha-relaxation time and in the self-diffusion constant as a general implication of the structure of its equations of motion. This crossover occurs near the critical temperature T_c of the idealized version of the theory, and is caused by the change in the dynamics from the one determined by the cage effect to that dominated by hopping processes. When combined with a model for the hopping kernel deduced from the dynamical theory for diffusion-jump processes, the dynamic crossover can be identified as the fragile-to-strong crossover (FSC) in which the alpha-relaxation time and the self-diffusion constant cross over from a non-Arrhenius to an Arrhenius behavior. Since the present theory does not resort to the existence of the so-called Widom line, to which the FSC in confined water has been attributed, it provides a possible explanation of the FSC observed in a variety of glass-forming systems in which the existence of the Widom line is unlikely. In addition, the present theory predicts that the Stokes-Einstein relation (SER) breaks down in different ways on the fragile and strong sides of the FSC, in agreement with the experimental observation in confined water. It is also demonstrated that the violation of the SER in both the fragile and strong regions can be fitted reasonably well by a single fractional relation with an empirical exponent of 0.85.
Energy Technology Data Exchange (ETDEWEB)
Gasparini, Maria Alice; Marshall, Phil; Treu, Tommaso; /UC, Santa Barbara; Morganson, Eric; /KIPAC, Menlo Park; Dubath, Florian; /Santa Barbara, KITP
2007-11-14
We use current theoretical estimates for the density of long cosmic strings to predict the number of strong gravitational lensing events in astronomical imaging surveys as a function of angular resolution and survey area. We show that angular resolution is the single most important factor, and that interesting limits on the dimensionless string tension Gμ/c² can be obtained by existing and planned surveys. At the resolution of the Hubble Space Telescope (0.14 arcsec), it is sufficient to survey of order a square degree, well within reach of the current HST archive, to probe the regime Gμ/c² ≈ 10^-8. If lensing by cosmic strings is not detected, such a survey would improve the limit on the string tension by an order of magnitude over that available from the cosmic microwave background. At the resolution (0.028 arcsec) attainable with the next generation of large ground-based instruments, both in the radio and in the infrared with adaptive optics, surveying a sky area of order ten square degrees will allow us to probe the Gμ/c² ≈ 10^-9 regime. These limits will not be improved significantly by increasing the solid angle of the survey.
Note on the hydrodynamic description of thin nematic films: strong anchoring model
Lin, Te-Sheng; Archer, Andrew J; Kondic, Lou; Thiele, Uwe
2013-01-01
We discuss the long-wave hydrodynamic model for a thin film of nematic liquid crystal in the limit of strong anchoring at the free surface and at the substrate. Our aim is to clarify how the elastic energy enters the evolution equation for the film thickness; several models exist in the literature that result in qualitatively different behaviour. We consolidate the various approaches and show that the long-wave model derived through an asymptotic expansion of the full nemato-hydrodynamic equations with consistent boundary conditions agrees with the equation one obtains by employing a thermodynamically motivated gradient dynamics formulation based on an underlying free energy functional. As a result, we find that the elastic distortion energy is always stabilising in the case of strong anchoring. To support the discussion in the main part of the paper, an appendix gives the full derivation of the evolution equation for the film thickness via asymptotic expansion.
Scale invariant extension of the standard model with a strongly interacting hidden sector.
Hur, Taeil; Ko, P
2011-04-08
We present a scale invariant extension of the standard model with a new QCD-like strong interaction in the hidden sector. A scale Λ(H) is dynamically generated in the hidden sector by dimensional transmutation, and chiral symmetry breaking occurs in the hidden sector. This scale is transmitted to the SM sector by a real singlet scalar messenger S and can trigger electroweak symmetry breaking. Thus all the mass scales in this model arise from the hidden sector scale Λ(H), which has quantum mechanical origin. Furthermore, the lightest hadrons in the hidden sector are stable by the flavor conservation of the hidden sector strong interaction, and could be the cold dark matter (CDM). We study collider phenomenology, relic density, and direct detection rates of the CDM of this model.
Walczak, A.P.
2015-01-01
Title of the PhD thesis: Development of an integrated in vitro model for the prediction of oral bioavailability of nanoparticles. The number of food-related products containing nanoparticles (NPs) is increasing. To understand the safety of such products, the potential uptake of these NPs
Predictive modeling by the cerebellum improves proprioception.
Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J
2013-09-04
Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.
The strong-weak coupling symmetry in 2D Φ4 field models
Directory of Open Access Journals (Sweden)
B.N.Shalaev
2005-01-01
It is found that the exact beta-function β(g) of the continuous 2D gΦ⁴ model possesses two types of dual symmetries: the Kramers-Wannier (KW) duality symmetry and the strong-weak (SW) coupling symmetry f(g), or S-duality. All these transformations are explicitly constructed. The S-duality transformation f(g) is shown to connect the domains of weak and strong coupling, i.e. above and below g*. Basically, this means there is a tempting possibility to compute multiloop Feynman diagrams for the β-function using high-temperature lattice expansions. The regular scheme developed is found to be strongly unstable. Approximate values of the renormalized coupling constant g* found from the duality symmetry equations are in agreement with available numerical results.
Modeling of Nonlinear Propagation in Multi-layer Biological Tissues for Strong Focused Ultrasound
Institute of Scientific and Technical Information of China (English)
FAN Ting-Bo; LIU Zhen-Bo; ZHANG Zhe; ZHANG DONG; GONG Xiu-Fen
2009-01-01
A theoretical model of nonlinear propagation in multi-layered tissues for strongly focused ultrasound is proposed. In this model, the spheroidal beam equation (SBE) describes the nonlinear sound propagation within each tissue layer, and generalized oblique incidence theory handles the sound transmission between adjacent tissue layers. Computer simulation is performed on a fat-muscle-liver tissue model under the irradiation of a 1 MHz focused transducer with a large aperture angle of 35°. The results demonstrate that the tissue layers change the amplitude of the sound pressure at the focal region and increase the side lobes.
A prediction model for Clostridium difficile recurrence
Directory of Open Access Journals (Sweden)
Francis D. LaBarbera
2015-02-01
Background: Clostridium difficile infection (CDI) is a growing problem in both community and hospital settings. Its incidence has risen over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. Few studies have examined patterns of recurrence. The studies currently available have identified a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of them. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. We used a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables and has outperformed numerous other models and statistical methods. Results: The resulting model predicted CDR with a sensitivity of 83.3%, a specificity of 63.1%, and an area under the curve of 82.6%. Like other similar studies that have used the RF model, we obtained very encouraging results. Conclusions: We hope that in the future, machine learning algorithms such as the RF will see wider application.
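A minimal sketch of the kind of Random Forest classifier the abstract describes, using scikit-learn on synthetic data; the risk factors, cohort, and recurrence labels below are invented for illustration and are not the study's actual variables or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 198  # cohort size reported in the abstract

# Hypothetical risk factors (not the study's variables):
X = np.column_stack([
    rng.integers(0, 2, n),    # e.g. prior antibiotic exposure (invented)
    rng.integers(0, 2, n),    # e.g. proton-pump inhibitor use (invented)
    rng.normal(70, 15, n),    # e.g. patient age (invented)
])
# Synthetic recurrence label loosely correlated with the features
y = (0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * X[:, 2]
     + rng.normal(0, 0.5, n)) > 1.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

# Area under the ROC curve on held-out patients, as in the study's evaluation
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

A real application would replace the synthetic features with the charted clinical variables and validate with cross-validation rather than a single split.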
Gamma-Ray Pulsars: Models and Predictions
Harding, A K
2001-01-01
Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10^{12} - 10^{13} G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models. N...
Ground Motion Prediction Models for Caucasus Region
Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino
2016-04-01
Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters for attenuation relations are peak ground acceleration and spectral acceleration, because these parameters provide useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained from new data recorded by the Georgian seismic network and by networks in neighboring countries. The models are estimated by classical statistical regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models require adjustment to make them appropriate for site-specific scenarios; however, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) model that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.
Modeling and Prediction of Krueger Device Noise
Guo, Yueping; Burley, Casey L.; Thomas, Russell H.
2016-01-01
This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.
A generative model for predicting terrorist incidents
Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger
2017-05-01
A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Information Surveillance and Reconnaissance (ISR), since they allow an estimation of regions in the theater of operation where terrorist incidents may arise and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to their occurrence, and provide a mathematical analysis calculating the likelihood of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.
Allowing for model error in strong constraint 4D-Var
Howes, Katherine; Lawless, Amos; Fowler, Alison
2016-04-01
Four-dimensional variational data assimilation (4D-Var) can be used to obtain the best estimate of the initial conditions of an environmental forecasting model, namely the analysis. In practice, when the forecasting model contains errors, the analysis from the 4D-Var algorithm will be degraded to allow for errors later in the forecast window. This work focuses on improving the analysis at the initial time by allowing for the fact that the model contains error, within the context of strong constraint 4D-Var. The 4D-Var method developed acknowledges the presence of random error in the model at each time step by replacing the observation error covariance matrix with an error covariance matrix that includes both observation error and model error statistics. It is shown that this new matrix represents the correct error statistics of the innovations in the presence of model error. A method for estimating this matrix from innovation statistics, without requiring prior knowledge of the model error statistics, is presented. The method is demonstrated numerically on a nonlinear chaotic system with erroneous parameter values. We show that the new method reduces the analysis error covariance compared with a standard strong constraint 4D-Var scheme. We discuss the fact that an improved analysis will not necessarily provide a better forecast.
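A minimal numerical sketch of the covariance-estimation idea in the abstract: the sample covariance of a set of innovation vectors is used in place of the pure observation-error covariance. The function name and synthetic data are ours, not the paper's; a real system would accumulate innovations d_k = y_k − H(x_k) over an assimilation window.

```python
import numpy as np

def innovation_covariance(innovations):
    """Sample covariance of innovation vectors.

    In the presence of model error, this matrix estimates the combined
    observation-error plus model-error statistics, and can replace the
    pure observation-error covariance R in strong constraint 4D-Var.
    """
    d = np.asarray(innovations, dtype=float)  # shape (n_samples, n_obs)
    d = d - d.mean(axis=0)                    # remove any innovation bias
    return d.T @ d / (d.shape[0] - 1)

# Synthetic check: innovations drawn from a known covariance are
# recovered by the estimator.
rng = np.random.default_rng(1)
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
samples = rng.multivariate_normal([0.0, 0.0], true_cov, size=20000)
est = innovation_covariance(samples)
```

With enough samples `est` approaches `true_cov`; in practice the sample size is limited, which is part of why estimating this matrix without prior model-error knowledge is nontrivial.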
Optimal feedback scheduling of model predictive controllers
Institute of Scientific and Technical Information of China (English)
Pingfang ZHOU; Jianying XIE; Xiaolong DENG
2006-01-01
Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against variation in the execution time of MPC tasks at runtime. Simulation results illustrate its effectiveness.
Objective calibration of numerical weather prediction models
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to applying the methodology to an NWP model is presented in this study. The challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales: the sensitivity of NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required for the calibration of an NWP model. Three free model parameters, affecting mainly the turbulence parameterization schemes, were selected with respect to their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or used to customize the same model implementation over different climatological areas.
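The core of a quadratic meta-model calibration can be illustrated in one dimension: fit a parabola to (parameter value, forecast score) samples from a few model runs and take its minimum as the calibrated value. The parameter and scores below are invented; the actual method works over several parameters jointly with a multivariate quadratic surface.

```python
import numpy as np

# Invented calibration samples: one free parameter varied over five
# model runs, each scored against observations (lower is better).
param_samples = np.array([0.5, 1.0, 1.5, 2.0, 2.5])  # hypothetical parameter
score_samples = np.array([3.1, 2.2, 1.9, 2.3, 3.4])  # hypothetical RMSE

# Quadratic meta-model: score ≈ a*p^2 + b*p + c
a, b, c = np.polyfit(param_samples, score_samples, deg=2)

# Calibrated value = vertex of the fitted parabola (valid when a > 0)
optimum = -b / (2 * a)
```

The multi-parameter version fits cross terms as well and minimizes the fitted surface, which is far cheaper than rerunning the full NWP model for every candidate parameter combination.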
Prediction models from CAD models of 3D objects
Camps, Octavia I.
1992-11-01
In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.
An Anisotropic Hardening Model for Springback Prediction
Zeng, Danielle; Xia, Z. Cedric
2005-08-01
As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through a die radius or drawbead during the sheet metal forming process. This model accounts for an anisotropic yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparison of numerical and experimental springback results for a DP600 straight U-channel test.
Molecular alignment and filamentation: comparison between weak and strong field models
Berti, N; Wolf, J -P; Faucher, O
2014-01-01
The impact of nonadiabatic laser-induced molecular alignment on filamentation is studied numerically. Weak and strong field models of impulsive molecular alignment are compared in the context of nonlinear pulse propagation. It is shown that the widely used weak field model describing the refractive index modification induced by impulsive molecular alignment accurately reproduces the propagation dynamics, provided that only a single pulse is involved in the experiment. On the contrary, it fails to reproduce the nonlinear propagation experienced by an intense laser pulse traveling in the wake of a second strong laser pulse. The discrepancy depends on the relative delay between the two pulses and is maximal for delays corresponding to half the rotational period of the molecule.
The moduli and gravitino (non)-problems in models with strongly stabilized moduli
Energy Technology Data Exchange (ETDEWEB)
Evans, Jason L.; Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN, 55455 (United States); Garcia, Marcos A.G., E-mail: jlevans@umn.edu, E-mail: garciagarcia@physics.umn.edu, E-mail: olive@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN, 55455 (United States)
2014-03-01
In gravity mediated models, and in particular in models with strongly stabilized moduli, there is a natural hierarchy between gaugino masses, the gravitino mass and moduli masses: m_{1/2} << m_{3/2} << m_φ. Given this hierarchy, we show that 1) moduli problems associated with excess entropy production from moduli decay and 2) problems associated with moduli/gravitino decays to neutralinos are non-existent. Placed in an inflationary context, we show that the amplitude of moduli oscillations is severely limited by strong stabilization. Moduli oscillations may then never come to dominate the energy density of the Universe. As a consequence, moduli decay to gravitinos and their subsequent decay to neutralinos need not overpopulate the cold dark matter density.
A study of Feshbach resonances and the unitary limit in a model of strongly correlated nucleons
Mekjian, Aram Z
2010-01-01
A model of strongly interacting and correlated hadrons is developed. The interaction used contains a long range attraction and a short range repulsive hard core. Using this interaction and various limiting situations of it, a study of the effect of bound states and Feshbach resonances is given. The limiting situations are a pure square well interaction, a delta-shell potential and a pure hard core potential. The limit of a pure hard core potential is compared with results for a spinless Bose and Fermi gas. The limit of many partial waves for a pure hard core interaction is also considered and results in expressions involving the hard core volume. This feature arises from a scaling relation similar to that for hard sphere scattering with diffractive corrections. The role of underlying isospin symmetries associated with the strong interaction of protons and neutrons in this two component model is investigated. Properties are studied with varying proton fraction. An analytic expression for the Beth-Uhlenbeck conti...
Predictive modelling of ferroelectric tunnel junctions
Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.
2016-05-01
Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.
Simple predictions from multifield inflationary models.
Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C
2014-04-25
We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges n_s ∈ (0.9455, 0.9534), α ∈ (−9.741, −7.047) × 10⁻⁴, r ∈ (0.1445, 0.1449), and r_iso ∈ (0.02137, 3.510) × 10⁻³ for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.
Veprauskas, Amy; Cushing, J M
2017-03-01
We study a discrete time, structured population dynamic model motivated by recent field observations concerning certain life history strategies of colonial-nesting gulls, specifically the glaucous-winged gull (Larus glaucescens). The model focuses on mechanisms hypothesized to play key roles in a population's response to degraded environmental resources, namely increased cannibalism and adjustments in reproductive timing. We explore the dynamic consequences of these mechanisms using a juvenile-adult structured model. Mathematically, the model is unusual in that it involves a high co-dimension bifurcation at [Formula: see text] which, in turn, leads to a dynamic dichotomy between equilibrium states and synchronized oscillatory states. We give diagnostic criteria that determine which dynamic is stable. We also explore strong Allee effects caused by positive feedback mechanisms in the model and the possible consequence that a cannibalistic population can survive when a non-cannibalistic population cannot.
Weak-strong uniqueness for measure-valued solutions of some compressible fluid models
Gwiazda, Piotr; Świerczewska-Gwiazda, Agnieszka; Wiedemann, Emil
2015-10-01
We prove weak-strong uniqueness in the class of admissible measure-valued solutions for the isentropic Euler equations in any space dimension and for the Savage-Hutter model of granular flows in one and two space dimensions. For the latter system, we also show the complete dissipation of momentum in finite time, thus rigorously justifying an assumption that has been made in the engineering and numerical literature.
A Time Varying Strong Coupling Constant as a Model of Inflationary Universe
Chamoun, N; Vucetich, H
2000-01-01
We consider a scenario in which the strong coupling constant changed in the early universe. We attribute this change to a variation in the colour charge within a Bekenstein-like model. Allowing for a large value of the vacuum gluon condensate, ∼10²² GeV⁴, we can generate inflation with the properties required to solve the fluctuation and other standard cosmology problems. A possible approach to ending the inflation is suggested.
Meiling, Yu; Lianshou, Liu
2008-01-01
The pair distribution function for delocalized quarks in the strongly coupled quark gluon plasma (sQGP), as well as in the states at intermediate stages of the crossover from hadronic matter to sQGP, is calculated using a molecule-like aggregation model. The shapes of the obtained pair distribution functions exhibit the character of a liquid. The increasing correlation length in the process of crossover indicates a diminishing viscosity of the fluid system.
Institute of Scientific and Technical Information of China (English)
YIN; Changming; ZHAO; Lincheng; WEI; Chengdong
2006-01-01
In a generalized linear model with q × 1 responses, bounded and fixed (or adaptive) p × q regressors Z_i and a general link function, under the most general assumption on the minimum eigenvalue of Σ_{i=1}^{n} Z_i Z_i', a moment condition on the responses as weak as possible, and other mild regularity conditions, we prove that the maximum quasi-likelihood estimates for the regression parameter vector are asymptotically normal and strongly consistent.
Predictions of models for environmental radiological assessment
Energy Technology Data Exchange (ETDEWEB)
Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)
2011-07-01
In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk to human beings. Although it is recognized that specific local data are important to improve the quality of dose assessment results, obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches of different complexity, which can result in different predictions: for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in the assessment of environmental radiological impact. The model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, underscoring the need to develop reference methodologies for environmental radiological assessment that allow dose estimations to be confronted on a common comparison basis. The results of the intercomparison exercise are presented briefly. (author)
Comparison between cohesive zone models and a coupled criterion for prediction of edge debonding
Vandellos, T.; Martin, E.; Leguillon, D.
2014-01-01
The onset of edge debonding within a bonded specimen subjected to bending is modeled with two numerical approaches: the coupled criterion and the cohesive zone model. Comparison of the results obtained with the two approaches shows that (i) the prediction of edge debonding strongly depends on the shape of the cohesive law and (ii) the trapezoidal cohesive law is the most relevant cohesive model for predicting edge debonding when compared with the coupled criterion.
Predicting Protein Secondary Structure with Markov Models
DEFF Research Database (Denmark)
Fischer, Paul; Larsen, Simon; Thomsen, Claus
2004-01-01
The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule, such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we consider here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained...
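The general scheme can be sketched as one first-order Markov chain per structural class, with an unknown fragment assigned to the class under which it is most likely. The training fragments and smoothing choices below are invented toy examples, not the paper's model or real protein data.

```python
import math
from collections import defaultdict

def train_markov(seqs):
    """Estimate first-order transition probabilities from sequences.

    Each observed transition starts from a pseudo-count of 1 (crude
    smoothing), so counts are occurrences + 1.
    """
    counts = defaultdict(lambda: defaultdict(lambda: 1))
    for s in seqs:
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def log_likelihood(model, s, floor=1e-3):
    # Unseen transitions get a small floor probability instead of zero
    return sum(math.log(model.get(a, {}).get(b, floor))
               for a, b in zip(s, s[1:]))

# One model per class, trained on invented amino-acid fragments
models = {
    "helix": train_markov(["AALAA", "LAALA", "ALAAL"]),
    "sheet": train_markov(["VTVTV", "TVTVT", "VTVVT"]),
}

def classify(fragment):
    """Assign the class whose Markov model gives the highest likelihood."""
    return max(models, key=lambda c: log_likelihood(models[c], fragment))
```

A real secondary-structure predictor would train on labeled protein databases, include a "coil" class, and likely use higher-order or windowed models, but the classify-by-likelihood principle is the same.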
A Modified Model Predictive Control Scheme
Institute of Scientific and Technical Information of China (English)
Xiao-Bing Hu; Wen-Hua Chen
2005-01-01
In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.
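For context on what such schemes build upon, the baseline receding-horizon MPC loop can be sketched as follows. This is a generic unconstrained illustration on a double-integrator model, not the paper's LMI-based scheme or its offline database of feasible control sequences.

```python
import numpy as np

# Double integrator discretized with dt = 0.1 s
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
N = 10  # prediction horizon

def mpc_step(x, q=1.0, r=0.1):
    """Solve the finite-horizon problem and return the first control move."""
    # Stack predictions: x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    # Minimize q*||predicted states||^2 + r*||u||^2 (unconstrained least squares)
    H = q * G.T @ G + r * np.eye(N)
    u = np.linalg.solve(H, -q * G.T @ (Phi @ x))
    return u[0]  # receding horizon: apply only the first move

# Closed-loop simulation from an offset initial state
x = np.array([1.0, 0.0])
for _ in range(300):
    x = A @ x + B.ravel() * mpc_step(x)
```

The schemes in the abstract add what this sketch lacks: handling of hard constraints, offline enlargement of the stability region, and fallback control sequences when the online optimization cannot finish in time.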
Hierarchical Model Predictive Control for Resource Distribution
DEFF Research Database (Denmark)
Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob
2010-01-01
This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous... facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.
Explicit model predictive control accuracy analysis
Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano
2015-01-01
Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with an affine control law associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...
Yang, Hong-Liu; Radons, Günter
2008-01-01
Crossover from weak to strong chaos in high-dimensional Hamiltonian systems at the strong stochasticity threshold (SST) was anticipated to indicate a global transition in the geometric structure of phase space. Our recent study of Fermi-Pasta-Ulam models showed that corresponding to this transition the energy density dependence of all Lyapunov exponents is identical apart from a scaling factor. The current investigation of the dynamic XY model discovers an alternative scenario for the energy dependence of the system dynamics at SSTs. Though similar in tendency, the Lyapunov exponents now show individually different energy dependencies except in the near-harmonic regime. Such a finding restricts the use of indices such as the largest Lyapunov exponent and the Ricci curvatures to characterize the global transition in the dynamics of high-dimensional Hamiltonian systems. These observations are consistent with our conjecture that the quasi-isotropy assumption works well only when parametric resonances are the dominant sources of dynamical instabilities. Moreover, numerical simulations demonstrate the existence of hydrodynamical Lyapunov modes (HLMs) in the dynamic XY model and show that corresponding to the crossover in the Lyapunov exponents there is also a smooth transition in the energy density dependence of significance measures of HLMs. In particular, our numerical results confirm that strong chaos is essential for the appearance of HLMs.
Critical conceptualism in environmental modeling and prediction.
Christakos, G
2003-10-15
Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.
Baruselli, Pier Paolo; Vojta, Matthias
2015-10-09
SmB_{6} was recently proposed to be both a strong topological insulator and a topological crystalline insulator. For this and related cubic topological Kondo insulators, we prove the existence of four different topological phases, distinguished by the sign of mirror Chern numbers. We characterize these phases in terms of simple observables, and we provide concrete tight-binding models for each phase. Based on theoretical and experimental results for SmB_{6} we conclude that it realizes the phase with C_{k_{z}=0}^{+}=+2, C_{k_{z}=π}^{+}=+1, C_{k_{x}=k_{y}}^{+}=-1, and we propose a corresponding minimal model.
Energy Technology Data Exchange (ETDEWEB)
Szyniszewski, Marcin [Lancaster Univ. (United Kingdom). Dept. of Physics; Manchester Univ. (United Kingdom). NoWNano DTC; Cichy, Krzysztof [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Kujawa-Cichy, Agnieszka [Frankfurt Univ., Frankfurt am Main (Germany). Inst. fuer Theortische Physik
2014-10-15
We apply exact diagonalization with a strong-coupling expansion to the massless and massive Schwinger model. New results are presented for the ground state energy and scalar mass gap in the massless model, which improve the precision to nearly 10^{-9}%. We also investigate the chiral condensate and compare our calculations to previous results available in the literature. Oscillations of the chiral condensate, which are present while increasing the expansion order, are also studied and shown to be directly linked to the presence of flux loops in the system.
Predictive Capability Maturity Model for computational modeling and simulation.
Energy Technology Data Exchange (ETDEWEB)
Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.
2007-10-01
The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.
A Predictive Maintenance Model for Railway Tracks
DEFF Research Database (Denmark)
Li, Rui; Wen, Min; Salling, Kim Bang
2015-01-01
For the modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 – 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper...... presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track for a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time...... recovery on the track quality after tamping operation and (5) Tamping machine operation factors. A Danish railway track between Odense and Fredericia with a length of 57.2 km is applied for a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50
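The abstract does not reproduce the paper's full MIP formulation, but the underlying trade-off (tamping costs money, yet the degrading track quality must stay above a limit) can be sketched as a toy optimization. The sketch below is our own illustration with invented degradation, recovery, and cost numbers, solved by brute force rather than by an MIP solver:

```python
from itertools import product

# Toy predictive tamping model (illustrative numbers, not from the paper):
# track quality starts at Q0 and degrades by DEG each period; tamping at the
# start of a period restores a fraction REC of the lost quality; the plan
# must keep quality at or above Q_MIN while minimizing total tamping cost.
T, Q0, Q_MIN = 8, 1.0, 0.4
DEG, REC, COST = 0.15, 0.8, 1.0

def evaluate(plan):
    """Return (feasible, cost) for a tamping plan (tuple of 0/1 per period)."""
    q, cost = Q0, 0.0
    for tamp in plan:
        if tamp:
            q += REC * (Q0 - q)   # partial recovery toward initial quality
            cost += COST
        q -= DEG                  # degradation during the period
        if q < Q_MIN:
            return False, cost    # quality limit violated
    return True, cost

# Enumerate all 2^T schedules and keep the cheapest feasible one.
best = min((p for p in product((0, 1), repeat=T) if evaluate(p)[0]),
           key=lambda p: evaluate(p)[1])
print(best, evaluate(best)[1])
```

With these made-up parameters, no single tamping operation keeps the track above the limit for all eight periods, so the cheapest feasible schedule uses two operations; a real instance would replace the enumeration with an MIP solver.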
Behavior of human serum albumin on strong cation exchange resins: II. Model analysis.
Voitl, Agnes; Butté, Alessandro; Morbidelli, Massimo
2010-08-20
Experiments with human serum albumin on a strong cation exchange resin exhibit a peculiar elution pattern: the protein elutes with two peaks in a modifier gradient. This behavior is captured with a general rate model, in which the two elution peaks are represented by two binding conformations, one of which is at equilibrium conditions, while for the other the adsorption process is rate limited. Isocratic experiments under nonadsorbing conditions were used to characterize the mass transfer process. The isotherms of both adsorption conformations, as well as the kinetics of adsorption and desorption for the second conformation, are functions of the modifier concentration. They are evaluated with linear modifier gradient experiments and step experiments with various adsorption times. All experimental features are well reproduced by the proposed modified general rate model.
Strong Ground Motion in the 2011 Tohoku Earthquake: A 1-Directional 3-Component Modeling
D'Avila, Maria Paola Santisi; Lenti, Luca
2013-01-01
Local wave amplification due to strong seismic motions in surficial multilayered soil is influenced by several parameters such as the wavefield polarization and the dynamic properties and impedance contrast between soil layers. The present research aims at investigating seismic motion amplification in the 2011 Tohoku earthquake through a one-directional three-component (1D-3C) wave propagation model. A 3D nonlinear constitutive relation for dry soils under cyclic loading is implemented in a quadratic line finite element model. The soil rheology is modeled by means of a multi-surface cyclic plasticity model of the Masing-Prandtl-Ishlinskii-Iwan (MPII) type. Its major advantage is that the rheology is characterized by a few commonly measured parameters. Ground motions are computed at the surface of soil profiles in the Tohoku area (Japan) by propagating 3C signals recorded at rock outcrops, during the 2011 Tohoku earthquake. Computed surface ground motions are compared to the Tohoku earthquake records at alluvial ...
Bufalo, Rodrigo; Pimentel, Bruto Max
2010-01-01
We have performed a nonperturbative quantization of the two-dimensional gauged Thirring model using the path-integral approach. First, we studied the constraint structure via Dirac's formalism for constrained systems; using the Faddeev-Senjanovic method we calculated the vacuum-to-vacuum transition amplitude, then computed the correlation functions in a nonperturbative framework, as well as the Ward-Takahashi identities of the model. Afterwards, we established at the quantum level the isomorphisms between the gauged Thirring model and the Schwinger and Thirring models by analyzing the respective Green's functions in the strong limit of the coupling constants $g$ and $e$, respectively. Special attention is necessary to perform the quantum analysis in the limit $e \rightarrow \infty$.
Modeling a nonperturbative spinor vacuum interacting with a strong gravitational wave
Dzhunushaliev, Vladimir
2015-01-01
We consider the propagation of strong gravitational waves interacting with a nonperturbative vacuum of spinor fields. To describe the latter, we suggest an approximate model. The corresponding Einstein equation has the form of the Schrödinger equation. Its gravitational-wave solution is analogous to the solution of the Schrödinger equation for an electron moving in a periodic potential. The general solution for the periodic gravitational waves is found. The analog of the Kronig-Penney model for gravitational waves is considered. It is shown that the suggested gravitational-wave model permits the existence of weak electric charge and current densities concomitant with the gravitational wave. Based on this observation, a possible experimental verification of the model is suggested.
Modeling a nonperturbative spinor vacuum interacting with a strong gravitational wave
Dzhunushaliev, Vladimir; Folomeev, Vladimir
2015-07-01
We consider the propagation of strong gravitational waves interacting with a nonperturbative vacuum of spinor fields. To describe the latter, we suggest an approximate model. The corresponding Einstein equation has the form of the Schrödinger equation. Its gravitational-wave solution is analogous to the solution of the Schrödinger equation for an electron moving in a periodic potential. The general solution for the periodic gravitational waves is found. The analog of the Kronig-Penney model for gravitational waves is considered. It is shown that the suggested gravitational-wave model permits the existence of weak electric charge and current densities concomitant with the gravitational wave. Based on this observation, a possible experimental verification of the model is suggested.
Modeling a nonperturbative spinor vacuum interacting with a strong gravitational wave
Energy Technology Data Exchange (ETDEWEB)
Dzhunushaliev, Vladimir [Al-Farabi Kazakh National University, Department of Theoretical and Nuclear Physics, Almaty (Kazakhstan); Al-Farabi Kazakh National University, Institute of Experimental and Theoretical Physics, Almaty (Kazakhstan); Folomeev, Vladimir [Institute of Physicotechnical Problems and Material Science, NAS of the Kyrgyz Republic, Bishkek (Kyrgyzstan)
2015-07-15
We consider the propagation of strong gravitational waves interacting with a nonperturbative vacuum of spinor fields. To describe the latter, we suggest an approximate model. The corresponding Einstein equation has the form of the Schroedinger equation. Its gravitational-wave solution is analogous to the solution of the Schroedinger equation for an electron moving in a periodic potential. The general solution for the periodic gravitational waves is found. The analog of the Kronig-Penney model for gravitational waves is considered. It is shown that the suggested gravitational-wave model permits the existence of weak electric charge and current densities concomitant with the gravitational wave. Based on this observation, a possible experimental verification of the model is suggested. (orig.)
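For background on the analogy invoked above: in the standard quantum-mechanical Kronig-Penney model, a particle of mass m and energy E in a periodic array of delta-function potentials (period a, dimensionless strength parameter P) has allowed bands determined by the well-known dispersion relation

```latex
\cos(ka) \;=\; \cos(qa) \;+\; P\,\frac{\sin(qa)}{qa},
\qquad q = \frac{\sqrt{2mE}}{\hbar},
```

where k is the Bloch wavenumber. This formula is quoted here only as standard textbook background; the paper's gravitational-wave analogue replaces the electron wavefunction with the wave amplitude in the Schrödinger-form Einstein equation described in the abstract.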
A predictive fitness model for influenza
Łuksza, Marta; Lässig, Michael
2014-03-01
The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From the fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
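The core prediction step described above, propagating strain frequencies forward by exponential fitness weighting, can be sketched in a few lines. This is a schematic of the generic fitness-propagation idea, not the authors' inference pipeline; the clade names and fitness values below are invented:

```python
import math

def predict_frequencies(freqs, fitness):
    """Propagate strain frequencies one season ahead:
    x'(s) is proportional to x(s) * exp(f(s)), renormalized to sum to 1.
    (Schematic form of a fitness-model prediction; inputs are illustrative.)"""
    w = {s: x * math.exp(fitness[s]) for s, x in freqs.items()}
    total = sum(w.values())
    return {s: v / total for s, v in w.items()}

# Hypothetical clade frequencies and inferred fitnesses for one season
freqs = {"cladeA": 0.6, "cladeB": 0.3, "cladeC": 0.1}
fitness = {"cladeA": 0.0, "cladeB": 0.5, "cladeC": 1.2}
nxt = predict_frequencies(freqs, fitness)
print(nxt)
```

The high-fitness minority clade gains frequency at the expense of the dominant low-fitness clade, which is the qualitative behavior the model exploits for vaccine-strain selection.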
Predictive Model of Radiative Neutrino Masses
Babu, K S
2013-01-01
We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...
A predictive model for dimensional errors in fused deposition modeling
DEFF Research Database (Denmark)
Stolfi, A.
2015-01-01
This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...
Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.
Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F
2013-04-01
In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.
DEFF Research Database (Denmark)
Jørgensen, John Bagterp; Jørgensen, Sten Bay
2007-01-01
A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model......
Data-Model Comparisons of Photoelectron Flux Intensities on the Strong Crustal Field Lines at Mars
Liemohn, Michael; Trantham, Matthew; Mitchell, David
2010-05-01
This study quantifies the factors controlling photoelectron fluxes on strong crustal field lines in the Martian ionosphere. Using data from Mars Global Surveyor's Magnetometer and Electron Reflectometer instruments, dayside electron populations near the strong crustal fields in the southern hemisphere are analyzed versus various controlling parameters. These parameters include a Mars F10.7 proxy, a solar wind pressure proxy, local solar zenith angle, magnetic elevation angle, and magnetic field strength. It was found that solar EUV radiation (corrected for solar zenith angle and the Mars-Sun distance) has the strongest influence on the photoelectron fluxes, and during some time periods this radiation has a stronger influence than at other times. Second, fluxes show a slight enhancement when the magnetic elevation angle is near zero degrees (horizontal field lines). Finally, other parameters, such as pressure and magnetic field strength, seem to have no major influence. These measurement-based results are then compared against numerically modeled flux intensities to quantify the physical mechanisms behind the observed relationships. The numerical code used for this study is our superthermal electron transport model, which solves for the electron distribution function along a magnetic field line. The code includes the influence of a variable magnetic field strength, pitch angle scattering and mirror trapping, and collisional energy cascading. The influence of solar EUV flux, atmospheric composition, solar wind dynamic pressure, and the local magnetic field are systematically investigated with this code to understand why some of these parameters have a strong influence on photoelectron flux intensity while others do not.
Interaction effects in a microscopic quantum wire model with strong spin-orbit interaction
Winkler, G. W.; Ganahl, M.; Schuricht, D.; Evertz, H. G.; Andergassen, S.
2017-06-01
We investigate the effect of strong interactions on the spectral properties of quantum wires with strong Rashba spin-orbit (SO) interaction in a magnetic field, using a combination of matrix product state and bosonization techniques. Quantum wires with strong Rashba SO interaction and magnetic field exhibit a partial gap in one-half of the conducting modes. Such systems have attracted widespread experimental and theoretical attention due to their unusual physical properties, among which are spin-dependent transport and a topological superconducting phase when under the proximity effect of an s-wave superconductor. As a microscopic model for the quantum wire we study an extended Hubbard model with SO interaction and Zeeman field. We obtain spin-resolved spectral densities from the real-time evolution of excitations, and calculate the phase diagram. We find that interactions increase the pseudogap at k = 0 and thus also enhance the Majorana-supporting phase and stabilize the helical spin order. Furthermore, we calculate the optical conductivity and compare it with the low-energy spiral Luttinger liquid result obtained from field-theoretical calculations. With interactions, the optical conductivity is dominated by an exotic excitation, a bound soliton-antisoliton pair known as a breather state. We visualize the oscillating motion of the breather state, which could provide the route to its experimental detection in, e.g., cold atom experiments.
Predictive Models for Photovoltaic Electricity Production in Hot Weather Conditions
Directory of Open Access Journals (Sweden)
Jabar H. Yousif
2017-07-01
Full Text Available Finding a correct forecast equation for photovoltaic electricity production from renewable sources is an important matter, since knowing the factors that increase the proportion of renewable energy production and reduce the cost of the product has economic and scientific benefits. This paper proposes a mathematical model for forecasting energy production in photovoltaic (PV) panels based on a self-organizing feature map (SOFM) model. The proposed model is compared with other models, including the multi-layer perceptron (MLP) and support vector machine (SVM) models. Moreover, a mathematical model based on a polynomial function for fitting the desired output is proposed. Different practical measurement methods are used to validate the findings of the proposed neural and mathematical models, such as mean square error (MSE), mean absolute error (MAE), correlation (R), and coefficient of determination (R²). The proposed SOFM model achieved a final MSE of 0.0007 in the training phase and 0.0005 in the cross-validation phase. In contrast, the SVM model resulted in a small MSE value equal to 0.0058, while the MLP model achieved a final MSE of 0.026 with a correlation coefficient of 0.9989, which indicates a strong relationship between input and output variables. The proposed SOFM model closely fits the desired results based on the R² value, which is equal to 0.9555. Finally, the comparison results of MAE for the three models show that the SOFM model achieved the best result of 0.36156, whereas the SVM and MLP models yielded 4.53761 and 3.63927, respectively. A small MAE value indicates that the output of the SOFM model closely fits the actual results and predicts the desired output.
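The four validation metrics quoted in the abstract (MSE, MAE, R, R²) are standard and easy to compute from paired observations and predictions. The sketch below uses invented PV output numbers purely for illustration:

```python
def metrics(y_true, y_pred):
    """Return (MSE, MAE, R, R^2) for paired observation/prediction lists."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    vt = sum((t - mt) ** 2 for t in y_true)
    vp = sum((p - mp) ** 2 for p in y_pred)
    r = cov / (vt * vp) ** 0.5   # Pearson correlation coefficient
    r2 = 1 - n * mse / vt        # coefficient of determination
    return mse, mae, r, r2

# Hypothetical hourly PV output (kW) vs. model predictions
obs = [0.0, 1.2, 3.4, 4.8, 4.1, 2.0]
pred = [0.1, 1.0, 3.6, 4.5, 4.2, 1.8]
print(metrics(obs, pred))
```

Note that R² compares squared prediction error against the variance of the observations, so a small MSE on a high-variance signal still yields R² close to 1, which is why the abstract reports both.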
Inference of gene regulatory networks with the strong-inhibition Boolean model
Energy Technology Data Exchange (ETDEWEB)
Xia Qinzhi; Liu Lulu; Ye Weiming; Hu Gang, E-mail: ganghu@bnu.edu.cn [Department of Physics, Beijing Normal University, Beijing 100875 (China)
2011-08-15
The inference of gene regulatory networks (GRNs) is an important topic in biology. In this paper, a logic-based algorithm that infers strong-inhibition Boolean genetic regulatory networks (where any single active repressor suffices to suppress expression of the regulated gene) from time series is discussed. By properly ordering the various computation steps, we derive for the first time explicit formulae for the probabilities at which different interactions can be inferred from a given amount of data. With these formulae, we can predict the precision of network reconstructions when the data are insufficient. Numerical simulations agree well with the analytical results. The method and results are expected to be applicable to a wide range of general dynamic networks, where logic algorithms play essential roles in the network dynamics and the probabilities of various logics can be estimated well.
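The strong-inhibition logic itself is simple to state: a gene is on in the next time step only if none of its repressors is active and at least one of its activators is. A minimal synchronous-update sketch, with a made-up three-gene network for illustration (not a network from the paper):

```python
def strong_inhibition_update(state, activators, repressors):
    """One synchronous step of a strong-inhibition Boolean network:
    a gene is ON next step iff no repressor is ON and at least one
    activator is ON. `activators`/`repressors` map gene -> list of genes."""
    return {
        g: int(not any(state[r] for r in repressors[g])
               and any(state[a] for a in activators[g]))
        for g in state
    }

# Hypothetical 3-gene network: A self-activates and activates B,
# B activates C, and C represses B (a small negative feedback loop).
activators = {"A": ["A"], "B": ["A"], "C": ["B"]}
repressors = {"A": [], "B": ["C"], "C": []}
s = {"A": 1, "B": 0, "C": 0}
for _ in range(3):
    s = strong_inhibition_update(s, activators, repressors)
    print(s)
```

Time series generated by updates like this are the raw material of the inference problem: the algorithm described above works backwards from observed state transitions to the activator/repressor lists.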
Field-theoretic methods in strongly-coupled models of general gauge mediation
Fortin, Jean-François; Stergiou, Andreas
2013-08-01
An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current-current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry breaking arises both from a hidden sector and dynamically. Our results suggest that strongly-coupled models of supersymmetry breaking are naturally split.
Strong decays of N*(1535) in an extended chiral quark model
Institute of Scientific and Technical Information of China (English)
[No author listed]
2009-01-01
The strong decays of the N*(1535) resonance are investigated in an extended chiral quark model by including the low-lying qqqqq components in addition to the qqq component. The results show that these five-quark components in N*(1535) contribute significantly to the N*(1535) → Nπ and N*(1535) → Nη decays. The contributions to the Nη decay come from both the lowest energy and the next-to-lowest energy five-quark components, while the contributions to the Nπ decay come only from the latter. Taking these contributions into account, the description of the strong decays of N*(1535) is improved, especially for the puzzlingly large ratio of the decays to Nη and Nπ.
Field-theoretic methods in strongly-coupled models of general gauge mediation
Energy Technology Data Exchange (ETDEWEB)
Fortin, Jean-François, E-mail: jean-francois.fortin@cern.ch [Theory Division, Department of Physics, CERN, CH-1211 Geneva 23 (Switzerland); Stanford Institute for Theoretical Physics, Department of Physics, Stanford University, Stanford, CA 94305 (United States); Stergiou, Andreas, E-mail: stergiou@physics.ucsd.edu [Department of Physics, University of California, San Diego, La Jolla, CA 92093 (United States)
2013-08-01
An often-exploited feature of the operator product expansion (OPE) is that it incorporates a splitting of ultraviolet and infrared physics. In this paper we use this feature of the OPE to perform simple, approximate computations of soft masses in gauge-mediated supersymmetry breaking. The approximation amounts to truncating the OPEs for hidden-sector current–current operator products. Our method yields visible-sector superpartner spectra in terms of vacuum expectation values of a few hidden-sector IR elementary fields. We manage to obtain reasonable approximations to soft masses, even when the hidden sector is strongly coupled. We demonstrate our techniques in several examples, including a new framework where supersymmetry breaking arises both from a hidden sector and dynamically. Our results suggest that strongly-coupled models of supersymmetry breaking are naturally split.
Global strong solution to the three-dimensional liquid crystal flows of Q-tensor model
Xiao, Yao
2017-02-01
A complex hydrodynamic system that models the fluid of nematic liquid crystals in a bounded domain in R3 is studied. The system is a forced incompressible Navier-Stokes equation coupled with a parabolic-type equation for Q-tensors. We invoke the maximal regularity of the Stokes operators and parabolic operators in Besov spaces to obtain a local strong solution if the initial Q-tensor is not too "wild". In addition, it is shown that such a solution can be extended to a global one if the initial data is a sufficiently small perturbation around the trivial equilibrium state. Finally, it is proved that the global strong solution obtained here is identical to the weak solutions obtained in Paicu and Zarnescu [26].
Propagation of a Laguerre-Gaussian correlated Schell-model beam in strongly nonlocal nonlinear media
Qiu, Yunli; Chen, Zhaoxi; He, Yingji
2017-04-01
Analytical expressions for the cross-spectral density function and the second-order moments of the Wigner distribution function of a Laguerre-Gaussian correlated Schell-model (LGCSM) beam propagating in strongly nonlocal nonlinear media are derived. The propagation properties, such as beam irradiance, beam width, the spectral degree of coherence and the propagation factor of an LGCSM beam inside the media, are investigated in detail. The effect of the beam parameters and the input power on the evolution properties of an LGCSM beam is illustrated numerically. It is found that the beam width varies periodically or remains invariant for a certain proper input power. Both the beam irradiance and the spectral degree of coherence of the LGCSM beam change periodically with the propagation distance for arbitrary input power, which however has no influence on the propagation factor. The coherence length and the mode order mainly affect the evolution speed of the LGCSM beam in strongly nonlocal nonlinear media.
Acebron, Ana; Jullo, Eric; Limousin, Marceau; Tilquin, André; Giocoli, Carlo; Jauzac, Mathilde; Mahler, Guillaume; Richard, Johan
2017-09-01
Strong gravitational lensing by galaxy clusters is a fundamental tool to study dark matter and constrain the geometry of the Universe. Recently, the Hubble Space Telescope Frontier Fields programme has allowed a significant improvement of mass and magnification measurements, but lensing models still have a residual root mean square between 0.2 arcsec and a few arcseconds, not yet completely understood. Systematic errors have to be better understood and treated in order to use strong lensing clusters as reliable cosmological probes. We have analysed two simulated Hubble-Frontier-Fields-like clusters from the Hubble Frontier Fields Comparison Challenge, Ares and Hera. We use several estimators (relative bias on magnification, density profiles, ellipticity and orientation) to quantify the goodness of our reconstructions by comparing our multiple models, optimized with the parametric software lenstool, with the input models. We have quantified the impact of systematic errors arising, first, from the choice of different density profiles and configurations and, secondly, from the availability of constraints (spectroscopic or photometric redshifts, redshift ranges of the background sources) in the parametric modelling of strong lensing galaxy clusters and therefore on the retrieval of cosmological parameters. We find that substructures in the outskirts have a significant impact on the position of the multiple images, yielding tighter cosmological contours. The need for wide-field imaging around massive clusters is thus reinforced. We show that competitive cosmological constraints can be obtained also with complex multimodal clusters and that photometric redshifts improve the constraints on cosmological parameters when considering a narrow range of (spectroscopic) redshifts for the sources.
Hydrodynamic Lyapunov modes and strong stochasticity threshold in Fermi-Pasta-Ulam models.
Yang, Hong-Liu; Radons, Günter
2006-06-01
The existence of a strong stochasticity threshold (SST) has been detected in many Hamiltonian lattice systems, including the Fermi-Pasta-Ulam (FPU) model, which is characterized by a crossover of the system dynamics from weak to strong chaos with increasing energy density epsilon. Correspondingly, the relaxation time to energy equipartition and the largest Lyapunov exponent exhibit different scaling behavior in the regimes below and beyond the threshold value. In this paper, we go one step further in this direction and explore changes in the energy density dependence of other Lyapunov exponents and of hydrodynamic Lyapunov modes (HLMs). In particular, we find that for the FPU-beta and FPU-alpha(beta) models the scalings of the energy density dependence of all Lyapunov exponents experience a similar change at the SST as that of the largest Lyapunov exponent. In addition, the threshold values of the crossover of all Lyapunov exponents are nearly identical. These facts lend support to the point of view that the crossover in the system dynamics at the SST manifests a global change in the geometric structure of phase space. They also partially answer the question of why the simple assumption that the ambient manifold representing the system dynamics is quasi-isotropic works quite well in the analytical calculation of the largest Lyapunov exponent. Furthermore, the FPU-beta model is used as an example to show that HLMs exist in Hamiltonian lattice models with continuous symmetries. Some measures are defined to indicate the significance of HLMs. Numerical simulations demonstrate that there is a smooth transition in the energy density dependence of these variables corresponding to the crossover in Lyapunov exponents at the SST. In particular, our numerical results indicate that strong chaos is essential for the appearance of HLMs and those modes become more significant with increasing degree of chaoticity.
Two criteria for evaluating risk prediction models.
Pfeiffer, R M; Gail, M H
2011-09-01
We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
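Both criteria have simple empirical estimators on a validation cohort. The sketch below (illustrative only: the function names and toy data are invented, and it ignores the censoring and influence-function machinery of the paper) ranks individuals by predicted risk and reads PCF(q) and PNF(p) off the ranking:

```python
import numpy as np

def pcf(risk, case, q):
    """Proportion of cases followed: fraction of eventual cases found
    among the fraction q of the population at highest predicted risk."""
    order = np.argsort(risk)[::-1]          # highest risk first
    n_top = int(np.ceil(q * len(risk)))
    return case[order][:n_top].sum() / case.sum()

def pnf(risk, case, p):
    """Proportion needed to follow: smallest fraction of the population
    (taken from the top of the risk ranking) capturing a fraction p of cases."""
    order = np.argsort(risk)[::-1]
    cum_cases = np.cumsum(case[order]) / case.sum()
    n_needed = np.searchsorted(cum_cases, p) + 1
    return n_needed / len(risk)

# toy cohort: predicted risks and observed outcomes (1 = became a case)
risk = np.array([0.9, 0.8, 0.3, 0.2, 0.1, 0.05])
case = np.array([1,   1,   1,   0,   0,   0])
print(pcf(risk, case, 0.5))  # → 1.0 (top half of risk captures all cases)
print(pnf(risk, case, 1.0))  # → 0.5 (half the cohort must be followed)
```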
Methods for Handling Missing Variables in Risk Prediction Models
Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.
2016-01-01
Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient
Ginzburg-Landau expansion in strongly disordered attractive Anderson-Hubbard model
Kuchinskii, E. Z.; Kuleeva, N. A.; Sadovskii, M. V.
2017-07-01
We have studied disordering effects on the coefficients of the Ginzburg-Landau expansion in powers of the superconducting order parameter in the attractive Anderson-Hubbard model within the generalized DMFT+Σ approximation. We consider a wide region of attractive potentials U, from the weak coupling region, where superconductivity is described by the BCS model, to the strong coupling region, where the superconducting transition is related to Bose-Einstein condensation (BEC) of compact Cooper pairs formed at temperatures much higher than the superconducting transition temperature, and a wide range of disorder, from weak to strong, where the system is in the vicinity of the Anderson transition. In the case of a semielliptic bare density of states, the influence of disorder upon the coefficients A and B of the square and fourth power of the order parameter is universal for any value of the electron correlation and is related only to the general disorder-induced widening of the bare band (generalized Anderson theorem). Such universality is absent for the gradient term expansion coefficient C. In the usual theory of "dirty" superconductors, the coefficient C drops with the growth of disorder. At strong disorder in the BCS limit, the coefficient C is very sensitive to the effects of Anderson localization, which lead to its further drop with disorder growth up to the region of the Anderson insulator. In the region of the BCS-BEC crossover and in the BEC limit, the coefficient C and all related physical properties depend only weakly on disorder. In particular, this leads to a relatively weak disorder dependence of both the penetration depth and the coherence length, as well as of the related slope of the upper critical magnetic field at the superconducting transition, in the region of very strong coupling.
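For orientation, the coefficients A, B and C discussed here are those of the standard Ginzburg-Landau free-energy expansion in the order parameter Δ (standard notation, assumed rather than quoted from the paper):

```latex
F[\Delta] = F_n + A\,|\Delta|^2 + \frac{B}{2}\,|\Delta|^4 + C\,|\nabla\Delta|^2 .
```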
Directory of Open Access Journals (Sweden)
M. R. Saradjian
2011-04-01
Usually a precursor alone is not useful as an accurate, precise, and stand-alone criterion for predicting earthquake parameters. It is therefore more appropriate to exploit parameters extracted from a variety of individual precursors, so that their simultaneous integration reduces the parameters' uncertainty.
In our previous studies, five strong earthquakes that occurred in the Samoa Islands, Sichuan (China), L'Aquila (Italy), Borujerd (Iran) and Zarand (Iran) were analyzed to locate unusual variations in the time series of different earthquake precursors. In this study, we attempt to estimate earthquake parameters using the anomalies detected in those case studies.
Using remote sensing observations, this study examines variations of electron and ion density, electron temperature, total electron content (TEC), electric and magnetic fields and land surface temperature (LST) several days before the studied earthquakes. Regarding the ionospheric precursors, the geomagnetic indices D_{st} and K_{p} were used to distinguish pre-earthquake disturbed states from other anomalies related to geomagnetic activity.
The inter-quartile range of the data was used to construct upper and lower bounds, so that disturbed states outside the bounds could be flagged as potentially associated with impending earthquakes.
When a disturbed state associated with an impending earthquake is detected, the number of days before the earthquake is estimated based on the type of precursor. Then, from the deviation of the precursor from its undisturbed state, the magnitude of the impending earthquake is estimated. The radius of the affected area is calculated from the estimated magnitude using the Dobrovolsky formula.
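The detection and radius-estimation steps described above can be sketched as follows (a minimal illustration: the 1.5 whisker multiplier and the toy TEC series are assumptions, while R = 10^(0.43 M) km is the usual form of the Dobrovolsky strain-radius formula):

```python
import numpy as np

def iqr_bounds(x, k=1.5):
    # Interquartile-range bounds; k = 1.5 is the conventional whisker
    # multiplier (the abstract does not state the multiplier used).
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def detect_anomalies(series, k=1.5):
    # Indices of samples outside the IQR bounds (candidate disturbed states)
    lo, hi = iqr_bounds(series, k)
    return [i for i, v in enumerate(series) if v < lo or v > hi]

def dobrovolsky_radius_km(magnitude):
    # Dobrovolsky et al. (1979) preparation-zone radius: R = 10**(0.43 M) km
    return 10 ** (0.43 * magnitude)

# toy daily TEC series with one disturbed sample
tec = np.array([10.1, 10.4, 9.8, 10.0, 10.2, 17.5, 10.3, 9.9])
print(detect_anomalies(tec))               # → [5]
print(round(dobrovolsky_radius_km(6.0)))   # → 380 (km)
```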
In order to assess the final earthquake parameters (i.e., date, magnitude and radius of the affected area) for each case study, the earthquake
More, Anupreeta; Oguri, Masamune; More, Surhud; Lee, Chien-Hsiu
2016-01-01
We present predictions for time delays between multiple images of the gravitationally lensed supernova, iPTF16geu, which was recently discovered from the intermediate Palomar Transient Factory (iPTF). As the supernova is of Type Ia where the intrinsic luminosity is usually well-known, accurately measured time delays of the multiple images could provide tight constraints on the Hubble constant. According to our lens mass models constrained by the Hubble Space Telescope F814W image, we expect the maximum relative time delay to be less than a day, which is consistent with the maximum of 100 hours reported by Goobar et al. but places a stringent upper limit. Furthermore, the fluxes of most of the supernova images depart from expected values suggesting that they are affected by microlensing. The microlensing timescales are small enough that they may pose significant problems to measure the time delays reliably. Our lensing rate calculation indicates that the occurrence of a lensed SN in iPTF is likely. Howev...
More, Anupreeta; Suyu, Sherry H.; Oguri, Masamune; More, Surhud; Lee, Chien-Hsiu
2017-02-01
We present predictions for time delays between multiple images of the gravitationally lensed supernova, iPTF16geu, which was recently discovered from the intermediate Palomar Transient Factory (iPTF). As the supernova is of Type Ia where the intrinsic luminosity is usually well known, accurately measured time delays of the multiple images could provide tight constraints on the Hubble constant. According to our lens mass models constrained by the Hubble Space Telescope F814W image, we expect the maximum relative time delay to be less than a day, which is consistent with the maximum of 100 hr reported by Goobar et al. but places a stringent upper limit. Furthermore, the fluxes of most of the supernova images depart from expected values suggesting that they are affected by microlensing. The microlensing timescales are small enough that they may pose significant problems to measure the time delays reliably. Our lensing rate calculation indicates that the occurrence of a lensed SN in iPTF is likely. However, the observed total magnification of iPTF16geu is larger than expected, given its redshift. This may be a further indication of ongoing microlensing in this system.
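The connection between measured delays and the Hubble constant runs through the standard arrival-time surface of lens theory (textbook form, not taken from the paper); the delay between two images is the difference of this quantity at the image positions, and the distance ratio scales as 1/H_0:

```latex
t(\boldsymbol{\theta}) = \frac{1+z_{\rm l}}{c}\,\frac{D_{\rm l} D_{\rm s}}{D_{\rm ls}}
\left[ \frac{(\boldsymbol{\theta}-\boldsymbol{\beta})^2}{2} - \psi(\boldsymbol{\theta}) \right].
```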
Rukes, Lothar; Oberleithner, Kilian
2016-01-01
Linear stability analysis has proven to be a useful tool in the analysis of dominant coherent structures, such as the von Kármán vortex street and the global spiral mode associated with the vortex breakdown of swirling jets. In recent years, linear stability analysis has been applied successfully to turbulent time-mean flows, instead of laminar base flows, which requires turbulence models that account for the interaction of the turbulent field with the coherent structures. To retain the stability equations of laminar flows, the Boussinesq approximation with a spatially nonuniform but isotropic eddy viscosity is typically employed. In this work we assess the applicability of this concept to turbulent strongly swirling jets, a class of flows that is particularly unsuited for isotropic eddy viscosity models. Indeed we find that unsteady RANS simulations only match with experiments with a Reynolds stress model that accounts for an anisotropic eddy viscosity. However, linear stability anal...
A Simple Model of Fields Including the Strong or Nuclear Force and a Cosmological Speculation
Directory of Open Access Journals (Sweden)
David L. Spencer
2016-10-01
Reexamining the assumptions underlying the General Theory of Relativity, calling an object's gravitational field its inertia and acceleration simply resistance to that inertia, yields a simple field model in which the potential (kinetic) energy of a particle at rest is its capacity to move itself when its inertial field becomes imbalanced. The model then attributes the electromagnetic and strong forces to the effects of changes in basic particle shape. Following up on the model's assumption that the relative intensity of a particle's gravitational field is always inversely related to its perceived volume, and assuming that all black holes spin, creates the possibility of a cosmic rebound in which a final spinning black hole ends with a new Big Bang.
Modeling of random wave transformation with strong wave-induced coastal currents
Institute of Scientific and Technical Information of China (English)
Zheng Jinhai; H. Mase; Li Tongfei
2008-01-01
The propagation and transformation of multi-directional and uni-directional random waves over a coast with complicated bathymetric and geometric features are studied experimentally and numerically. Laboratory investigation indicates that wave energy convergence and divergence cause strong coastal currents to develop, which in turn modify the wave fields. A coastal spectral wave model, based on the wave action balance equation with diffraction effect (WABED), is used to simulate the transformation of random waves over the complicated bathymetry. The diffraction effect in the wave model is derived from a parabolic approximation of wave theory, and the mean energy dissipation rate per unit horizontal area due to wave breaking is parameterized by the bore-based formulation with a breaker index of 0.73. The numerically simulated wave field without coastal currents differs from that of the experiments, whereas model results that include currents clearly reproduce the intensification of wave height in front of concave shorelines.
On strongly degenerate convection-diffusion Problems Modeling sedimentation-consolidation Processes
Energy Technology Data Exchange (ETDEWEB)
Buerger, R.; Evje, S.; Karlsen, S. Hvistendahl
1999-10-01
This report investigates initial-boundary value problems for a quasilinear strongly degenerate convection-diffusion equation with a discontinuous diffusion coefficient. These problems come from the mathematical modelling of certain sedimentation-consolidation processes. Existence of entropy solutions belonging to BV is shown by the vanishing viscosity method. The existence proof for one of the models includes a new regularity result for the integrated diffusion coefficient. New uniqueness proofs for entropy solutions are also presented. These proofs rely on a recent extension to second order equations of Kruzkov's method of 'doubling of the variables'. The application to a sedimentation-consolidation model is illustrated by two numerical examples. 25 refs., 2 figs.
Simulating the All-Order Strong Coupling Expansion III: O(N) sigma/loop models
Wolff, Ulli
2009-01-01
We reformulate the O(N) sigma model as a loop model whose configurations are the all-order strong coupling graphs of the original model. The loop configurations are represented by a pointer list in the computer and a Monte Carlo update scheme is proposed. Sample simulations are reported and the method turns out to be similarly efficient as the reflection cluster method, but it has greater potential for systematic generalization to other lattice field theories. A variant action suggested by the method is also simulated and leads to a rather extreme demonstration of the concept of universality of the scaling or continuum limit. I would like to dedicate this paper to Martin Lüscher on the occasion of his sixtieth birthday. I thank him for his superb contributions to quantum field theory and for the privilege to collaborate with him.
Dark Matter and Strong Electroweak Phase Transition in a Radiative Neutrino Mass Model
Ahriche, Amine
2013-01-01
We consider an extension of the standard model (SM) with charged singlet scalars and right-handed (RH) neutrinos, all at the electroweak scale. In this model, the neutrino masses are generated at three loops, which provides an explanation for their smallness, and the lightest RH neutrino, $N_{1}$, is a dark matter candidate. We find that for three generations of RH neutrinos, the model can be consistent with the neutrino oscillation data and lepton flavor violating processes, $N_{1}$ can have a relic density in agreement with the recent Planck data, and the electroweak phase transition can be strongly first order. We also show that the charged scalars may enhance the branching ratio $h\rightarrow\gamma\gamma$, whereas $h\rightarrow\gamma Z$ can get a few percent suppression. We also discuss the phenomenological implications of the RH neutrinos at colliders.
Estimating the magnitude of prediction uncertainties for the APLE model
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...
Prediction of Catastrophes: an experimental model
Peters, Randall D; Pomeau, Yves
2012-01-01
Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events preceded and followed by long periods of "ripening". Major earthquakes surely belong to the class of catastrophic events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it may be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in two dynamical models. The first is an "abstract" model in which a time-dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...
Deconfinement in the presence of a strong magnetic background: an exercise within the MIT bag model
Fraga, Eduardo S
2012-01-01
We study the effect of a very strong homogeneous magnetic field B on the thermal deconfinement transition within the simplest phenomenological approach: the MIT bag pressure for the quark-gluon plasma and a gas of pions for the hadronic sector. Even though the model is known to be crude in numerical precision and misses the correct nature of the (crossover) transition, it provides a simple setup for the discussion of some subtleties of vacuum and thermal contributions in each phase, and should provide a reasonable qualitative description of the critical temperature in the presence of B. We find that the critical temperature decreases.
Deconfinement in the presence of a strong magnetic background: An exercise within the MIT bag model
Fraga, Eduardo S.; Palhares, Letícia F.
2012-07-01
We study the effect of a very strong homogeneous magnetic field B on the thermal deconfinement transition within the simplest phenomenological approach: the MIT bag pressure for the quark-gluon plasma and a gas of pions for the hadronic sector. Even though the model is known to be crude in numerical precision and misses the correct nature of the (crossover) transition, it provides a simple setup for the discussion of some subtleties of vacuum and thermal contributions in each phase, and should provide a reasonable qualitative description of the critical temperature in the presence of B. We find that the critical temperature decreases, saturating for very large fields.
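At zero magnetic field, the bag-model critical temperature follows from equating the quark-gluon-plasma and pion-gas pressures; the sketch below reproduces that textbook estimate (Nf = 2 massless flavors and B^{1/4} = 200 MeV are assumptions for illustration; the magnetic-field dependence studied in the paper is not included):

```python
import math

# Effective degrees of freedom (Nf = 2 massless quark flavors):
g_qgp = 16 + (7 / 8) * 12 * 2    # gluons + quarks/antiquarks = 37
g_pi = 3                         # three pion species

def t_c(bag_mev4):
    """Critical temperature (MeV) from the pressure balance
    (pi^2/90) g_qgp T^4 - B = (pi^2/90) g_pi T^4."""
    return (90 * bag_mev4 / ((g_qgp - g_pi) * math.pi ** 2)) ** 0.25

B = 200.0 ** 4                   # bag constant with B^{1/4} = 200 MeV
print(round(t_c(B)))             # → 144 (MeV), the usual ballpark estimate
```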
Bekenstein model and the time variation of the strong coupling constant
Chamoun, N; Vucetich, H
2001-01-01
We propose to generalize the Bekenstein model for the time variation of the fine structure "constant" $\alpha_{em}$ to the QCD strong coupling constant $\alpha_S$. We find that, except for a "fine-tuned" choice of the free parameters, the extension cannot be performed trivially without being in conflict with experimental constraints, and this rules out $\alpha_S$ variability. This is due largely to the huge numerical value of the QCD vacuum gluon condensate compared to the mass density of the universe.
A quark model study of strong decays of $X(3915)$
González, P
2016-01-01
Strong decays of $X(3915)$ are analyzed from two quark model descriptions of $X(3915)$: a conventional one in terms of the Cornell potential and an unconventional one from a Generalized Screened potential. We conclude that the experimental suppression of the OZI-allowed decay $X(3915)\rightarrow D\overline{D}$ might be explained in both cases by the momentum dependence of the decay amplitude. However, the experimental significance of the OZI-forbidden decay $X(3915)\rightarrow\omega J/\psi$ could favor an unconventional description.
Effective Lagrangian description of the possible strong sector of the standard model
Energy Technology Data Exchange (ETDEWEB)
Casalbuoni, R. (European Organization for Nuclear Research, Geneva (Switzerland); Istituto Nazionale di Fisica Nucleare, Florence (Italy)); Dominici, D. (Istituto Nazionale di Fisica Nucleare, Florence (Italy); Florence Univ. (Italy). Ist. di Fisica); Gatto, R. (Geneva Univ. (Switzerland). Dept. de Physique Theorique)
1984-11-15
We discuss the effective Lagrangian of the scalar and longitudinal sector of the standard SU(2)×U(1) model and derive the corresponding Feynman rules. Such a sector becomes strong when the Higgs mass parameter m_H is large. Scalar propagation, in this case, is conveniently described by a degenerate 2×2 matrix. We apply the Feynman rules to calculate scattering amplitudes among longitudinally polarized W and Z bosons, which now satisfy partial-wave unitarity also at large m_H. We also calculate production amplitudes among such states and find that they are no longer suppressed when m_H is large.
Engineering the Dynamics of Effective Spin-Chain Models for Strongly Interacting Atomic Gases
DEFF Research Database (Denmark)
Volosniev, A. G.; Petrosyan, D.; Valiente, M.
2015-01-01
We consider a one-dimensional gas of cold atoms with strong contact interactions and construct an effective spin-chain Hamiltonian for a two-component system. The resulting Heisenberg spin model can be engineered by manipulating the shape of the external confining potential of the atomic gas. We find that bosonic atoms offer more flexibility for tuning independently the parameters of the spin Hamiltonian through interatomic (intra-species) interaction, which is absent for fermions due to the Pauli exclusion principle. Our formalism can have important implications for control and manipulation...
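A dense-matrix sketch of such an effective Heisenberg chain is below (illustrative only: the site-dependent couplings J_i stand in for the trap-shape engineering, and the construction is a generic exact-diagonalization exercise, not the formalism of the paper):

```python
import numpy as np

# Spin-1/2 operators (Pauli matrices / 2)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def heisenberg_chain(J):
    """Dense Hamiltonian H = sum_i J_i S_i . S_{i+1} on len(J)+1 sites.
    Site-dependent couplings J_i mimic shaping the confining potential."""
    n = len(J) + 1
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i, Ji in enumerate(J):
        for s in (sx, sy, sz):
            # tensor product with the spin component on sites i and i+1
            op = np.eye(1)
            for site in range(n):
                op = np.kron(op, s if site in (i, i + 1) else np.eye(2))
            H += Ji * op
    return H

# Two-site antiferromagnet: singlet at -3J/4, triplet at +J/4
H2 = heisenberg_chain([1.0])
print(np.round(np.linalg.eigvalsh(H2), 4))
```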
Quark matter under strong magnetic fields in the Nambu--Jona-Lasinio Model
Peres-Menezes, D; Avancini, S S; Martinez, A Perez; Providência, C
2008-01-01
In the present work we use the large-$N_c$ approximation to investigate quark matter described by the SU(2) Nambu--Jona-Lasinio model subject to a strong magnetic field. The Landau levels are filled in such a way that the usual kinks appear in the effective mass and other related quantities. $\beta$-equilibrium is also considered, and the macroscopic properties of a magnetar described by this quark matter are obtained. Our study shows that the magnetar masses and radii are larger if the magnetic field increases, but only very large fields ($\ge 10^{18}$ G) affect the EoS in a non-negligible way.
Protogenov, A P
2001-01-01
A brief review of phenomena induced by strong correlations of nonlinear modes in planar systems is presented. The analysis is restricted to the nonlinear Schrödinger equation model. Stationary field distributions are determined. The dependence of the particle number on the parameter characterizing the degree of locking of the universal oscillation lines is obtained. It is shown that for small values of this parameter a universal attraction exists on the two-dimensional lattice, which may be the dynamical cause of the transition to the coherent state. The connection of the chiral nonlinear boundary modes with violations of the Galilean invariance of the considered system is discussed.
Three-loop Standard Model effective potential at leading order in strong and top Yukawa couplings
Energy Technology Data Exchange (ETDEWEB)
Martin, Stephen P. [Santa Barbara, KITP
2014-01-08
I find the three-loop contribution to the effective potential for the Standard Model Higgs field, in the approximation that the strong and top Yukawa couplings are large compared to all other couplings, using dimensional regularization with modified minimal subtraction. Checks follow from gauge invariance and renormalization group invariance. I also briefly comment on the special problems posed by Goldstone boson contributions to the effective potential, and on the numerical impact of the result on the relations between the Higgs vacuum expectation value, mass, and self-interaction coupling.
Predictive modeling of low solubility semiconductor alloys
Rodriguez, Garrett V.; Millunchick, Joanna M.
2016-09-01
GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.
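A kinetic Monte Carlo loop of the kind referred to here has a simple generic core; the sketch below is a rejection-free (Gillespie/BKL-style) event selector with an invented four-event table, not the authors' GaAsBi model:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free KMC step: pick an event with probability
    proportional to its rate, then advance the clock exponentially."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt

# toy event table (rates are illustrative, not fitted to GaAsBi):
# 0: Ga adsorption, 1: As incorporation, 2: Bi incorporation, 3: Bi desorption
rates = [5.0, 4.0, 0.5, 0.3]
rng = random.Random(42)
t, counts = 0.0, [0, 0, 0, 0]
for _ in range(1000):
    event, dt = kmc_step(rates, rng)
    counts[event] += 1
    t += dt
print(counts, round(t, 2))  # event tallies roughly proportional to rates
```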
Distributed model predictive control made easy
Negenborn, Rudy
2014-01-01
The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems. This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...
Leptogenesis in minimal predictive seesaw models
Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.
2015-10-01
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.
Min, Y.; Zhong, C. G.; Dong, Z. C.; Zhao, Z. Y.; Zhou, P. X.; Yao, K. L.
2016-10-01
A first-principles study of the transport properties of two thiolated pentacenes sandwiching an ethyl group is performed. The thiolated pentacene molecule shows strong n-type characteristics when in contact with Ag leads, because of the low work function of metallic Ag. A strong negative differential resistance (NDR) effect with a large peak-to-valley ratio of 758% is present at low bias. Our investigations indicate that strongly n- or p-type molecules can be used as low-bias molecular NDR devices, and that a molecular NDR effect based on molecular-level leaving, rather than on molecular-level crossing, exhibits no hysteresis.
A MAGNIFIED GLANCE INTO THE DARK SECTOR: PROBING COSMOLOGICAL MODELS WITH STRONG LENSING IN A1689
Energy Technology Data Exchange (ETDEWEB)
Magaña, Juan; Motta, V.; Cárdenas, Victor H.; Verdugo, T. [Instituto de Física y Astronomía, Facultad de Ciencias, Universidad de Valparaíso, Avda. Gran Bretaña 1111, Valparaíso (Chile); Jullo, Eric, E-mail: juan.magana@uv.cl, E-mail: veronica.motta@uv.cl, E-mail: victor.cardenas@uv.cl, E-mail: tomasverdugo@gmail.com, E-mail: eric.jullo@lam.fr [Aix Marseille Universite, CNRS, LAM (Laboratoire d’Astrophysique de Marseille) UMR 7326, F-13388 Marseille (France)
2015-11-01
In this paper we constrain four alternative models to the late cosmic acceleration in the universe: Chevallier–Polarski–Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), and cosmic microwave background (CMB). We found that the CPL constraints obtained for the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SN Ia, and CMB estimations. We also compute the figure of merit as a tool to quantify the goodness of fit of the data. Our results suggest that the SL method provides statistically significant constraints on the CPL parameters but is weak for those of the other models. Finally, we show that the use of the SL measurements in galaxy clusters is a promising and powerful technique to constrain cosmological models. The advantage of this method is that cosmological parameters are estimated by modeling the SL features for each underlying cosmology. These estimations could be further improved by SL constraints coming from other galaxy clusters.
Predicting plants -modeling traits as a function of environment
Franklin, Oskar
2016-04-01
A central problem in understanding and modeling vegetation dynamics is how to represent the variation in plant properties and function across different environments. Addressing this problem, there is a strong trend towards trait-based approaches, where vegetation properties are functions of the distributions of functional traits rather than of species. Recently there has been enormous progress in quantifying trait variability and its drivers and effects (Van Bodegom et al. 2012; Adler et al. 2014; Kunstler et al. 2015) based on wide-ranging datasets on a small number of easily measured traits, such as specific leaf area (SLA), wood density and maximum plant height. However, plant function depends on many other traits, and while the commonly measured trait data are valuable, they are not sufficient for driving predictive and mechanistic models of vegetation dynamics, especially under novel climate or management conditions. For this purpose we need a model to predict functional traits, also those not easily measured, and how they depend on the plants' environment. Here I present such a mechanistic model based on fitness concepts and focused on traits related to water and light limitation of trees, including wood density, drought response, allocation to defense, and leaf traits. The model is able to predict observed patterns of variability in these traits in relation to growth and mortality, and their responses to a gradient of water limitation. The results demonstrate that it is possible to mechanistically predict plant traits as a function of the environment based on an eco-physiological model of plant fitness. References Adler, P.B., Salguero-Gómez, R., Compagnoni, A., Hsu, J.S., Ray-Mukherjee, J., Mbeau-Ache, C. et al. (2014). Functional traits explain variation in plant life-history strategies. Proc. Natl. Acad. Sci. U. S. A., 111, 740-745. Kunstler, G., Falster, D., Coomes, D.A., Hui, F., Kooyman, R.M., Laughlin, D.C. et al. (2015). Plant functional traits
Models of the Strongly Lensed Quasar DES J0408-5354
Energy Technology Data Exchange (ETDEWEB)
Agnello, A.; et al.
2017-02-01
We present gravitational lens models of the multiply imaged quasar DES J0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with the aim of interpreting its remarkable quad-like configuration. We first model the DES single-epoch $grizY$ images as a superposition of a lens galaxy and four point-like objects, obtaining spectral energy distributions (SEDs) and relative positions for the objects. Three of the point sources (A,B,D) have SEDs compatible with the discovery quasar spectra, while the faintest point-like image (G2/C) shows significant reddening and a `grey' dimming of $\\approx0.8$mag. In order to understand the lens configuration, we fit different models to the relative positions of A,B,D. Models with just a single deflector predict a fourth image at the location of G2/C but considerably brighter and bluer. The addition of a small satellite galaxy ($R_{\\rm E}\\approx0.2$") in the lens plane near the position of G2/C suppresses the flux of the fourth image and can explain both the reddening and grey dimming. All models predict a main deflector with Einstein radius between $1.7"$ and $2.0",$ velocity dispersion $267-280$km/s and enclosed mass $\\approx 6\\times10^{11}M_{\\odot},$ even though higher resolution imaging data are needed to break residual degeneracies in model parameters. The longest time-delay (B-A) is estimated as $\\approx 85$ (resp. $\\approx125$) days by models with (resp. without) a perturber near G2/C. The configuration and predicted time-delays of J0408-5354 make it an excellent target for follow-up aimed at understanding the source quasar host galaxy and substructure in the lens, and measuring cosmological parameters. We also discuss some lessons learnt from J0408-5354 on lensed quasar finding strategies, due to its chromaticity and morphology.
Comparing model predictions for ecosystem-based management
DEFF Research Database (Denmark)
Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste
2016-01-01
Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing, to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects...
Stochastic finite-fault modelling of strong earthquakes in Narmada South Fault, Indian Shield
Indian Academy of Sciences (India)
P Sengupta
2012-06-01
The Narmada South Fault in the Indian peninsular shield region is associated with moderate-to-strong earthquakes. The prevailing hazard, evidenced by the earthquake-related fatalities in the region, imparts significance to investigations of the seismogenic environment. In the present study, the prevailing seismotectonic conditions specified by parameters associated with source, path and site conditions are appraised. Stochastic finite-fault models are formulated for each scenario earthquake. The simulated peak ground accelerations at rock sites from the possible mean maximum earthquake of magnitude 6.8 go as high as 0.24 g, while a fault rupture of magnitude 7.1 exhibits a maximum peak ground acceleration of 0.36 g. The results suggest that the present hazard specification of the Bureau of Indian Standards is inadequate. The present study is expected to facilitate the development of ground motion models for deterministic and probabilistic seismic hazard analysis of the region.
Quantifying Environmental and Line-of-Sight Effects in Models of Strong Gravitational Lens Systems
McCully, Curtis; Wong, Kenneth C; Zabludoff, Ann I
2016-01-01
Matter near a gravitational lens galaxy or projected along the line of sight (LOS) can affect strong lensing observables by more than contemporary measurement errors. We simulate lens fields with realistic three-dimensional mass configurations (self-consistently including voids), and then use lens models to quantify biases and uncertainties associated with different ways of treating the lens environment (ENV) and LOS. We identify the combination of mass, projected offset, and redshift that determines the importance of a perturbing galaxy for lensing. Foreground structures have a stronger effect on the lens potential than background structures, due to non-linear effects in the foreground and downweighting in the background. There is dramatic variation in the net strength of ENV/LOS effects across different lens fields; modeling fields individually yields stronger priors on $H_0$ than ray tracing through N-body simulations. Lens systems in groups tend to have stronger ENV/LOS contributions than non-group lenses...
Global well-posedness of strong solutions to a tropical climate model
Li, Jinkai
2015-01-01
In this paper, we consider the Cauchy problem for the tropical climate model derived by Frierson-Majda-Pauluis in [Comm. Math. Sci., Vol. 2 (2004)], which is a coupled system of the barotropic and first baroclinic modes of the velocity and the typical midtropospheric temperature. The system considered in this paper has viscosities in the momentum equations, but no diffusivity in the temperature equation. We establish here the global well-posedness of strong solutions to this model. In proving the global existence of strong solutions, to overcome the difficulty caused by the absence of diffusivity in the temperature equation, we introduce a new velocity $w$ (called the pseudo-baroclinic velocity), which has more regularity than the original baroclinic mode of the velocity. An auxiliary function $\phi$, which resembles the effective viscous flux for the compressible Navier-Stokes equations, is also introduced to obtain the $L^\infty$ bound of the temperature. Regarding the uniqueness, we use the idea of p...
Thermal conductivity of local moment models with strong spin-orbit coupling
Stamokostas, Georgios L.; Lapas, Panteleimon E.; Fiete, Gregory A.
2017-02-01
We study the magnetic and lattice contributions to the thermal conductivity of electrically insulating strongly spin-orbit coupled magnetically ordered phases on a two-dimensional honeycomb lattice using the Kitaev-Heisenberg model. Depending on model parameters, such as the relative strength of the spin-orbit induced anisotropic coupling, a number of magnetically ordered phases are possible. In this work, we study two distinct regimes of thermal transport depending on whether the characteristic energy of the phonons or the magnons dominates, and focus on two different relaxation mechanisms, boundary scattering and magnon-phonon scattering. For spatially anisotropic magnetic phases, the thermal conductivity tensor can be highly anisotropic when the magnetic energy scale dominates, since the magnetic degrees of freedom dominate the thermal transport for temperatures well below the magnetic transition temperature. In the opposite limit in which the phonon energy scale dominates, the thermal conductivity will be nearly isotropic, reflecting the isotropic (at low temperatures) phonon dispersion assumed for the honeycomb lattice. We further discuss the extent to which thermal transport properties are influenced by strong spin-orbit induced anisotropic coupling in the local moment regime of insulating magnetic phases. The developed methodology can be applied to any 2D magnon-phonon system, and more importantly to systems where an analytical Bogoliubov transformation cannot be found and magnon bands are not necessarily isotropic.
Energy Technology Data Exchange (ETDEWEB)
Chen, Yun; Geng, Chao-Qiang [Department of Physics, National Tsing Hua University, Hsinchu, 300 Taiwan (China); Cao, Shuo; Huang, Yu-Mei; Zhu, Zong-Hong, E-mail: chenyun@bao.ac.cn, E-mail: geng@phys.nthu.edu.tw, E-mail: caoshuo@bnu.edu.cn, E-mail: huangymei@gmail.com, E-mail: zhuzh@bnu.edu.cn [Department of Astronomy, Beijing Normal University, Beijing 100875 (China)
2015-02-01
We constrain the scalar field dark energy model with an inverse power-law potential, i.e., V(φ) ∝ φ^(−α) (α > 0), from a set of recent cosmological observations by compiling an updated sample of Hubble parameter measurements including 30 independent data points. Our results show that the constraining power of the updated sample of H(z) data with the HST prior on H_0 is stronger than that of the SCP Union2 and Union2.1 compilations. A recent sample of strong gravitational lensing systems is also adopted to constrain the model, even though the results are not significant. A joint analysis of the strong gravitational lensing data with the more restrictive updated Hubble parameter measurements and the Type Ia supernovae data from SCP Union2 indicates that the recent observations still cannot distinguish whether dark energy is a time-independent cosmological constant or a time-varying dynamical component.
Flow equations and the strong-coupling expansion for the Hubbard model
Stein, Jürgen
1997-07-01
Applying the method of continuous unitary transformations to a class of Hubbard models, we reexamine the derivation of the t/U expansion for the strong-coupling case. The flow equations for the coupling parameters of the higher order effective interactions can be solved exactly, resulting in a systematic expansion of the Hamiltonian in powers of t/U, valid for any lattice in arbitrary dimension and for general band filling. The expansion ensures a correct treatment of the operator products generated by the transformation, and only involves the explicit recursive calculation of numerical coefficients. This scheme provides a unifying framework to study the strong-coupling expansion for the Hubbard model, which clarifies and circumvents several difficulties inherent to earlier approaches. Our results are compared with those of other methods, and it is shown that the freedom in the choice of the unitary transformation that eliminates interactions between different Hubbard bands can affect the effective Hamiltonian only at order t^3/U^2 or higher.
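A minimal numerical check of the leading strong-coupling behavior (my illustration, not the paper's flow-equation machinery): for the two-site Hubbard model at half filling, the exact singlet ground-state energy E0 = (U − √(U² + 16t²))/2 approaches the leading t/U-expansion value −4t²/U (the superexchange scale J = 4t²/U) as U/t grows.

```python
import math

def exact_singlet_energy(t, u):
    """Exact singlet ground-state energy of the 2-site Hubbard model."""
    return (u - math.sqrt(u * u + 16.0 * t * t)) / 2.0

def strong_coupling_energy(t, u):
    """Leading order of the t/U expansion: -4 t^2 / U."""
    return -4.0 * t * t / u

t = 1.0
for u in (5.0, 20.0, 100.0):
    e_exact = exact_singlet_energy(t, u)
    e_pert = strong_coupling_energy(t, u)
    print(f"U/t={u:6.1f}  exact={e_exact:+.5f}  -4t^2/U={e_pert:+.5f}")
```

The discrepancy shrinks as U/t increases, which is the sense in which the t/U expansion becomes exact in the strong-coupling limit.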
The largest strongly connected component in the cyclical pedigree model of Wakeley et al.
Blath, Jochen; Kadow, Stephan; Ortgiese, Marcel
2014-12-01
We establish a link between Wakeley et al.'s (2012) cyclical pedigree model from population genetics and a randomized directed configuration model (DCM) considered by Cooper and Frieze (2004). We then exploit this link, in combination with asymptotic results for the in-degree distribution of the corresponding DCM, to compute the asymptotic size of the largest strongly connected component S(N) (where N is the population size) of the DCM, resp. the pedigree. The size of the giant component can be characterized explicitly (amounting to approximately 80% of the total population size) and thus contributes to a reduced 'pedigree effective population size'. In addition, the second largest strongly connected component is only of size O(log N). Moreover, we describe the size and structure of the 'domain of attraction' of S(N). In particular, we show that with high probability, for any individual, the shortest ancestral line reaches S(N) after O(log log N) generations, while almost all other ancestral lines take at most O(log N) generations.
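The giant strongly connected component is easy to observe empirically. The sketch below (my illustration, not the authors' construction) builds a small random directed graph, loosely in the spirit of a directed configuration model, and measures its largest SCC with Kosaraju's algorithm; with mean in- and out-degree around 3, the giant SCC holds most of the population.

```python
import random
from collections import defaultdict

def strongly_connected_components(n, edges):
    """Kosaraju's algorithm: list of SCCs of a digraph on nodes 0..n-1."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)
    order, seen = [], [False] * n
    for s in range(n):                      # pass 1: postorder on fwd graph
        if seen[s]:
            continue
        stack = [(s, iter(fwd[s]))]
        seen[s] = True
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node)
                stack.pop()
            elif not seen[nxt]:
                seen[nxt] = True
                stack.append((nxt, iter(fwd[nxt])))
    comps, seen = [], [False] * n
    for s in reversed(order):               # pass 2: DFS on reversed graph
        if seen[s]:
            continue
        comp, stack = [], [s]
        seen[s] = True
        while stack:
            node = stack.pop()
            comp.append(node)
            for nxt in rev[node]:
                if not seen[nxt]:
                    seen[nxt] = True
                    stack.append(nxt)
        comps.append(comp)
    return comps

random.seed(1)
n = 2000
edges = [(random.randrange(n), random.randrange(n)) for _ in range(3 * n)]
giant = max(strongly_connected_components(n, edges), key=len)
print(f"largest SCC holds {len(giant) / n:.0%} of the population")
```

The ~80% figure quoted in the abstract refers to the pedigree model's specific degree distribution; this toy graph only demonstrates the qualitative giant-SCC phenomenon.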
Pouillot, Régis; Lubran, Meryl B
2011-06-01
Predictive microbiology models are essential tools for modeling bacterial growth in quantitative microbial risk assessments. Various predictive microbiology models and sets of parameters are available, and it is of interest to understand the consequences of the choice of growth model on the risk assessment outputs. Thus, an exercise was conducted to explore the impact of using several published models to predict Listeria monocytogenes growth during food storage in a product that permits growth. Results underline a gap between the most studied factors in predictive microbiology modeling (lag, growth rate) and the most influential parameters on the estimated risk of listeriosis in this scenario (maximum population density, bacterial competition). The mathematical properties of an exponential dose-response model for Listeria account for the fact that the mean number of bacteria per serving, and as a consequence the highest achievable concentration in the product under study, has a strong influence on the estimated expected number of listeriosis cases in this context.
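A minimal sketch (my own, with invented parameter values, not any of the published models compared in the study) of the kind of primary growth model involved: log10 counts stay flat during a lag phase, grow linearly at rate μ, and saturate at the maximum population density Nmax, the parameter the abstract identifies as most influential on the estimated risk.

```python
def log10_count(t_h, n0=2.0, lag_h=24.0, mu_log10_per_h=0.02, nmax=8.0):
    """log10 CFU/g at time t_h (hours), three-phase growth model.

    n0: initial log10 count; lag_h: lag time; mu_log10_per_h: growth
    rate in log10 units per hour; nmax: maximum population density.
    All values are illustrative, not fitted parameters.
    """
    if t_h <= lag_h:
        return n0
    return min(n0 + mu_log10_per_h * (t_h - lag_h), nmax)

for t in (0, 24, 120, 600):
    print(f"t={t:4d} h  log10 N = {log10_count(t):.2f}")
```

Note how the long-storage prediction is controlled entirely by `nmax` rather than by lag or growth rate, which mirrors the study's finding.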
Counillon, Francois; Kimmritz, Madlen; Keenlyside, Noel; Wang, Yiguo; Bethke, Ingo
2017-04-01
The Norwegian Climate Prediction Model combines the Norwegian Earth System Model and the Ensemble Kalman Filter data assimilation method. The prediction skill of different versions of the system (with 30 members) is tested in the Nordic Seas and the Arctic region. Comparing hindcasts branched from an SST-only assimilation run with a free ensemble run of 30 members, we are able to dissociate the predictability rooted in the external forcing from the predictability harvested from SST-derived initial conditions. The latter adds predictability in the North Atlantic subpolar gyre and the Nordic Seas regions, and overall there is very little degradation or forecast drift. Combined assimilation of SST and T-S profiles further improves the prediction skill in the Nordic Seas and into the Arctic. These improvements lead to multi-year predictability in the high latitudes. Ongoing developments in strongly coupled assimilation (ocean and sea ice) of ice concentration in an idealized twin experiment will be shown as a way to further enhance prediction skill in the Arctic.
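The core Ensemble Kalman Filter update can be illustrated in one scalar dimension (a toy of my own, not the Norwegian Climate Prediction Model itself): a 30-member forecast ensemble is pulled toward a single "SST" observation using perturbed observations, which shrinks the ensemble spread.

```python
import random
import statistics

random.seed(7)
N_ENS, R_OBS = 30, 0.25          # ensemble size, observation error variance
truth = 1.0
forecast = [truth + random.gauss(0.0, 1.0) for _ in range(N_ENS)]
y_obs = truth + random.gauss(0.0, R_OBS ** 0.5)

p_f = statistics.variance(forecast)          # forecast error variance
gain = p_f / (p_f + R_OBS)                   # Kalman gain (identity obs operator)

# stochastic EnKF: each member assimilates a perturbed copy of the observation
analysis = [x + gain * (y_obs + random.gauss(0.0, R_OBS ** 0.5) - x)
            for x in forecast]

print(f"spread: forecast {p_f:.3f} -> analysis "
      f"{statistics.variance(analysis):.3f}")
```

In the real system the state is high-dimensional and the gain is estimated from ensemble covariances, but the update has this same structure.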
A new Cumulative Damage Model for Fatigue Life Prediction under Shot Peening Treatment
Directory of Open Access Journals (Sweden)
Abdul-Jabar H. Ali
2015-07-01
Full Text Available: In this paper, fatigue damage accumulation was studied using several methods, i.e. Corten-Dolan (CD), Corten-Dolan-Marsh (CDM), a new non-linear model, and experiments. The predictions of fatigue lifetimes based on the two classical methods, Corten-Dolan (CD) and Corten-Dolan-Marsh (CDM), are uneconomic and non-conservative, respectively. Satisfactory predictions were, however, obtained by applying the proposed non-linear model (the present model) to medium carbon steel, compared with the experimental work. Many shortcomings of the two classical methods are related to their inability to take into account the effect of surface treatments such as shot peening. The new model gives a much better and more conservative prediction of fatigue life than the CD and CDM methods: the predictions of the present model fall slightly below the experimental data, while CDM overestimates and CD strongly underestimates the life of the specimens.
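For reference, the classical Corten-Dolan cumulative damage estimate can be sketched as follows (my illustration with made-up numbers; the paper's new non-linear model is not specified in the abstract and is not reproduced here). Under a block loading spectrum, the predicted life is N = N1 / Σᵢ αᵢ (sᵢ/s1)^d, where s1 is the highest stress level, N1 the life at s1, αᵢ the cycle fraction at level sᵢ, and d a material exponent.

```python
def corten_dolan_life(n1, s1, blocks, d=6.0):
    """Corten-Dolan predicted life under block loading.

    n1: cycles to failure at the highest stress level s1;
    blocks: list of (stress_amplitude, cycle_fraction) pairs;
    d: material exponent (value here is illustrative).
    """
    denom = sum(alpha * (s / s1) ** d for s, alpha in blocks)
    return n1 / denom

# two-level spectrum: 20% of cycles at 300 MPa, 80% at 200 MPa (invented)
life = corten_dolan_life(n1=1.0e5, s1=300.0,
                         blocks=[(300.0, 0.2), (200.0, 0.8)], d=6.0)
print(f"predicted life ~ {life:.3g} cycles")
```

A surface treatment such as shot peening would enter through the parameters (e.g. effective s1, N1 or d), which is precisely the dependence the abstract says the classical methods fail to capture.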
Tjioe, M.; Choo, J.; Borja, R. I.
2013-12-01
In previous studies, it has been found that two dominant micro-mechanisms play important roles in the deformation of high-porosity rocks: grain fracturing and crystal plasticity. Grain fracturing is a phenomenon in which larger grains cleave into smaller constituents as they respond to the stress concentration exerted on them close to the open pore spaces. Specimen-scale modeling cannot capture such a mechanism, so our investigation is carried out at the next smaller scale, namely the mesoscopic scale. We model a solid-matrix microstructure using finite elements in which a displacement discontinuity is introduced in each element where the slip condition has been exceeded. Such a discontinuity is termed a strong discontinuity and is characterized by zero band thickness and strain that becomes unbounded within the band. For grains under compression, this slip condition is the cohesive-frictional law governing the behavior on the surface of discontinuity. The strong discontinuity at the grain scale is modeled via an Assumed Enhanced Strain (AES) method formulated within the context of nonlinear finite elements. Through this method, we can model grain splitting as well as the halos of cataclastic damage that are usually observed before a macropore collapses. The overall stress-strain curve and plastic slip of the mesoscopic element are then obtained, and a comparison with the crystal plasticity behavior is made to show the differences between the two mechanisms. We demonstrate that incorporating grain fracturing and crystal plasticity can shed light on the pore-scale deformation of high-porosity rocks.
Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model
Directory of Open Access Journals (Sweden)
Ephraim Suhir
2011-01-01
Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated, considering customer requirements and the existing specifications, by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and inference tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
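A toy numerical example in the spirit of the abstract (parameter values are invented; this is not the author's model): once condition monitoring detects an elevated steady-state failure rate λ, the remaining useful lifetime for a required probability-of-non-failure target R* follows from exp(−λt) = R*, i.e. t = −ln(R*)/λ.

```python
import math

def remaining_useful_life(lam_per_h, r_target):
    """Hours until reliability drops to r_target (constant failure rate)."""
    return -math.log(r_target) / lam_per_h

lam_nominal = 1.0e-6   # failures/hour before the anomaly (assumed)
lam_detected = 5.0e-6  # failures/hour after the detected deviation (assumed)
for lam in (lam_nominal, lam_detected):
    print(f"lambda={lam:.1e}/h -> RUL(R*=0.99) = "
          f"{remaining_useful_life(lam, 0.99):,.0f} h")
```

A detected five-fold increase in failure rate cuts the 99%-reliability horizon five-fold, which is the basic trade the RUL model quantifies.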
Predictive modeling for EBPC in EBDW
Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent
2009-10-01
We demonstrate a flow for e-beam proximity correction (EBPC) in e-beam direct write (EBDW) wafer manufacturing processes, presenting a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, through e-beam model fitting and proximity effect correction (PEC), to verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility of complementing this with experimental data, and with the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies with respect to short- and long-range effects.
Energy Technology Data Exchange (ETDEWEB)
Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory
2015-03-01
The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, which regulate the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. Almost all existing station blackout (SBO) accident analyses assume that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety
Yule-Nielsen based recto-verso color halftone transmittance prediction model.
Hébert, Mathieu; Hersch, Roger D
2011-02-01
The transmittance spectrum of halftone prints on paper is predicted using a model inspired by the Yule-Nielsen modified spectral Neugebauer model used for reflectance predictions. This model is well adapted for strongly scattering printing supports and applicable to recto-verso prints. Model parameters are obtained by a few transmittance measurements of calibration patches printed on one side of the paper. The model was verified with recto-verso specimens printed by inkjet with classical and custom inks, at different halftone frequencies and on various types of paper. Predictions are as accurate as those obtained with a previously developed reflectance and transmittance prediction model relying on the multiple reflections of light between the paper and the print-air interfaces. Optimal n values are smaller in transmission mode compared with the reflection model. This indicates a smaller amount of lateral light propagation in the transmission mode.
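A minimal single-ink version of the Yule-Nielsen modified spectral Neugebauer prediction in transmission mode can be sketched as follows (the spectra and the n value are invented for illustration; the paper's recto-verso model uses more primaries and a calibrated n): the predicted transmittance is T(λ) = (a_paper · T_paper(λ)^(1/n) + a_ink · T_ink(λ)^(1/n))^n.

```python
def yn_transmittance(coverage, t_paper, t_ink, n=2.0):
    """Yule-Nielsen prediction for a single-ink halftone transmittance.

    coverage: fractional ink coverage in [0, 1];
    t_paper, t_ink: per-band transmittances of bare paper and solid ink;
    n: Yule-Nielsen exponent (calibrated in practice; assumed here).
    """
    return [((1.0 - coverage) * tp ** (1.0 / n)
             + coverage * ti ** (1.0 / n)) ** n
            for tp, ti in zip(t_paper, t_ink)]

# made-up 4-band spectra: bare paper and solid-ink transmittances
t_paper = [0.42, 0.45, 0.47, 0.44]
t_solid = [0.30, 0.12, 0.05, 0.20]
t_half = yn_transmittance(0.5, t_paper, t_solid, n=1.6)
print("50% halftone:", [f"{t:.3f}" for t in t_half])
```

The abstract's observation that optimal n is smaller in transmission than in reflection corresponds to choosing a smaller exponent in this formula, i.e. behavior closer to the plain (n = 1) Neugebauer average.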
Model for predicting mountain wave field uncertainties
Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal
2017-04-01
Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous work by the co-authors has shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid-scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of
Predictive modeling of reactive wetting and metal joining.
Energy Technology Data Exchange (ETDEWEB)
van Swol, Frank B.
2013-09-01
The performance, reproducibility and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitutes an intrinsically challenging multi-physics problem involving heating and melting a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic (flow) mass transport inside the liquid, followed by subsequent cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity, and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.
Strong constraint on hadronic models of blazar activity from Fermi and IceCube stacking analysis
Neronov, A; Ptitsyna, K
2016-01-01
High-energy emission from blazars is produced by electrons which are either accelerated directly (the assumption of leptonic models of blazar activity) or produced in interactions of accelerated protons with matter and radiation fields (the assumption of hadronic models). The hadronic models predict that gamma-ray emission is accompanied by neutrino emission with comparable energy flux but with a different spectrum. We derive constraints on the hadronic models of blazar activity imposed by the non-detection of neutrino flux from a population of gamma-ray emitting blazars. We stack the gamma-ray and muon neutrino flux from 749 blazars situated in the declination strip above -5 degrees. Non-detection of neutrino flux from the stacked blazar sample rules out the proton-induced cascade models in which the high-energy emission is powered by interactions of a shock-accelerated proton beam in the AGN jet with the ambient matter or with the radiation field of the black hole accretion disk. The result remains valid also ...
Directory of Open Access Journals (Sweden)
Jing Lu
2014-11-01
We propose a weather prediction model in this article based on an artificial neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part but can learn new fuzzy rules from the previous ones according to the algorithm we propose. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate, and the prediction methods simpler, than by using a complex numerical forecasting model that occupies large computation resources, is time-consuming, and has a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.
RFI modeling and prediction approach for SATOP applications: RFI prediction models
Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang
2016-05-01
This paper describes a technical approach for the development of RFI prediction models using a carrier synchronization loop when calculating bit or carrier SNR degradation due to interference, for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and unfriendly RFI sources.
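A back-of-envelope illustration of the BER-degradation idea (mine, not the paper's carrier-loop model): treating the interference power spectral density I0 as extra white noise, the effective Eb/N0 becomes Eb/(N0 + I0), and for an ideal BPSK link BER = 0.5·erfc(√(Eb/N0_eff)).

```python
import math

def bpsk_ber(ebn0_db, i0_over_n0=0.0):
    """BPSK BER with interference modeled as additional white noise.

    ebn0_db: nominal Eb/N0 in dB; i0_over_n0: interference-to-noise
    density ratio I0/N0 (0 means no RFI). Simplified model, assumed
    here for illustration only.
    """
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    ebn0_eff = ebn0 / (1.0 + i0_over_n0)
    return 0.5 * math.erfc(math.sqrt(ebn0_eff))

for i0 in (0.0, 0.5, 2.0):     # interference-to-noise density ratios
    print(f"I0/N0={i0:3.1f}  BER@9.6dB = {bpsk_ber(9.6, i0):.2e}")
```

Inverting this relation for a required BER gives the kind of "optimum transmitted SNR" estimate the abstract mentions: the transmitter raises Eb until Eb/(N0 + I0) meets the target.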
Marzola, Luca
2014-01-01
Leptogenesis is an attractive scenario in which neutrino masses and the baryon asymmetry of the Universe are explained together under a minimal set of assumptions. After formulating the problem of initial conditions and introducing the strong thermal leptogenesis conditions as its solution, we show that, within the framework provided by the SO(10)-inspired model of leptogenesis, the latter lead to a set of testable predictions on the same neutrino parameters currently under experimental investigation. The emerging scenario selects the normal ordering of the neutrino mass pattern, a large value for the reactor mixing angle, $2^\circ \lesssim \theta_{13} \lesssim 20^\circ$, as well as a non-maximal atmospheric mixing angle, $16^\circ \lesssim \theta_{23} \lesssim 41^\circ$, and favours negative values for the Dirac phase $\delta$. The signature of the proposed strong thermal SO(10)-inspired solutions is the relation obtained between the effective Majorana mass and the lightest neutrino mass: $m_{ee} \approx 0.8\, m_1 \approx 15$ meV.
Earthquake source model using strong motion displacement as response of finite elastic media
Indian Academy of Sciences (India)
R N Iyengar; Shailesh K R Agrawal
2001-03-01
The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the source location and forces generated during an earthquake as an inverse problem in structural dynamics. Based on this analogy, a simple model for the basic earthquake source is proposed. The unknown source is assumed to be a sequence of impulses acting at locations yet to be found. These unknown impulses and their locations are found using the normal mode expansion along with a minimization of the mean square error. The medium is assumed to be finite, elastic, homogeneous, layered and horizontal, with a specific set of boundary conditions. Detailed results are obtained for the Uttarkashi earthquake. The impulse locations exhibit a linear structure closely associated with the causative fault. The results obtained are shown to be in good agreement with reported values. The proposed engineering model is then used to simulate the acceleration time histories at a few recording stations. The earthquake source, in terms of a sequence of impulses acting at different locations, is applied to a 2D finite elastic medium, and acceleration time histories are found using finite element methods. The synthetic accelerations obtained are in close match with the recorded accelerations.
Strong decays of excited 1D charmed(-strange) mesons in the covariant oscillator quark model
Maeda, Tomohito; Yoshida, Kento; Yamada, Kenji; Ishida, Shin; Oda, Masuho
2016-05-01
Recently observed charmed mesons, $D_1^*(2760)$ and $D_3^*(2760)$, and charmed-strange mesons, $D_{s1}^*(2860)$ and $D_{s3}^*(2860)$, reported by the BaBar and LHCb collaborations, are considered to be plausible candidates for $c\bar{q}$ $1^3D_J$ (q = u, d, s) states. We calculate the strong decays with one-pion (kaon) emission of these states, including the well-established 1S and 1P charmed(-strange) mesons, within the framework of the covariant oscillator quark model. The results obtained are compared with the experimental data and with typical nonrelativistic quark-model calculations. Concerning the results for the 1S and 1P states, we find that, thanks to the relativistic effects of the decay form factors, our model parameters take reasonable values, though our relativistic approach and the nonrelativistic quark model give similar decay widths in agreement with experiment. While the results obtained for the $1^3D_{J=1,3}$ states are roughly consistent with the present data, they should be checked by future precise measurements.
Strong Internal Wave Solitons in a 2.5 Layer Model
Voronovich, A.
2003-04-01
"Strong" internal wave (IW) solitons, i.e. IW solitary waves with amplitudes comparable to the characteristic vertical scale of stratification, are often observed in field experiments. The theoretical description of such solitons is usually based on a 2-layer model, which approximates the stratification by two layers of homogeneous fluid with different densities (another possibility is to assume a nearly exponential density profile). The corresponding solitons are investigated in detail by Choi and Camassa (J. Fluid Mech., v. 396, pp. 1-36, 1999). In geophysical applications, however, the stratification can be better represented by layers with constant Brunt-Vaisala frequency profiles. The model consisting of two such layers with a density jump between them is referred to here as a "2.5 layer model". The motion in this case is not potential; however, as for homogeneous layers, the equation of motion for such a system in the stationary case and in the Boussinesq approximation is still linear, and nonlinearity enters only through the dynamic boundary condition between the layers. This allows one, in the case of long waves, to obtain an explicit equation for the IW soliton profile, which can be reduced to the equation describing a zero-energy particle in a potential well. In the case of homogeneous layers with zero density gradients the solutions reduce to the solitons investigated by Choi and Camassa, and in the limit of small amplitudes they reduce to the appropriate KdV solitons. This solution was applied to the case of solitons measured in the COPE experiment. The calculated soliton profiles are in good agreement with the measurements, and the relation between soliton width and amplitude is also in fair agreement with the data, especially for large-amplitude solitons. In contrast to the two-layer model, solitons in the 2.5 layer model can belong to higher modes. Another interesting feature is the presence, in a sufficiently strong soliton, of a recirculation core, i.e. a portion of fluid which is entrained within
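The small-amplitude KdV limit mentioned in the abstract fixes the width-amplitude relation that was compared against the COPE data. For the canonical KdV equation u_t + 6 u u_x + u_xxx = 0 the solitary wave is u = (c/2) sech²(√c (x − ct)/2), so the width scales as the inverse square root of the amplitude. A short numerical check (canonical KdV form, not the paper's 2.5-layer profile equation):

```python
import numpy as np

def kdv_soliton(x, c):
    """Solitary-wave solution of u_t + 6 u u_x + u_xxx = 0 at t = 0."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2

def half_max_width(x, u):
    """Full width of the profile at half its maximum, measured on the grid."""
    above = x[u >= 0.5 * u.max()]
    return above[-1] - above[0]

x = np.linspace(-40.0, 40.0, 200001)
w1 = half_max_width(x, kdv_soliton(x, 1.0))   # amplitude 0.5
w4 = half_max_width(x, kdv_soliton(x, 4.0))   # amplitude 2.0
print(round(w1 / w4, 2))                      # quadrupled amplitude halves the width
```

The printed ratio is 2.0, the inverse-square-root scaling; the large-amplitude 2.5-layer solitons of the abstract deviate from this KdV scaling, which is exactly what the field comparison probes.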
Prediction models : the right tool for the right problem
Kappen, Teus H.; Peelen, Linda M.
2016-01-01
PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to unders
Foundation Settlement Prediction Based on a Novel NGM Model
Directory of Open Access Journals (Sweden)
Peng-Yu Chen
2014-01-01
Full Text Available Prediction of foundation or subgrade settlement is very important during engineering construction. Because many settlement-time sequences exhibit a nonhomogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure nonhomogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately nonhomogeneous index sequences and has excellent application value in settlement prediction.
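The classical GM(1,1) model that NGM(1,1,k,c) generalizes fits the whitenization equation dx¹/dt + a·x¹ = b to the accumulated series and is near-exact only for homogeneous exponential data — the limitation the NGM variant is designed to remove. A minimal sketch of the standard GM(1,1) baseline (not the paper's NGM model):

```python
import numpy as np

def gm11(x0, n_pred):
    """Classical GM(1,1): fit dx1/dt + a*x1 = b to the accumulated series,
    then forecast n_pred further steps and restore by differencing."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulating generation (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                      # background (mean) values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # least-squares fit of (a, b)
    k = np.arange(len(x0) + n_pred)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])  # inverse AGO

# Near-exact on a homogeneous exponential settlement-like sequence.
x0 = 2.0 * 1.05 ** np.arange(8)
fit = gm11(x0, n_pred=2)
print(np.allclose(fit[:8], x0, rtol=1e-3))
```

On a sequence with a nonhomogeneous index trend (an exponential plus a drift term) this baseline degrades, which motivates the extra k and c terms of the NGM(1,1,k,c) model.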
Predictability of the Indian Ocean Dipole in the coupled models
Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao
2017-03-01
In this study, the predictability of the Indian Ocean Dipole (IOD), measured by the Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multi-model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at leads of around 2-3 months. This is mainly because there are more false alarms in the predictions as lead time increases. The DMI predictability has significant seasonal variation, and predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and improved initialization, the prediction of IOD onset is likely to improve, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
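The skill threshold used here, an anomaly correlation coefficient (ACC) above 0.5, is computed directly from forecast and observed anomalies about a climatology. A minimal sketch (the DMI-like numbers are illustrative only):

```python
import numpy as np

def acc(forecast, observed, climatology):
    """Anomaly correlation coefficient between a forecast and observations."""
    fa = forecast - climatology    # forecast anomalies
    oa = observed - climatology    # observed anomalies
    return np.sum(fa * oa) / np.sqrt(np.sum(fa**2) * np.sum(oa**2))

clim = np.zeros(6)                                      # climatology (zero anomalies)
obs = np.array([0.3, -0.1, 0.4, -0.5, 0.2, 0.1])        # illustrative DMI anomalies
good = obs + 0.05 * np.array([1, -1, 1, -1, 1, -1])     # skilful forecast
print(round(acc(good, obs, clim), 2))                   # close to 1 for a skilful forecast
```

A forecast whose ACC against observations drops below 0.5 at a given lead is deemed to have no useful skill at that lead, which is how the 2-3 month limit above is diagnosed.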
Kaewprag, Pacharmon; Newton, Cheryl; Vermillion, Brenda; Hyun, Sookyung; Huang, Kun; Machiraju, Raghu
2017-07-05
We develop predictive models enabling clinicians to better understand and explore patient clinical data along with risk factors for pressure ulcers (PU) in intensive care unit patients from electronic health record data. Identifying accurate risk factors of pressure ulcers is essential to determining appropriate prevention strategies; in this work we examine medication, diagnosis, and traditional Braden pressure ulcer assessment scale measurements as patient features. In order to predict pressure ulcer incidence and better understand the structure of related risk factors, we construct Bayesian networks from patient features. Bayesian network nodes (features) and edges (conditional dependencies) are simplified with statistical network techniques. Upon reviewing a network visualization of our model, our clinician collaborators were able to identify strong relationships between risk factors widely recognized as associated with pressure ulcers. We present a three-stage framework for predictive analysis of patient clinical data: 1) developing electronic health record feature extraction functions with the assistance of clinicians, 2) simplifying features, and 3) building Bayesian network predictive models. We evaluate all combinations of Bayesian network models from different search algorithms, scoring functions, prior structure initializations, and sets of features. From the EHRs of 7,717 ICU patients, we construct Bayesian network predictive models from 86 medication, diagnosis, and Braden scale features. Our model not only identifies known and suspected high PU risk factors, but also substantially increases the sensitivity of the prediction - nearly three times higher compared with logistic regression models - without sacrificing the overall accuracy. We visualize a representative model with which our clinician collaborators identify strong relationships between risk factors widely recognized as associated with pressure ulcers. Given the strong adverse effect of pressure ulcers
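At its core, a Bayesian network over discrete clinical features is a set of conditional probability tables estimated from data. A toy two-node sketch of that counting step (hypothetical feature name and numbers, not the paper's 86-feature EHR model):

```python
from collections import Counter

# Toy records: (braden_low, pressure_ulcer) pairs — illustrative counts only.
records = [(1, 1)] * 30 + [(1, 0)] * 70 + [(0, 1)] * 5 + [(0, 0)] * 195
counts = Counter(records)

def p_ulcer_given(braden_low):
    """Conditional probability table entry P(ulcer = 1 | braden_low), by counting."""
    pos = counts[(braden_low, 1)]
    neg = counts[(braden_low, 0)]
    return pos / (pos + neg)

# A strong conditional dependence appears as a large probability gap
# between the two parent states — the kind of edge clinicians inspect
# in the network visualization.
print(p_ulcer_given(1), p_ulcer_given(0))
```

A full network repeats this estimation for every node given its parents, and the structure-search algorithms and scoring functions mentioned above decide which parent sets (edges) to keep.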
Phase field model for strong anisotropy of kinetic and highly anisotropic interfacial energy
Institute of Scientific and Technical Information of China (English)
ZHANG Guo-wei; HOU Hua; CHENG Jun
2006-01-01
A phase-field model was established for simulating pure materials that is computationally efficient and accounts for strong kinetic anisotropy and highly anisotropic interfacial energy. Anisotropy (strong kinetic and highly anisotropic interfacial energy) of various degrees was simulated numerically. As the interfacial anisotropy coefficient is varied, the equilibrium crystal shape changes from smooth to cornered, and there is a critical value in the course of this transformation. When the anisotropy coefficient is lower than the critical value, the growth velocity v increases monotonically with it; when the anisotropy coefficient is higher than the critical value, the growth velocity decreases as it increases. As the degree of supercooling is varied, the growth velocity passes from thermal-diffusion control to kinetic control. Under thermal-diffusion control, the growth velocity increases with the degree of supercooling and the tip radius R decreases with increasing temperature. Under kinetic control, both v and R increase with the degree of supercooling, which cannot be explained by the traditional microscopic theory.
Hatanaka, Hisaki; Ko, Pyungwon
2016-01-01
In this paper, we revisit a scale-invariant extension of the standard model (SM) with a strongly interacting hidden sector within the AdS/QCD approach. Using AdS/QCD, we reduce the number of input parameters to three: the hidden pion decay constant, the hidden pion mass, and tan β, defined as the ratio of the vacuum expectation values (VEVs) of the singlet scalar field and the SM Higgs boson. As a result, our model has sharp predictability. We perform a phenomenological analysis of the hidden pions, which are among the dark matter (DM) candidates in this model. With various theoretical and experimental constraints we search for the allowed parameter space and find that both resonance and non-resonance solutions are possible. Some typical correlations among various observables, such as the thermal relic density of hidden pions, Higgs signal strengths and the DM-nucleon cross section, are investigated. We provide some benchmark points for experimental tests.
Nonconvex model predictive control for commercial refrigeration
Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John
2013-08-01
We consider the control of a commercial multi-zone refrigeration system in which several cooling units share a common compressor and cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in five or so iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more importantly, we see that the method exhibits a sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
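The sequential convex optimisation idea — replace the nonconvex part of the cost by a linearization at the current iterate and solve the resulting convex subproblem — can be illustrated on a one-dimensional toy cost (a generic convex-plus-concave example, not the refrigeration model itself):

```python
import math

# Toy nonconvex cost: convex part x^2 plus concave part -2*sqrt(x^2 + 1).
def f(x):
    return x**2 - 2.0 * math.sqrt(x**2 + 1.0)

x = 2.0
costs = [f(x)]
for _ in range(100):
    # Linearize the concave part at x: its gradient is -2x / sqrt(x^2 + 1).
    g = -2.0 * x / math.sqrt(x**2 + 1.0)
    # Convex subproblem: minimize x'^2 + g * x'  ->  closed form x' = -g / 2.
    x = -g / 2.0
    costs.append(f(x))

# The surrogate cost upper-bounds f, so each iteration cannot increase f;
# here the iterates approach the true minimizer at x = 0.
print(all(b <= a for a, b in zip(costs, costs[1:])), round(x, 2))
```

In the refrigeration problem each subproblem is a convex quadratic program over the full horizon rather than a scalar minimization, but the descent mechanism is the same, which is why a handful of iterations suffices.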
Leptogenesis in minimal predictive seesaw models
Energy Technology Data Exchange (ETDEWEB)
Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)
2015-10-15
We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the “atmospheric” and “solar” neutrino masses with Yukawa couplings to (ν{sub e},ν{sub μ},ν{sub τ}) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A{sub 4} vacuum alignment provides the required Yukawa structures with n=3, while a ℤ{sub 9} symmetry fixes the relative phase to be a ninth root of unity.
QSPR Models for Octane Number Prediction
Directory of Open Access Journals (Sweden)
Jabir H. Al-Fahemi
2014-01-01
Full Text Available Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure; such parameters are the molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression (MLR) technique were performed to examine the relationship between these variables and the octane number of hydrocarbons. The results of the PCA explain the interrelationships between the octane number and the different variables. Correlation coefficients were calculated using MS Excel to examine the relationship between the variables and the octane number. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R² = 0.932), statistical significance (F = 53.21), and standard error (s = 7.7). The obtained QSPR model was applied to the validation set, giving R²CV = 0.942 and s = 6.328.
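The MLR step described above reduces to ordinary least squares over the descriptor matrix, followed by a coefficient-of-determination check. A self-contained sketch on synthetic data (hypothetical descriptor values standing in for M, BP, logP, etc., not the paper's data set):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic descriptor matrix: 40 "training" hydrocarbons x 3 descriptors,
# plus an intercept column appended below.
X = rng.standard_normal((40, 3))
coef_true = np.array([12.0, -3.5, 7.0])
y = X @ coef_true + 50.0                       # noiseless octane-number stand-in

A = np.column_stack([X, np.ones(len(X))])      # add intercept column
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # MLR fit

y_hat = A @ beta
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))                            # 1.0 on noiseless synthetic data
```

On real descriptor data the residual term is nonzero, giving R² values like the reported 0.932; validating on a held-out set, as with the 25-hydrocarbon set above, guards against overfitting the training compounds.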
Thermo-magnetic properties of the strong coupling in the local Nambu-Jona-Lasinio model
Ayala, Alejandro; Hernandez, L A; Loewe, M; Raya, Alfredo; Rojas, J C; Villavicencio, C
2016-01-01
We study the thermo-magnetic behavior of the strong coupling constant and quark mass entering the Nambu-Jona-Lasinio model. The behavior of the quark condensate as function of magnetic field strength and temperature is also obtained and confronted with lattice QCD results. We find that for temperatures above the chiral/deconfinement phase transitions, where the condensate decreases monotonically with increasing field, the coupling also decreases monotonically. For temperatures below the transition temperature we find that the coupling initially grows and then decreases with increasing field strength. We consider this turnover behavior as a key element in the behavior of the quark condensate above the transition temperature. Hence, it allows for an understanding of the inverse magnetic catalysis phenomenon.
Pradhan, S.; Taraphder, A.
2016-10-01
A spinless, extended Falicov-Kimball model in the presence of a perpendicular magnetic field is investigated employing a self-consistent mean-field theory in two dimensions. In the presence of the field the excitonic average Δ is modified: the exciton responds in subtly different ways for different values of the magnetic flux. We examine the effects of the Coulomb interaction and of the hybridization between the localized and itinerant electrons on the excitonic average, for rational values of the applied magnetic field. The excitonic average is found to be enhanced exponentially with the Coulomb interaction, while it saturates at large hybridization. The orbital magnetic field suppresses the excitonic average in general, though a strong commensurability effect of the magnetic flux on the behaviour of the excitonic order parameter is observed.
Non-parametric Reconstruction of Cluster Mass Distribution from Strong Lensing: Modelling Abell 370
Abdel-Salam, H M; Williams, L L R
1997-01-01
We describe a new non-parametric technique for reconstructing the mass distribution in galaxy clusters with strong lensing, i.e., from multiple images of background galaxies. The observed positions and redshifts of the images are considered as rigid constraints and through the lens (ray-trace) equation they provide us with linear constraint equations. These constraints confine the mass distribution to some allowed region, which is then found by linear programming. Within this allowed region we study in detail the mass distribution with minimum mass-to-light variation; also some others, such as the smoothest mass distribution. The method is applied to the extensively studied cluster Abell 370, which hosts a giant luminous arc and several other multiply imaged background galaxies. Our mass maps are constrained by the observed positions and redshifts (spectroscopic or model-inferred by previous authors) of the giant arc and multiple image systems. The reconstructed maps obtained for A370 reveal a detailed mass d...
Coherent beam combination of fiber lasers with a strongly confined waveguide: numerical model.
Tao, Rumao; Si, Lei; Ma, Yanxing; Zhou, Pu; Liu, Zejin
2012-08-20
Self-imaging properties of fiber lasers in a strongly confined waveguide (SCW) and their application in coherent beam combination (CBC) are studied theoretically. Analytical formulas are derived for the positions, amplitudes, and phases of the N images at the end of an SCW, which is important for quantitative analysis of waveguide CBC. The formulas are verified against experimental results and numerical simulation with a finite-difference beam propagation method (BPM). The error of our analytical formulas is less than 6%, which can be reduced to less than 1.5% when the Goos-Hänchen penetration depth is considered. Based on the theoretical model and BPM, we studied the combination of two laser beams based on an SCW. The effects of the waveguide refractive index and Gaussian beam waist are studied. We also simulated the CBC of nine and 16 fiber lasers, and a single beam without side lobes was achieved.
Directory of Open Access Journals (Sweden)
Meng Xu
2016-01-01
Full Text Available Taking the straightening unit as the research object and accounting for the differences in roller spacing, a mathematical model of the intermesh schedule suitable for the 2800 seven-roller strong heavy-plate straightening machine was derived by the geometric method. According to the mathematical model, the intermesh schedules for several plate specifications were calculated, and a finite element model of the straightening process was established in the finite element analysis software Abaqus. The analysis showed that the plates after straightening could not meet the flatness requirement, owing to work hardening. The bending deflection of the last straightening unit was therefore modified and a new calculation formula for the intermesh schedule was obtained, with the values of the modified coefficients determined by the finite element method. The intermesh schedules for the other plate specifications were calculated with the modified formula and then verified in Abaqus. The simulation results showed that the plates after straightening meet the flatness requirement. The results thus provide a theoretical basis for the development of a new plate straightening machine and the formulation of intermesh schedules.
A Microscopic Model for the Strongly Coupled Electron-Ion System in VO2
Lovorn, Timothy; Sarker, Sanjoy
The metal-insulator transition (MIT) in vanadium dioxide (VO2) near 340 K is accompanied by a structural transition, suggesting strong coupling between electronic and lattice degrees of freedom. To help elucidate this relationship, we construct and analyze a microscopic model in which electrons, described by a tight-binding Hamiltonian, are dynamically coupled to Ising-like ionic degrees of freedom. A mean-field decoupling leads to an interacting two-component (pseudo) spin-1 Ising model describing the ions. An analysis of the minimal ionic model reproduces the observed M1 and M2 dimerized phases and rutile metal phase, occurring in the observed order with increasing temperature. All three transitions are first order, as observed. We further find that both dimerization and correlations play crucial roles in describing the insulating M1 phase. We discuss why dynamical coupling of electrons and ions is key to obtain a full understanding of the phenomenology of VO2, particularly in the context of the phase coexistence observed near the MIT. This research was supported by the National Science Foundation (DMR-1508680).
A magnified glance into the dark sector: probing cosmological models with strong lensing in A1689
Magaña, Juan; Cardenas, Victor H; Verdugo, T; Jullo, Eric
2015-01-01
In this paper we constrain four alternative models for the late cosmic acceleration of the Universe: Chevallier-Polarski-Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SNIa), baryon acoustic oscillations (BAO), and the cosmic microwave background (CMB). We find that the CPL constraints obtained from the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SNIa and CMB estimations. We also compute the figure-of-merit as a tool to quantify the goo...
Directory of Open Access Journals (Sweden)
Arnold Steven E
2011-04-01
Full Text Available Abstract Objective Motor impairment in old age is a growing public-health concern, and several different constructs have been used to identify motor impairments in older people. We tested the hypothesis that combinations of motor constructs more strongly predict adverse health outcomes in older people. Methods In total, 949 people without dementia, history of stroke or Parkinson's disease, who were participating in the Rush Memory and Aging Project (a longitudinal community-based cohort study), underwent assessment at study entry. From this, three constructs were derived: 1) physical frailty, based on grip strength, timed walk, body mass index and fatigue; 2) a parkinsonian signs score, based on the modified motor section of the Unified Parkinson's Disease Rating Scale; and 3) a motor construct, based on nine strength measures and nine motor performances. Disability and cognitive status were assessed annually. A series of Cox proportional-hazards models, controlling for age, sex and education, were used to examine the association of each of these three constructs, alone and in various combinations, with death, disability and Alzheimer's disease (AD). Results All three constructs were related (mean r = 0.50, all P Conclusions Physical frailty, the parkinsonian signs score and the global motor score are related constructs that capture different aspects of motor function. Assessments using several motor constructs may more accurately identify people at the highest risk of adverse health consequences in old age.
Boyce, Christopher J.; Wood, Alex M.; Powdthavee, Nattavudh
2013-01-01
Personality is the strongest and most consistent cross-sectional predictor of high subjective well-being. Less predictive economic factors, such as higher income or improved job status, are often the focus of applied subjective well-being research due to a perception that they can change whereas personality cannot. As such there has been limited…
Energy Technology Data Exchange (ETDEWEB)
Regueiro, Richard A. (University of Colorado, Boulder, CO); Borja, R. I. (Stanford University, Stanford, CA); Foster, C. D. (Stanford University, Stanford, CA)
2006-10-01
Localized shear deformation plays an important role in a number of geotechnical and geological processes. Slope failures, the formation and propagation of faults, cracking in concrete dams, and shear fractures in subsiding hydrocarbon reservoirs are examples of important effects of shear localization. Traditional engineering analyses of these phenomena, such as limit equilibrium techniques, make certain assumptions on the shape of the failure surface as well as other simplifications. While these methods may be adequate for the applications for which they were designed, it is difficult to extrapolate the results to more general scenarios. An alternative approach is to use a numerical modeling technique, such as the finite element method, to predict localization. While standard finite elements can model a wide variety of loading situations and geometries quite well, for numerical reasons they have difficulty capturing the softening and anisotropic damage that accompanies localization. By introducing an enhancement to the element in the form of a fracture surface at an arbitrary position and orientation in the element, we can regularize the solution, model the weakening response, and track the relative motion of the surfaces. To properly model the slip along these surfaces, the traction-displacement response must be properly captured. This report focuses on the development of a constitutive model appropriate to localizing geomaterials, and the embedding of this model into the enhanced finite element framework. This modeling covers two distinct phases. The first, usually brief, phase is the weakening response as the material transitions from intact continuum to a body with a cohesionless fractured surface. Once the cohesion has been eliminated, the response along the surface is completely frictional. We have focused on a rate- and state-dependent frictional model that captures stable and unstable slip along the surface. This model is embedded numerically into the
Modeling Plasmas with Strong Anisotropy, Neutral Fluid Effects, and Open Boundaries
Meier, Eric T.
Three computational plasma science topics are addressed in this research: the challenge of modeling strongly anisotropic thermal conduction, capturing neutral fluid effects in collisional plasmas, and modeling open boundaries in dissipative plasmas. The research efforts on these three topics contribute to a common objective: the improvement and extension of existing magnetohydrodynamic modeling capability. Modeling magnetically confined fusion-related plasmas is the focus of the research, but broader relevance is recognized and discussed. Code development is central to this work, and has been carried out within the flexible physics framework of the highly parallel HiFi implicit spectral element code. In magnetic plasma confinement, heat conduction perpendicular to the magnetic field is extremely slow compared to conduction parallel to the field. The anisotropy in heat conduction can be many orders of magnitude, and the inaccuracy of low-order representations can allow parallel heat transport to "leak" into the perpendicular direction, resulting in numerical perpendicular transport. If the computational grid is aligned to the magnetic field, this numerical error can be eliminated, even for low-order representations. However, grid alignment is possible only in idealized problems. In realistic applications, magnetic topology is chaotic. A general approach for accurately modeling the extreme anisotropy of fusion plasmas is to use high-order representations which do not require grid alignment for sufficient resolution. This research provides a comprehensive assessment of spectral element representation of anisotropy, in terms of dependence of accuracy on grid alignment, polynomial degree, and grid cell size, and gives results for two- and three-dimensional cases. Truncating large physical domains to concentrate computational resources is often necessary or desirable in simulating natural and man-made plasmas. A novel open boundary condition (BC) treatment for such
Strong coupling electroweak symmetry breaking
Energy Technology Data Exchange (ETDEWEB)
Barklow, T.L. [Stanford Linear Accelerator Center, Menlo Park, CA (United States); Burdman, G. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Chivukula, R.S. [Boston Univ., MA (United States). Dept. of Physics
1997-04-01
The authors review models of electroweak symmetry breaking due to new strong interactions at the TeV energy scale and discuss the prospects for their experimental tests. They emphasize the direct observation of the new interactions through high-energy scattering of vector bosons. They also discuss indirect probes of the new interactions and exotic particles predicted by specific theoretical models.
Predictability in models of the atmospheric circulation.
Houtekamer, P.L.
1992-01-01
It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. The
Allostasis: a model of predictive regulation.
Sterling, Peter
2012-04-12
The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched -- to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place - before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to
2015-01-01
1. Amazonian droughts are predicted to become increasingly frequent and intense, and the vulnerability of Amazonian trees has become increasingly documented. However, little is known about the physiological mechanisms and the diversity of drought tolerance of tropical trees, due to the lack of quantitative measurements. 2. Leaf water potential at the wilting or turgor loss point (π_tlp) is a determinant of the tolerance of leaves to drought stress and contributes to plant-level physiological...
Directory of Open Access Journals (Sweden)
Sébastien Chalencon
Full Text Available Competitive swimming as a physical activity results in changes to the activity level of the autonomic nervous system (ANS). However, the precise relationship between ANS activity, fatigue and sports performance remains contentious. To address this problem and build a model to support a consistent relationship, data were gathered from national and regional swimmers during two 30-week periods of consecutive training. Nocturnal ANS activity was measured weekly and quantified through wavelet transform analysis of the recorded heart rate variability. Performance was then measured through a subsequent morning 400-m freestyle time-trial. A model was proposed in which indices of fatigue were computed using Banister's two antagonistic component model of fatigue and adaptation, applied to both the ANS activity and the performance. This demonstrated that a logarithmic relationship existed between performance and ANS activity for each subject. There was a high degree of model fit between the measured and calculated performance (R² = 0.84 ± 0.14, p < 0.01) and between the measured and calculated high-frequency (HF) power of the ANS activity (R² = 0.79 ± 0.07, p < 0.01). During the taper periods, improvements in measured performance and measured HF were strongly related. In the model, variations in performance were related to significant reductions in the level of 'Negative Influences' rather than to increases in 'Positive Influences'. Furthermore, the delay needed to return to the initial performance level was highly correlated with the delay required to return to the initial HF power level (p < 0.01), and the delay required to reach peak performance was highly correlated with the delay required to reach the maximal level of HF power (p = 0.02). Building the ANS/performance identity of a subject, including the time to peak HF, may help predict the maximal performance that could be obtained at a given time.
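The two-component impulse-response structure the abstract refers to (Banister's antagonistic fitness-fatigue model) can be sketched in a few lines. The constants below (k1, k2, tau1, tau2) are illustrative placeholders, not the values fitted in the study:

```python
import math

def banister(loads, k1=1.0, k2=2.0, tau1=45.0, tau2=15.0, p0=0.0):
    """Banister's two-component model: each training load w_i adds a
    slowly decaying positive influence (fitness, time constant tau1)
    and a faster-decaying negative influence (fatigue, tau2)."""
    perf = []
    for t in range(1, len(loads) + 1):
        fitness = sum(w * math.exp(-(t - i) / tau1)
                      for i, w in enumerate(loads[:t], start=1))
        fatigue = sum(w * math.exp(-(t - i) / tau2)
                      for i, w in enumerate(loads[:t], start=1))
        perf.append(p0 + k1 * fitness - k2 * fatigue)
    return perf

# A taper (reduced load) lets fatigue decay faster than fitness,
# so modelled performance rises after the load drops.
loads = [100.0] * 20 + [30.0] * 10
p = banister(loads)
```

With these placeholder constants the modelled performance at the end of the taper exceeds the value at the end of the heavy block, which is the qualitative behavior the abstract reports for taper periods.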
Pajtler, Kristian W; Sadowski, Natalie; Ackermann, Sandra; Althoff, Kristina; Schönbeck, Kerstin; Batzke, Katharina; Schäfers, Simon; Odersky, Andrea; Heukamp, Lukas; Astrahantseff, Kathy; Künkele, Annette; Deubzer, Hedwig E; Schramm, Alexander; Sprüssel, Annika; Thor, Theresa; Lindner, Sven; Eggert, Angelika; Fischer, Matthias; Schulte, Johannes H
2017-01-01
Polo-like kinase 1 (PLK1) is a serine/threonine kinase that promotes G2/M-phase transition, is expressed at elevated levels in high-risk neuroblastomas and correlates with unfavorable patient outcome. Recently, we and others have presented PLK1 as a potential drug target for neuroblastoma, and reported that the BI2536 PLK1 inhibitor showed antitumoral activity in preclinical neuroblastoma models. Here we analyzed the effects of GSK461364, a competitive inhibitor for ATP binding to PLK1, on typical tumorigenic properties of preclinical in vitro and in vivo neuroblastoma models. GSK461364 treatment of neuroblastoma cell lines reduced cell viability and proliferative capacity, caused cell cycle arrest and massively induced apoptosis. These phenotypic consequences were induced by treatment in the low-dose nanomolar range, and were independent of MYCN copy number status. GSK461364 treatment strongly delayed established xenograft tumor growth in nude mice, and significantly increased survival time in the treatment group. These preclinical findings indicate that PLK1 inhibitors may be effective for patients with high-risk or relapsed neuroblastomas with upregulated PLK1 and might be considered for entry into early phase clinical trials in pediatric patients. PMID:28036269
Required Collaborative Work in Online Courses: A Predictive Modeling Approach
Smith, Marlene A.; Kellogg, Deborah L.
2015-01-01
This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…
A prediction model for assessing residential radon concentration in Switzerland
Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.
2012-01-01
Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the
Distributional Analysis for Model Predictive Deferrable Load Control
Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam
2014-01-01
Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...
Composite control for raymond mill based on model predictive control and disturbance observer
Directory of Open Access Journals (Sweden)
Dan Niu
2016-03-01
Full Text Available In the Raymond mill grinding process, precise control of the operating load is vital for high product quality. However, strong external disturbances, such as variations in ore size and ore hardness, usually cause great performance degradation, and it is not easy to hold the mill current constant. Several control strategies have been proposed; however, most of them (such as proportional–integral–derivative control and model predictive control) reject disturbances only through feedback regulation, which may lead to poor control performance in the presence of strong disturbances. To improve disturbance rejection, a control method based on model predictive control and a disturbance observer is put forward in this article. The scheme employs the disturbance observer as feedforward compensation and the model predictive controller as feedback regulation. The test results illustrate that, compared with the model predictive control method alone, the proposed disturbance observer–model predictive control method achieves significantly better disturbance rejection, with shorter settling time and smaller peak overshoot under strong disturbances.
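The feedback-plus-feedforward structure described in this abstract can be illustrated with a minimal discrete-time sketch: a one-step predictive controller (a stand-in for the full MPC) plus a disturbance observer whose estimate is subtracted as feedforward. The scalar plant, gains, and disturbance value are invented for illustration and far simpler than the article's mill model:

```python
# Assumed plant: x+ = a*x + b*(u + d), with unknown input disturbance d.
a, b = 0.9, 0.5
ref, d_true = 1.0, 0.4   # setpoint and unknown constant disturbance

def run(use_dob, steps=50):
    x, d_hat = 0.0, 0.0
    for _ in range(steps):
        ff = d_hat if use_dob else 0.0
        u = (ref - a * x) / b - ff       # one-step predictive control law
        x_pred = a * x + b * (u + ff)    # model's predicted next state
        x = a * x + b * (u + d_true)     # true plant update
        d_hat += 0.5 * (x - x_pred) / b  # observer: infer d from error
    return x

x_mpc = run(use_dob=False)   # feedback only: steady-state offset b*d
x_dob = run(use_dob=True)    # observer feedforward removes the offset
```

Pure feedback settles with a constant offset of b·d = 0.2 above the setpoint, while the observer variant converges to the setpoint, mirroring the disturbance-rejection advantage the abstract reports.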
Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models
Directory of Open Access Journals (Sweden)
Cheng-Hung Hsieh
2007-09-01
Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.
On hydrological model complexity, its geometrical interpretations and prediction uncertainty
Arkesteijn, E.C.M.M.; Pande, S.
2013-01-01
Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to
Probabilistic Modeling and Visualization for Bankruptcy Prediction
DEFF Research Database (Denmark)
Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara
2017-01-01
In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurate assessment of business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful... studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point of view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing them against Support Vector Machines (SVM) and Logistic Regression (LR...). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...
Predictive modeling of dental pain using neural network.
Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill
2009-01-01
The mouth, as the gateway for ingesting food, is among the most basic and important parts of the body. In this study, dental pain was predicted with a neural network model. The resulting predictive model of dental pain factors achieved a fit of 80.0%. For people likely to experience dental pain as predicted by the neural network model, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.
Strongly Coupled Systems: From Quantum Antiferromagnets To Unified Models For Superconductors
Chudnovsky, V
2002-01-01
I discuss the significance of the antiferromagnetic Heisenberg model (AFHM) in both high-energy and condensed-matter physics, and proceed to describe an efficient cluster algorithm used to simulate the AFHM. This is one of two algorithms with which my collaborators and I were able to obtain numerical results that definitively confirm that chiral perturbation theory, corrected for cutoff effects in the AFHM, leads to a correct field-theoretical description of the low-temperature behavior of the spin correlation length in various spin representations S. Using a finite-size-scaling technique, we explored correlation lengths of up to 10^5 lattice spacings for spins S = 1 and 5/2. We show how the recent prediction of cutoff effects by P. Hasenfratz is approached for moderate correlation lengths, and smoothly connects with other approaches to modeling the AFHM at smaller correlation lengths. I also simulate and discuss classical antiferromagnetic systems with simultaneous SO(M) and SO(N) symmetries, which have bee...
A Meson Emission Model of Psi to N Nbar m Charmonium Strong Decays
Barnes, T; Roberts, W
2010-01-01
In this paper we consider a sequential "meson emission" mechanism for charmonium decays of the type Psi -> N Nbar m, where Psi is a generic charmonium state, N is a nucleon and m is a light meson. This decay mechanism, which may not be dominant in general, assumes that an NNbar pair is created during charmonium annihilation, and the light meson m is emitted from the outgoing nucleon or antinucleon line. A straightforward generalization of this model can incorporate intermediate N* resonances. We derive Dalitz plot event densities for the cases Psi = eta_c, J/psi, chi_c0, chi_c1 and psi' and m = pi0, f0 and omega (and implicitly, any 0^{-+}, 0^{++} or 1^{--} final light meson). It may be possible to separate the contribution of this decay mechanism to the full decay amplitude through characteristic event densities. For the decay subset Psi -> p pbar pi0 the two model parameters are known, so we are able to predict absolute numerical partial widths for Gamma(Psi -> p pbar pi0). In the specific case J/psi -> p ...
Massari, Andrea; Essig, Rouven; Albert, Andrea; Bloom, Elliott; Gomez-Vargas, German A
2015-01-01
We set conservative, robust constraints on the annihilation and decay of dark matter into various Standard Model final states under various assumptions about the distribution of the dark matter in the Milky Way halo. We use the inclusive photon spectrum observed by the Fermi Gamma-ray Space Telescope through its main instrument, the Large-Area Telescope (LAT). We use simulated data to first find the "optimal" regions of interest in the gamma-ray sky, where the expected dark matter signal is largest compared with the expected astrophysical foregrounds. We then require the predicted dark matter signal to be less than the observed photon counts in the a priori optimal regions. This yields a very conservative constraint as we do not attempt to model or subtract astrophysical foregrounds. The resulting limits are competitive with other existing limits, and, for some final states with cuspy dark-matter distributions in the Galactic Center region, disfavor the typical cross section required during freeze-out for a w...
Massari, Andrea; Izaguirre, Eder; Essig, Rouven; Albert, Andrea; Bloom, Elliott; Gómez-Vargas, Germán Arturo
2015-04-01
We set conservative, robust constraints on the annihilation and decay of dark matter into various Standard Model final states under various assumptions about the distribution of the dark matter in the Milky Way halo. We use the inclusive photon spectrum observed by the Fermi Gamma-ray Space Telescope through its main instrument, the Large Area Telescope. We use simulated data to first find the "optimal" regions of interest in the γ -ray sky, where the expected dark matter signal is largest compared with the expected astrophysical foregrounds. We then require the predicted dark matter signal to be less than the observed photon counts in the a priori optimal regions. This yields a very conservative constraint as we do not attempt to model or subtract astrophysical foregrounds. The resulting limits are competitive with other existing limits and, for some final states with cuspy dark-matter distributions in the Galactic Center region, disfavor the typical cross section required during freeze-out for a weakly interacting massive particle to obtain the observed relic abundance.
Ben Slimene, Erij; Lassabatere, Laurent; Winiarski, Thierry; Gourdon, Remy
2016-04-01
Understanding the fate of pollutants in the vadose zone is a prerequisite for managing soil and groundwater quality. Water infiltrating into the soil carries a large amount of pollutants (heavy metals, organic compounds, etc.), and the quality of groundwater depends on the capability of soils to remove these pollutants as water infiltrates. This capability depends not only on the soils' geochemical properties and affinity with pollutants but also on the quality of the contact between the reactive particles of the soil and the pollutants. In such a context, preferential flows are the worst-case scenario since they prevent pollutants from reaching large parts of the soil, including reactive zones that could serve for pollutant removal. The negative effects of preferential flow have already been pointed out by several studies. In this paper, we investigate numerically the effect of the establishment of preferential flow in a numerical section (13.5 m long and 2.5 m deep) that mimics a strongly heterogeneous deposit. The modelled deposit is made of several lithofacies with contrasting hydraulic properties. The numerical study shows that this strong contrast in hydraulic properties triggers the establishment of preferential flow (capillary barriers and funneled flow). Preferential flow develops mainly for low initial water contents and low fluxes imposed at the soil surface. The impact of these flows on solute transfer is also investigated as a function of solute reactivity and affinity to soil sorption sites. Modeled results clearly show that solute transport is greatly impacted by flow heterogeneity. Funneled flow has the same impact as fractionation of the water into mobile and immobile fractions, with fast transport of solutes by the preferential flow and diffusion of solutes into zones where the flow is slower. Such a pattern greatly impacts retention and reduces the access of pollutants to large parts of the soil. Retention is thus greatly reduced at the section
Energy Technology Data Exchange (ETDEWEB)
Backes, Steffen
2017-04-15
-local fluctuations. It has been successfully used to study the whole range of weakly to strongly correlated lattice models, including the metal-insulator transition, since even in the relevant dimensions of d = 2 and d = 3 spatial fluctuations are often small. The extension of DMFT towards realistic systems by the use of DFT has been termed LDA+DMFT and has since allowed for a significant improvement in the understanding of strongly correlated materials. We dedicate this thesis to the LDA+DMFT method and the study of the recently discovered iron-pnictide superconductors, which are known to show effects of strong electronic correlations. Thus, in many cases these materials cannot be adequately described by a pure DFT approach alone, and they provide an ideal case for an investigation of their electronic properties within LDA+DMFT. We will first review the DFT method and point out what kind of approximations have to be made in practical calculations and what deficits they entail. Then we will give an introduction to the Green's function formalism in the real and imaginary time representation and discuss the resulting consequences, such as analytic continuation, to pave the way for the derivation of the DMFT equations. After that, we will discuss the combination of DFT and DMFT into the LDA+DMFT method and how to set up the effective lattice models for practical calculations. Then we will apply the LDA+DMFT method to the hole-doped iron-pnictide superconductor KFe{sub 2}As{sub 2}, which we find to be a rather strongly correlated material that can only be reasonably described when electronic correlations are treated at a proper level beyond the standard DFT approach. Our results show that the LDA+DMFT method is able to significantly improve the agreement of the theoretical calculation with experimental observations.
Then we expand our study towards the isovalent series of KFe{sub 2}As{sub 2}, RbFe{sub 2}As{sub 2} and CsFe{sub 2}As{sub 2}, which we propose to show even stronger
Directory of Open Access Journals (Sweden)
G. A. Papadopoulos
2006-01-01
Full Text Available The seismic sequence of October–November 2005 in the Samos area, East Aegean Sea, was studied with the aim to show how it is possible to establish criteria for (a) the rapid recognition of both the ongoing foreshock activity and the mainshock, and (b) the rapid discrimination between the foreshock and aftershock phases of activity. It has been shown that before the mainshock of 20 October 2005, foreshock activity is not recognizable in the standard earthquake catalogue. However, a detailed examination of the records in the SMG station, which is the closest to the activated area, revealed that hundreds of small shocks not listed in the standard catalogue were recorded in the time interval from 12 October 2005 up to 21 November 2005. The production of reliable relations between seismic signal duration and duration magnitude for earthquakes included in the standard catalogue made it possible to use signal durations in SMG records and to determine duration magnitudes for 2054 small shocks not included in the standard catalogue. In this way a new catalogue with magnitude determination for 3027 events was obtained, while the standard catalogue contains 1025 events. At least 55 of them occurred from 12 October 2005 up to the occurrence of the two strong foreshocks of 17 October 2005. This implies that foreshock activity developed a few days before the strong shocks of 17 October 2005 but it escaped recognition by the routine procedure of seismic analysis. The onset of the foreshock phase of activity is recognizable by the significant increase of the mean seismicity rate, which increased exponentially with time. According to the least-squares approach the b-value of the magnitude-frequency relation dropped significantly during the foreshock activity with respect to the b-value prevailing in the declustered background seismicity. However, the maximum likelihood approach does not indicate such a drop of b. The b-value found for the aftershocks that
Prediction of peptide bonding affinity: kernel methods for nonlinear modeling
Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P
2011-01-01
This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.
Bergeot, Baptiste; Vergez, Christophe; Gazengel, Bruno
2014-01-01
Simple models of clarinet instruments based on iterated maps have been used in the past to successfully estimate the threshold of oscillation of this instrument as a function of a constant blowing pressure. However, when the blowing pressure gradually increases through time, the oscillations appear at a much higher value than what is predicted in the static case. This is known as bifurcation delay, a phenomenon studied in [1] for a clarinet model. In numerical simulations the bifurcation delay showed a strong sensitivity to numerical precision.
Comparisons of Faulting-Based Pavement Performance Prediction Models
Directory of Open Access Journals (Sweden)
Weina Wang
2017-01-01
Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Then three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are tested and compared using a set of actual pavement survey data taken on an interstate highway with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the further direction for developing performance prediction models is to combine the strengths of the different models to obtain better accuracy.
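As a rough illustration of how a Markov chain model turns visual condition inspections into a performance prediction, the sketch below propagates a condition-state distribution through an assumed yearly transition matrix. The four condition ratings and all probabilities are hypothetical, not the calibrated values from the paper:

```python
import numpy as np

# Hypothetical condition ratings 1 (best) to 4 (worst) and an assumed
# yearly transition matrix estimated from repeated visual inspections.
# Each row gives the probability of moving from one state to another.
P = np.array([[0.80, 0.20, 0.00, 0.00],
              [0.00, 0.75, 0.25, 0.00],
              [0.00, 0.00, 0.70, 0.30],
              [0.00, 0.00, 0.00, 1.00]])

state0 = np.array([1.0, 0.0, 0.0, 0.0])            # new pavement
state10 = state0 @ np.linalg.matrix_power(P, 10)   # distribution at year 10
expected_rating = state10 @ np.array([1, 2, 3, 4]) # predicted mean condition
```

This also shows the limitation the abstract notes: the prediction depends only on the inspected state categories, not on physical parameters such as traffic or climate.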
Agren, Jon; Schemske, Douglas W
2012-06-01
To quantify adaptive differentiation in the model plant Arabidopsis thaliana, we conducted reciprocal transplant experiments for five years between two European populations, one near the northern edge of the native range (Sweden) and one near the southern edge (Italy). We planted seeds (years 1-3) and seedlings (years 4-5), and estimated fitness as the number of fruits produced per seed or seedling planted. In eight of the 10 possible site × year comparisons, the fitness of the local population was significantly higher than that of the nonlocal population (3.1-22.2 times higher at the southern site, and 1.7-3.6 times higher at the northern site); in the remaining two comparisons no significant difference was recorded. At both sites, the local genotype had higher survival than the nonlocal genotype, and at the Italian site, the local genotype also had higher fecundity. Across years, the relative survival of the Italian genotype at the northern site decreased with decreasing winter soil temperature. The results provide evidence of strong adaptive differentiation between natural populations of A. thaliana and indicate that differences in tolerance to freezing contributed to fitness variation at the northern site. In ongoing work, we explore the functional and genetic basis of this adaptive differentiation.
Prediction using patient comparison vs. modeling: a case study for mortality prediction.
Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter
2016-08-01
Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
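A toy version of the patient-similarity approach compared in this abstract might look like the following: predict the outcome of a new patient as the mean outcome of the k most similar past patients. The Euclidean metric and the tiny example vectors are invented stand-ins for the paper's similarity metric and MIMIC-II features:

```python
import math

def knn_predict(query, patients, outcomes, k=3):
    """Patient-similarity prediction: average the outcomes of the k
    past patients closest to the query in feature space."""
    nearest = sorted(range(len(patients)),
                     key=lambda i: math.dist(query, patients[i]))
    return sum(outcomes[i] for i in nearest[:k]) / k

# Hypothetical features per patient: [age, comorbidity flag];
# outcomes: 1 = died, 0 = survived.
patients = [[60, 1], [72, 0], [55, 1], [80, 1], [45, 0]]
outcomes = [1, 0, 1, 1, 0]
risk = knn_predict([58, 1], patients, outcomes)
```

The scaling problem the paper reports is visible even in this sketch: every prediction requires a distance computation against the entire patient history, whereas a fitted model is evaluated in constant time.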
Revisiting classic water erosion models in drylands: The strong impact of biological soil crusts
Bowker, M.A.; Belnap, J.; Bala, Chaudhary V.; Johnson, N.C.
2008-01-01
Soil erosion and subsequent degradation has been a contributor to societal collapse in the past and is one of the major expressions of desertification in arid regions. The revised universal soil loss equation (RUSLE) models soil lost to water erosion as a function of climate erosivity (the degree to which rainfall can result in erosion), topography, soil erodibility, and land use/management. The soil erodibility factor (K) is primarily based upon inherent soil properties (those which change slowly or not at all) such as soil texture and organic matter content, while the cover/management factor (C) is based on several parameters including biological soil crust (BSC) cover. We examined the effect of two more precise indicators of BSC development, chlorophyll a and exopolysaccharides (EPS), upon soil stability, which is closely inversely related to soil loss in an erosion event. To examine the relative influence of these elements of the C factor to the K factor, we conducted our investigation across eight strongly differing soils in the 0.8 million ha Grand Staircase-Escalante National Monument. We found that within every soil group, chlorophyll a was a moderate to excellent predictor of soil stability (R² = 0.21-0.75), and consistently better than EPS. Using a simple structural equation model, we explained over half of the variance in soil stability and determined that the direct effect of chlorophyll a was 3× more important than soil group in determining soil stability. Our results suggest that, holding the intensity of erosive forces constant, the acceleration or reduction of soil erosion in arid landscapes will primarily be an outcome of management practices. This is because the factor which is most influential to soil erosion, BSC development, is also among the most manageable, implying that water erosion in drylands has a solution. © 2008 Elsevier Ltd.
The Star Formation and AGN luminosity relation: Predictions from a semi-analytical model
Gutcke, Thales A.; Macciò, Andrea V.; Lacey, Cedric
2015-01-01
In a Universe where AGN feedback regulates star formation in massive galaxies, a strong correlation between these two quantities is expected. If the gas causing star formation is also responsible for feeding the central black hole, then a positive correlation is expected. If powerful AGNs are responsible for the star formation quenching, then a negative correlation is expected. Observations so far have mainly found a mild correlation or no correlation at all (i.e. a flat relation between star formation rate (SFR) and AGN luminosity), raising questions about the whole paradigm of "AGN feedback". In this paper, we report the predictions of the GALFORM semi-analytical model, which has a very strong coupling between AGN activity and quenching of star formation. The predicted SFR-AGN luminosity correlation appears negative in the low AGN luminosity regime, where AGN feedback acts, but becomes strongly positive in the regime of the brightest AGN. Our predictions reproduce reasonably well recent observations by Rosa...
Fuzzy predictive filtering in nonlinear economic model predictive control for demand response
DEFF Research Database (Denmark)
Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;
2016-01-01
The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization...... problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...
Predictive modeling and reducing cyclic variability in autoignition engines
Energy Technology Data Exchange (ETDEWEB)
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2012-01-01
In this paper, we consider the reaction-diffusion equations with a strong generic delay kernel and non-local effect, which model microbial growth in a flow reactor. The existence of traveling waves is established for this model. More precisely, using geometric singular perturbation theory, we show that traveling wave solutions exist provided that the delay is sufficiently small for the strong generic delay kernel.
Intelligent predictive model of ventilating capacity of imperial smelt furnace
Institute of Scientific and Technical Information of China (English)
唐朝晖; 胡燕瑜; 桂卫华; 吴敏
2003-01-01
In order to determine the ventilating capacity of an imperial smelt furnace (ISF) and increase the output of lead, an intelligent modeling method based on gray theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed by this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant, with a relative predictive error of 0.72%, which is 50% less than that of the single model, leading to a notable increase in the output of lead.
A Prediction Model of the Capillary Pressure J-Function
Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.
2016-01-01
The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived from a capillary bundle model. However, the dependence of the J-function on the water saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model; the resulting J-function prediction model is a power function rather than an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and more representative results. PMID:27603701
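The standard Leverett J-function and the power-law form the abstract argues for can be sketched as follows; the coefficients `a` and `b` are illustrative stand-ins for values the paper would fit from a capillary pressure model:

```python
import math

# Leverett J-function plus the power-law prediction form; a and b are
# hypothetical fit coefficients, not values from the paper.

def leverett_j(pc, sigma, theta_deg, k, phi):
    """J = (pc / (sigma * cos(theta))) * sqrt(k / phi), dimensionless
    (consistent units assumed for pc, sigma and k)."""
    return pc * math.sqrt(k / phi) / (sigma * math.cos(math.radians(theta_deg)))

def j_power_law(sw, a, b):
    """Predicted J-function as a power function of water saturation Sw."""
    return a * sw ** b
```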
Adaptation of Predictive Models to PDA Hand-Held Devices
Directory of Open Access Journals (Sweden)
Lin, Edward J
2008-01-01
Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of the computations when applied in their pure form and their lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of using predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
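The arithmetic such a spreadsheet would carry out for a fitted logistic regression model is simple; the coefficient values in any real use would come from the published model, and everything here is illustrative:

```python
import math

# Probability from a fitted logistic regression model:
# p = 1 / (1 + exp(-(b0 + sum(bi * xi)))) -- the same calculation a
# spreadsheet cell on a hand-held device would perform.

def logistic_probability(intercept, coeffs, values):
    """intercept b0, coefficients bi, and predictor values xi."""
    logit = intercept + sum(b * x for b, x in zip(coeffs, values))
    return 1.0 / (1.0 + math.exp(-logit))
```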
A model to predict the power output from wind farms
Energy Technology Data Exchange (ETDEWEB)
Landberg, L. [Riso National Lab., Roskilde (Denmark)
1997-12-31
This paper describes a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is then calculated using the Riso PARK model. Because of the preliminary nature of the results, they will not be given; however, similar results from Europe will be presented.
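The prediction chain (regional NWP forecast, site correction, power curve, wake losses) can be sketched as below; the site factor, power-curve numbers and park efficiency are invented for illustration and merely stand in for the WAsP matrix and PARK model:

```python
# Toy forecast chain: NWP wind speed -> site-corrected speed -> turbine
# power curve -> farm total with a wake (park) efficiency. All numbers
# are illustrative assumptions.

def predict_farm_power(nwp_speed, site_factor, park_eff=0.92, n_turbines=30):
    """Farm power (kW) from a regional NWP wind speed forecast (m/s)."""
    site_speed = site_factor * nwp_speed  # stands in for the WAsP site matrix
    # illustrative power curve: cut-in 4 m/s, rated 600 kW at 15 m/s,
    # cut-out 25 m/s
    if site_speed < 4.0 or site_speed > 25.0:
        turbine_kw = 0.0
    elif site_speed >= 15.0:
        turbine_kw = 600.0
    else:
        turbine_kw = 600.0 * ((site_speed - 4.0) / 11.0) ** 3
    return n_turbines * park_eff * turbine_kw
```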
Modelling microbial interactions and food structure in predictive microbiology
Malakar, P.K.
2002-01-01
Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.
Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in the adoption of new technologies
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
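The squared-bias-plus-variance structure of MSEP_uncertain(X) can be sketched numerically; the two-observation data below are invented, and a real analysis would use the random effects ANOVA the abstract describes:

```python
import statistics

# Toy estimate of MSEP_uncertain(X): squared bias of the ensemble mean
# (the hindcast term) plus the variance across model variants (the
# simulation-experiment term). Data are illustrative.

def msep_uncertain(y_obs, ensemble_preds):
    """ensemble_preds holds one prediction vector per model variant."""
    per_obs = list(zip(*ensemble_preds))   # predictions grouped by observation
    means = [statistics.mean(p) for p in per_obs]
    sq_bias = statistics.mean((y - m) ** 2 for y, m in zip(y_obs, means))
    variance = statistics.mean(statistics.pvariance(p) for p in per_obs)
    return sq_bias + variance
```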
Proton Decay and Cosmology Strongly Constrain the Minimal SU(5) Supergravity Model
Lopez, Jorge L.; Pois, H.
1993-01-01
We present the results of an extensive exploration of the five-dimensional parameter space of the minimal $SU(5)$ supergravity model, including the constraints of a long enough proton lifetime ($\tau_p>1\times10^{32}\,\mathrm{yr}$) and a small enough neutralino cosmological relic density ($\Omega_\chi h^2_0\le1$). We find that the combined effect of these two constraints is quite severe, although still leaving a small region of parameter space with $m_{\tilde g,\tilde q}<1\,\mathrm{TeV}$. The allowed values of the proton lifetime extend up to $\tau_p\approx1\times10^{33}\,\mathrm{yr}$ and should be fully explored by the SuperKamiokande experiment. The proton lifetime cut also entails the following mass correlations and bounds: $m_h\lesssim100\,\mathrm{GeV}$, $m_\chi\approx{1\over2}m_{\chi^0_2}\approx0.15\,m_{\tilde g}$, $m_{\chi^0_2}\approx m_{\chi^+_1}$, and $m_\chi<85\,(115)\,\mathrm{GeV}$, $m_{\chi^0_2,\chi^+_1}<165\,(225)\,\mathrm{GeV}$ for $\alpha_3=0.113\,(0.120)$. Finally, the {\it combined} proton decay and cosmology constraints predict that if $m_h\gtrsim75\,(80)\,$...
STRONG GROUND MOTION PREDICTION OF URUMQI ACTIVE FAULT
Institute of Scientific and Technical Information of China (English)
沈军; 宋和平; 赵伯明
2009-01-01
The paper introduces strong ground motion prediction results based on a seismo-tectonic model of the Urumqi active fault. During active-fault detection and earthquake risk assessment in Urumqi, two seismo-tectonic models were set up: the thrust nappe structure in front of the northern Tienshan Mountains and that on the west side of the Bogeda arcuate structure. The possible maximum magnitude of the former is about 7.5 and that of the latter about 7.0, and the computational model was established accordingly. Based on the analysis of microtremor observations, combined with shallow seismic exploration, geological maps, topographic maps and borehole data, a three-dimensional underground velocity model was constructed. The statistical Green's function, 3D finite-difference and hybrid computation methods were used to study ground motion in the target area. The prediction results show that the fault structure, the mode of fault rupture and the three-dimensional velocity model have an evident influence on the distribution of ground motion, which is pronounced along the fault front, at basin margins, in areas with a thicker cover layer, and ahead of the rupture front.
2008-01-01
Using NAFTA's effect on Mexico's exports as a natural experiment, this paper conducts an empirical analysis of the explanatory power of the two strands of heterogeneous-firms trade models: the heterogeneous firms trade (HFT) model and the quality heterogeneous firms trade (QHFT) model. The paper first discusses the two models' common prediction on new goods' exports and their contrasting predictions on unit-price evolution. An empirical analysis shows strong supportive evidence on the...
Predicting Career Advancement with Structural Equation Modelling
Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia
2012-01-01
Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…
Modeling and prediction of surgical procedure times
P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)
2009-01-01
Accurate prediction of medical operation times is of crucial importance for cost-efficient operating room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f
Prediction Model of Sewing Technical Condition by Grey Neural Network
Institute of Scientific and Technical Information of China (English)
DONG Ying; FANG Fang; ZHANG Wei-yuan
2007-01-01
Grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. A prediction model was established based on the mechanical properties of different fabrics, measured with a KES instrument. Grey relational degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision: the average relative error was 4.08% for needle and 4.25% for stitch.
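One common formulation of the relational analysis used to rank candidate inputs is Deng's grey relational degree; the sketch below assumes the series are already normalized, and the exact variant used in the paper may differ:

```python
# Deng's grey relational degree between a reference series and one
# candidate input series (series assumed already normalized).

def grey_relational_degree(reference, candidate, rho=0.5):
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    dmin, dmax = min(deltas), max(deltas)
    if dmax == 0:                      # identical series: perfect relation
        return 1.0
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Candidate parameters with a higher degree relative to the target series would be chosen as network inputs.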
Active diagnosis of hybrid systems - A model predictive approach
2009-01-01
A method for active diagnosis of hybrid systems is proposed. The main idea is to predict the future output of both the normal and the faulty model of the system; at each time step an optimization problem is then solved with the objective of maximizing the difference between the predicted normal and faulty outputs, constrained by tolerable performance requirements. As in standard model predictive control, the first element of the optimal input is applied to the system and the whole procedure is repeate...
Ran, Shi-Ju
2016-05-01
In this work, a simple and fundamental numeric scheme dubbed the ab initio optimization principle (AOP) is proposed for the ground states of translationally invariant strongly correlated quantum lattice models. The idea is to transform a nondeterministic-polynomial-hard ground-state simulation with infinite degrees of freedom into a single optimization problem of a local function with a finite number of physical and ancillary degrees of freedom. This work contributes mainly in the following aspects: (1) AOP provides a simple and efficient scheme to simulate the ground state by solving a local optimization problem. Its solution contains two kinds of boundary states, one of which plays the role of the entanglement bath that mimics the interactions between a supercell and the infinite environment, while the other gives the ground state in a tensor network (TN) form. (2) In the TN setting, a novel decomposition named tensor ring decomposition (TRD) is proposed to implement AOP. Instead of following the contraction-truncation scheme used by many existing TN-based algorithms, TRD solves the contraction of a uniform TN in the opposite way, by encoding the contraction in a set of self-consistent equations that automatically reconstruct the whole TN, making the simulation simple and unified. (3) AOP inherits and develops the ideas of different well-established methods, including the density matrix renormalization group (DMRG), infinite time-evolving block decimation (iTEBD), network contractor dynamics, density matrix embedding theory, etc., providing a unified perspective that was previously missing in this field. (4) AOP as well as TRD give novel implications for existing TN-based algorithms: a modified iTEBD is suggested, and the two-dimensional (2D) AOP is argued to be an intrinsic 2D extension of DMRG based on infinite projected entangled pair states. This paper focuses on one-dimensional quantum models to present AOP. The benchmark is given on a transverse Ising
Evaluation of Fast-Time Wake Vortex Prediction Models
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
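The evaluation metric used above, the root-mean-square error between fast-time model predictions and paired Lidar wake measurements, is straightforward to compute:

```python
import math

# RMS error between model predictions and paired Lidar measurements.

def rms_error(predicted, measured):
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(predicted))
```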
Comparison of Simple Versus Performance-Based Fall Prediction Models
Directory of Open Access Journals (Sweden)
Shekhar K. Gadkaree BS
2015-05-01
Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared the area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
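The AUC used to compare these models has a direct rank interpretation that can be computed without any library; the scores below are invented for illustration:

```python
# AUC as the probability that a randomly chosen faller's risk score
# exceeds a randomly chosen non-faller's (ties count one half).

def auc(scores_pos, scores_neg):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```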
Testing and analysis of internal hardwood log defect prediction models
R. Edward. Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Refining the Committee Approach and Uncertainty Prediction in Hydrological Modelling
Kayastha, N.
2014-01-01
Due to the complexity of hydrological systems, a single model may be unable to capture the full range of a catchment response and accurately predict the streamflows. The multi-modelling approach opens up possibilities for handling such difficulties and allows improving the predictive capability of models.
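The committee idea reduces, in its simplest form, to a weighted combination of individual model outputs; how the weights are derived (e.g. from each model's calibration skill) is the refinement being studied, and the fixed weights here are an assumption for illustration:

```python
# Minimal committee of hydrological models: weighted combination of
# individual model streamflow predictions. Weights are illustrative.

def committee_prediction(model_preds, weights):
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, model_preds)) / total
```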
Adding propensity scores to pure prediction models fails to improve predictive performance
Directory of Open Access Journals (Sweden)
Amy S. Nowacki
2013-08-01
Background. Propensity score usage seems to be growing in popularity, leading researchers to question the possible role of propensity scores in prediction modeling, despite the lack of a theoretical rationale. It is suspected that such requests are due to a lack of differentiation regarding the goals of predictive modeling versus causal inference modeling. Therefore, the purpose of this study is to formally examine the effect of propensity scores on predictive performance. Our hypothesis is that a multivariable regression model that adjusts for all covariates will perform as well as or better than those models utilizing propensity scores with respect to model discrimination and calibration. Methods. The most commonly encountered statistical scenarios for medical prediction (logistic and proportional hazards regression) were used to investigate this research question. Random cross-validation was performed 500 times to correct for optimism. The multivariable regression models adjusting for all covariates were compared with models that included adjustment for or weighting with the propensity scores. The methods were compared based on three predictive performance measures: (1) concordance indices; (2) Brier scores; and (3) calibration curves. Results. Multivariable models adjusting for all covariates had the highest average concordance index, the lowest average Brier score, and the best calibration. Propensity score adjustment and inverse probability weighting models without adjustment for all covariates performed worse than full models and failed to improve predictive performance with full covariate adjustment. Conclusion. Propensity score techniques did not improve prediction performance measures beyond multivariable adjustment. Propensity scores are not recommended if the analytical goal is pure prediction modeling.
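One of the three performance measures compared, the Brier score, is simple enough to state directly; the probabilities and outcomes below are invented:

```python
# Brier score: mean squared difference between predicted probabilities
# and observed 0/1 outcomes. Lower values indicate better calibration
# and discrimination combined.

def brier_score(probs, outcomes):
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)
```

A model that always predicts 0.5 scores 0.25 regardless of the outcomes, which is the benchmark an informative model must beat.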
Predicting weed problems in maize cropping by species distribution modelling
Directory of Open Access Journals (Sweden)
Bürger, Jana
2014-02-01
Increasing maize cultivation and changed cropping practices promote the selection of typical maize weeds that may also profit strongly from climate change. Predicting potential weed problems is of high interest for plant production. Within the project KLIFF, experiments were combined with species distribution modelling for this task in the region of Lower Saxony, Germany. For our study, we modelled the ecological and damage niches of nine weed species that are significant and widespread in maize cropping in a number of European countries. Species distribution models describe the ecological niche of a species, i.e. the environmental conditions under which a species can maintain a vital population. It is also possible to estimate a damage niche, i.e. the conditions under which a species causes damage in agricultural crops. For this, we combined occurrence data from European national databases with high-resolution climate, soil and land use data. Models were also projected to simulated climate conditions for the time horizon 2070-2100 in order to estimate climate change effects. Modelling results indicate favourable conditions for typical maize weed occurrence virtually all over the study region, but only a few species are important in maize cropping. This is in good accordance with the findings of an earlier maize weed monitoring. The reaction to changing climate conditions is species-specific: for some species it is neutral (E. crus-galli), while other species may gain (Polygonum persicaria) or lose (Viola arvensis) large areas of suitable habitat. All species with damage potential under present conditions will remain important in maize cropping, and some more species will gain regional importance (Calystegia sepium, Setaria viridis).
Impact of modellers' decisions on hydrological a priori predictions
Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.
2014-06-01
In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the records needed for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their predictions in three steps, based on information added prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models, and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of
Econometric models for predicting confusion crop ratios
Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)
1979-01-01
Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.
PEEX Modelling Platform for Seamless Environmental Prediction
Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku
2017-04-01
The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. This contribution introduces the current state-of-the-art modelling platform and observation systems in the Pan-Eurasian region and presents the future baselines for coherent and coordinated research infrastructures in the PEEX domain. The PEEX modelling platform is characterized by a complex, seamless, integrated Earth System Modelling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. An ensemble approach is taken to integrate modelling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modelling, and modelling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modelling. The analyses of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.
Spatiotemporal properties of microsaccades: Model predictions and experimental tests
Zhou, Jian-Fang; Yuan, Wu-Jie; Zhou, Zhao
2016-10-01
Microsaccades are involuntary and very small eye movements during fixation. Recently, microsaccade-related neural dynamics have been extensively investigated both in experiments and by constructing neural network models. Experimentally, microsaccades also exhibit many behavioral properties. It is well known that behavioral properties reflect the underlying neural dynamical mechanisms and so are determined by neural dynamics. The behavioral properties resulting from neural responses to microsaccades, however, are not yet understood and are rarely studied theoretically. Linking neural dynamics to behavior is one of the central goals of neuroscience. In this paper, we provide behavioral predictions on the spatiotemporal properties of microsaccades according to microsaccade-induced neural dynamics in a cascading network model, which includes both retinal adaptation and short-term depression (STD) at thalamocortical synapses. We also successfully provide experimental tests in the statistical sense. Our results provide the first behavioral description of microsaccades based on neural dynamics induced by behaving activity, and thus link neural dynamics to microsaccadic behavior. These results strongly indicate that the cascading adaptations play an important role in the study of microsaccades. Our work may be useful for further investigations of microsaccadic behavioral properties and of the underlying neural dynamical mechanisms responsible for them.
Cs-137 fallout in Iceland, model predictions and measurements
Energy Technology Data Exchange (ETDEWEB)
Palsson, S.E.; Sigurgeirsson, M.A.; Gudnason, K. [Icelandic Radiation Protection Inst. (Iceland); Arnalds, O.; Karlsdottir, I.A. [Agricultural Research Inst. (Iceland); Palsdottir, P. [Icelandic Meteorological Office (Iceland)
2002-04-01
Essentially all of the fallout Cs-137 in Iceland came from the atmospheric nuclear weapons tests in the late fifties and early sixties; the addition from the accident at the Chernobyl Nuclear Power Plant was comparatively very small. Measurements of fallout from nuclear weapons tests started in Iceland over 40 years ago, and samples of soil, vegetation and agricultural products have been collected from various places and measured during this period. Considerable variability has been seen in the results, even between places close to each other. This is understandable given the mountainous terrain, changing strong winds and high levels of precipitation. The variability has been especially noticeable in the case of soil samples. The important role of uncultivated rangelands in Icelandic agriculture (e.g. for sheep farming) makes it necessary to estimate deposition for many remote areas. It has thus proven difficult to get a good overview of the distribution of the deposition and its subsequent transfer into agricultural products. Over a year ago an attempt was made to assess the distribution of Cs-137 fallout in Iceland. The approach is based on a model predicting deposition from precipitation data, in a similar manner to that used previously within the Arctic Monitoring and Assessment Programme (AMAP, 1999). One station close to Reykjavik has time series of Cs-137 deposition data and precipitation data from 1960 onwards. The AMAP deposition model was calibrated for Iceland using deposition and precipitation data from this station. (au)
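The calibration step can be sketched in its simplest form, a deposition-per-unit-precipitation factor fitted at the one station with both time series and then mapped onto other sites; the actual AMAP model is more detailed, and the numbers here are illustrative:

```python
# Simplest precipitation-based deposition sketch: a ratio calibration at
# the reference station, applied to other sites' precipitation records.
# The real AMAP model is more detailed; values are illustrative.

def calibrate_deposition(dep_series, precip_series):
    """Deposition per unit precipitation at the calibration station."""
    return sum(dep_series) / sum(precip_series)

def predict_deposition(precip, factor):
    """Map the calibrated factor onto another site's precipitation."""
    return factor * precip
```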