WorldWideScience

Sample records for model predicts large

  1. Large eddy simulation subgrid model for soot prediction

    Science.gov (United States)

    El-Asrag, Hossam Abd El-Raouf Mostafa

    Soot prediction in realistic systems is one of the most challenging problems in theoretical and applied combustion. Soot formation as a chemical process is very complicated and not fully understood. The major difficulty stems from the chemical complexity of the soot formation process as well as its strong coupling with the other thermochemical and fluid processes that occur simultaneously. Soot is a major byproduct of incomplete combustion, having a strong impact on the environment as well as on combustion efficiency. Therefore, innovative methods are needed to predict soot in realistic configurations in an accurate and yet computationally efficient way. In the current study, a new soot formation subgrid model is developed and reported here. The new model is designed to be used within the context of the Large Eddy Simulation (LES) framework, combined with Linear Eddy Mixing (LEM) as a subgrid combustion model. The final model can be applied equally to premixed and non-premixed flames over any required geometry and flow conditions in the free, the transition, and the continuum regimes. The soot dynamics are predicted using a Method of Moments approach with Lagrangian Interpolative Closure (MOMIC) for the fractional moments. Since no prior knowledge of the particle size distribution is required, the model is generally applicable. The current model accounts for the basic soot transport phenomena, such as transport by molecular diffusion and by thermophoretic forces. The model is first validated against experimental results for non-sooting swirling non-premixed and partially premixed flames. Next, a set of canonical premixed sooting flames is simulated, where the effects of turbulence, binary diffusivity, and C/O ratio on soot formation are studied. Finally, the model is validated against a non-premixed sooting jet flame. The effect of the flame structure on the different soot formation stages as well as on the particle size distribution is described. Good results are predicted with
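
    The fractional-moment closure lends itself to a short sketch: MOMIC evaluates non-integer-order moments by interpolating the logarithms of the integer moments, so no particle size distribution needs to be assumed. A minimal Python illustration, with a lognormal test population standing in for real soot data (an assumption, not data from this work):

    # Minimal sketch of interpolative closure for fractional moments,
    # assuming a lognormal test population of particle masses.
    import numpy as np
    from scipy.interpolate import lagrange

    m = np.random.lognormal(mean=0.0, sigma=0.5, size=100_000)  # particle masses
    orders = np.arange(4)                                       # r = 0, 1, 2, 3
    M = np.array([np.mean(m**r) for r in orders])               # integer moments

    poly = lagrange(orders, np.log10(M))   # interpolate log10(M_r) in the order r

    def fractional_moment(p):
        """Estimate M_p for non-integer p by interpolative closure."""
        return 10.0 ** poly(p)

    # M_{1/2}, for example, enters free-molecular coagulation rates
    print(fractional_moment(0.5), np.mean(m**0.5))  # interpolated vs direct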

  2. A Model for Predicting Thermomechanical Response of Large Space Structures.

    Science.gov (United States)

    1984-06-01

    …[94] for predicting the buckling loads associated with general instability of beam-like lattice trusses. Bazant and Christensen [95] present a… (only these fragments are recoverable; the remainder of this record is OCR debris from the report's DD Form 1473 documentation page)

  3. Predictability of the large relaxations in a cellular automaton model

    Energy Technology Data Exchange (ETDEWEB)

    Tejedor, Alejandro; Ambroj, Samuel; Gomez, Javier B; Pacheco, Amalio F [Faculty of Sciences, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)

    2008-09-19

    A simple one-dimensional cellular automaton model with threshold dynamics is introduced. It is loaded at a uniform rate and unloaded by abrupt relaxations. The cumulative distribution of the size of the relaxations is analytically computed and behaves as a power law with an exponent equal to -1. This coincides with the phenomenological Gutenberg-Richter behavior observed in seismology for the cumulative statistics of earthquakes at the regional or global scale. The key point of the model is the zero-load state of the system after the occurrence of any relaxation, no matter what its size. This leads to an equipartition of probability between all possible load configurations in the system during the successive loading cycles. Each cycle ends with the occurrence of the greatest, or characteristic, relaxation in the system. The duration of the cycles in the model is statistically distributed with a coefficient of variation ranging from 0.5 to 1. The predictability of the characteristic relaxations is evaluated by means of error diagrams. This model illustrates the value of taking refractory periods into account to obtain a considerable gain in the quality of the predictions.
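
    The error diagrams mentioned above score a predictor by trading the fraction of missed events against the fraction of time spent in alarm. A minimal sketch of that bookkeeping for an alarm that switches on a fixed refractory time after each event; the quasi-periodic event times below are synthetic stand-ins for the model's characteristic relaxations:

    # Sketch of an error-diagram evaluation: the alarm turns on once a
    # refractory time tau has elapsed since the last event, and stays on
    # until the next event occurs.
    import numpy as np

    rng = np.random.default_rng(0)
    event_times = np.cumsum(rng.weibull(2.0, size=2000) * 10.0)  # loading cycles

    def error_diagram_point(times, tau):
        """Return (fraction of time in alarm, fraction of missed events)."""
        waits = np.diff(times)               # cycle durations between events
        missed = np.sum(waits < tau)         # event arrives before alarm is on
        alarm_time = np.sum(np.maximum(waits - tau, 0.0))
        return alarm_time / np.sum(waits), missed / len(waits)

    for tau in [0.0, 5.0, 10.0, 15.0]:
        f_alarm, f_miss = error_diagram_point(event_times, tau)
        print(f"tau={tau:5.1f}  alarm fraction={f_alarm:.2f}  miss fraction={f_miss:.2f}")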

  4. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, motivated by an analysis of the characteristics and shortcomings of genetic algorithms and support vector machines. In the cloud computing environment, the SVM parameters are first optimized by a parallel genetic algorithm, and the optimized parallel SVM model is then used to predict traffic flow. Using traffic flow data from Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and a parallel GA-SVM model based on MPI (Message Passing Interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
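
    The GA-SVM idea reduces to a genetic search over SVR hyperparameters scored by cross-validation. A serial, single-machine toy sketch with synthetic flow data (not the paper's cloud-parallel implementation; the data and GA settings are invented):

    # Toy GA over SVR hyperparameters (C, gamma, epsilon), fitness = CV score.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 24, size=(300, 1))                # e.g. time of day
    y = 50 + 30 * np.sin(X[:, 0] / 24 * 2 * np.pi) + rng.normal(0, 3, 300)

    def fitness(genome):
        C, gamma, eps = np.exp(genome)                   # genomes in log-space
        model = SVR(C=C, gamma=gamma, epsilon=eps)
        return cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    pop = rng.normal(0.0, 1.0, size=(20, 3))             # initial population
    for generation in range(15):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-10:]]          # keep the best half
        children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.3, (10, 3))
        pop = np.vstack([parents, children])             # mutation-based offspring

    best = pop[np.argmax([fitness(g) for g in pop])]
    print("best (C, gamma, epsilon):", np.exp(best))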

  5. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    Science.gov (United States)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements an interior-point line-search algorithm to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a conductance model with nine ionic channels to obtain completed models, which we then used to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of the extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large-scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner, treating all data as equal quantities and requiring minimal additional insight.
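
    At its core, this kind of assimilation is bound-constrained optimization of model parameters against a recorded voltage trace. A toy sketch using a passive (leaky RC) membrane and scipy's L-BFGS-B optimizer in place of the paper's nine-channel model and interior-point method; all parameter values are invented:

    # Fit (C, g_L, E_L) of a toy RC neuron so the simulated voltage matches
    # a noisy "recording" generated from known ground-truth parameters.
    import numpy as np
    from scipy.optimize import minimize

    dt, T = 0.1, 500                  # time step (ms), number of samples
    I_inj = np.where((np.arange(T) * dt) % 100 < 50, 0.2, 0.0)  # pulse protocol

    def simulate(params):
        C, gL, EL = params            # capacitance, leak conductance, reversal
        V = np.empty(T); V[0] = EL
        for t in range(1, T):
            V[t] = V[t-1] + dt * (-gL * (V[t-1] - EL) + I_inj[t]) / C
        return V

    true = np.array([1.0, 0.05, -65.0])
    data = simulate(true) + np.random.default_rng(2).normal(0, 0.5, T)

    loss = lambda p: np.mean((simulate(p) - data) ** 2)
    fit = minimize(loss, x0=[0.5, 0.1, -60.0], method="L-BFGS-B",
                   bounds=[(0.1, 5.0), (1e-3, 1.0), (-90.0, -40.0)])
    print(fit.x)                      # recovered (C, g_L, E_L)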

  6. Nonlinear Model-Based Predictive Control applied to Large Scale Cryogenic Facilities

    CERN Document Server

    Blanco Vinuela, Enrique; de Prada Moraga, Cesar

    2001-01-01

    The thesis addresses the study, analysis, development, and finally the real implementation of an advanced control system for the 1.8 K Cooling Loop of the LHC (Large Hadron Collider) accelerator. The LHC, the next accelerator being built at CERN (the European Organization for Nuclear Research), will use superconducting magnets operating below a temperature of 1.9 K along a circumference of 27 kilometers. The temperature of these magnets is a control parameter with strict operating constraints. The first control implementations applied a procedure that included linear identification, modelling, and regulation using a linear predictive controller. It largely improved the overall performance of the plant with respect to a classical PID regulator, but the nature of the cryogenic processes pointed to the need for a more adequate technique, such as a nonlinear methodology. This thesis is a first step towards a global regulation strategy for the overall control of the LHC cells when they operate simultaneously....

  7. A hydrogeomorphic river network model predicts where and why hyporheic exchange is important in large basins

    Science.gov (United States)

    Gomez-Velez, Jesus D.; Harvey, Judson W.

    2014-09-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data and by models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bed forms rather than lateral exchange through meanders dominates hyporheic fluxes and turnover rates along river corridors. Per kilometer, low-order streams have a biogeochemical potential at least 2 orders of magnitude larger than higher-order streams. However, when biogeochemical potential is examined per average length of each stream order, low- and high-order streams were often found to be comparable. As a result, the hyporheic zone's intrinsic potential for biogeochemical transformations is comparable across different stream orders, but the greater river miles and larger total streambed area of lower order streams result in the highest cumulative impact from low-order streams. Lateral exchange through meander banks may be important in some cases but generally only in large rivers.

  8. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    Science.gov (United States)

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J R; Ballam, Joan M

    2015-04-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee populations, and weather conditions across winter were monitored at three locations across the USA. In order to minimize the influence of various known contributing factors and their interactions, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between the load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to derive a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health.

  9. Economic Model Predictive Control for Large-Scale and Distributed Energy Systems

    DEFF Research Database (Denmark)

    Standardi, Laura

    In this thesis, we consider control strategies for large and distributed energy systems that are important for the implementation of smart grid technologies. An electrical grid has to ensure reliability and avoid long-term interruptions in the power supply. Moreover, the share of Renewable Energy Sources (RESs) in smart grids is increasing, and these energy sources bring uncertainty to production due to their fluctuations. Hence, smart grids need suitable control systems that are able to continuously balance power production and consumption. The related control problems involve stochastic variables and are multivariable; they are hard, or impossible, to split into single-input, single-output control systems. We apply Economic Model Predictive Control (EMPC), an MPC strategy that can handle such multivariable problems…
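
    The flavor of economic MPC can be sketched as a receding-horizon linear program: meet forecast demand at least cost, apply the first move, then re-solve at the next step. Prices, capacities, and the demand forecast below are invented for illustration:

    # One economic-MPC solve: cheapest dispatch of two generators over a
    # 6-hour horizon. In closed loop, this solve is repeated every step
    # with updated forecasts and only the first move is applied.
    import numpy as np
    from scipy.optimize import linprog

    H = 6
    cost = np.array([10.0, 40.0])           # $/MWh: baseload vs peaking unit
    cap = np.array([60.0, 100.0])           # MW capacity of each unit
    demand = np.array([50, 80, 120, 110, 70, 55], dtype=float)  # MW forecast

    # decision vector x = [p1(0), p2(0), p1(1), p2(1), ...]
    c = np.tile(cost, H)
    A_eq = np.zeros((H, 2 * H))
    for t in range(H):
        A_eq[t, 2*t:2*t+2] = 1.0            # p1(t) + p2(t) = demand(t)
    bounds = [(0.0, cap[i % 2]) for i in range(2 * H)]

    sol = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds)
    plan = sol.x.reshape(H, 2)
    print("first control move (MW from each unit):", plan[0])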

  10. Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding.

    Science.gov (United States)

    Dai, Wenrui; Xiong, Hongkai; Wang, Jia; Zheng, Yuan F

    2014-02-01

    The inherent statistical correlation for context-based prediction and the structural interdependencies for local coherence are not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model where the optimal correlated prediction for a set of pixels is obtained in the sense of the least code length. It not only exploits the spatial statistical correlations for the optimal prediction directly based on 2D contexts, but also formulates the data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under the joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally to make a max-margin estimation for a correlated region. Specifically, the model aims to produce multiple predictions in the blocks, with the model parameters learned in such a way that the distinction between the actual pixel and all possible estimations is maximized. It is proved that, with the growth of sample size, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. Incorporated into the lossless image coding framework, the proposed model outperforms most reported prediction schemes.

  11. A low-dimensional model predicting geometry-dependent dynamics of large-scale coherent structures in turbulence

    CERN Document Server

    Bai, Kunlun; Brown, Eric

    2015-01-01

    We test the ability of a general low-dimensional model for turbulence to predict geometry-dependent dynamics of large-scale coherent structures, such as convection rolls. The model consists of stochastic ordinary differential equations, which are derived as a function of boundary geometry from the Navier-Stokes equations (Brown and Ahlers 2008). We test the model using Rayleigh-Bénard convection experiments in a cubic container. The model predicts a new mode in which the alignment of a convection roll switches between diagonals. We observe this mode with a measured switching rate within 30% of the prediction.
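
    A toy version of such a model: Euler-Maruyama integration of the roll orientation theta in a potential whose minima lie on the diagonals of a cubic cell, plus noise, reproduces diagonal switching qualitatively. The potential shape and coefficients are illustrative assumptions, not the fitted values of Brown and Ahlers (2008):

    # Langevin dynamics of roll orientation in a four-well potential.
    import numpy as np

    rng = np.random.default_rng(3)
    dt, n_steps = 1e-3, 200_000
    gamma, noise = 1.0, 0.8              # potential stiffness, noise amplitude

    theta = np.empty(n_steps)
    theta[0] = np.pi / 4                 # start aligned with one diagonal
    for i in range(1, n_steps):
        drift = gamma * np.sin(4 * theta[i-1])   # restores theta to pi/4 + k*pi/2
        theta[i] = theta[i-1] + drift * dt + noise * np.sqrt(dt) * rng.normal()

    # count switches between adjacent diagonal orientations
    wells = np.round((theta - np.pi / 4) / (np.pi / 2))
    print("diagonal switches:", int(np.abs(np.diff(wells)).sum()))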

  12. On the Deviation of the Standard Model Predictions in the Large Hadron Collider Experiments (Letters to Progress in Physics)

    Directory of Open Access Journals (Sweden)

    Belyakov A. V.

    2016-01-01

    The newest Large Hadron Collider experiments targeting the search for New Physics have manifested the possibility of new heavy particles. Such particles are not predicted in the framework of the Standard Model; however, their existence is lawful in the framework of another model, based on J. A. Wheeler's geometrodynamics.

  13. Predictions of Wave-Induced Ship Motions and Loads by Large-Scale Model Measurement at Sea and Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Jialong Jiao

    2016-06-01

    In order to accurately predict the wave-induced motion and load responses of ships, a new experimental methodology is proposed. The new method includes conducting tests with large-scale models under natural environmental conditions. The proposed testing technique for large-scale model measurement is general and applicable to a wide range of standard hydrodynamics experiments in naval architecture. In this study, a large-scale segmented self-propelled model for investigating seakeeping performance and wave load behaviour, together with the associated testing systems, was designed and experiments were performed. A 2-hour voyage trial of the large-scale model, aimed at performing a series of simulation exercises, was carried out at Huludao harbour in October 2014. During the voyage, onboard systems, operated by crew, were used to measure and record the sea waves and the model responses. Post-voyage analysis of the measurements, both of the sea waves and of the model's responses, was carried out to obtain short-term predictions of the ship's motion and load responses under the corresponding sea state. Furthermore, a numerical short-term prediction was made with an in-house code, and the result was compared with the experimental data. Long-term extreme predictions of motions and loads were also carried out based on the numerical short-term results.

  14. A large animal neuropathic pain model in sheep: a strategy for improving the predictability of preclinical models for therapeutic development

    Directory of Open Access Journals (Sweden)

    Wilkes D

    2012-10-01

    Denise Wilkes (Department of Anesthesiology, University of Texas Medical Branch, Galveston, TX, USA), Guangwen Li and Li-Yen Mae Huang (Department of Neuroscience and Cell Biology, University of Texas Medical Branch), Carmina F Angeles (Department of Neurosurgery, University of Texas Medical Branch), Joel T Patterson (Neurospine Institute, Eugene, OR, USA). Background: Evaluation of analgesics in large animals is a necessary step in the development of better pain medications or gene therapy prior to clinical trials. However, chronic neuropathic pain models in large animals are limited. To address this deficiency, we developed a neuropathic pain model in sheep, which shares many anatomical similarities with humans in spine dimensions and cerebrospinal fluid volume. Methods: A neuropathic pain state was induced in sheep by tight ligation and axotomy of the common peroneal nerve. The analgesic effect of intrathecal (IT) morphine was investigated. An interspecies comparison was conducted by analyzing the ceiling doses of IT morphine for humans, sheep, and rats. Results: Peroneal nerve injury (PNI) produced an 86% decrease in the von Frey filament-evoked withdrawal threshold on postsurgery day 3, and the decrease lasted for the 8-week test period. Compared to the pre-injury, sham, and contralateral hindlimbs, the IT morphine dose that produces 50% of maximum analgesia (ED50) for the injured PNI hindlimb was 1.8-fold larger, and Emax, the maximal analgesic effect, was 6.1-fold lower. The sheep model closely predicts the human IT morphine ceiling dose by allometric scaling. This is in contrast to the approximately 10-fold lower morphine ceiling dose predicted by the rat spinal nerve ligation or spared nerve injury models. Conclusion: The PNI sheep model has a fast onset and shows stable and long-lasting pain behavioral characteristics. Since the antinociceptive properties of IT morphine are similar to those observed in humans, the PNI sheep model will be a useful tool for the development of analgesics. Its large size and consistent chronic pain
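
    Allometric (interspecies) scaling conventionally treats dose as a power law in body mass. In generic notation, with the metabolic exponent b ≈ 0.75 as the usual empirical assumption rather than a value reported in this abstract:

    \[
      D = a\,M^{b},
      \qquad
      \frac{D_{\mathrm{human}}}{D_{\mathrm{sheep}}}
        = \left(\frac{M_{\mathrm{human}}}{M_{\mathrm{sheep}}}\right)^{\!b},
      \quad b \approx 0.75 ,
    \]

    so a ceiling dose measured in sheep extrapolates to humans through the mass ratio raised to the assumed exponent.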

  15. An efficient model for prediction of underwater noise due to pile driving at large ranges

    NARCIS (Netherlands)

    Nijhof, M.J.J.; Binnerts, B.; Jong, C.A.F. de; Ainslie, M.A.

    2014-01-01

    Modelling the sound levels in the water column due to pile driving operations nearby and out to large distances from the pile is crucial in assessing the likely impact on marine life. Standard numerical techniques for modelling the sound radiation from mechanical structures such as the finite elemen

  16. Hierarchical Bayesian spatial models for predicting multiple forest variables using waveform LiDAR, hyperspectral imagery, and large inventory datasets

    Science.gov (United States)

    Finley, Andrew O.; Banerjee, Sudipto; Cook, Bruce D.; Bradford, John B.

    2013-01-01

    In this paper we detail a multivariate spatial regression model that couples LiDAR, hyperspectral and forest inventory data to predict forest outcome variables at a high spatial resolution. The proposed model is used to analyze forest inventory data collected on the US Forest Service Penobscot Experimental Forest (PEF), ME, USA. In addition to helping meet the regression model's assumptions, results from the PEF analysis suggest that the addition of multivariate spatial random effects improves model fit and predictive ability, compared with two commonly applied modeling approaches. This improvement results from explicitly modeling the covariation among forest outcome variables and spatial dependence among observations through the random effects. Direct application of such multivariate models to even moderately large datasets is often computationally infeasible because of cubic order matrix algorithms involved in estimation. We apply a spatial dimension reduction technique to help overcome this computational hurdle without sacrificing richness in modeling.

  17. A model for the evolution of large density perturbations: normalization and predictions

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Gonzalez, E.; Sanz, J.L. (Cantabria Universidad, Santander (Spain))

    1991-01-01

    The nonlinear evolution of matter density fluctuations in the universe is studied. The Zeldovich solution is applied to the quasi-linear regime, and a model to stop the fluctuations from growing in the very nonlinear regime is considered. The model is based on the virialization of collapsing pancakes. The density contrast of a typical pancake at the time it starts to relax is given for universes with different values of Omega. With this model, it is possible to calculate the probability density of the final density fluctuations. Results on the normalization of the power spectrum of the initial density fluctuations are given as a function of Omega. Predictions of the model for the filling factor of superclusters and voids are compared with observations. 37 refs.
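
    For reference, the Zeldovich approximation maps initial (Lagrangian) positions q to comoving positions x by straight-line displacements, with the density contrast following from the deformation tensor (standard notation, not reproduced from the paper):

    \[
      \mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\mathbf{s}(\mathbf{q}),
      \qquad
      1 + \delta(\mathbf{x},t)
        = \prod_{i=1}^{3} \bigl[\,1 - D(t)\,\lambda_i(\mathbf{q})\,\bigr]^{-1},
    \]

    where D(t) is the linear growth factor, s the initial displacement field, and \lambda_i the eigenvalues of the deformation tensor \partial s_i/\partial q_j; pancake collapse occurs where the largest eigenvalue reaches D(t)\,\lambda_1 = 1.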

  18. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    Science.gov (United States)

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data, or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid with many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
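
    For flavor, two dependent tasks in stock Luigi, the base system that SciLuigi extends (this sketch deliberately uses only plain Luigi, not SciLuigi's own API):

    # Two dependent tasks: train a model after featurizing data. Luigi
    # re-runs only tasks whose outputs are missing, which provides the
    # fault-tolerant automation the abstract refers to.
    import luigi

    class Featurize(luigi.Task):
        def output(self):
            return luigi.LocalTarget("features.csv")
        def run(self):
            with self.output().open("w") as f:
                f.write("compound,feature\nmol1,0.3\n")  # placeholder features

    class TrainModel(luigi.Task):
        def requires(self):
            return Featurize()          # dependency, resolved by the scheduler
        def output(self):
            return luigi.LocalTarget("model.txt")
        def run(self):
            with self.input().open() as fin, self.output().open("w") as fout:
                fout.write(f"model trained on {len(fin.readlines()) - 1} rows\n")

    if __name__ == "__main__":
        luigi.build([TrainModel()], local_scheduler=True)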

  19. Variably-saturated flow in large weighing lysimeters under dry conditions: inverse and predictive modeling

    Science.gov (United States)

    Iden, Sascha; Reineke, Daniela; Koonce, Jeremy; Berli, Markus; Durner, Wolfgang

    2015-04-01

    A reliable quantification of the soil water balance in semi-arid regions requires an accurate determination of bare soil evaporation. Modeling of soil water movement in relatively dry soils and the quantitative prediction of evaporation rates and groundwater recharge pose considerable challenges in these regions. Actual evaporation from dry soil cannot be predicted without detailed knowledge of the complex interplay between liquid, vapor, and heat flow, and soil hydraulic properties exert a strong influence on evaporation rates during stage-two evaporation. We have analyzed data from the SEPHAS lysimeter facility in Boulder City (NV), which was installed to investigate the near-surface processes of water and energy exchange in desert environments. The scientific instrumentation consists of 152 sensors per lysimeter, which measured soil temperature, soil water content, and soil water potential. Data from three weighing lysimeters (3 m long, surface area 4 m²) were used to identify the effective soil hydraulic properties of the disturbed soil monoliths by inverse modeling with the Richards equation, assuming isothermal flow conditions. Results indicate that the observed soil water content in 8 different soil depths can be well matched for all three lysimeters and that the effective soil hydraulic properties of the three lysimeters agree well. These results could only be obtained with a flexible model of the soil hydraulic properties which guaranteed physical plausibility of water retention towards complete dryness and accounted for capillary, film, and isothermal vapor flow. Conversely, flow models using traditional parameterizations of the soil hydraulic properties were not able to match the observed evaporation fluxes and water contents. After identifying the system properties by inverse modeling, we checked the possibility to forecast evaporation rates by running a fully coupled water, heat, and vapor flow model which solved the energy balance of the soil surface. In these

  20. Supersaturation calculation in large eddy simulation models for prediction of the droplet number concentration

    Directory of Open Access Journals (Sweden)

    O. Thouron

    2011-12-01

    A new parameterization scheme is described for the calculation of supersaturation in LES models that specifically aims at the simulation of cloud condensation nuclei (CCN) activation and the prediction of the droplet number concentration. The scheme is tested against current parameterizations in the framework of the Meso-NH LES model. It is shown that the saturation adjustment scheme based on parameterizations of CCN activation in a convective updraft overestimates the droplet concentration in the cloud core, while it cannot simulate cloud-top supersaturation production due to mixing between cloudy and clear air. A supersaturation diagnostic scheme mitigates these artefacts by accounting for the presence of already condensed water in the cloud core, but it is too sensitive to supersaturation fluctuations at cloud top and produces spurious CCN activation during cloud-top mixing. The proposed pseudo-prognostic scheme shows performance similar to the diagnostic one in the cloud core but significantly mitigates CCN activation at cloud top.

  21. Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation.

    Science.gov (United States)

    Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B

    2006-04-15

    Modeling air pollutant transport and dispersion in urban environments is especially challenging due to complex ground topography. In this study, we describe a large eddy simulation (LES) tool including a new dynamic subgrid closure and boundary treatment to model urban dispersion problems. The numerical model is developed, validated, and extended to a realistic urban layout. In such applications fairly coarse grids must be used in which each building can be represented using relatively few grid-points only. By carrying out LES of flow around a square cylinder and of flow over surface-mounted cubes, the coarsest resolution required to resolve the bluff body's cross section while still producing meaningful results is established. Specifically, we perform grid refinement studies showing that at least 6-8 grid points across the bluff body are required for reasonable results. The performance of several subgrid models is also compared. Although effects of the subgrid models on the mean flow are found to be small, dynamic Lagrangian models give a physically more realistic subgrid-scale (SGS) viscosity field. When scale-dependence is taken into consideration, these models lead to more realistic resolved fluctuating velocities and spectra. These results set the minimum grid resolution and subgrid model requirements needed to apply LES in simulations of neutral atmospheric boundary layer flow and scalar transport over a realistic urban geometry. The results also illustrate the advantages of LES over traditional modeling approaches, particularly its ability to take into account the complex boundary details and the unsteady nature of atmospheric boundary layer flow. Thus LES can be used to evaluate probabilities of extreme events (such as probabilities of exceeding threshold pollutant concentrations). Some comments about computer resources required for LES are also included.

  22. An Anthropometric-Based Subject-Specific Finite Element Model of the Human Breast for Predicting Large Deformations

    Science.gov (United States)

    Pianigiani, Silvia; Ruggiero, Leonardo; Innocenti, Bernardo

    2015-01-01

    The large deformation of the human breast threatens proper nodule tracking when the subject's mammograms are used as pre-planning data for biopsy. However, techniques capable of accurately supporting the surgeons during biopsy are missing. Finite element (FE) models are at the basis of currently investigated methodologies to track nodule displacement. Nonetheless, the impact of breast material modeling on the mechanical response of its tissues (e.g., tumors) is not clear. This study proposes a subject-specific FE model of the breast, obtained by anthropometric measurements, to predict breast large deformation. A healthy-breast subject-specific parametric FE model was developed and validated against cranio-caudal (CC) and medio-lateral oblique (MLO) mammograms. The model was subsequently modified to include nodules and utilized to investigate the effect of nodule size, type, and material modeling on nodule shift under CC, MLO, and gravity loads. Results show that a Mooney–Rivlin material model can estimate healthy breast large deformation. For a pathological breast under CC compression, the nodule displacement is very close to zero when a linear elastic material model is used. Finally, when nodules are modeled including tumor material properties, under CC, MLO, or gravity loads, the nodule shift shows a ~15% average relative difference. PMID:26734604
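
    For reference, the two-parameter Mooney–Rivlin strain-energy density standard in such soft-tissue models is (material constants are tissue-specific and not given in the abstract):

    \[
      W = C_{10}\,(\bar I_1 - 3) + C_{01}\,(\bar I_2 - 3)
          + \tfrac{1}{D_1}\,(J - 1)^2,
    \]

    with \bar I_1, \bar I_2 the first and second invariants of the isochoric left Cauchy-Green tensor, J = det F, and C_{10}, C_{01}, D_1 material constants fitted to the tissue.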

  23. The potential of large studies for building genetic risk prediction models

    Science.gov (United States)

    NCI scientists have developed a new paradigm to assess hereditary risk prediction in common diseases, such as prostate cancer. This genetic risk prediction concept is based on polygenic analysis—the study of a group of common DNA sequences, known as single nucleotide polymorphisms (SNPs).

  24. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    Science.gov (United States)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are highly sensitive to environmental, natural, and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change-history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction, and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  25. A Comparison of Model-Scale Experimental Measurements and Computational Predictions for a Large Transom-Stern Wave

    CERN Document Server

    Drazen, David A; Fu, Thomas C; Beale, Kristine L C; O'Shea, Thomas T; Brucker, Kyle A; Dommermuth, Douglas G; Wyatt, Donald C; Bhushan, Shanti; Carrica, Pablo M; Stern, Fred

    2014-01-01

    The flow field generated by a transom stern hull form is a complex, broad-banded, three-dimensional system marked by a large breaking wave. This unsteady multiphase turbulent flow feature is difficult to study experimentally and simulate numerically. Recent model-scale experimental measurements and numerical predictions of the wave-elevation topology behind a transom-sterned hull form, Model 5673, are compared and assessed in this paper. The mean height, surface roughness (RMS), and spectra of the breaking stern-waves were measured by Light Detection And Ranging (LiDAR) and Quantitative Visualization (QViz) sensors over a range of model speeds covering both wet- and dry-transom operating conditions. Numerical predictions for this data set from two Office of Naval Research (ONR) supported naval-design codes, Numerical Flow Analysis (NFA) and CFDship-Iowa-V.4, have been performed. Comparisons of experimental data, including LiDAR and QViz measurements, to the numerical predictions for wet-transom and dry transo...

  26. Identifying sensitive areas of adaptive observations for prediction of the Kuroshio large meander using a shallow-water model

    Science.gov (United States)

    Zou, Guang'an; Wang, Qiang; Mu, Mu

    2016-09-01

    Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.
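
    In standard notation, the CNOP is the initial perturbation that maximizes the nonlinear forecast departure under an amplitude constraint (generic notation, not copied from this paper):

    \[
      \delta x_0^{*} \;=\; \arg\max_{\|\delta x_0\| \le \beta}
         \bigl\| M_{\tau}(x_0 + \delta x_0) - M_{\tau}(x_0) \bigr\|,
    \]

    where M_\tau is the nonlinear model propagator over lead time \tau and \beta the prescribed amplitude bound; the FSV is the linearized analogue, i.e., the leading singular vector of the tangent-linear propagator.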

  27. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    The paper presents three time-warning distances for the safe driving of multiple groups of vehicles, treated as a large-scale system, in a highway tunnel environment, based on a distributed model predictive control approach. Generally speaking, the system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. The optimization of each group considers both local performance and the characteristics of neighboring subgroups, which ensures global optimization performance. Second, the three time-warning distances are studied based on the basic principles used for highway intelligent space (HIS), and the information framework concept is applied to the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles under fog, rain, or snow conditions.

  28. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Kalashnikova, Irina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Computational Mathematics; Arunajatesan, Srinivasan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Aerosciences Dept.; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Uncertainty Quantification and Optimization Dept.; Fike, Jeffrey A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Component Science and Mechanics Dept.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier
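
    The snapshot-POD step at the heart of such ROMs fits in a few lines: collect snapshots, take an SVD, and keep the leading modes as a reduced basis. A minimal sketch with a toy travelling-pulse "simulation" (the Galerkin projection of the governing equations onto the basis, the second half of the method, is not shown):

    # Snapshot POD via SVD, with energy-based mode truncation.
    import numpy as np

    x = np.linspace(0.0, 1.0, 200)
    snapshots = np.column_stack([
        np.exp(-((x - 0.2 - 0.5 * t) ** 2) / 0.005)   # state at time t
        for t in np.linspace(0.0, 1.0, 50)
    ])                                                 # shape (200, 50)

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1        # modes for 99.9% energy
    basis = U[:, :r]                                   # reduced basis

    # project a full-order state onto the ROM subspace and reconstruct
    u = snapshots[:, 25]
    u_rom = basis @ (basis.T @ u)
    print(f"{r} modes, relative reconstruction error "
          f"{np.linalg.norm(u - u_rom) / np.linalg.norm(u):.2e}")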

  29. Stochastic backscatter modelling for the prediction of pollutant removal from an urban street canyon: A large-eddy simulation

    Science.gov (United States)

    O'Neill, J. J.; Cai, X.-M.; Kinnersley, R.

    2016-10-01

    The large-eddy simulation (LES) approach has recently exhibited its appealing capability of capturing turbulent processes inside street canyons and the urban boundary layer aloft, and its potential for deriving the bulk parameters adopted in low-cost operational urban dispersion models. However, the thin roof-level shear layer may be under-resolved in most LES set-ups and thus sophisticated subgrid-scale (SGS) parameterisations may be required. In this paper, we consider the important case of pollutant removal from an urban street canyon of unit aspect ratio (i.e. building height equal to street width) with the external flow perpendicular to the street. We show that by employing a stochastic SGS model that explicitly accounts for backscatter (energy transfer from unresolved to resolved scales), the pollutant removal process is better simulated compared with the use of a simpler (fully dissipative) but widely-used SGS model. The backscatter induces additional mixing within the shear layer which acts to increase the rate of pollutant removal from the street canyon, giving better agreement with a recent wind-tunnel experiment. The exchange velocity, an important parameter in many operational models that determines the mass transfer between the urban canopy and the external flow, is predicted to be around 15% larger with the backscatter SGS model; consequently, the steady-state mean pollutant concentration within the street canyon is around 15% lower. A database of exchange velocities for various other urban configurations could be generated and used as improved input for operational street canyon models.

  30. A predictive model of muscle excitations based on muscle modularity for a large repertoire of human locomotion conditions.

    Science.gov (United States)

    Gonzalez-Vargas, Jose; Sartori, Massimo; Dosen, Strahinja; Torricelli, Diego; Pons, Jose L; Farina, Dario

    2015-01-01

    Humans can efficiently walk across a large variety of terrains and locomotion conditions with little or no mental effort. It has been hypothesized that the nervous system simplifies neuromuscular control by using muscle synergies, thus organizing multi-muscle activity into a small number of coordinative co-activation modules. In the present study we investigated how muscle modularity is structured across a large repertoire of locomotion conditions including five different speeds and five different ground elevations. For this we have used the non-negative matrix factorization technique in order to explain EMG experimental data with a low-dimensional set of four motor components. In this context each motor component is composed of a non-negative factor and the associated muscle weightings. Furthermore, we have investigated if the proposed descriptive analysis of muscle modularity could be translated into a predictive model that could: (1) Estimate how motor components modulate across locomotion speeds and ground elevations. This implies not only estimating the non-negative factors' temporal characteristics, but also the associated muscle weighting variations. (2) Estimate how the resulting muscle excitations modulate across novel locomotion conditions and subjects. The results showed three major distinctive features of muscle modularity: (1) the number of motor components was preserved across all locomotion conditions, (2) the non-negative factors were consistent in shape and timing across all locomotion conditions, and (3) the muscle weightings were modulated as distinctive functions of locomotion speed and ground elevation. Results also showed that the developed predictive model was able to reproduce well the muscle modularity of un-modeled data, i.e., novel subjects and conditions. Muscle weightings were reconstructed with a cross-correlation factor greater than 70% and a root mean square error less than 0.10. Furthermore, the generated muscle excitations matched
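
    The decomposition step can be sketched with scikit-learn's non-negative matrix factorization; synthetic envelopes stand in for the rectified, low-pass-filtered EMG used in such studies:

    # Extract four motor components from a muscles-by-time EMG matrix.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(4)
    n_muscles, n_samples, n_modules = 16, 1000, 4

    # synthetic EMG: non-negative mixtures of 4 latent activation patterns
    true_W = rng.gamma(2.0, 1.0, size=(n_muscles, n_modules))   # muscle weightings
    t = np.linspace(0, 1, n_samples)
    true_H = np.stack([np.clip(np.sin(2 * np.pi * (t - d)), 0, None)
                       for d in (0.0, 0.25, 0.5, 0.75)])        # activation factors
    emg = true_W @ true_H + 0.05 * rng.random((n_muscles, n_samples))

    model = NMF(n_components=n_modules, init="nndsvda", max_iter=500)
    W = model.fit_transform(emg)     # muscle weightings (n_muscles x 4)
    H = model.components_            # non-negative temporal factors (4 x n_samples)

    # variance accounted for (VAF), a common adequacy check for synergy models
    vaf = 1 - np.linalg.norm(emg - W @ H)**2 / np.linalg.norm(emg)**2
    print(f"VAF with {n_modules} modules: {vaf:.3f}")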

  31. Occurrence of large and medium-sized mammals: Occurrence but not count models predict pronghorn distribution: Chapter 8

    Science.gov (United States)

    2011-01-01

    Management of medium to large-sized terrestrial mammals (Antilocapridae, Canidae, Cervidae, Leporidae, Mustelidae, Ochotonidae) in the western United States is multifaceted and complex. Species in this group generally are charismatic and provide economic opportunities, although some are considered a nuisance at one extreme or are listed as species of conservation concern at the other. Understanding the relative influence of land cover, habitat fragmentation, and human land use on their distribution during the breeding season is imperative to inform management decisions on land use and conservation planning for these species. We surveyed medium to large-sized sagebrush (Artemisia spp.)-associated mammal species in 2005 and 2006 on 141 random transects (mean length = 1.1 km) in the Wyoming Basins, an area undergoing rapid land cover transformation due to human actions including energy development. Overall, we observed 10 species but only obtained enough observations of pronghorn (Antilocapra americana) to develop spatially explicit distribution models. For pronghorn, occurrence was positively related to the proportion of sagebrush land cover within 0.27 km, mixed shrubland land cover within 3 km, riparian land cover within 5 km, Normalized Difference Vegetation Index (NDVI) within 0.27 km, road density within 5 km, and decay distance to power line corridors at 1 km, and negatively to salt-desert shrubland cover within 18 km and an interaction between sagebrush and NDVI within 0.27 km. We found excellent predictive capability of this model when evaluated with independent test data. The model provides a basis for assessing the effects of proposed development on pronghorn and can aid planning efforts to avoid or mitigate adverse effects on pronghorn.

  32. Design of a model to predict surge capacity bottlenecks for burn mass casualties at a large academic medical center.

    Science.gov (United States)

    Abir, Mahshid; Davis, Matthew M; Sankar, Pratap; Wong, Andrew C; Wang, Stewart C

    2013-02-01

    To design and test a model to predict surge capacity bottlenecks at a large academic medical center in response to a mass-casualty incident (MCI) involving multiple burn victims. Using the simulation software ProModel, a model of patient flow and anticipated resource use, according to principles of disaster management, was developed based upon historical data from the University Hospital of the University of Michigan Health System. Model inputs included: (a) age and weight distribution for casualties, and distribution of size and depth of burns; (b) rate of arrival of casualties to the hospital, and triage to ward or critical care settings; (c) eligibility for early discharge of non-MCI inpatients at time of MCI; (d) baseline occupancy of intensive care unit (ICU), surgical step-down, and ward; (e) staff availability: number of physicians, nurses, and respiratory therapists, and the expected ratio of each group to patients; (f) floor and operating room resources: anticipating the need for mechanical ventilators, burn care and surgical resources, blood products, and intravenous fluids; (g) average hospital length of stay and mortality rate for patients with inhalation injury and different size burns; and (h) average number of times that different size burns undergo surgery. Key model outputs include time to bottleneck for each limiting resource and average waiting time to hospital bed availability. Given base-case model assumptions (including 100 mass casualties with an inter-arrival rate to the hospital of one patient every three minutes), hospital utilization is constrained within the first 120 minutes to 21 casualties, due to the limited number of beds. The first bottleneck is attributable to exhausting critical care beds, followed by floor beds. Given this limitation in number of patients, the temporal order of the ensuing bottlenecks is as follows: Lactated Ringer's solution (4 h), silver sulfadiazine/Silvadene (6 h), albumin (48 h), thrombin topical (72 h), type
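
    The paper builds its simulation in the commercial ProModel package; the same patient-flow logic can be sketched with the open-source simpy library. The bed count and length of stay below are assumptions for illustration, not the hospital's figures:

    # Discrete-event sketch: casualties arrive every 3 minutes and compete
    # for a fixed pool of critical-care beds; we log how the beds saturate.
    import simpy

    INTERARRIVAL_MIN = 3      # one casualty every three minutes (base case)
    N_ICU_BEDS = 10           # assumed bed count
    LOS_MIN = 72 * 60         # assumed burn-ICU length of stay: 72 hours

    def casualty(env, name, beds, log):
        arrival = env.now
        with beds.request() as req:
            yield req
            log.append((name, arrival, env.now - arrival))  # wait for a bed
            yield env.timeout(LOS_MIN)                      # occupy the bed

    def arrivals(env, beds, log):
        for i in range(100):                                # 100 mass casualties
            env.process(casualty(env, i, beds, log))
            yield env.timeout(INTERARRIVAL_MIN)

    env = simpy.Environment()
    beds = simpy.Resource(env, capacity=N_ICU_BEDS)
    log = []
    env.process(arrivals(env, beds, log))
    env.run(until=120)                                      # first 120 minutes

    immediate = [w for _, _, w in log if w == 0]
    print(f"bedded within 120 min: {len(log)}; bedded immediately: {len(immediate)}")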

  33. Comparing large-scale hydrological model predictions with observed streamflow in the Pacific Northwest: effects of climate and groundwater

    Science.gov (United States)

    Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee

    2014-01-01

    Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...

  34. Coupling a Mesoscale Numerical Weather Prediction Model with Large-Eddy Simulation for Realistic Wind Plant Aerodynamics Simulations (Poster)

    Energy Technology Data Exchange (ETDEWEB)

    Draxl, C.; Churchfield, M.; Mirocha, J.; Lee, S.; Lundquist, J.; Michalakes, J.; Moriarty, P.; Purkayastha, A.; Sprague, M.; Vanderwende, B.

    2014-06-01

    Wind plant aerodynamics are influenced by a combination of microscale and mesoscale phenomena. Incorporating mesoscale atmospheric forcing (e.g., diurnal cycles and frontal passages) into wind plant simulations can lead to a more accurate representation of microscale flows, aerodynamics, and wind turbine/plant performance. Our goal is to couple a numerical weather prediction model that can represent mesoscale flow [specifically the Weather Research and Forecasting model] with a microscale LES model (OpenFOAM) that can predict microscale turbulence and wake losses.

  35. What if the power-law model did not apply for the prediction of very large rockfall events?

    Science.gov (United States)

    Rohmer, J.; Dewez, T.

    2012-04-01

    Extreme events are of primary importance for risk management in a variety of natural phenomena, and more particularly for landslides and rockfalls, because they might be associated with huge losses. Numerous research works have addressed this problem based on the same paradigm: if events exhibit the same statistical properties across a broad range of sizes, the probability of extreme events can be evaluated by extrapolating the frequency-size distribution. Considering landslide areas or rockfall volumes, the frequency distribution has been found to be heavy-tailed, and the well-known power-law distribution has been proposed to model it. Yet the vision of very large extreme (catastrophic) event frequency being an extrapolation of the power laws fitted on small and intermediate events has been challenged in various contexts, in particular by Sornette and co-authors, who proposed viewing such catastrophic events as "outliers" from the power-law model, i.e., they deviate by an abnormally large distance from the extrapolated prediction. In this study, we address this issue considering a rockfall inventory containing >8500 events spanning 8 orders of magnitude of volume, collated from 2.5 years of high-accuracy repeated terrestrial laser scanning (TLS) surveys on a coastal chalk cliff in Normandy (France). This inventory contains a particularly large event of 70,949 m3 which occurred some time between 1 February and 7 April 2008. It is the second largest cliff failure reported in Normandy, and is larger than those collated in historical cliff failure inventories across various geological and geomorphological coastal settings. Is this event an outlier of the power-law volume-frequency distribution? And if so, why? This largest recorded event appears to stand out from the rest of the sample. We use it to revisit techniques for fitting power-law distributions with robust methods (the robust weighted maximum likelihood estimator), rarely used in rockfall studies, and
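
    The baseline (non-robust) tail fit behind such analyses is the Clauset-style maximum likelihood estimator, and an "outlier" check asks how many events at least as large as the biggest one the fitted law would predict. A sketch on synthetic volumes:

    # MLE power-law fit and a tail check for the largest event.
    import numpy as np

    rng = np.random.default_rng(5)
    x_min = 1.0
    volumes = x_min / (1 - rng.random(8500))   # Pareto sample with alpha = 2

    # continuous power-law MLE (Clauset et al. estimator), given x_min
    alpha_hat = 1 + len(volumes) / np.sum(np.log(volumes / x_min))
    print(f"fitted exponent alpha = {alpha_hat:.3f}")

    # survival function P(X >= x) = (x / x_min)**(1 - alpha); expected number
    # of events at least as large as the biggest observed one:
    x_max = volumes.max()
    expected = len(volumes) * (x_max / x_min) ** (1 - alpha_hat)
    print(f"expected count >= largest event: {expected:.2f}")
    # an "outlier" in Sornette's sense would have an expected count << 1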

  19. The "AQUASCOPE" simplified model for predicting 89,90Sr, 131I and 134,137Cs in surface waters after a large-scale radioactive fallout

    NARCIS (Netherlands)

    Smith, J.T.; Belova, N.V.; Bulgakov, A.A.; Comans, R.N.J.; Konoplev, A.V.; Kudelsky, A.V.; Madruga, M.J.; Voitsekhovitch, O.V.; Zibolt, G.

    2005-01-01

    Simplified dynamic models have been developed for predicting the concentrations of radiocesium, radiostrontium, and 131I in surface waters and freshwater fish following a large-scale radioactive fallout. The models are intended to give averaged estimates for radionuclides in water bodies and in fish

  20. Assessment of the prediction error in a large-scale application of a dynamic soil acidification model

    NARCIS (Netherlands)

    Kros, J.; Mol-Dijkstra, J.P.; Pebesma, E.J.

    2002-01-01

The prediction error of a relatively simple soil acidification model (SMART2) was assessed before and after calibration, focussing on the aluminium and nitrate concentrations on a block scale. Although SMART2 is especially developed for application on a national to European scale, it still runs at a

  1. Inpatient trial of an artificial pancreas based on multiple model probabilistic predictive control with repeated large unannounced meals.

    Science.gov (United States)

    Cameron, Fraser; Niemeyer, Günter; Wilson, Darrell M; Bequette, B Wayne; Benassi, Kari S; Clinton, Paula; Buckingham, Bruce A

    2014-11-01

Closed-loop control of blood glucose levels in people with type 1 diabetes offers the potential to reduce the incidence of diabetes complications and reduce the patients' burden, particularly if meals do not need to be announced. We therefore tested a closed-loop algorithm that does not require meal announcement. A multiple model probabilistic predictive controller (MMPPC) was assessed on four patients, revised to improve performance, and then assessed on six additional patients. Each inpatient admission lasted for 32 h with five unannounced meals containing approximately 1 g/kg of carbohydrate per admission. The system used an Abbott Diabetes Care (Alameda, CA) Navigator® continuous glucose monitor (CGM) and Insulet (Bedford, MA) Omnipod® insulin pump, with the MMPPC implemented through the artificial pancreas system platform. The controller was initialized only with the patient's total daily dose and daily basal pattern. On a 24-h basis, the first cohort had mean reference and CGM readings of 179 and 167 mg/dL, respectively, with 53% and 62%, respectively, of readings between 70 and 180 mg/dL, and four treatments for low glucose values. There was one controller-induced hypoglycemic episode. For the 30 unannounced meals in the second cohort, the mean reference and CGM premeal, postmeal maximum, and 3-h postmeal values were 139 and 132, 223 and 208, and 168 and 156 mg/dL, respectively. The MMPPC, tested in-clinic against repeated, large, unannounced meals, maintained reasonable glycemic control with a mean blood glucose level that would equate to a mean glycated hemoglobin value of 7.2%, with only one controller-induced hypoglycemic event occurring in the second cohort.
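The closing conversion from mean glucose to an estimated glycated hemoglobin can be reproduced with the widely used ADAG regression, eAG (mg/dL) = 28.7 × A1C − 46.7 (Nathan et al., 2008); whether the authors used exactly this formula is an assumption. A minimal sketch:

```python
def a1c_from_mean_glucose(eag_mg_dl: float) -> float:
    """Invert the ADAG regression eAG = 28.7*A1C - 46.7 to estimate HbA1c (%)."""
    return (eag_mg_dl + 46.7) / 28.7

print(round(a1c_from_mean_glucose(160.0), 1))  # ~7.2 % for a mean near 160 mg/dL
```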

  2. Hydrogen production by steam reforming of DME in a large scale CFB reactor. Part I: computational model and predictions

    OpenAIRE

    Elewuwa, Francis A.; Makkawi, Yassir T.

    2015-01-01

    This study presents a computational fluid dynamic (CFD) study of Dimethyl Ether steam reforming (DME-SR) in a large scale Circulating Fluidized Bed (CFB) reactor. The CFD model is based on Eulerian-Eulerian dispersed flow and solved using commercial software (ANSYS FLUENT). The DME-SR reactions scheme and kinetics in the presence of a bifunctional catalyst of CuO/ZnO/Al2O3+ZSM-5 were incorporated in the model using in-house developed user-defined function. The model was validated by comparing...

  3. Ability of a low-dimensional model to predict geometry-dependent dynamics of large-scale coherent structures in turbulence.

    Science.gov (United States)

    Bai, Kunlun; Ji, Dandan; Brown, Eric

    2016-02-01

    We test the ability of a general low-dimensional model for turbulence to predict geometry-dependent dynamics of large-scale coherent structures, such as convection rolls. The model consists of stochastic ordinary differential equations, which are derived as a function of boundary geometry from the Navier-Stokes equations [Brown and Ahlers, Phys. Fluids 20, 075101 (2008); Phys. Fluids 20, 105105 (2008)]. We test the model using Rayleigh-Bénard convection experiments in a cubic container. The model predicts a mode in which the alignment of a convection roll stochastically crosses a potential barrier to switch between diagonals. We observe this mode with a measured switching rate within 30% of the prediction.

  4. (Studies of ocean predictability at decade to century time scales using a global ocean general circulation model in a parallel computing environment). [Large Scale Geostrophic Model]

    Energy Technology Data Exchange (ETDEWEB)

    1992-03-10

The first phase of the proposed work is largely completed on schedule. Scientists at the San Diego Supercomputer Center (SDSC) succeeded in putting a version of the Hamburg isopycnal coordinate ocean model (OPYC) onto the INTEL parallel computer. Due to the slow run speeds of the OPYC on the parallel machine, another ocean model is being used during the first part of phase 2. The model chosen is the Large Scale Geostrophic (LSG) model from the Max Planck Institute.

  5. Does distance decay modelling of supermarket accessibility predict fruit and vegetable intake by individuals in a large metropolitan area?

    Science.gov (United States)

Robinson, Paul L; Dominguez, Fred; Teklehaimanot, Senait; Lee, Martin; Brown, Arleen; Goodchild, Michael

    2013-01-01

Background: Obesity, a major risk factor for hypertension, diabetes, and other chronic diseases, is influenced by a person’s local environmental setting. Accessibility to supermarkets has been shown to influence nutritional behaviors and obesity rates; however, the specific local environmental conditions and behavioral mechanisms at work in this process remain unclear. Purpose: To determine how individual fruit and vegetable consumption behavior was influenced by a distance decay based gravity model of neighborhood geographic accessibility to supermarkets, across neighborhoods in Los Angeles County, independent of other factors that are known to influence nutritional behaviors. Methods: A distance decay based accessibility model (gravity model) was specified for a large sample (n=7,514) of urban residents. The associations between their fruit and vegetable consumption patterns and their local accessibility to supermarkets were explored, while controlling for covariates known to influence eating behaviors. Results: Significant variation in geographic accessibility and nutritional behavior existed by age, gender, race and ethnicity, education, marital status, poverty status, neighborhood safety and knowledge of nutritional guidelines. Logistic regression showed an independent effect of geographic accessibility to supermarkets, even after the inclusion of known controlling factors. Conclusion: A basic gravity model was an effective predictor of fruit and vegetable consumption in an urban population, setting the stage for inclusion of supply and demand parameters, and the ability to estimate local directions and magnitudes of the factors that contribute to the differential obesity rates found in United States urban areas. This knowledge will facilitate more targeted interventions that can help eliminate health disparities. PMID:23395954
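A distance-decay gravity score of the kind described is compact to compute. A minimal sketch, with hypothetical distances, attractiveness scores, and decay exponent (the study's actual parameterization is not reproduced):

```python
import numpy as np

def gravity_accessibility(dist_km, attractiveness, beta=1.5):
    """Gravity-model accessibility: A_i = sum_j S_j / d_ij**beta.

    dist_km        -- (n_residents, n_stores) matrix of travel distances
    attractiveness -- size/quality score S_j per supermarket
    """
    d = np.maximum(dist_km, 0.1)          # floor tiny distances to avoid division blow-up
    return (attractiveness / d**beta).sum(axis=1)

dist = np.array([[0.5, 2.0, 6.0],
                 [3.0, 0.8, 4.0]])
scores = gravity_accessibility(dist, attractiveness=np.array([10.0, 5.0, 20.0]))
```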

  6. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    Science.gov (United States)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large scale monitoring of water resources. Besides these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly as a result. As a matter of fact, a novel thematic science question to be investigated is whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model

  7. Applying the additive hazard model to predict the survival time of patients with diffuse large B-cell lymphoma and determine the effective genes, using microarray data

    Directory of Open Access Journals (Sweden)

    Arefa Jafarzadeh Kohneloo

    2015-09-01

Background: Recent studies have shown that genes affecting the survival time of cancer patients play an important role as risk factors or preventive factors. The present study was designed to determine the genes affecting the survival time of patients with diffuse large B-cell lymphoma and to predict the survival time using these selected genes. Materials & Methods: This is a cohort study conducted on 40 patients with diffuse large B-cell lymphoma. For these patients, the expression of 2042 genes was measured. In order to predict the survival time, a semi-parametric additive survival model was combined with two gene selection methods, elastic net and lasso. The two methods were evaluated by plotting the area under the ROC curve over time and calculating the integral of this curve. Results: Based on our findings, the elastic net method identified 10 genes, and the Lasso-Cox method identified 7 genes. GENE3325X increased the survival time (P=0.006), whereas GENE3980X and GENE377X reduced the survival time (P=0.004). These three genes were selected as important genes in both methods. Conclusion: This study showed that the elastic net method outperformed the common Lasso method in terms of predictive power. Moreover, applying the additive model instead of Cox regression to microarray data is a usable way to predict the survival time of patients.

  8. Development and evaluation of a prediction model for underestimated invasive breast cancer in women with ductal carcinoma in situ at stereotactic large core needle biopsy.

    Directory of Open Access Journals (Sweden)

    Suzanne C E Diepstraten

BACKGROUND: We aimed to develop a multivariable model for prediction of underestimated invasiveness in women with ductal carcinoma in situ at stereotactic large core needle biopsy, that can be used to select patients for sentinel node biopsy at primary surgery. METHODS: From the literature, we selected potential preoperative predictors of underestimated invasive breast cancer. Data of patients with nonpalpable breast lesions who were diagnosed with ductal carcinoma in situ at stereotactic large core needle biopsy, drawn from the prospective COBRA (Core Biopsy after RAdiological localization) and COBRA2000 cohort studies, were used to fit the multivariable model and assess its overall performance, discrimination, and calibration. RESULTS: 348 women with large core needle biopsy-proven ductal carcinoma in situ were available for analysis. In 100 (28.7%) patients invasive carcinoma was found at subsequent surgery. Nine predictors were included in the model. In the multivariable analysis, the predictors with the strongest association were lesion size (OR 1.12 per cm, 95% CI 0.98-1.28), number of cores retrieved at biopsy (OR per core 0.87, 95% CI 0.75-1.01), presence of lobular cancerization (OR 5.29, 95% CI 1.25-26.77), and microinvasion (OR 3.75, 95% CI 1.42-9.87). The overall performance of the multivariable model was poor, with an explained variation of 9% (Nagelkerke's R²), mediocre discrimination with an area under the receiver operating characteristic curve of 0.66 (95% confidence interval 0.58-0.73), and fairly good calibration. CONCLUSION: The evaluation of our multivariable prediction model in a large, clinically representative study population proves that routine clinical and pathological variables are not suitable to select patients with large core needle biopsy-proven ductal carcinoma in situ for sentinel node biopsy during primary surgery.

  9. Experimental real-time multi-model ensemble (MME) prediction of rainfall during Monsoon 2008: Large-scale medium-range aspects

    Indian Academy of Sciences (India)

    A K Mitra; G R Iyengar; V R Durai; J Sanjay; T N Krishnamurti; A Mishra; D R Sikka

    2011-02-01

    Realistic simulation/prediction of the Asian summer monsoon rainfall on various space–time scales is a challenging scientific task. Compared to mid-latitudes, a proportional skill improvement in the prediction of monsoon rainfall in the medium range has not happened in recent years. Global models and data assimilation techniques are being improved for monsoon/tropics. However, multi-model ensemble (MME) forecasting is gaining popularity, as it has the potential to provide more information for practical forecasting in terms of making a consensus forecast and handling model uncertainties. As major centers are exchanging model output in near real-time, MME is a viable inexpensive way of enhancing the forecasting skill and information content. During monsoon 2008, on an experimental basis, an MME forecasting of large-scale monsoon precipitation in the medium range was carried out in real-time at National Centre for Medium Range Weather Forecasting (NCMRWF), India. Simple ensemble mean (EMN) giving equal weight to member models, bias-corrected ensemble mean (BCEMn) and MME forecast, where different weights are given to member models, are the products of the algorithm tested here. In general, the aforementioned products from the multi-model ensemble forecast system have a higher skill than individual model forecasts. The skill score for the Indian domain and other sub-regions indicates that the BCEMn produces the best result, compared to EMN and MME. Giving weights to different models to obtain an MME product helps to improve individual member models only marginally. It is noted that for higher rainfall values, the skill of the global model rainfall forecast decreases rapidly beyond day-3, and hence for day-4 and day-5, the MME products could not bring much improvement over member models. However, up to day-3, the MME products were always better than individual member models.
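The three ensemble products named here (EMN, bias-corrected ensemble mean, weighted MME) lend themselves to a compact sketch. The following is a minimal illustration with hypothetical arrays, not NCMRWF's operational code; mean-bias removal and least-squares weighting are one common way to realize the products the abstract describes.

```python
import numpy as np

def ensemble_products(train_fcst, train_obs, new_fcst):
    """EMN, bias-corrected ensemble mean, and regression-weighted MME.

    train_fcst -- (n_days, n_models) member forecasts over a training window
    train_obs  -- (n_days,) verifying observations
    new_fcst   -- (n_models,) member forecasts for the target day
    """
    emn = new_fcst.mean()                                   # equal-weight ensemble mean
    bias = train_fcst.mean(axis=0) - train_obs.mean()       # per-model mean bias
    bcemn = (new_fcst - bias).mean()                        # bias-corrected ensemble mean
    w, *_ = np.linalg.lstsq(train_fcst, train_obs, rcond=None)  # per-model MME weights
    mme = new_fcst @ w
    return emn, bcemn, mme
```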

  10. Why choose Random Forest to predict rare species distribution with few samples in large undersampled areas? Three Asian crane species models provide supporting evidence.

    Science.gov (United States)

    Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia

    2017-01-01

Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (area under the ROC curve (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found that Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
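A minimal Random Forest SDM workflow with the two metrics named in the abstract (AUC and TSS) might look as follows; the data here are synthetic stand-ins for presence/absence records and environmental covariates, not the crane datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(75, 6))                     # hypothetical environmental covariates
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in presence/absence labels

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=1)
rf.fit(X, y)
p = rf.oob_decision_function_[:, 1]              # out-of-bag presence probability

auc = roc_auc_score(y, p)
pred = (p >= 0.5).astype(int)
sens = (pred[y == 1] == 1).mean()                # sensitivity
spec = (pred[y == 0] == 0).mean()                # specificity
tss = sens + spec - 1                            # true skill statistic
```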

  11. Bounded link prediction in very large networks

    Science.gov (United States)

    Cui, Wei; Pu, Cunlai; Xu, Zhongqi; Cai, Shimin; Yang, Jian; Michaelson, Andrew

    2016-09-01

    Evaluating link prediction methods is a hard task in very large complex networks due to the prohibitive computational cost. However, if we consider the lower bound of node pairs' similarity scores, this task can be greatly optimized. In this paper, we study CN index in the bounded link prediction framework, which is applicable to enormous heterogeneous networks. Specifically, we propose a fast algorithm based on the parallel computing scheme to obtain all node pairs with CN values larger than the lower bound. Furthermore, we propose a general measurement, called self-predictability, to quantify the performance of similarity indices in link prediction, which can also indicate the link predictability of networks with respect to given similarity indices.
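The bounding idea can be illustrated with a small sketch: by enumerating candidate pairs through shared neighbors, only pairs with at least one common neighbor are ever touched, and a lower bound on the CN (common neighbors) score prunes the rest. This is a toy serial version on an assumed adjacency structure, not the authors' parallel algorithm.

```python
from itertools import combinations

def bounded_common_neighbors(adj, lower_bound):
    """Return non-adjacent node pairs whose common-neighbor count >= lower_bound.

    adj -- dict mapping node -> set of neighbors.
    Enumerating pairs through each shared neighbor avoids scoring all O(n^2) pairs.
    """
    scores = {}
    for hub, neigh in adj.items():
        for u, v in combinations(sorted(neigh), 2):
            if v not in adj[u]:                    # only non-adjacent pairs are candidates
                scores[(u, v)] = scores.get((u, v), 0) + 1
    return {pair: cn for pair, cn in scores.items() if cn >= lower_bound}

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(bounded_common_neighbors(adj, lower_bound=2))   # {(1, 4): 2}
```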

  12. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions

    Science.gov (United States)

    Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.

    2017-01-01

Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800

  13. Toward high-resolution flash flood prediction in large urban areas - Analysis of sensitivity to spatiotemporal resolution of rainfall input and hydrologic modeling

    Science.gov (United States)

    Rafieeinasab, Arezoo; Norouzi, Amir; Kim, Sunghee; Habibi, Hamideh; Nazari, Behzad; Seo, Dong-Jun; Lee, Haksu; Cosgrove, Brian; Cui, Zhengtao

    2015-12-01

    Urban flash flooding is a serious problem in large, highly populated areas such as the Dallas-Fort Worth Metroplex (DFW). Being able to monitor and predict flash flooding at a high spatiotemporal resolution is critical to providing location-specific early warnings and cost-effective emergency management in such areas. Under the idealized conditions of perfect models and precipitation input, one may expect that spatiotemporal specificity and accuracy of the model output improve as the resolution of the models and precipitation input increases. In reality, however, due to the errors in the precipitation input, and in the structures, parameters and states of the models, there are practical limits to the model resolution. In this work, we assess the sensitivity of streamflow simulation in urban catchments to the spatiotemporal resolution of precipitation input and hydrologic modeling to identify the resolution at which the simulation errors may be at minimum given the quality of the precipitation input and hydrologic models used, and the response time of the catchment. The hydrologic modeling system used in this work is the National Weather Service (NWS) Hydrology Laboratory's Research Distributed Hydrologic Model (HLRDHM) applied at spatiotemporal resolutions ranging from 250 m to 2 km and from 1 min to 1 h applied over the Cities of Fort Worth, Arlington and Grand Prairie in DFW. The high-resolution precipitation input is from the DFW Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere (CASA) radars. For comparison, the NWS Multisensor Precipitation Estimator (MPE) product, which is available at a 4-km 1-h resolution, was also used. The streamflow simulation results are evaluated for 5 urban catchments ranging in size from 3.4 to 54.6 km2 and from about 45 min to 3 h in time-to-peak in the Cities of Fort Worth, Arlington and Grand Prairie. The streamflow observations used in evaluation were obtained from water level measurements via rating

  14. Mathematical principles of predicting the probabilities of large earthquakes

    CERN Document Server

    Ghertzik, V M

    2009-01-01

A multicomponent random process is used as a model for the problem of space-time earthquake prediction; this allows us to develop consistent estimates of the conditional probabilities of large earthquakes when the values of the predictor characterizing the seismicity prehistory are known. We introduce tools for assessing prediction efficiency, including a separate determination of efficiency for "time prediction" and "location prediction": a generalized correlation coefficient and the density of information gain. We suggest a technique for testing the predictor to decide whether the hypothesis of no prediction can be rejected.

  15. Large-scale evaluation of dynamically important residues in proteins predicted by the perturbation analysis of a coarse-grained elastic model

    Directory of Open Access Journals (Sweden)

    Tekpinar Mustafa

    2009-07-01

Background: It is increasingly recognized that protein functions often require intricate conformational dynamics, which involves a network of key amino acid residues that couple spatially separated functional sites. Tremendous efforts have been made to identify these key residues by experimental and computational means. Results: We have performed a large-scale evaluation of the predictions of dynamically important residues by a variety of computational protocols, including three based on the perturbation and correlation analysis of a coarse-grained elastic model. This study is performed for two lists of test cases with >500 pairs of protein structures. The dynamically important residues predicted by the perturbation and correlation analysis are found to be strongly or moderately conserved in >67% of test cases. They form a sparse network of residues which are clustered both in 3D space and along the protein sequence. Their overall conservation is attributed to their dynamic role rather than ligand binding or high network connectivity. Conclusion: By modeling how the protein structural fluctuations respond to residue-position-specific perturbations, our highly efficient perturbation and correlation analysis can be used to dissect the functional conformational changes in various proteins with a residue level of detail. The predictions of dynamically important residues serve as promising targets for mutational and functional studies.
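A generic version of the perturbation-response idea can be sketched with a Gaussian network model (GNM), a standard coarse-grained elastic model; this is an illustrative stand-in, not the authors' exact protocol, and the coordinates below are random toy data rather than a real structure.

```python
import numpy as np

def gnm_response(coords, cutoff=7.0):
    """Perturbation response of a Gaussian network model (GNM).

    Builds the Kirchhoff (connectivity) matrix from C-alpha coordinates; the
    pseudo-inverse gives residue-residue couplings, so column j describes how
    all residues respond to a unit perturbation at residue j. Large column
    norms flag dynamically important residues.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    contact = (d < cutoff) & ~np.eye(len(coords), dtype=bool)
    kirchhoff = np.diag(contact.sum(axis=1)) - contact.astype(float)
    g_inv = np.linalg.pinv(kirchhoff)           # fluctuation covariance up to kT/gamma
    return np.linalg.norm(g_inv, axis=0)        # per-residue response strength

coords = np.random.default_rng(2).normal(scale=10.0, size=(50, 3))  # toy "structure"
print(gnm_response(coords).argmax())            # most responsive "residue"
```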

  16. Predicting watershed sediment yields after wildland fire with the InVEST sediment retention model at large geographic extent in the western USA: accuracy and uncertainties

    Science.gov (United States)

    Sankey, J. B.; Kreitler, J.; McVay, J.; Hawbaker, T. J.; Vaillant, N.; Lowe, S. E.

    2014-12-01

    Wildland fire is a primary threat to watersheds that can impact water supply through increased sedimentation, water quality decline, and change the timing and amount of runoff leading to increased risk from flood and sediment natural hazards. It is of great societal importance in the western USA and throughout the world to improve understanding of how changing fire frequency, extent, and location, in conjunction with fuel treatments will affect watersheds and the ecosystem services they supply to communities. In this work we assess the utility of the InVEST Sediment Retention Model to accurately characterize vulnerability of burned watersheds to erosion and sedimentation. The InVEST tools are GIS-based implementations of common process models, engineered for high-end computing to allow the faster simulation of larger landscapes and incorporation into decision-making. The InVEST Sediment Retention Model is based on common soil erosion models (e.g., RUSLE -Revised Universal Soil Loss Equation) and determines which areas of the landscape contribute the greatest sediment loads to a hydrological network and conversely evaluate the ecosystem service of sediment retention on a watershed basis. We evaluate the accuracy and uncertainties for InVEST predictions of increased sedimentation after fire, using measured post-fire sedimentation rates available for many watersheds in different rainfall regimes throughout the western USA from an existing, large USGS database of post-fire sediment yield [synthesized in Moody J, Martin D (2009) Synthesis of sediment yields after wildland fire in different rainfall regimes in the western United States. International Journal of Wildland Fire 18: 96-115]. The ultimate goal of this work is to calibrate and implement the model to accurately predict variability in post-fire sediment yield as a function of future landscape heterogeneity predicted by wildfire simulations, and future landscape fuel treatment scenarios, within watersheds.
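The InVEST sediment model is RUSLE-based, so its core calculation reduces to the familiar product of factors. A minimal sketch with hypothetical factor values, where the post-fire change is expressed through the cover factor C:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """RUSLE annual soil loss A = R*K*LS*C*P.

    R: rainfall erosivity, K: soil erodibility, LS: slope length/steepness,
    C: cover management, P: support practice. Post-fire vulnerability enters
    mainly through C, which rises sharply when vegetation is burned away.
    """
    return R * K * LS * C * P

pre_fire  = rusle_soil_loss(R=1200, K=0.03, LS=2.5, C=0.01, P=1.0)
post_fire = rusle_soil_loss(R=1200, K=0.03, LS=2.5, C=0.20, P=1.0)
print(pre_fire, post_fire)   # illustrative 20x increase driven by the burned C factor
```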

  17. Wind power prediction models

    Science.gov (United States)

    Levy, R.; Mcginness, H.

    1976-01-01

    Investigations were performed to predict the power available from the wind at the Goldstone, California, antenna site complex. The background for power prediction was derived from a statistical evaluation of available wind speed data records at this location and at nearby locations similarly situated within the Mojave desert. In addition to a model for power prediction over relatively long periods of time, an interim simulation model that produces sample wind speeds is described. The interim model furnishes uncorrelated sample speeds at hourly intervals that reproduce the statistical wind distribution at Goldstone. A stochastic simulation model to provide speed samples representative of both the statistical speed distributions and correlations is also discussed.
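The interim and stochastic simulation ideas can be illustrated by sampling wind speeds from a fitted distribution and converting them to power density. The sketch below assumes a Weibull speed distribution with made-up site parameters; the actual Goldstone statistics and the correlation structure of the stochastic model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_wind_speeds(k, c, n):
    """Uncorrelated hourly wind-speed samples from a Weibull(shape k, scale c)."""
    return c * rng.weibull(k, size=n)

def wind_power_density(v, rho=1.225):
    """Power available per unit rotor area, P/A = 0.5 * rho * v**3 (W/m^2)."""
    return 0.5 * rho * v**3

speeds = sample_wind_speeds(k=2.0, c=7.0, n=24)   # hypothetical site parameters
print(wind_power_density(speeds).mean())
```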

  18. From GenBank to GBIF: Phylogeny-Based Predictive Niche Modeling Tests Accuracy of Taxonomic Identifications in Large Occurrence Data Repositories.

    Directory of Open Access Journals (Sweden)

    B Eugene Smith

Accuracy of taxonomic identifications is crucial to data quality in online repositories of species occurrence data, such as the Global Biodiversity Information Facility (GBIF), which have accumulated several hundred million records over the past 15 years. These data serve as a basis for large scale analyses of macroecological and biogeographic patterns and to document environmental changes over time. However, taxonomic identifications are often unreliable, especially for non-vascular plants and fungi including lichens, which may lack critical revisions of voucher specimens. Due to the scale of the problem, restudy of millions of collections is unrealistic and other strategies are needed. Here we propose to use verified, georeferenced occurrence data of a given species to apply predictive niche modeling that can then be used to evaluate unverified occurrences of that species. Selecting the charismatic lichen fungus Usnea longissima as a case study, we used georeferenced occurrence records based on sequenced specimens to model its predicted niche. Our results suggest that the target species is largely restricted to a narrow range of boreal and temperate forest in the Northern Hemisphere and that occurrence records in GBIF from tropical regions and the Southern Hemisphere do not represent this taxon, a prediction tested by comparison with taxonomic revisions of Usnea for these regions. As a novel approach, we employed Principal Component Analysis on the environmental grid data used for predictive modeling to visualize potential ecogeographical barriers for the target species; we found that tropical regions form a strong barrier, explaining why potential niches in the Southern Hemisphere were not colonized by Usnea longissima but instead by morphologically similar species. This approach is an example of how data from two of the most important biodiversity repositories, GenBank and GBIF, can be effectively combined to remotely address the problem

  19. Model-based analysis of push-pull experiments in deep aquifers to predict large-scale impacts of CSG product water reinjection

    Science.gov (United States)

    Prommer, H.; Rathi, B.; Morris, R.; Helm, L.; Siade, A. J.; Davis, J. A.

    2015-12-01

Over the next two decades coal seam gas production in Australia will require the management of large quantities of production water. For some sites the most viable option is to treat the water to a high standard via reverse osmosis (RO) and to inject it into deep aquifers. The design and implementation of these field-scale injection schemes requires a thorough understanding of the anticipated water quality changes within the target aquifers. In this study we use reactive transport modeling to integrate the results of a multi-scale hydrogeological and geochemical characterization, and to analyze a series of short-term push-pull experiments with the aim to better understand and reliably predict long-term water quality evolution and the risks of mobilizing geogenic arsenic. Sequential push-pull tests with varying injectant compositions were undertaken, with concentrations recorded during the recovery phase reaching levels of up to 180 ppb above the ambient concentrations observed prior to the push-pull experiments. The highest As concentrations were observed in conjunction with the injection of aerobic water, while de-oxygenation of the injectant lowered As concentrations significantly. The lowest As concentrations were observed when the injectant was de-oxygenated and acid-amended. The latter was underpinned by complementary laboratory As sorption experiments using sediments from the target aquifer at various pHs, which, consistent with the literature, show a decrease in As sorption affinity under alkaline conditions. In the model-based analysis of the experimental data, model parameters for each conceptual model variant were estimated through an automatic calibration procedure using Particle Swarm Optimization (PSO), whereby bromide and temperature data were used to constrain flow, solute and heat transport parameters. A series of predictive model scenarios were performed to determine whether advanced manipulation of the injectant composition is required.

  20. Computational fluid dynamics model for predicting flow of viscous fluids in a large fermentor with hydrofoil flow impellers and internal cooling coils

    Science.gov (United States)

    Kelly; Humphrey

    1998-03-01

Considerable debate has occurred over the use of hydrofoil impellers in large-scale fermentors to improve mixing and mass transfer in highly viscous non-Newtonian systems. Using a computational fluid dynamics software package (Fluent, version 4.30), extensive calculations were performed to study the effect of impeller speed (70-130 rpm), broth rheology (value of the power law flow behavior index from 0.2 to 0.6), and distance between the cooling coil bank and the fermentor wall (6-18 in.) on flow near the perimeter of a large (75-m3) fermentor equipped with A315 impellers. A quadratic model utilizing the data was developed in an attempt to correlate the effect of A315 impeller speed, power law flow behavior index, and distance between the cooling coil bank and the fermentor wall on the average axial velocity in the coil bank-wall region. The results suggest that there is a potential for slow or stagnant flow in the coil bank-wall region, which could result in poor oxygen and heat transfer for highly viscous fermentations. The results also indicate that there is the potential for slow or stagnant flow in the region between the top impeller and the gas headspace when flow through the coil bank-wall region is slow. Finally, a simple guideline was developed to allow fermentor design engineers to predict the degree of flow behind a bank of helical cooling coils in a large fermentor with hydrofoil flow impellers.
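The quadratic correlation the authors describe is an ordinary response-surface fit in three factors. A minimal sketch with synthetic "runs" (the paper's CFD data are not reproduced; the factor ranges follow the abstract):

```python
import numpy as np

def fit_quadratic_model(X, y):
    """Least-squares fit of a full quadratic response surface in 3 factors.

    X columns: impeller speed (rpm), power-law flow index n, coil-wall gap (in.)
    y: average axial velocity in the coil bank-wall region.
    """
    s, n, g = X.T
    A = np.column_stack([np.ones(len(y)), s, n, g,       # intercept and linear terms
                         s*n, s*g, n*g,                   # two-factor interactions
                         s**2, n**2, g**2])               # pure quadratic terms
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

rng = np.random.default_rng(4)
X = np.column_stack([rng.uniform(70, 130, 30),
                     rng.uniform(0.2, 0.6, 30),
                     rng.uniform(6, 18, 30)])
y = 0.01*X[:, 0]*X[:, 1] + 0.005*X[:, 2] + rng.normal(0, 0.05, 30)  # fake responses
coef = fit_quadratic_model(X, y)
```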

  1. Melanoma risk prediction models

    Directory of Open Access Journals (Sweden)

    Nikolić Jelena

    2014-01-01

Background/Aim. The lack of effective therapy for advanced stages of melanoma emphasizes the importance of preventive measures and screenings of the population at risk. Identifying individuals at high risk should allow targeted screenings and follow-up involving those who would benefit most. The aim of this study was to identify the most significant factors for melanoma prediction in our population and to create prognostic models for identification and differentiation of individuals at risk. Methods. This case-control study included 697 participants (341 patients and 356 controls) who underwent an extensive interview and skin examination in order to check risk factors for melanoma. Pairwise univariate statistical comparison was used for the coarse selection of the most significant risk factors. These factors were fed into logistic regression (LR) and alternating decision tree (ADT) prognostic models that were assessed for their usefulness in identification of patients at risk to develop melanoma. Validation of the LR model was done by the Hosmer and Lemeshow test, whereas the ADT was validated by 10-fold cross-validation. The achieved sensitivity, specificity, accuracy and AUC for both models were calculated. The melanoma risk score (MRS) based on the outcome of the LR model was presented. Results. The LR model showed that the following risk factors were associated with melanoma: sunbeds (OR = 4.018; 95% CI 1.724-9.366 for those that sometimes used sunbeds), solar damage of the skin (OR = 8.274; 95% CI 2.661-25.730 for those with severe solar damage), hair color (OR = 3.222; 95% CI 1.984-5.231 for light brown/blond hair), the number of common naevi (over 100 naevi had OR = 3.57; 95% CI 1.427-8.931), the number of dysplastic naevi (from 1 to 10 dysplastic naevi OR was 2.672; 95% CI 1.572-4.540; for more than 10 naevi OR was 6.487; 95% CI 1.993-21.119), Fitzpatrick's phototype and the presence of congenital naevi. Red hair, phototype I and large congenital naevi were

  2. Predictive models in urology.

    Science.gov (United States)

    Cestari, Andrea

    2013-01-01

Predictive modeling is emerging as an important knowledge-based technology in healthcare. The interest in the use of predictive modeling reflects advances on different fronts such as the availability of health information from increasingly complex databases and electronic health records, a better understanding of causal or statistical predictors of health, disease processes and multifactorial models of ill-health, and developments in nonlinear computer models using artificial intelligence or neural networks. These new computer-based forms of modeling are increasingly able to establish technical credibility in clinical contexts. The current state of knowledge is still quite young in understanding the likely future direction of how this so-called 'machine intelligence' will evolve and therefore how current relatively sophisticated predictive models will evolve in response to improvements in technology, which is advancing along a wide front. Predictive models in urology are progressively gaining popularity, not only for academic and scientific purposes but also in clinical practice, with the introduction of several nomograms dealing with the main fields of onco-urology.

  3. Investigation of Prediction Accuracy, Sensitivity, and Parameter Stability of Large-Scale Propagation Path Loss Models for 5G Wireless Communications

    DEFF Research Database (Denmark)

    Sun, Shu; Rappaport, Theodore S.; Thomas, Timothy

    2016-01-01

    This paper compares three candidate large-scale propagation path loss models for use over the entire microwave and millimeter-wave (mmWave) radio spectrum: the alpha–beta–gamma (ABG) model, the close-in (CI) free-space reference distance model, and the CI model with a frequency-weighted path loss...
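The two fully named models have standard closed forms in the mmWave path loss literature; a sketch follows (the parameter values in the example are illustrative, not the paper's fitted values):

```python
import numpy as np

def path_loss_ci(f_ghz, d_m, n):
    """Close-in (CI) free-space reference distance model, valid for d >= 1 m:
    PL = FSPL(f, 1 m) + 10*n*log10(d), with FSPL(f, 1 m) = 32.4 + 20*log10(f_GHz) dB.
    """
    return 32.4 + 20*np.log10(f_ghz) + 10*n*np.log10(d_m)

def path_loss_abg(f_ghz, d_m, alpha, beta, gamma):
    """Alpha-beta-gamma (ABG) model:
    PL = 10*alpha*log10(d/1 m) + beta + 10*gamma*log10(f/1 GHz) dB.
    """
    return 10*alpha*np.log10(d_m) + beta + 10*gamma*np.log10(f_ghz)

print(path_loss_ci(28.0, 100.0, n=3.0))            # example 28 GHz link at 100 m
print(path_loss_abg(28.0, 100.0, 3.4, 19.2, 2.3))  # illustrative ABG parameters
```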

  4. MODEL PREDICTIVE CONTROL FUNDAMENTALS

    African Journals Online (AJOL)

    2012-07-02

    Jul 2, 2012 ... paper, we will present an introduction to the theory and application of MPC with Matlab codes written to ... model predictive control, linear systems, discrete-time systems, ... and then compute very rapidly for this open-loop con-.
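For readers wanting the gist of MPC, a minimal receding-horizon sketch for an unconstrained linear system follows; it is written in Python rather than the Matlab of the article, and a practical MPC would add state/input constraints and solve a QP at each step.

```python
import numpy as np

# Receding-horizon control of x+ = A x + B u (unconstrained LQ case).
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # discrete-time double integrator
B = np.array([[0.5], [1.0]])
N, Q, R = 10, np.eye(2), 0.1 * np.eye(1)

def mpc_step(x):
    # Stack the predictions X = F x + G U over the horizon N.
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2*k:2*k+2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()
    Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
    # Minimize X'QX + U'RU  =>  (G'QG + R) U = -G'Q F x
    U = np.linalg.solve(G.T @ Qbar @ G + Rbar, -G.T @ Qbar @ F @ x)
    return U[0]                            # apply only the first control move

x = np.array([5.0, 0.0])
for _ in range(20):
    x = A @ x + B.ravel() * mpc_step(x)
print(x)                                    # driven toward the origin
```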

  5. Environmental relevance of laboratory-derived kinetic models to predict trace metal bioaccumulation in gammarids: Field experimentation at a large spatial scale (France).

    Science.gov (United States)

    Urien, N; Lebrun, J D; Fechner, L C; Uher, E; François, A; Quéau, H; Coquery, M; Chaumot, A; Geffard, O

    2016-05-15

    Kinetic models have become established tools for describing trace metal bioaccumulation in aquatic organisms and offer a promising approach for linking water contamination to trace metal bioaccumulation in biota. Nevertheless, models are based on laboratory-derived kinetic parameters, and the question of their relevance to predict trace metal bioaccumulation in the field is poorly addressed. In the present study, we propose to assess the capacity of kinetic models to predict trace metal bioaccumulation in gammarids in the field at a wide spatial scale. The field validation consisted of measuring dissolved Cd, Cu, Ni and Pb concentrations in the water column at 141 sites in France, running the models with laboratory-derived kinetic parameters, and comparing model predictions and measurements of trace metal concentrations in gammarids caged for 7 days to the same sites. We observed that gammarids poorly accumulated Cu showing the limited relevance of that species to monitor Cu contamination. Therefore, Cu was not considered for model predictions. In contrast, gammarids significantly accumulated Pb, Cd, and Ni over a wide range of exposure concentrations. These results highlight the relevance of using gammarids for active biomonitoring to detect spatial trends of bioavailable Pb, Cd, and Ni contamination in freshwaters. The best agreements between model predictions and field measurements were observed for Cd with 71% of good estimations (i.e. field measurements were predicted within a factor of two), which highlighted the potential for kinetic models to link Cd contamination to bioaccumulation in the field. The poorest agreements were observed for Ni and Pb (39% and 48% of good estimations, respectively). However, models developed for Ni, Pb, and to a lesser extent for Cd, globally underestimated bioaccumulation in caged gammarids. These results showed that the link between trace metal concentration in water and in biota remains complex, and underlined the limits of
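Laboratory-derived kinetic models of this kind are typically one-compartment first-order models; a minimal sketch with hypothetical rate constants (not the study's fitted Cd/Ni/Pb parameters):

```python
import numpy as np

def predicted_concentration(c_water, ku, ke, t_days, c0=0.0):
    """One-compartment kinetic model dC/dt = ku*Cw - ke*C, solved analytically.

    ku -- uptake rate constant (e.g. L/g/day), ke -- elimination rate (1/day);
    in practice both come from laboratory exposure/depuration experiments.
    """
    c_ss = ku * c_water / ke                       # steady-state body concentration
    return c_ss + (c0 - c_ss) * np.exp(-ke * t_days)

# Hypothetical parameters for a 7-day caging with dissolved metal in ug/L
print(predicted_concentration(c_water=0.05, ku=100.0, ke=0.1, t_days=7.0))
```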

  6. Large Hadron Collider (LHC) phenomenology, operational challenges and theoretical predictions

    CERN Document Server

    Gilles, Abelin R

    2013-01-01

    The Large Hadron Collider (LHC) is the highest-energy particle collider ever constructed and is considered "one of the great engineering milestones of mankind." It was built by the European Organization for Nuclear Research (CERN) from 1998 to 2008, with the aim of allowing physicists to test the predictions of different theories of particle physics and high-energy physics, and particularly prove or disprove the existence of the theorized Higgs boson and of the large family of new particles predicted by supersymmetric theories. In this book, the authors study the phenomenology, operational challenges and theoretical predictions of LHC. Topics discussed include neutral and charged black hole remnants at the LHC; the modified statistics approach for the thermodynamical model of multiparticle production; and astroparticle physics and cosmology in the LHC era.

  7. Nominal model predictive control

    OpenAIRE

    Grüne, Lars

    2013-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  8. Nominal Model Predictive Control

    OpenAIRE

    Grüne, Lars

    2014-01-01

    5 p., to appear in Encyclopedia of Systems and Control, Tariq Samad, John Baillieul (eds.); International audience; Model Predictive Control is a controller design method which synthesizes a sampled data feedback controller from the iterative solution of open loop optimal control problems.We describe the basic functionality of MPC controllers, their properties regarding feasibility, stability and performance and the assumptions needed in order to rigorously ensure these properties in a nomina...

  9. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  10. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  11. Electro-thermal Modeling for Junction Temperature Cycling-Based Lifetime Prediction of a Press-Pack IGBT 3L-NPC-VSC Applied to Large Wind Turbines

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Munk-Nielsen, Stig; Teodorescu, Remus;

    2011-01-01

Reliability is investigated regarding IGBT lifetime based on junction temperature cycling for the grid-side press-pack IGBT 3L-NPC-VSC, which is a state-of-the-art high-reliability solution. In order to acquire IGBT junction temperatures for given wind power profiles and to use them in IGBT lifetime prediction, the converter electro-thermal model including electrical, power loss, and dynamical thermal models is developed with the main focus on the thermal modeling regarding converter topology, switch technology, and physical structure. Moreover, these models are simplified for their practical...

  12. Assessment of RANS to predict flows with large streamline curvature

    Science.gov (United States)

    Yin, J. L.; Wang, D. Z.; Cheng, H.; Gu, W. G.

    2013-12-01

In order to provide a guideline for choosing turbulence models in the computation of complex flows with large streamline curvature, this paper presents a comprehensive comparative investigation of different RANS models widely used in engineering to check each model's sensitivity to streamline curvature. First, different models including the standard k-ε, Realizable k-ε, Renormalization-group (RNG) k-ε, Shear-stress transport k-ω and non-linear eddy-viscosity v2-f models are tested to simulate the flow in a 2D U-bend for which a standard benchmark is available. The comparisons in terms of non-dimensional velocity and turbulent kinetic energy show that large differences exist among the results calculated by the various models. To further validate the capability to predict flows with secondary flows, the models involved are tested in a 3D 90° bend flow. Also, the velocities are compared. As a summary, the advantages and disadvantages of each model are analysed and guidelines for the choice of turbulence model are presented.

  13. Predicting Positive and Negative Relationships in Large Social Networks.

    Directory of Open Access Journals (Sweden)

    Guan-Nan Wang

In a social network, users hold and express positive and negative attitudes (e.g. support/opposition) towards other users. Those attitudes exhibit some kind of binary relationships among the users, which play an important role in social network analysis. However, some of those binary relationships are likely to be latent as the scale of the social network increases. The problem of predicting latent binary relationships has recently begun to draw researchers' attention. In this paper, we propose a machine learning algorithm for predicting positive and negative relationships in social networks inspired by structural balance theory and social status theory. More specifically, we show that when two users in the network have fewer common neighbors, the prediction accuracy of the relationship between them deteriorates. Accordingly, in the training phase, we propose a segment-based training framework to divide the training data into two subsets according to the number of common neighbors between users, and build a prediction model for each subset based on support vector machines (SVM). Moreover, to deal with large-scale social network data, we employ a sampling strategy that selects a small amount of training data while maintaining high prediction accuracy. We compare our algorithm with traditional algorithms and adaptive boosting versions of them. Experimental results on typical data sets show that our algorithm can deal with large social networks and consistently outperforms other methods.
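The segment-based training framework described here splits pairs by common-neighbor count and fits one classifier per segment. A minimal sketch with random stand-in features (the real edge features, thresholds, and sampling strategy are not reproduced):

```python
import numpy as np
from sklearn.svm import SVC

def train_segmented(features, labels, common_neighbors, threshold=3):
    """Segment-based training: one SVM for pairs with few common neighbors,
    another for pairs with many, since prediction accuracy differs between regimes.
    """
    few = common_neighbors < threshold
    models = {}
    for name, mask in [("few", few), ("many", ~few)]:
        clf = SVC(kernel="rbf", C=1.0)
        clf.fit(features[mask], labels[mask])
        models[name] = clf
    return models

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))                # hypothetical pairwise features
y = rng.integers(0, 2, size=200) * 2 - 1     # +1/-1 signed relationships
cn = rng.integers(0, 10, size=200)           # common-neighbor counts per pair
models = train_segmented(X, y, cn)
```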

  14. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...

  15. Development of a model for the prediction of the fuel consumption and nitrogen oxides emission trade-off for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik; Pierobon, Leonardo; Baldi, Francesco;

    2015-01-01

Consideration of this trade-off mechanism is required in the design of marine propulsion systems. This study investigates five different configurations of two-stroke diesel-based machinery systems for large ships and their influence on the mentioned trade-off. Numerical models of a low-speed two-stroke diesel

  16. Improving Prediction Accuracy of a Rate-Based Model of an MEA-Based Carbon Capture Process for Large-Scale Commercial Deployment

    Directory of Open Access Journals (Sweden)

    Xiaobo Luo

    2017-04-01

Carbon capture and storage (CCS) technology will play a critical role in reducing anthropogenic carbon dioxide (CO2) emission from fossil-fired power plants and other energy-intensive processes. However, the increase in energy cost caused by equipping a carbon capture process is the main barrier to its commercial deployment. To reduce the capital and operating costs of carbon capture, great efforts have been made to achieve optimal design and operation through process modeling, simulation, and optimization. Accurate models form an essential foundation for this purpose. This paper presents a study on developing a more accurate rate-based model in Aspen Plus® for the monoethanolamine (MEA)-based carbon capture process by multistage model validation. The modeling framework for this process was established first. The steady-state process model was then developed and validated at three stages, which included a thermodynamic model, physical property calculations, and a process model at the pilot plant scale, covering a wide range of pressures, temperatures, and CO2 loadings. The calculation correlations of liquid density and interfacial area were updated by coding Fortran subroutines in Aspen Plus®. The validation results show that the correlation combination for the thermodynamic model used in this study has higher accuracy than those of three other key publications, and the process model predictions show good agreement with the pilot plant experimental data. A case study was carried out for carbon capture from a 250 MWe combined cycle gas turbine (CCGT) power plant. Shorter packing height and lower specific duty were achieved using this accurate model.

  17. The predictability of large-scale wind-driven flows

    Directory of Open Access Journals (Sweden)

    A. Mahadevan

    2001-01-01

The singular values associated with optimally growing perturbations to stationary and time-dependent solutions for the general circulation in an ocean basin provide a measure of the rate at which solutions with nearby initial conditions begin to diverge, and hence, a measure of the predictability of the flow. In this paper, the singular vectors and singular values of stationary and evolving examples of wind-driven, double-gyre circulations in different flow regimes are explored. By changing the Reynolds number in simple quasi-geostrophic models of the wind-driven circulation, steady, weakly aperiodic and chaotic states may be examined. The singular vectors of the steady state reveal some of the physical mechanisms responsible for optimally growing perturbations. In time-dependent cases, the dominant singular values show significant variability in time, indicating strong variations in the predictability of the flow. When the underlying flow is weakly aperiodic, the dominant singular values co-vary with integral measures of the large-scale flow, such as the basin-integrated upper ocean kinetic energy and the transport in the western boundary current extension. Furthermore, in a reduced gravity quasi-geostrophic model of a weakly aperiodic, double-gyre flow, the behaviour of the dominant singular values may be used to predict a change in the large-scale flow, a feature not shared by an analogous two-layer model. When the circulation is in a strongly aperiodic state, the dominant singular values no longer vary coherently with integral measures of the flow. Instead, they fluctuate in a very aperiodic fashion on mesoscale time scales. The dominant singular vectors then depend strongly on the arrangement of mesoscale features in the flow and the evolved forms of the associated singular vectors have relatively short spatial scales. These results have several implications. In weakly aperiodic, periodic, and stationary regimes, the mesoscale energy

  18. Large Unifying Hybrid Supernetwork Model

    Institute of Scientific and Technical Information of China (English)

    LIU; Qiang; FANG; Jin-qing; LI; Yong

    2015-01-01

For depicting the multi-hybrid process, the large unifying hybrid network model (so-called LUHNM) has two sub-hybrid ratios besides dr. They are the deterministic hybrid ratio (so-called fd) and the random hybrid ratio (so-called gr), respectively.

  19. Large N Expansion. Vector Models

    CERN Document Server

    Nissimov, E; Nissimov, Emil; Pacheva, Svetlana

    2006-01-01

    Preliminary version of a contribution to the "Quantum Field Theory. Non-Perturbative QFT" topical area of "Modern Encyclopedia of Mathematical Physics" (SELECTA), eds. Aref'eva I, and Sternheimer D, Springer (2007). Consists of two parts - "main article" (Large N Expansion. Vector Models) and a "brief article" (BPHZL Renormalization).

  20. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models coupled to fire behavior models to simulate fire behavior. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  1. Scaling theory of floods for predictions in a changing climate: a model to generate ensembles of runoff from a large number of hillslopes (Invited)

    Science.gov (United States)

    Furey, P.; Gupta, V. K.; Troutman, B. M.

    2013-12-01

    Peak flows in individual rainfall-runoff events exhibit spatial scaling in the 21 km2 Goodwin Creek Experimental Watershed (GCEW) in Mississippi, USA. A nonlinear geophysical theory has been developing to understand how scaling in peak flows for rainfall-runoff events arises from solutions of mass and momentum conservation equations in channel networks with self-similar topologies and geometries. The conservation equations are specified at the natural hillslope-link scale. The central hypothesis of the theory is that scaling is an emergent property in the limit of large drainage area. To develop a physical understanding of scaling, runoff generation from each hillslope in the basin is needed. GCEW contains 544 hillslopes, and direct observations of infiltration exist, at best, at only a few locations. This situation is typical of all river basins in the world. As a result, representing the spatial and temporal variability of runoff generation throughout any river basin presents a great scientific challenge. Most models use point-scale equations for infiltration and point-scale observations to represent runoff generation at a larger scale, e.g. the hillslope scale. We develop a physical-statistical hypothesis, combining both top-down and bottom-up observations, that hillslope water loss is inversely related to a function of a lognormal random variable. We take a top-down approach to develop a new runoff generation model to test our hypothesis. The model is based on the assumption that the probability distributions of a runoff-loss ratio have a space-time rescaling property. For over 100 rainfall-runoff events in GCEW, we found that the spatial probability distributions of a runoff-loss ratio can be rescaled to a new distribution that is common to all events. We interpret that random within-event differences in runoff-loss ratios in the model arise due to soil moisture spatial variability of water loss during events, which is supported by observations. As an application of

  2. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  3. Childhood asthma prediction models: a systematic review.

    Science.gov (United States)

    Smit, Henriette A; Pinart, Mariona; Antó, Josep M; Keil, Thomas; Bousquet, Jean; Carlsen, Kai H; Moons, Karel G M; Hooft, Lotty; Carlsen, Karin C Lødrup

    2015-12-01

    Early identification of children at risk of developing asthma at school age is crucial, but the usefulness of childhood asthma prediction models in clinical practice is still unclear. We systematically reviewed all existing prediction models to identify preschool children with asthma-like symptoms at risk of developing asthma at school age. Studies were included if they developed a new prediction model or updated an existing model in children aged 4 years or younger with asthma-like symptoms, with assessment of asthma done between 6 and 12 years of age. 12 prediction models were identified in four types of cohorts of preschool children: those with health-care visits, those with parent-reported symptoms, those at high risk of asthma, or children in the general population. Four basic models included non-invasive, easy-to-obtain predictors only, notably family history, allergic disease comorbidities or precursors of asthma, and severity of early symptoms. Eight extended models included additional clinical tests, mostly specific IgE determination. Some models could better predict asthma development and other models could better rule out asthma development, but the predictive performance of no single model stood out in both aspects simultaneously. This finding suggests that there is a large proportion of preschool children with wheeze for which prediction of asthma development is difficult.

  4. Zephyr - the prediction models

    DEFF Research Database (Denmark)

    Nielsen, Torben Skov; Madsen, Henrik; Nielsen, Henrik Aalborg

    2001-01-01

    This paper briefly describes new models and methods for predicting the wind power output from wind farms. The system is being developed in a project which has the research organization Risø and the department of Informatics and Mathematical Modelling (IMM) as the modelling team and all the Dani...

  5. Predictability of extreme values in geophysical models

    Directory of Open Access Journals (Sweden)

    A. E. Sterk

    2012-09-01

    Full Text Available Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical models. We study whether finite-time Lyapunov exponents are larger or smaller for initial conditions leading to extremes. General statements on whether extreme values are more or less predictable are not possible: the predictability of extreme values depends on the observable, the attractor of the system, and the prediction lead time.
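
    To make the diagnostic concrete, a finite-time Lyapunov exponent can be estimated by tracking how quickly two initially close trajectories separate over a fixed lead time. The sketch below is not code from the paper: it uses the standard Lorenz-63 system with textbook parameters, and the lead time, perturbation size and sampling along the attractor are arbitrary illustrative choices.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s                      # textbook Lorenz-63 right-hand side
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(s, dt=0.01):
    k1 = lorenz63(s)                 # classical 4th-order Runge-Kutta step
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def ftle(s0, T=1.0, dt=0.01, eps=1e-8):
    """Finite-time Lyapunov exponent over lead time T from state s0."""
    a, b = s0.copy(), s0 + np.array([eps, 0.0, 0.0])
    for _ in range(int(T / dt)):
        a, b = step(a, dt), step(b, dt)
    return np.log(np.linalg.norm(b - a) / eps) / T

s = np.array([1.0, 1.0, 20.0])
for _ in range(1000):                # spin up onto the attractor
    s = step(s)
for _ in range(10):                  # FTLEs sampled along the attractor
    print(f"z = {s[2]:6.2f}   FTLE = {ftle(s):5.2f}")
    for _ in range(100):
        s = step(s)
```

    Comparing such exponents between initial conditions that do and do not lead to extreme values of an observable is the kind of question the study addresses.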

  6. Models of large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Frenk, C.S. (Physics Dept., Univ. of Durham (UK))

    1991-01-01

    The ingredients required to construct models of the cosmic large scale structure are discussed. Input from particle physics leads to a considerable simplification by offering concrete proposals for the geometry of the universe, the nature of the dark matter and the primordial fluctuations that seed the growth of structure. The remaining ingredient is the physical interaction that governs dynamical evolution. Empirical evidence provided by an analysis of a redshift survey of IRAS galaxies suggests that gravity is the main agent shaping the large-scale structure. In addition, this survey implies large values of the mean cosmic density, Ω ≳ 0.5, and is consistent with a flat geometry if IRAS galaxies are somewhat more clustered than the underlying mass. Together with current limits on the density of baryons from Big Bang nucleosynthesis, this lends support to the idea of a universe dominated by non-baryonic dark matter. Results from cosmological N-body simulations evolved from a variety of initial conditions are reviewed. In particular, neutrino dominated and cold dark matter dominated universes are discussed in detail. Finally, it is shown that apparent periodicities in the redshift distributions in pencil-beam surveys arise frequently from distributions which have no intrinsic periodicity but are clustered on small scales. (orig.).

  7. Confidence scores for prediction models

    DEFF Research Database (Denmark)

    Gerds, Thomas Alexander; van de Wiel, MA

    2011-01-01

    modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as well as the estimates of population average confidence scores. The latter can be used...... to distinguish rival prediction models with similar prediction performances. Furthermore, on the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer...

  8. Multiplexed Predictive Control of a Large Commercial Turbofan Engine

    Science.gov (United States)

    Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.

    2008-01-01

    Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
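
    To make the multiplexing idea concrete, here is a minimal numpy sketch (not the authors' implementation): rather than solving one optimization over all actuators at every step, a single actuator channel is re-optimised per control update, cycling through the channels. The toy plant matrices, cost and set point below are invented for illustration.

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # toy linear plant: x+ = A x + B u
B = np.array([[0.5, 0.0],
              [0.1, 0.4]])
x_ref = np.array([1.0, -0.5])       # invented set point

def best_single_channel(x, u, j):
    """Closed-form minimiser of ||A x + B u' - x_ref||^2 over channel j,
    holding every other channel of u' fixed at its current value."""
    bj = B[:, j]
    resid = x_ref - A @ x - B @ u + bj * u[j]   # target with channel j removed
    return (bj @ resid) / (bj @ bj)

x = np.zeros(2)
u = np.zeros(2)
for k in range(20):
    j = k % u.size                  # multiplexing: one actuator per update
    u[j] = best_single_channel(x, u, j)
    x = A @ x + B @ u               # apply the inputs and advance the plant
print("state:", x, "inputs:", u)    # x is driven toward x_ref over the cycles
```

    On a real engine model each channel update would itself be a small constrained quadratic program over a prediction horizon rather than the closed-form scalar minimisation used here; the point is only the sequential, cyclic update pattern.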

  9. Modelling, controlling, predicting blackouts

    CERN Document Server

    Wang, Chengwei; Baptista, Murilo S

    2016-01-01

    The electric power system is one of the cornerstones of modern society. One of its most serious malfunctions is the blackout, a catastrophic event that may disrupt a substantial portion of the system, playing havoc with human life and causing great economic losses. Thus, understanding the mechanisms leading to blackouts and creating a reliable and resilient power grid has been a major issue, attracting the attention of scientists, engineers and stakeholders. In this paper, we study the blackout problem in power grids by considering a practical phase-oscillator model. This model allows one to simultaneously consider different types of power sources (e.g., traditional AC power plants and renewable power sources connected by DC/AC inverters) and different types of loads (e.g., consumers connected to distribution networks and consumers directly connected to power plants). We propose two new control strategies based on our model, one for traditional power grids, and another one for smart grids. The control strategie...
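
    For readers unfamiliar with the model class, a second-order phase-oscillator ("swing equation") grid can be simulated in a few lines. The sketch below uses an invented four-node, all-to-all network with two generators and two loads; it is only indicative of the kind of model such papers build on, not the authors' model or parameters.

```python
import numpy as np

N = 4
P = np.array([1.0, 1.0, -1.0, -1.0])   # two generators (+), two loads (-)
K, alpha, dt = 2.0, 0.5, 0.01          # coupling, damping, time step (invented)
adj = np.ones((N, N)) - np.eye(N)      # toy all-to-all transmission network

theta = np.zeros(N)                    # phases
omega = np.zeros(N)                    # frequency deviations
for _ in range(5000):
    # swing equation: theta_i'' = P_i - alpha*theta_i'
    #                             + K * sum_j a_ij * sin(theta_j - theta_i)
    coupling = K * (adj * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * omega
    omega = omega + dt * (P - alpha * omega + coupling)

# A phase-locked state has (near-)equal frequencies; a blackout-like event
# in such models corresponds to loss of this synchrony.
print("frequency spread:", omega.max() - omega.min())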

  10. Melanoma Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing melanoma over a defined period of time will help clinicians identify individuals at higher risk, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    are calculated using on-line measurements of power production as well as HIRLAM predictions as input, thus taking advantage of the auto-correlation which is present in the power production for shorter prediction horizons. Statistical models are used to describe the relationship between observed energy production......The objective of the work is to investigate the performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM prediction to the wind farm. The features of the terrain, especially the topography, influence...... and HIRLAM predictions. The statistical models belong to the class of conditional parametric models. The models are estimated using local polynomial regression, but the estimation method is here extended to be adaptive in order to allow for slow changes in the system e.g. caused by the annual variations...

  12. Dark Radiation predictions from general Large Volume Scenarios

    CERN Document Server

    Hebecker, Arthur; Rompineve, Fabrizio; Witkowski, Lukas T

    2014-01-01

    Recent observations constrain the amount of Dark Radiation ($\Delta N_{\rm eff}$) and may even hint towards a non-zero value of $\Delta N_{\rm eff}$. It is by now well-known that this puts stringent constraints on the sequestered Large Volume Scenario (LVS), i.e. on LVS realisations with the Standard Model at a singularity. We go beyond this setting by considering LVS models where SM fields are realised on 7-branes in the geometric regime. As we argue, this naturally goes together with high-scale supersymmetry. The abundance of Dark Radiation is determined by the competition between the decay of the lightest modulus to axions, to the SM Higgs and to gauge fields. The latter decay channel avoids the most stringent constraints of the sequestered setting. Nevertheless, a rather robust prediction for a substantial amount of Dark Radiation can be made. This applies both to cases where the SM 4-cycles are stabilised by D-terms and are small "by accident" as well as to fibred models with the small cycles stabilised ...

  13. Predicting the Presence of Large Fish through Benthic Geomorphic Features

    Science.gov (United States)

    Knuth, F.; Sautter, L.; Levine, N. S.; Kracker, L.

    2013-12-01

    Marine Protected Areas are critical in sustaining the resilience of fish populations to commercial fishing operations. Using acoustic data to survey these areas promises efficiency, accuracy, and minimal environmental impact. In July 2013, the NOAA Ship Pisces collected bathymetric, backscatter and water column data for 10 proposed MPA sites along the U.S. Southeast Atlantic continental shelf. A total of 205 km2 of seafloor were mapped between Mayport, FL and Wilmington, NC, using the SIMRAD ME70 and EK60 echosounder systems. These data were processed in Caris HIPS, QPS FMGT, MATLAB and ArcGIS. The backscatter and bathymetry reveal various benthic geomorphic features, including flat sand, rippled sand, and rugose hard bottom. Water column data directly above highly rugose hard bottom contain the greatest counts of large fish. Using spatial statistics, such as a geographically weighted regression model, we aim to identify features of the benthic profile, including rugosity, curvature and slope, that can predict the presence of large fish. The success of this approach will greatly expedite fishery surveys, minimize operational cost and aid in making timely management decisions.
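
    A geographically weighted regression of the kind proposed reduces, at each location, to weighted least squares with a distance kernel. The sketch below is illustrative only: the coordinates, covariates and bandwidth are synthetic placeholders, not the survey data, and a Gaussian kernel is one common but not obligatory choice.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
xy = rng.uniform(0, 10, size=(n, 2))       # synthetic station coordinates
feats = rng.normal(size=(n, 3))            # stand-ins: rugosity, curvature, slope
X = np.column_stack([np.ones(n), feats])   # design matrix with intercept
y = X @ np.array([1.0, 2.0, 0.5, -0.3]) + rng.normal(scale=0.5, size=n)

def gwr_coefficients(site, bandwidth=2.0):
    """Weighted least squares centred on `site` with a Gaussian kernel,
    the core computation of a geographically weighted regression."""
    d2 = ((xy - site) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    Xw = X * w[:, None]                    # rows scaled by the kernel weights
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# Local coefficients vary from place to place; mapping them shows where
# rugosity, curvature or slope best explains the fish counts.
print(gwr_coefficients(np.array([5.0, 5.0])))
```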

  14. Predictability of extreme values in geophysical models

    NARCIS (Netherlands)

    Sterk, A.E.; Holland, M.P.; Rabassa, P.; Broer, H.W.; Vitolo, R.

    2012-01-01

    Extreme value theory in deterministic systems is concerned with unlikely large (or small) values of an observable evaluated along evolutions of the system. In this paper we study the finite-time predictability of extreme values, such as convection, energy, and wind speeds, in three geophysical model

  15. Prediction of nonlinear optical properties of large organic molecules

    Science.gov (United States)

    Cardelino, Beatriz H.

    1992-01-01

    The preparation of materials with large nonlinear responses usually requires involved synthetic processes. Thus, it is very advantageous for materials scientists to have a means of predicting nonlinear optical properties. The prediction of nonlinear optical properties has to be addressed first at the molecular level and then for the bulk material. For relatively large molecules, two types of calculations may be used: the sum-over-states and the finite-field approaches. The finite-field method was selected for this research because it is better suited for larger molecules.

  16. Prediction models in complex terrain

    DEFF Research Database (Denmark)

    Marti, I.; Nielsen, Torben Skov; Madsen, Henrik

    2001-01-01

    The objective of the work is to investigatethe performance of HIRLAM in complex terrain when used as input to energy production forecasting models, and to develop a statistical model to adapt HIRLAM prediction to the wind farm. The features of the terrain, specially the topography, influence...

  17. Hybrid modeling and prediction of dynamical systems

    Science.gov (United States)

    Lloyd, Alun L.; Flores, Kevin B.

    2017-01-01

    Scientific analysis often relies on the ability to make accurate predictions of a system’s dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model’s equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data. PMID:28692642
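
    The replacement of a mechanistic equation by a nonparametric representation can be illustrated on Lorenz-63 (a sketch under invented settings; the paper's nonparametric method and parameters may differ): here the z-equation is swapped for a nearest-neighbour estimate of dz/dt trained on previously observed states.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Generate "observed" states and the true dz/dt along a trajectory.
dt, s = 0.01, np.array([1.0, 1.0, 20.0])
states, dzdt = [], []
for _ in range(5000):
    f = lorenz_rhs(s)
    states.append(s.copy())
    dzdt.append(f[2])
    s = s + dt * f                       # simple Euler integration

# Nonparametric stand-in for the z-equation, trained on the observed states.
knn = KNeighborsRegressor(n_neighbors=5).fit(np.array(states), np.array(dzdt))

def hybrid_rhs(s):
    f = lorenz_rhs(s)                    # mechanistic x- and y-equations kept
    f[2] = knn.predict(s.reshape(1, -1))[0]  # z-equation replaced
    return f

s_h = np.array([1.0, 1.0, 20.0])
for _ in range(100):                     # short-term hybrid forecast
    s_h = s_h + dt * hybrid_rhs(s_h)
print("hybrid state after 1 time unit:", s_h)
```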

  18. Spatial Economics Model Predicting Transport Volume

    Directory of Open Access Journals (Sweden)

    Lu Bo

    2016-10-01

    Full Text Available It is extremely important to predict logistics requirements in a scientific and rational way. However, in recent years the improvement of prediction methods has not been very significant, and traditional statistical prediction methods suffer from low precision and poor interpretability, so they can neither guarantee the generalization ability of the prediction model theoretically nor explain the models effectively. Therefore, in combination with the theories of spatial economics, industrial economics, and neo-classical economics, and taking the city of Zhuanghe as the research object, the study identifies the leading industry that can produce a large number of cargoes, and further predicts the static logistics generation of Zhuanghe and its hinterlands. By integrating various factors that can affect regional logistics requirements, this study establishes a logistics requirements potential model based on spatial economic principles, and expands logistics requirements prediction from purely statistical principles to a new area of spatial and regional economics.

  19. Dynamic Predictive Density Combinations for Large Data Sets in Economics and Finance

    NARCIS (Netherlands)

    R. Casarin (Roberto); S. Grassi (Stefano); F. Ravazzolo (Francesco); H.K. van Dijk (Herman)

    2015-01-01

    A Bayesian nonparametric predictive model is introduced to construct time-varying weighted combinations of a large set of predictive densities. A clustering mechanism allocates these densities into a smaller number of mutually exclusive subsets. Using properties of Aitc

  20. Predictive models of forest dynamics.

    Science.gov (United States)

    Purves, Drew; Pacala, Stephen

    2008-06-13

    Dynamic global vegetation models (DGVMs) have shown that forest dynamics could dramatically alter the response of the global climate system to increased atmospheric carbon dioxide over the next century. But there is little agreement between different DGVMs, making forest dynamics one of the greatest sources of uncertainty in predicting future climate. DGVM predictions could be strengthened by integrating the ecological realities of biodiversity and height-structured competition for light, facilitated by recent advances in the mathematics of forest modeling, ecological understanding of diverse forest communities, and the availability of forest inventory data.

  1. Predicting intracranial hemorrhage after traumatic brain injury in low and middle-income countries: A prognostic model based on a large, multi-center, international cohort

    Directory of Open Access Journals (Sweden)

    Subaiya Saleena

    2012-11-01

    Full Text Available Abstract Background Traumatic brain injury (TBI) affects approximately 10 million people annually, of which intracranial hemorrhage is a devastating sequela, occurring in one-third to half of cases. Patients in low and middle-income countries (LMIC) are twice as likely to die following TBI as compared to those in high-income countries. Diagnostic capabilities and treatment options for intracranial hemorrhage are limited in LMIC as there are fewer computed tomography (CT) scanners and neurosurgeons per patient than in high-income countries. Methods The Medical Research Council CRASH-1 trial was utilized to build this model. The study cohort included all patients from LMIC who received a CT scan of the brain (n = 5669). Prognostic variables investigated included age, sex, time from injury to randomization, pupil reactivity, cause of injury, seizure and the presence of major extracranial injury. Results There were five predictors included in the final model: age, Glasgow Coma Scale, pupil reactivity, the presence of a major extracranial injury and time from injury to presentation. The model demonstrated good discrimination and excellent calibration (c-statistic 0.71). A simplified risk score was created for clinical settings to estimate the percentage risk of intracranial hemorrhage among TBI patients. Conclusion Simple prognostic models can be used in LMIC to estimate the risk of intracranial hemorrhage among TBI patients. Combined with clinical judgment this may facilitate risk stratification, rapid transfer to higher levels of care and treatment in resource-poor settings.

  2. Prediction of Large Structure Welding Residual Stress by Similitude Principles

    Institute of Scientific and Technical Information of China (English)

    Shude Ji; Liguo Zhang; Xuesong Liu; Jianguo Yang

    2009-01-01

    On the basis of the similitude principles, the concept of a virtual simulative component and the auxiliary value of welding residual stress deduced from welding conduction theory, the relation of the welding residual stress between the simulative component and the practical component was obtained. To verify the correctness of this relation, both welding experiments and numerical simulations of the simulative component and the practical component were carried out. The results show that the distribution of welding residual stress of the simulative component is the same as that of the practical component. The ratio of welding residual stress between the practical runner and the simulative component obtained by experiment or simulation was compared with the ratio given by the similitude principles, and the error is less than 10%. This provides a new way to predict the welding stress distribution of a large practical structure from a scaled-down physical model, which is important for welding experiments and numerical simulation.

  3. Evaluating Predictive Densities of US Output Growth and Inflation in a Large Macroeconomic Data Set

    OpenAIRE

    Rossi, Barbara; Sekhposyan, Tatevik

    2013-01-01

    We evaluate conditional predictive densities for U.S. output growth and inflation using a number of commonly used forecasting models that rely on a large number of macroeconomic predictors. More specifically, we evaluate how well conditional predictive densities based on the commonly used normality assumption fit actual realizations out-of-sample. Our focus on predictive densities acknowledges the possibility that, although some predictors can improve or deteriorate point forecasts, they migh...

  4. Large Representation Recurrences in Large N Random Unitary Matrix Models

    CERN Document Server

    Karczmarek, Joanna L

    2011-01-01

    In a random unitary matrix model at large N, we study the properties of the expectation value of the character of the unitary matrix in the rank k symmetric tensor representation. We address the problem of whether the standard semiclassical technique for solving the model in the large N limit can be applied when the representation is very large, with k of order N. We find that the eigenvalues do indeed localize on an extremum of the effective potential; however, for finite but sufficiently large k/N, it is not possible to replace the discrete eigenvalue density with a continuous one. Nonetheless, the expectation value of the character has a well-defined large N limit, and when the discreteness of the eigenvalues is properly accounted for, it shows an intriguing approximate periodicity as a function of k/N.

  5. Predicting MHC class I epitopes in large datasets

    Directory of Open Access Journals (Sweden)

    Lengauer Thomas

    2010-02-01

    Full Text Available Abstract Background Experimental screening of large sets of peptides with respect to their MHC binding capabilities is still very demanding due to the large number of possible peptide sequences and the extensive polymorphism of the MHC proteins. Therefore, there is significant interest in the development of computational methods for predicting the binding capability of peptides to MHC molecules, as a first step towards selecting peptides for actual screening. Results We have examined the performance of four diverse MHC Class I prediction methods on comparatively large HLA-A and HLA-B allele peptide binding datasets extracted from the Immune Epitope Database and Analysis resource (IEDB). The chosen methods span a representative cross-section of available methodology for MHC binding predictions. Until the development of IEDB, such an analysis was not possible, as the available peptide sequence datasets were small and spread out over many separate efforts. We tested three datasets which differ in the IC50 cutoff criteria used to select the binders and non-binders. The best performance was achieved when predictions were performed on the dataset consisting only of strong binders (IC50 less than 10 nM) and clear non-binders (IC50 greater than 10,000 nM). In addition, robustness of the predictions was only achieved for alleles that were represented with a sufficiently large (greater than 200), balanced set of binders and non-binders. Conclusions All four methods show good to excellent performance on the comprehensive datasets, with the artificial neural networks based method outperforming the other methods. However, all methods show pronounced difficulties in correctly categorizing intermediate binders.
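
    The dataset-construction step described above reduces to a simple filter. In the hypothetical snippet below the column names are invented and the IC50 cutoffs are taken from the abstract; the 20-80% balance band used to operationalise "balanced" is an assumption, while the >200 threshold is from the abstract.

```python
import pandas as pd

# Stand-in for an IEDB export; column names are hypothetical.
df = pd.DataFrame({
    "allele":  ["HLA-A*02:01"] * 4,
    "peptide": ["SLYNTVATL", "GILGFVFTL", "AAAAAAAAA", "QQQQQQQQQ"],
    "ic50_nM": [4.2, 8.9, 25000.0, 50000.0],
})

binders = df[df["ic50_nM"] < 10.0].assign(label=1)        # strong binders
nonbinders = df[df["ic50_nM"] > 10000.0].assign(label=0)  # clear non-binders
dataset = pd.concat([binders, nonbinders], ignore_index=True)

# Keep alleles with a large, reasonably balanced set of both classes.
counts = dataset.groupby("allele")["label"].agg(["size", "mean"])
usable = counts[(counts["size"] > 200) & counts["mean"].between(0.2, 0.8)]
print(usable)   # empty for this toy table; real IEDB exports are far larger
```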

  6. Large-scale prediction of drug-target relationships

    DEFF Research Database (Denmark)

    Kuhn, Michael; Campillos, Mónica; González, Paula

    2008-01-01

    , but also provides a more global view on drug-target relations. Here we review recent attempts to apply large-scale computational analyses to predict novel interactions of drugs and targets from molecular and cellular features. In this context, we quantify the family-dependent probability of two proteins...... to bind the same ligand as function of their sequence similarity. We finally discuss how phenotypic data could help to expand our understanding of the complex mechanisms of drug action....

  7. Quantitative Prediction of Concentrated Regions of Large and Superlarge Deposits in China

    Institute of Scientific and Technical Information of China (English)

    Wang Shicheng; Zhao Zhenyu; Wang Yutian

    2003-01-01

    Identification and quantitative prediction of large and superlarge deposits of solid mineral resources using the comprehensive-information mineral resource prediction theory and method is carried out nationwide in China at a scale of 1:5 000 000. Using deposit concentrated regions as the model units and concentrated mineralization anomaly regions as prediction units, the prediction is performed on a GIS platform. The technical route and research method of locating large and superlarge mineral deposits and the principle of compiling the attribute table of independent variables and functional variables are proposed. Following the methodology study, the qualitative location and quantitative prediction of mineral deposits are carried out with quantitative theory Ⅲ and characteristic analysis, respectively, and the advantages and disadvantages of the two methods are discussed. This research is significant for mineral resource prediction in the ten provinces of western China.

  8. Specialized Language Models using Dialogue Predictions

    CERN Document Server

    Popovici, Cosmin; Baggia, Paolo

    1996-01-01

    This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover, other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, do not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues coll...

  9. Creep Rupture Life Prediction Based on Analysis of Large Creep Deformation

    OpenAIRE

    YE Wenming; HU Xuteng; MA Xiaojian; SONG Yingdong

    2016-01-01

    A creep rupture life prediction method for high temperature components was proposed. The method was based on a true stress-strain elastoplastic creep constitutive model and the large deformation finite element analysis method. This method first used the high-temperature tensile stress-strain curve, expressed in true stress and strain, and the creep curve to build the material's elastoplastic and creep constitutive models respectively, then used the large deformation finite element method to calcula...

  10. Structuring very large domain models

    DEFF Research Database (Denmark)

    Störrle, Harald

    2010-01-01

    View/Viewpoint approaches like IEEE 1471-2000 or Kruchten's 4+1 view model are used to structure software architectures at a high level of granularity. While research has focused on architectural languages and on consistency between multiple views, practical questions such as the structuring a

  11. Characterizing Attention with Predictive Network Models.

    Science.gov (United States)

    Rosenberg, M D; Finn, E S; Scheinost, D; Constable, R T; Chun, M M

    2017-04-01

    Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals' attentional abilities. While being some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that: (i) attention is a network property of brain computation; (ii) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task; and (iii) this architecture supports a general attentional ability that is common to several laboratory-based tasks and is impaired in attention deficit hyperactivity disorder (ADHD). Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Modeling of Carbon Tetrachloride Flow and Transport in the Subsurface of the 200 West Disposal Sites: Large-Scale Model Configuration and Prediction of Future Carbon Tetrachloride Distribution Beneath the 216-Z-9 Disposal Site

    Energy Technology Data Exchange (ETDEWEB)

    Oostrom, Mart; Thorne, Paul D.; Zhang, Z. F.; Last, George V.; Truex, Michael J.

    2008-12-17

    Three-dimensional simulations considered migration of dense, nonaqueous phase liquid (DNAPL) consisting of CT and co-disposed organics in the subsurface as a function of the properties and distribution of subsurface sediments and of the properties and disposal history of the waste. Simulations of CT migration were conducted using the Water-Oil-Air mode of the Subsurface Transport Over Multiple Phases (STOMP) simulator. A large-scale model was configured to simulate CT and waste-water discharge from the major CT and waste-water disposal sites.

  13. Very Large System Dynamics Models - Lessons Learned

    Energy Technology Data Exchange (ETDEWEB)

    Jacob J. Jacobson; Leonard Malczynski

    2008-10-01

    This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they are manageable and understandable. This practice is generally reasonable and prudent; however, there are times when large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy National Laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

  14. Biotic and abiotic factors predicting the global distribution and population density of an invasive large mammal

    Science.gov (United States)

    Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.

    2017-01-01

    Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519
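
    In spirit, the model-selection exercise compares generalized linear models with and without biotic covariates. The sketch below does this on synthetic data with statsmodels, using a log link and AIC; this is one plausible reading of the procedure, not the authors' exact specification, and all data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 129                                  # number of density estimates used
precip = rng.normal(size=n)              # abiotic: precipitation
pet = rng.normal(size=n)                 # abiotic: potential evapotranspiration
agri = rng.normal(size=n)                # biotic: agriculture
carniv = rng.normal(size=n)              # biotic: large-carnivore richness
density = np.exp(0.3 * precip + 0.5 * agri - 0.2 * carniv
                 + rng.normal(scale=0.3, size=n))   # synthetic response

X_abiotic = sm.add_constant(np.column_stack([precip, pet]))
X_full = sm.add_constant(np.column_stack([precip, pet, agri, carniv]))

m_abiotic = sm.GLM(density, X_abiotic,
                   family=sm.families.Gaussian(sm.families.links.Log())).fit()
m_full = sm.GLM(density, X_full,
                family=sm.families.Gaussian(sm.families.links.Log())).fit()
print("AIC, abiotic only:", m_abiotic.aic)
print("AIC, abiotic + biotic:", m_full.aic)   # lower AIC -> better model
```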

  15. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS to within the few-percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  16. Large-Angle CMB Suppression and Polarisation Predictions

    CERN Document Server

    Copi, C.J.; Schwarz, D.J.; Starkman, G.D.

    2013-01-01

    The anomalous lack of large angle temperature correlations has been a surprising feature of the CMB since first observed by COBE-DMR and subsequently confirmed and strengthened by WMAP. This anomaly may point to the need for modifications of the standard model of cosmology or may show that our Universe is a rare statistical fluctuation within that model. Further observations of the temperature auto-correlation function will not elucidate the issue; sufficiently high precision statistical observations already exist. Instead, alternative probes are required. In this work we explore the expectations for forthcoming polarisation observations. We define a prescription to test the hypothesis that the large-angle CMB temperature perturbations in our Universe represent a rare statistical fluctuation within the standard cosmological model. These tests are based on the temperature-Q Stokes parameter correlation. Unfortunately these tests cannot be expected to be definitive. However, we do show that if this TQ-correlati...

  17. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  19. Erosion prediction model for super 13Cr tubing during large-scale hydraulic fracturing

    Institute of Scientific and Technical Information of China (English)

    王治国; 杨向同; 窦益华; 罗生俊

    2016-01-01

    Sand-carrying fracturing fluid flowing at high speed during large-scale hydraulic fracturing can erode the inner walls of tubing, thinning the tubing sidewall and reducing its load-bearing capacity. To accurately predict the erosion rate of tubing during large-scale hydraulic fracturing, the effects of erosion angle and fluid flow speed on the erosion rate of super 13Cr tubing were tested with a purpose-built erosion testing unit, using a solid-liquid two-phase fluid made of 0.2% guar fracturing fluid and 40/70-mesh quartz sand, and an erosion prediction model for fracturing with a large discharge rate and high sand ratio was constructed. With the new model, the effect of total injected fluid volume and discharge rate on the wall-thickness loss of super 13Cr tubing can be predicted accurately. Case study results show that super 13Cr tubing may lose 0.2-1.3 mm of sidewall thickness during large-scale fracturing; discharge rate and sand content should therefore be controlled to keep erosion from degrading the safety of the tubing wall.

  20. PREDICT : model for prediction of survival in localized prostate cancer

    NARCIS (Netherlands)

    Kerkmeijer, Linda G W; Monninkhof, Evelyn M.; van Oort, Inge M.; van der Poel, Henk G.; de Meerleer, Gert; van Vulpen, Marco

    2016-01-01

    Purpose: Current models for prediction of prostate cancer-specific survival do not incorporate all present-day interventions. In the present study, a pre-treatment prediction model for patients with localized prostate cancer was developed.Methods: From 1989 to 2008, 3383 patients were treated with I

  1. Predictions via large θ₁₃ from cascades

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki, E-mail: haba@phys.sci.osaka-u.ac.jp [Department of Physics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043 (Japan); Takahashi, Ryo, E-mail: ryo.takahashi@mpi-hd.mpg.de [Max-Planck-Institut fuer Kernphysik, Saupfercheckweg 1, 69117 Heidelberg (Germany)

    2011-08-26

    We investigate a relation among neutrino observables, three mixing angles and two mass squared differences, from a cascade texture of neutrino mass matrix. We show an allowed region of the correlation by use of current data of neutrino oscillation experiments. The relation predicts sharp correlations among neutrino mixing angles as 0.315 ≤ sin²θ₁₂ ≤ 0.332 and 0.480 ≤ sin²θ₂₃ ≤ 0.500 with a large θ₁₃ (0.03

  2. Prediction of broadband noise from large horizontal axis wind turbine generators

    Science.gov (United States)

    Grosveld, F. W.

    1984-01-01

    A method is presented for predicting the broadband noise spectra of large horizontal axis wind turbine generators. It includes contributions from such noise sources as the inflow turbulence to the rotor, the interactions between the turbulent boundary layers on the blade surfaces with their trailing edges and the wake due to a blunt trailing edge. The method is partly empirical and is based on acoustic measurements of large wind turbines and airfoil models. The predicted frequency spectra are compared with measured data from several machines including the MOD-OA, the MOD-2, the WTS-4 and the U.S. Windpower Inc. machine. Also included is a broadband noise prediction for the proposed MOD-5B. The significance of the effects of machine size, power output, trailing edge bluntness and distance to the receiver is illustrated. Good agreement is obtained between the predicted and measured far field noise spectra.

  3. Predictive Modeling of Cardiac Ischemia

    Science.gov (United States)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  4. Composite model with large mixing of neutrinos

    CERN Document Server

    Haba, N

    1999-01-01

    We suggest a simple composite model that induces the large flavor mixing of neutrino in the supersymmetric theory. This model has only one hyper-color in addition to the standard gauge group, which makes composite states of preons. In this model, {\\bf 10} and {\\bf 1} representations in SU(5) grand unified theory are composite states and produce the mass hierarchy. This explains why the large mixing is realized in the lepton sector, while the small mixing is realized in the quark sector. This model can naturally solve the atmospheric neutrino problem. We can also solve the solar neutrino problem by improving the model.

  5. Measurement and prediction of broadband noise from large horizontal axis wind turbine generators

    Science.gov (United States)

    Grosveld, F. W.; Shepherd, K. P.; Hubbard, H. H.

    1995-01-01

    A method is presented for predicting the broadband noise spectra of large wind turbine generators. It includes contributions from such noise sources as the inflow turbulence to the rotor, the interactions between the turbulent boundary layers on the blade surfaces with their trailing edges and the wake due to a blunt trailing edge. The method is partly empirical and is based on acoustic measurements of large wind turbines and airfoil models. Spectra are predicted for several large machines including the proposed MOD-5B. Measured data are presented for the MOD-2, the WTS-4, the MOD-OA, and the U.S. Windpower Inc. machines. Good agreement is shown between the predicted and measured far field noise spectra.

  6. Neural Fuzzy Inference System-Based Weather Prediction Model and Its Precipitation Predicting Experiment

    Directory of Open Access Journals (Sweden)

    Jing Lu

    2014-11-01

    Full Text Available We propose a weather prediction model in this article based on a neural network and fuzzy inference system (NFIS-WPM), and then apply it to predict daily fuzzy precipitation given meteorological premises for testing. The model consists of two parts: the first part is the "fuzzy rule-based neural network", which simulates sequential relations among fuzzy sets using an artificial neural network; and the second part is the "neural fuzzy inference system", which is based on the first part, but can learn new fuzzy rules from the previous ones according to the algorithm we proposed. NFIS-WPM (High Pro) and NFIS-WPM (Ave) are improved versions of this model. It is well known that the need for accurate weather prediction is apparent when considering the benefits. However, the excessive pursuit of accuracy in weather prediction makes some of the "accurate" prediction results meaningless, and numerical prediction models are often complex and time-consuming. By adapting this novel model to a precipitation prediction problem, we make the predicted outcomes of precipitation more accurate and the prediction methods simpler than the complex numerical forecasting models that occupy large computation resources, are time-consuming and have a low predictive accuracy rate. Accordingly, we achieve more accurate predictive precipitation results than by using traditional artificial neural networks that have low predictive accuracy.

  7. Refactoring Process Models in Large Process Repositories.

    NARCIS (Netherlands)

    Weber, B.; Reichert, M.U.

    2008-01-01

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introdu

  8. Refactoring Process Models in Large Process Repositories.

    NARCIS (Netherlands)

    Weber, B.; Reichert, M.U.

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are

  9. Numerical weather prediction model tuning via ensemble prediction system

    Science.gov (United States)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, a global top-end NWP model tuning exercise with preliminary results is presented.
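
    The EPPES feedback loop can be caricatured in a few lines (a toy, not the operational code): parameters are drawn from a Gaussian proposal, one draw per ensemble member, and the proposal is re-fitted from likelihood-weighted draws after each verification cycle. The skill function and all numerical values below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)
TRUE_PARAM = 2.5                     # "correct" closure parameter (invented)

def forecast_skill(theta):
    """Stand-in verification score: higher is better near TRUE_PARAM."""
    return -(theta - TRUE_PARAM) ** 2

mean, var = 0.0, 4.0                 # initial Gaussian proposal
for cycle in range(30):              # successive forecast/verification cycles
    thetas = rng.normal(mean, np.sqrt(var), size=50)  # one draw per member
    w = np.exp(forecast_skill(thetas))                # likelihood weights
    w /= w.sum()
    mean = np.sum(w * thetas)                         # re-fit the proposal
    var = np.sum(w * (thetas - mean) ** 2) + 1e-3     # floor keeps some spread
print(f"proposal mean {mean:.2f}, variance {var:.3f}")  # concentrates near 2.5
```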

  10. Uncertainties in predicting rice yield by current crop models under a wide range of climatic conditions

    NARCIS (Netherlands)

    Li, T.; Hasegawa, T.; Yin, X.; Zhu, Y.; Boote, K.; Adam, M.; Bregaglio, S.; Buis, S.; Confalonieri, R.; Fumoto, T.; Gaydon, D.; Marcaida III, M.; Nakagawa, H.; Oriol, P.; Ruane, A.C.; Ruget, F.; Singh, B.; Singh, U.; Tang, L.; Yoshida, H.; Zhang, Z.; Bouman, B.

    2015-01-01

    Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We evaluat

  11. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we...

  12. A prediction model for Clostridium difficile recurrence

    Directory of Open Access Journals (Sweden)

    Francis D. LaBarbera

    2015-02-01

    Full Text Available Background: Clostridium difficile infection (CDI) is a growing problem in the community and hospital setting. Its incidence has been on the rise over the past two decades, and it is quickly becoming a major concern for the health care system. A high rate of recurrence is one of the major hurdles in the successful treatment of C. difficile infection. There have been few studies that have looked at patterns of recurrence. The studies currently available have shown a number of risk factors associated with C. difficile recurrence (CDR); however, there is little consensus on the impact of most of the identified risk factors. Methods: Our study was a retrospective chart review of 198 patients diagnosed with CDI via polymerase chain reaction (PCR) from February 2009 to June 2013. In our study, we decided to use a machine learning algorithm called the Random Forest (RF) to analyze all of the factors proposed to be associated with CDR. This model is capable of making predictions based on a large number of variables, and has outperformed numerous other models and statistical methods. Results: We came up with a model that was able to accurately predict CDR with a sensitivity of 83.3%, specificity of 63.1%, and area under curve of 82.6%. Like other similar studies that have used the RF model, we also had very impressive results. Conclusions: We hope that in the future, machine learning algorithms, such as the RF, will see a wider application.
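
    A minimal version of the modelling step, on synthetic data with invented features (the study's chart-review variables are not reproduced here), might look as follows; it fits a random forest and reports the same metrics quoted above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 198                                  # cohort size from the abstract
X = rng.normal(size=(n, 6))              # six invented risk-factor columns
y = (X[:, 0] + 0.5 * X[:, 1]             # synthetic recurrence outcome
     + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

proba = rf.predict_proba(X_te)[:, 1]
pred = (proba > 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("sensitivity:", tp / (tp + fn))      # the study reports 83.3%
print("specificity:", tn / (tn + fp))      # the study reports 63.1%
print("AUC:", roc_auc_score(y_te, proba))  # the study reports 0.826
```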

  13. Predictive Model Assessment for Count Data

    Science.gov (United States)

    2007-09-05

    critique count regression models for patent data, and assess the predictive performance of Bayesian age-period-cohort models for larynx cancer counts in Germany. We consider a recent suggestion by Baker and... [Remnants of the source document: Figure 5 shows boxplots of various scores for the patent-data count regressions; Table 1 lists four predictive models for larynx cancer counts in Germany, 1998-2002.]

  14. Large pupils predict goal-driven eye movements.

    Science.gov (United States)

    Mathôt, Sebastiaan; Siebold, Alisha; Donk, Mieke; Vitu, Françoise

    2015-06-01

    Here we report that large pupils predict fixations of the eye on low-salient, inconspicuous parts of a visual scene. We interpret this as showing that mental effort, reflected by a dilation of the pupil, is required to guide gaze toward objects that are relevant to current goals, but that may not be very salient. When mental effort is low, reflected by a constriction of the pupil, the eyes tend to be captured by high-salient parts of the image, irrespective of top-down goals. The relationship between pupil size and visual saliency was not driven by luminance or a range of other factors that we considered. Crucially, the relationship was strongest when mental effort was invested exclusively in eye-movement control (i.e., reduced in a dual-task setting), which suggests that it is not due to general effort or arousal. Our finding illustrates that goal-driven control during scene viewing requires mental effort, and that pupil size can be used as an online measure to track the goal-drivenness of behavior. (c) 2015 APA, all rights reserved.

  15. Approach to Model Predictive Control of Large Wind Turbine Using Light Detection and Ranging Measurements

    Institute of Scientific and Technical Information of China (English)

    韩兵; 周腊吾; 陈浩; 田猛; 邓宁峰

    2016-01-01

    With the increasing size of large wind turbines, control methods face new opportunities and challenges, and the development of remote sensing technology opens a new research field for traditional control strategies. This paper focuses on the design of light detection and ranging (LIDAR) assisted model predictive control (MPC) of wind turbines, achieving feedforward compensation control of wind speed disturbances. First, blade element momentum (BEM) theory is used to analyse the wind turbine loads, and LIDAR forecasts the effective wind speed on the windward side of the rotor. An extended Kalman filter reconstructs the unknown states of the nonlinear wind turbine model for real-time processing of the state values over the prediction horizon; minimising the objective function yields the optimal control at the current time, so that the difference between the reference trajectory and the future output values is minimised. Finally, experiments comparing traditional control with LIDAR-assisted linear and nonlinear MPC show that combining LIDAR and MPC can improve the power coefficient of large wind turbines and mitigate their fatigue loads.

  16. Predictive modelling of ferroelectric tunnel junctions

    Science.gov (United States)

    Velev, Julian P.; Burton, John D.; Zhuravlev, Mikhail Ye; Tsymbal, Evgeny Y.

    2016-05-01

    Ferroelectric tunnel junctions combine the phenomena of quantum-mechanical tunnelling and switchable spontaneous polarisation of a nanometre-thick ferroelectric film into novel device functionality. Switching the ferroelectric barrier polarisation direction produces a sizable change in resistance of the junction—a phenomenon known as the tunnelling electroresistance effect. From a fundamental perspective, ferroelectric tunnel junctions and their version with ferromagnetic electrodes, i.e., multiferroic tunnel junctions, are testbeds for studying the underlying mechanisms of tunnelling electroresistance as well as the interplay between electric and magnetic degrees of freedom and their effect on transport. From a practical perspective, ferroelectric tunnel junctions hold promise for disruptive device applications. In a very short time, they have traversed the path from basic model predictions to prototypes for novel non-volatile ferroelectric random access memories with non-destructive readout. This remarkable progress is to a large extent driven by a productive cycle of predictive modelling and innovative experimental effort. In this review article, we outline the development of the ferroelectric tunnel junction concept and the role of theoretical modelling in guiding experimental work. We discuss a wide range of physical phenomena that control the functional properties of ferroelectric tunnel junctions and summarise the state-of-the-art achievements in the field.

  17. Nonlinear chaotic model for predicting storm surges

    Directory of Open Access Journals (Sweden)

    M. Siek

    2010-09-01

    Full Text Available This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented the univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested for predicting storm surge dynamics for different stormy conditions in the North Sea, and are compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
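
    A minimal sketch of the delay-embedding, local-model idea described above, assuming a scalar observable: reconstruct the phase space with time-delay vectors, locate the dynamical neighbours of the current state, and average their observed successors for a one-step prediction. The embedding parameters here are illustrative, not the paper's optimized values:

        # One-step local prediction in a reconstructed phase space.
        import numpy as np

        def local_model_predict(series, m=3, tau=2, k=5):
            """Predict the next value of `series` from its k nearest
            dynamical neighbours in an m-dimensional delay embedding."""
            # Delay vectors [x(i), x(i+tau), ..., x(i+(m-1)*tau)]
            span = (m - 1) * tau
            n = len(series) - span
            emb = np.array([series[i:i + span + 1:tau] for i in range(n)])
            query = emb[-1]              # current state
            cand = emb[:-1]              # candidates with a known successor
            d = np.linalg.norm(cand - query, axis=1)
            idx = np.argsort(d)[:k]
            return series[idx + span + 1].mean()   # average the successors

        t = np.arange(500)
        x = np.sin(0.2 * t) + 0.05 * np.random.default_rng(1).normal(size=500)
        print(local_model_predict(x))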

  18. Nonlinear chaotic model for predicting storm surges

    NARCIS (Netherlands)

    Siek, M.; Solomatine, D.P.

    This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by the adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables.

  19. EFFICIENT PREDICTIVE MODELLING FOR ARCHAEOLOGICAL RESEARCH

    OpenAIRE

    Balla, A.; Pavlogeorgatos, G.; Tsiafakis, D.; Pavlidis, G.

    2014-01-01

    The study presents a general methodology for designing, developing and implementing predictive modelling for identifying areas of archaeological interest. The methodology is based on documented archaeological data and geographical factors, geospatial analysis and predictive modelling, and has been applied to the identification of possible Macedonian tombs’ locations in Northern Greece. The model was tested extensively and the results were validated using a commonly used predictive gain,...

  20. Distributed Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus Fogtmann; Vandenberghe, Lieven; Poulsen, Niels Kjølstad

    2016-01-01

    Integration of a large number of flexible consumers in a smart grid requires a scalable power balancing strategy. We formulate the control problem as an optimization problem to be solved repeatedly by the aggregator in a model predictive control framework. To solve the large-scale control problem...

  1. Laboratory Modeling of Aspects of Large Fires,

    Science.gov (United States)

    1984-04-30

    AD-A153 152, DNA-TR-84-18: Laboratory Modeling of Aspects of Large Fires, by G.F. Carrier, F.E. Fendell, R.D. Fleeter, and others.

  2. Large scale topic modeling made practical

    DEFF Research Database (Denmark)

    Wahlgreen, Bjarne Ørum; Hansen, Lars Kai

    2011-01-01

    Topic models are of broad interest. They can be used for query expansion and result structuring in information retrieval and as an important component in services such as recommender systems and user adaptive advertising. In large scale applications both the size of the database (number of documents) and the size of the vocabulary can be large ... topics at par with a much larger case specific vocabulary.

  3. How to Establish Clinical Prediction Models

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-03-01

    Full Text Available A clinical prediction model can be applied to several challenging clinical scenarios: screening high-risk individuals for asymptomatic disease, predicting future events such as disease or death, and assisting medical decision-making and health education. Despite the impact of clinical prediction models on practice, prediction modeling is a complex process requiring careful statistical analyses and sound clinical judgement. Although there is no definite consensus on the best methodology for model development and validation, a few recommendations and checklists have been proposed. In this review, we summarize five steps for developing and validating a clinical prediction model: preparation for establishing clinical prediction models; dataset selection; handling variables; model generation; and model evaluation and validation. We also review several studies that detail methods for developing clinical prediction models with comparable examples from real practice. After model development and vigorous validation in relevant settings, possibly with evaluation of utility/usability and fine-tuning, good models can be ready for use in practice. We anticipate that this framework will revitalize the use of predictive or prognostic research in endocrinology, leading to active applications in real clinical practice.
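
    As a toy illustration of the develop-then-validate workflow outlined above (not the review's own code), the sketch below fits a logistic regression on a development set and checks discrimination and calibration on a held-out validation set; the data are synthetic:

        # Develop a clinical prediction model, then validate it internally.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import roc_auc_score
        from sklearn.calibration import calibration_curve

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 5))                    # candidate predictors
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

        # Dataset selection: split off a validation set
        X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

        # Model generation
        model = LogisticRegression().fit(X_dev, y_dev)

        # Evaluation and validation: discrimination and calibration
        p = model.predict_proba(X_val)[:, 1]
        print("AUC (discrimination):", round(roc_auc_score(y_val, p), 3))
        frac_pos, mean_pred = calibration_curve(y_val, p, n_bins=5)
        print("calibration (predicted vs observed):",
              list(zip(mean_pred.round(2), frac_pos.round(2))))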

  4. Application of Nonlinear Predictive Control Based on RBF Network Predictive Model in MCFC Plant

    Institute of Scientific and Technical Information of China (English)

    CHEN Yue-hua; CAO Guang-yi; ZHU Xin-jian

    2007-01-01

    This paper describes a nonlinear model predictive controller for regulating a molten carbonate fuel cell (MCFC). A detailed mechanistic model of the output voltage of an MCFC is presented first. However, this model is too complicated to be used in a control system, so an offline radial basis function (RBF) network is introduced to build a nonlinear predictive model. The optimal control sequences are then obtained by applying the golden mean method. The models and controller have been realized in the MATLAB environment. Simulation results indicate that the proposed algorithm exhibits a satisfying control effect even when the current densities vary greatly.
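
    Reading the "golden mean method" as a standard golden-section line search over a scalar control input, a minimal sketch is given below; the one-step cost function is a hypothetical stand-in for the RBF-model-based objective:

        # Golden-section search for the input that minimizes a unimodal
        # one-step control objective on a bounded interval.
        import math

        def golden_section_min(f, lo, hi, tol=1e-6):
            inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
            a, b = lo, hi
            c = b - inv_phi * (b - a)
            d = a + inv_phi * (b - a)
            while b - a > tol:
                if f(c) < f(d):          # minimum lies in [a, d]
                    b, d = d, c
                    c = b - inv_phi * (b - a)
                else:                    # minimum lies in [c, b]
                    a, c = c, d
                    d = a + inv_phi * (b - a)
            return (a + b) / 2

        # Hypothetical cost: tracking error plus control effort
        cost = lambda u: (0.8 - 0.5 * u) ** 2 + 0.1 * u ** 2
        print(golden_section_min(cost, 0.0, 2.0))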

  5. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Science.gov (United States)

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data are becoming increasingly ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning is considered one of the most promising techniques for tackling tremendous volumes of high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China, is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.

  6. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Directory of Open Access Journals (Sweden)

    Xiaolei Ma

    Full Text Available Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data are becoming increasingly ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning is considered one of the most promising techniques for tackling tremendous volumes of high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China, is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphics Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify vulnerable links for proactive congestion mitigation.

  7. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest ... computational resources. The identification method is suitable for predictive control.

  8. Modeling capillary forces for large displacements

    NARCIS (Netherlands)

    Mastrangeli, M.; Arutinov, G.; Smits, E.C.P.; Lambert, P.

    2014-01-01

    Originally applied to the accurate, passive positioning of submillimetric devices, recent works proved capillary self-alignment effective also for larger components and relatively large initial offsets. In this paper, we describe an analytic quasi-static model of 1D capillary restoring forces that ...

  9. Pronunciation Modeling for Large Vocabulary Speech Recognition

    Science.gov (United States)

    Kantor, Arthur

    2010-01-01

    The large pronunciation variability of words in conversational speech is one of the major causes of low accuracy in automatic speech recognition (ASR). Many pronunciation modeling approaches have been developed to address this problem. Some explicitly manipulate the pronunciation dictionary as well as the set of the units used to define the…

  10. Inverse modeling for Large-Eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.

    1998-01-01

    Approximate higher order polynomial inversion of the top-hat filter is developed with which the turbulent stress tensor in Large-Eddy Simulation can be consistently represented using the filtered field. Generalized (mixed) similarity models are proposed which improved the agreement with the kinetic

  11. Large eddy simulation modelling of combustion for propulsion applications.

    Science.gov (United States)

    Fureby, C

    2009-07-28

    Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and for power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while modelling the effects of the small scales. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data, of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding, burner-to-burner interactions and combustion oscillations.

  12. Distributed model predictive control made easy

    CERN Document Server

    Negenborn, Rudy

    2014-01-01

    The rapid evolution of computer science, communication, and information technology has enabled the application of control techniques to systems beyond the possibilities of control theory just a decade ago. Critical infrastructures such as electricity, water, traffic and intermodal transport networks are now in the scope of control engineers. The sheer size of such large-scale systems requires the adoption of advanced distributed control approaches. Distributed model predictive control (MPC) is one of the promising control methodologies for control of such systems.   This book provides a state-of-the-art overview of distributed MPC approaches, while at the same time making clear directions of research that deserve more attention. The core and rationale of 35 approaches are carefully explained. Moreover, detailed step-by-step algorithmic descriptions of each approach are provided. These features make the book a comprehensive guide both for those seeking an introduction to distributed MPC as well as for those ...

  13. Case studies in archaeological predictive modelling

    NARCIS (Netherlands)

    Verhagen, Jacobus Wilhelmus Hermanus Philippus

    2007-01-01

    In this thesis, a collection of papers is put together dealing with various quantitative aspects of predictive modelling and archaeological prospection. Among the issues covered are the effects of survey bias on the archaeological data used for predictive modelling, and the complexities of testing p

  14. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF) and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz and SYM-H allows us to describe both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  15. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  16. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

    Artificial drainage has been carried out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model, the study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches, and secondly to develop a method for selecting the models which give a good prediction of artificially drained areas when used in conjunction. The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates. With a large ensemble ...
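
    A minimal sketch of the selective-ensemble idea, assuming tabular soil covariates and a binary drained/undrained label: train several model families, score each by cross-validation, and keep only those above a chosen threshold. The data, model settings, and threshold are placeholders:

        # Train a pool of models and select the well-performing ones.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 6))    # placeholder soil covariates
        y = (X[:, 0] - X[:, 2] + rng.normal(size=300) > 0).astype(int)

        candidates = {
            "tree": DecisionTreeClassifier(max_depth=5),
            "logistic": LogisticRegression(),
            "mlp": MLPClassifier(max_iter=1000),
            "svm": SVC(),
        }
        scores = {name: cross_val_score(m, X, y, cv=5).mean()
                  for name, m in candidates.items()}
        selected = [name for name, s in scores.items() if s >= 0.7]
        print(scores)
        print("selected for the ensemble:", selected)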

  17. A Dipole on the Sky: Predictions for Hypervelocity Stars from the Large Magellanic Cloud

    CERN Document Server

    Boubert, Douglas

    2016-01-01

    We predict the distribution of hypervelocity stars (HVSs) ejected from the Large Magellanic Cloud (LMC), under the assumption that the dwarf galaxy hosts a central massive black hole (MBH). For the majority of stars ejected from the LMC the orbital velocity of the LMC has contributed a significant fraction of their galactic rest frame velocity, leading to a dipole density distribution on the sky. We quantify the dipole using spherical harmonic analysis and contrast with the monopole expected for HVSs ejected from the Galactic Center. There is a tendril in the density distribution that leads the LMC which is coincident with the well-known and unexplained clustering of HVSs in the constellations of Leo and Sextans. Our model is falsifiable, since it predicts that Gaia will reveal a large density of HVSs in the southern hemisphere.

  18. Validation of Biomarker-based risk prediction models

    OpenAIRE

    Taylor, Jeremy M.G.; Ankerst, Donna P.; Andridge, Rebecca R.

    2008-01-01

    The increasing availability and use of predictive models to facilitate informed decision making highlights the need for careful assessment of the validity of these models. In particular, models involving biomarkers require careful validation for two reasons: issues with overfitting when complex models involve a large number of biomarkers, and inter-laboratory variation in assays used to measure biomarkers. In this paper we distinguish between internal and external statistical validation. Inte...

  19. An economic model of large Medicaid practices.

    Science.gov (United States)

    Cromwell, J; Mitchell, J B

    1984-06-01

    Public attention given to Medicaid "mills" prompted this more general investigation of the origins of large Medicaid practices. A dual market demand model is proposed showing how Medicaid competes with private insurers for scarce physician time. Various program parameters--fee schedules, coverage, collection costs--are analyzed along with physician preferences, specialties, and other supply-side characteristics. Maximum likelihood techniques are used to test the model. The principal finding is that in raising Medicaid fees, as many physicians opt into the program as expand their Medicaid caseloads to exceptional levels, leaving the maldistribution of patients unaffected while notably improving access. Still, the fact that Medicaid fees are lower than those of private insurers does lead to reduced access to more qualified practitioners. Where anti-Medicaid sentiment is stronger, access is also reduced and large Medicaid practices more likely to flourish.

  20. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered, and the state of the art in computationally tractable methods based on uncertainty tubes is presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  1. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  2. Engineering large animal models of human disease.

    Science.gov (United States)

    Whitelaw, C Bruce A; Sheets, Timothy P; Lillico, Simon G; Telugu, Bhanu P

    2016-01-01

    The recent development of gene editing tools and methodology for use in livestock enables the production of new animal disease models. These tools facilitate site-specific mutation of the genome, allowing animals carrying known human disease mutations to be produced. In this review, we describe the various gene editing tools and how they can be used for a range of large animal models of diseases. This genomic technology is in its infancy but the expectation is that through the use of gene editing tools we will see a dramatic increase in animal model resources available for both the study of human disease and the translation of this knowledge into the clinic. Comparative pathology will be central to the productive use of these animal models and the successful translation of new therapeutic strategies.

  3. Large-scale multimedia modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  4. Prediction of Catastrophes: an experimental model

    CERN Document Server

    Peters, Randall D; Pomeau, Yves

    2012-01-01

    Catastrophes of all kinds can be roughly defined as short-duration, large-amplitude events following and followed by long periods of "ripening". Major earthquakes surely belong to the class of 'catastrophic' events. Because of the space-time scales involved, an experimental approach is often difficult, not to say impossible, however desirable it may be. Described in this article is a "laboratory" setup that yields data of a type that is amenable to theoretical methods of prediction. Observations are made of a critical slowing down in the noisy signal of a solder wire creeping under constant stress. This effect is shown to be a fair signal of the forthcoming catastrophe in both of two dynamical models. The first is an "abstract" model in which a time dependent quantity drifts slowly but makes quick jumps from time to time. The second is a realistic physical model for the collective motion of dislocations (the Ananthakrishna set of equations for creep). Hope thus exists that similar changes in the response to ...

  5. A large-scale evaluation of computational protein function prediction

    NARCIS (Netherlands)

    Radivojac, P.; Clark, W.T.; Oron, T.R.; Schnoes, A.M.; Wittkop, T.; Kourmpetis, Y.A.I.; Dijk, van A.D.J.; Friedberg, I.

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high

  6. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Directory of Open Access Journals (Sweden)

    Saerom Park

    Full Text Available Market impact cost is the most significant portion of implicit transaction costs; reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  7. Predicting Market Impact Costs Using Nonparametric Machine Learning Models.

    Science.gov (United States)

    Park, Saerom; Lee, Jaewook; Son, Youngdoo

    2016-01-01

    Market impact cost is the most significant portion of implicit transaction costs; reducing it can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural networks, Gaussian processes, and support vector regression, to predict market impact cost accurately and to provide a predictive model that is versatile in the number of variables. We collected a large amount of real single-transaction data of the US stock market from a Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives for reducing transaction costs by considerably improving prediction performance.

  8. Evaluation of burst pressure prediction models for line pipes

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xian-Kui, E-mail: zhux@battelle.org [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States); Leis, Brian N. [Battelle Memorial Institute, 505 King Avenue, Columbus, OH 43201 (United States)

    2012-01-15

    Accurate prediction of burst pressure plays a central role in engineering design and integrity assessment of oil and gas pipelines. Theoretical and empirical solutions for such prediction are evaluated in this paper relative to a burst pressure database comprising more than 100 tests covering a variety of pipeline steel grades and pipe sizes. Solutions considered include three based on plasticity theory for the end-capped, thin-walled, defect-free line pipe subjected to internal pressure in terms of the Tresca, von Mises, and ZL (or Zhu-Leis) criteria, one based on a cylindrical instability stress (CIS) concept, and a large group of analytical and empirical models previously evaluated by Law and Bowie (International Journal of Pressure Vessels and Piping, 84, 2007: 487-492). It is found that these models can be categorized into either a Tresca family or a von Mises family of solutions, except for the Margetson and Zhu-Leis models. The viability of predictions is measured via statistical analyses in terms of a mean error and its standard deviation. Consistent with an independent parallel evaluation using another large database, the Zhu-Leis solution is found best for predicting burst pressure, including consideration of strain hardening effects, while the Tresca strength solutions, including Barlow, maximum shear stress, Turner, and the ASME boiler code, provide reasonably good predictions for the class of line-pipe steels with intermediate strain hardening response. Highlights: this paper evaluates different burst pressure prediction models for line pipes; the existing models are categorized into two major groups of Tresca and von Mises solutions; the prediction quality of each model is assessed statistically using a large full-scale burst test database; the Zhu-Leis solution is identified as the best predictive model.
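
    For orientation, the three flow-theory solutions named above are often quoted in closed form for thin-walled, defect-free pipe with strain-hardening exponent n, the Zhu-Leis factor being the average of the Tresca and von Mises factors. The sketch below implements those commonly cited forms; the coefficients should be checked against the original paper before any engineering use:

        # Thin-wall burst-pressure estimates under three flow criteria.
        import math

        def burst_pressures(sigma_uts, t, D, n):
            """Return (Tresca, Zhu-Leis, von Mises) burst pressures for
            ultimate strength sigma_uts, wall thickness t, diameter D,
            and strain-hardening exponent n (consistent units)."""
            base = 4.0 * t * sigma_uts / D
            p_tresca = base * 0.5 ** (n + 1)
            p_mises = base * (1.0 / math.sqrt(3.0)) ** (n + 1)
            # Average shear stress criterion: mean of the two factors above
            p_zl = base * ((0.5 + 1.0 / math.sqrt(3.0)) / 2.0) ** (n + 1)
            return p_tresca, p_zl, p_mises

        # Illustrative X65-like numbers: 610 MPa UTS, 10 mm wall, 610 mm OD
        print(burst_pressures(610.0, 10.0, 610.0, 0.1))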

  9. Analytical modeling of large-angle CMBR anisotropies from textures

    CERN Document Server

    Magueijo, J

    1995-01-01

    We propose an analytic method for predicting the large angle CMBR temperature fluctuations induced by model textures. The model makes use of only a small number of phenomenological parameters which ought to be measured from simple simulations. We derive semi-analytically the C_l spectrum for 2 ≤ l ≤ 30 together with its associated non-Gaussian cosmic variance error bars. A slightly tilted spectrum with an extra suppression at low l is found, and we investigate the dependence of the tilt on the parameters of the model. We also produce a prediction for the two point correlation function. We find a high level of cosmic confusion between texture scenarios and standard inflationary theories in any of these quantities. However, we discover that a distinctive non-Gaussian signal ought to be expected at low l, reflecting the prominent effect of the last texture in these multipoles.

  10. Model for predicting mountain wave field uncertainties

    Science.gov (United States)

    Damiens, Florentin; Lott, François; Millet, Christophe; Plougonven, Riwal

    2017-04-01

    Studying the propagation of acoustic waves through the troposphere requires knowledge of wind speed and temperature gradients from the ground up to about 10-20 km. Typical planetary boundary layer flows are known to present vertical low-level shears that can interact with mountain waves, thereby triggering small-scale disturbances. Resolving these fluctuations for long-range propagation problems is, however, not feasible because of computer memory/time restrictions, and thus they need to be parameterized. When the disturbances are small enough, these fluctuations can be described by linear equations. Previous works by the co-authors have shown that the critical layer dynamics that occur near the ground produce large horizontal flows and buoyancy disturbances that result in intense downslope winds and gravity wave breaking. While these phenomena manifest almost systematically for high Richardson numbers and when the boundary layer depth is relatively small compared to the mountain height, the process by which static stability affects downslope winds remains unclear. In the present work, new linear mountain gravity wave solutions are tested against numerical predictions obtained with the Weather Research and Forecasting (WRF) model. For Richardson numbers typically larger than unity, the mesoscale model is used to quantify the effect of neglected nonlinear terms on downslope winds and mountain wave patterns. At these regimes, the large downslope winds transport warm air, a so-called "Foehn" effect that can impact sound propagation properties. The sensitivity of small-scale disturbances to the Richardson number is quantified using two-dimensional spectral analysis. It is shown through a pilot study of subgrid scale fluctuations of boundary layer flows over realistic mountains that the cross-spectrum of the mountain wave field is made up of the same components found in WRF simulations. The impact of each individual component on acoustic wave propagation is discussed in terms of

  11. Large genetic animal models of Huntington's Disease.

    Science.gov (United States)

    Morton, A Jennifer; Howland, David S

    2013-01-01

    The dominant nature of the Huntington's disease gene mutation has allowed genetic models to be developed in multiple species, with the mutation causing an abnormal neurological phenotype in all animals in which it is expressed. Many different rodent models have been generated. The most widely used of these, the transgenic R6/2 mouse, carries the mutation in a fragment of the human huntingtin gene and has a rapidly progressive and fatal neurological phenotype with many relevant pathological changes. Nevertheless, their rapid decline has been frequently questioned in the context of a disease that takes years to manifest in humans, and strenuous efforts have been made to make rodent models that are genetically more 'relevant' to the human condition, including full length huntingtin gene transgenic and knock-in mice. While there is no doubt that we have learned, and continue to learn much from rodent models, their usefulness is limited by two species constraints. First, the brains of rodents differ significantly from humans in both their small size and their neuroanatomical organization. Second, rodents have much shorter lifespans than humans. Here, we review new approaches taken to these challenges in the development of models of Huntington's disease in large brained, long-lived animals. We discuss the need for such models, and how they might be used to fill specific niches in preclinical Huntington's disease research, particularly in testing gene-based therapeutics. We discuss the advantages and disadvantages of animals in which the prodromal period of disease extends over a long time span. We suggest that there is considerable 'value added' for large animal models in preclinical Huntington's disease research.

  12. Impact of modellers' decisions on hydrological a priori predictions

    Science.gov (United States)

    Holländer, H. M.; Bormann, H.; Blume, T.; Buytaert, W.; Chirico, G. B.; Exbrayat, J.-F.; Gustafsson, D.; Hölzel, H.; Krauße, T.; Kraft, P.; Stoll, S.; Blöschl, G.; Flühler, H.

    2014-06-01

    In practice, the catchment hydrologist is often confronted with the task of predicting discharge without having the needed records for calibration. Here, we report the discharge predictions of 10 modellers - using the model of their choice - for the man-made Chicken Creek catchment (6 ha, northeast Germany, Gerwin et al., 2009b) and we analyse how well they improved their prediction in three steps based on adding information prior to each following step. The modellers predicted the catchment's hydrological response in its initial phase without having access to the observed records. They used conceptually different physically based models and their modelling experience differed largely. Hence, they encountered two problems: (i) simulating discharge for an ungauged catchment and (ii) using models that were developed for catchments which are not in a state of landscape transformation. The prediction exercise was organized in three steps: (1) for the first prediction the modellers received a basic data set describing the catchment to a degree somewhat more complete than usually available for a priori predictions of ungauged catchments; they did not obtain information on stream flow, soil moisture, or groundwater response and therefore had to guess the initial conditions; (2) before the second prediction they inspected the catchment on-site and discussed their first prediction attempt; (3) for their third prediction they were offered additional data by charging them pro forma with the costs for obtaining this additional information. Holländer et al. (2009) discussed the range of predictions obtained in step (1). Here, we detail the modellers' assumptions and decisions in accounting for the various processes. We document the prediction progress as well as the learning process resulting from the availability of added information. For the second and third steps, the progress in prediction quality is evaluated in relation to individual modelling experience and costs of

  13. Energy based prediction models for building acoustics

    DEFF Research Database (Denmark)

    Brunskog, Jonas

    2012-01-01

    In order to reach robust and simplified yet accurate prediction models, energy based principles are commonly used in many fields of acoustics, especially in building acoustics. This includes simple energy flow models, the framework of statistical energy analysis (SEA), as well as more elaborate principles such as wave intensity analysis (WIA). The European standards for building acoustic predictions, the EN 12354 series, are based on energy flow and SEA principles. In the present paper, different energy based prediction models are discussed and critically reviewed. Special attention is placed

  14. Massive Predictive Modeling using Oracle R Enterprise

    CERN Document Server

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  15. Loan Default Prediction on Large Imbalanced Data Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2012-10-01

    Full Text Available In this paper, we propose an improved random forest algorithm which allocates a weight to each decision tree in the forest during tree aggregation for prediction; the weights are easily calculated from out-of-bag errors during training. We compare the performance of the proposed algorithm and the original one on loan default prediction datasets. We also use these two algorithms to create two kinds of balanced random forests to deal with the imbalanced data problem. Experimental results show that our proposed algorithm beats the original random forest in terms of both balanced and overall accuracy metrics. Experiments also show that parallel random forests can greatly improve random forests' efficiency during the learning process.
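
    A minimal sketch of the weighting idea, using a hand-rolled bootstrap ensemble so that each tree's out-of-bag rows are explicit: every tree is weighted by its out-of-bag accuracy and predictions are aggregated by weighted vote. The data, ensemble size, and use of raw accuracy as the weight are placeholders, not the paper's exact scheme:

        # Bootstrap ensemble of trees weighted by out-of-bag accuracy.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 8))                            # placeholder loan features
        y = (X[:, 0] + rng.normal(size=500) > 1.0).astype(int)   # imbalanced default labels

        trees, weights = [], []
        for _ in range(50):
            boot = rng.integers(0, len(X), len(X))        # bootstrap sample indices
            oob = np.setdiff1d(np.arange(len(X)), boot)   # rows this tree never saw
            tree = DecisionTreeClassifier().fit(X[boot], y[boot])
            trees.append(tree)
            weights.append(tree.score(X[oob], y[oob]))    # out-of-bag accuracy

        weights = np.array(weights) / np.sum(weights)
        votes = np.stack([t.predict(X) for t in trees])   # shape (n_trees, n_samples)
        y_hat = (weights @ votes >= 0.5).astype(int)      # weighted majority vote
        print("training accuracy:", (y_hat == y).mean())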

  16. An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences.

    Science.gov (United States)

    Lam, Su Datt; Das, Sayoni; Sillitoe, Ian; Orengo, Christine

    2017-08-01

    Computational modelling of proteins has been a major catalyst in structural biology. Bioinformatics groups have exploited the repositories of known structures to predict high-quality structural models with high efficiency at low cost. This article provides an overview of comparative modelling, reviews recent developments and describes resources dedicated to large-scale comparative modelling of genome sequences. The value of subclustering protein domain superfamilies to guide the template-selection process is investigated. Some recent cases in which structural modelling has aided experimental work to determine very large macromolecular complexes are also cited.

  17. Development and application of chronic disease risk prediction models.

    Science.gov (United States)

    Oh, Sun Min; Stefani, Katherine M; Kim, Hyeon Chang

    2014-07-01

    Currently, non-communicable chronic diseases are a major cause of morbidity and mortality worldwide, and a large proportion of chronic diseases are preventable through risk factor management. However, the prevention efficacy at the individual level is not yet satisfactory. Chronic disease prediction models have been developed to assist physicians and individuals in clinical decision-making. A chronic disease prediction model assesses multiple risk factors together and estimates an absolute disease risk for the individual. Accurate prediction of an individual's future risk for a certain disease enables the comparison of benefits and risks of treatment, the costs of alternative prevention strategies, and selection of the most efficient strategy for the individual. A large number of chronic disease prediction models, especially targeting cardiovascular diseases and cancers, have been suggested, and some of them have been adopted in the clinical practice guidelines and recommendations of many countries. Although few chronic disease prediction tools have been suggested in the Korean population, their clinical utility is not as high as expected. This article reviews methodologies that are commonly used for developing and evaluating a chronic disease prediction model and discusses the current status of chronic disease prediction in Korea.

  18. Seasonal prediction of US summertime ozone using statistical analysis of large scale climate patterns

    Science.gov (United States)

    Mickley, Loretta J.

    2017-01-01

    We develop a statistical model to predict June–July–August (JJA) daily maximum 8-h average (MDA8) ozone concentrations in the eastern United States based on large-scale climate patterns during the previous spring. We find that anomalously high JJA ozone in the East is correlated with these springtime patterns: warm tropical Atlantic and cold northeast Pacific sea surface temperatures (SSTs), as well as positive sea level pressure (SLP) anomalies over Hawaii and negative SLP anomalies over the Atlantic and North America. We then develop a linear regression model to predict JJA MDA8 ozone from 1980 to 2013, using the identified SST and SLP patterns from the previous spring. The model explains ∼45% of the variability in JJA MDA8 ozone concentrations and ∼30% of the variability in the number of JJA ozone episodes (>70 ppbv) when averaged over the eastern United States. This seasonal predictability results from large-scale ocean–atmosphere interactions. Warm tropical Atlantic SSTs can trigger diabatic heating in the atmosphere and influence the extratropical climate through stationary wave propagation, leading to greater subsidence, less precipitation, and higher temperatures in the East, which increases surface ozone concentrations there. Cooler SSTs in the northeast Pacific are also associated with more summertime heatwaves and high ozone in the East. On average, models participating in the Atmospheric Model Intercomparison Project fail to capture the influence of this ocean–atmosphere interaction on temperatures in the eastern United States, implying that such models would have difficulty simulating the interannual variability of surface ozone in this region. PMID:28223483
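
    Schematically, the statistical approach amounts to a multiple regression of summer ozone on springtime climate indices, with the fraction of variability explained read off as R². The sketch below uses synthetic placeholder series, not the study's actual SST and SLP patterns:

        # Regress JJA ozone on spring climate indices and report R^2.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        years = 34                          # 1980-2013
        sst_atl = rng.normal(size=years)    # spring tropical Atlantic SST anomaly
        sst_pac = rng.normal(size=years)    # spring northeast Pacific SST anomaly
        slp = rng.normal(size=years)        # spring SLP pattern index
        X = np.column_stack([sst_atl, sst_pac, slp])
        ozone = 50 + 3 * sst_atl - 2 * sst_pac + rng.normal(scale=2, size=years)

        model = LinearRegression().fit(X, ozone)
        print("variance explained (R^2):", round(model.score(X, ozone), 2))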

  19. Liver Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing liver cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  20. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  1. Cervical Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing cervical cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  2. Prostate Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing prostate cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  3. Pancreatic Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing pancreatic cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  4. Colorectal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing colorectal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  5. Bladder Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing bladder cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  6. Esophageal Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing esophageal cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  7. Lung Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing lung cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  8. Breast Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing breast cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  9. Ovarian Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing ovarian cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  10. Testicular Cancer Risk Prediction Models

    Science.gov (United States)

    Developing statistical models that estimate the probability of developing testicular cancer over a defined period of time will help clinicians identify individuals at higher risk of specific cancers, allowing for earlier or more frequent screening and counseling of behavioral changes to decrease risk.

  11. Identifiability of large phylogenetic mixture models.

    Science.gov (United States)

    Rhodes, John A; Sullivant, Seth

    2012-01-01

    Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. Such models are of increasing interest for data analysis, as they can capture the variety of evolutionary processes that may be occurring across long sequences of DNA or proteins. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. Identifiability is, however, essential to their use for statistical inference. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. This provides a theoretical justification for some current empirical studies, and indicates that extensions to even more mixture components should be theoretically well behaved. We also extend our results to certain mixtures on different trees, using the same algebraic techniques.

  12. Cloud Based Metalearning System for Predictive Modeling of Biomedical Data

    Directory of Open Access Journals (Sweden)

    Milan Vukićević

    2014-01-01

    Full Text Available Rapid growth and storage of biomedical data enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other hand, the analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud-based system that integrates a metalearning framework for ranking and selecting the best predictive algorithms for the data at hand with open-source big data technologies for the analysis of biomedical data.

  13. Posterior Predictive Model Checking in Bayesian Networks

    Science.gov (United States)

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
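
    The record is truncated, but as general background the PPMC comparison it refers to is usually summarized by the posterior predictive p-value of a chosen discrepancy measure D (a standard formulation, not the specific measures investigated in the study):

    ```latex
    % Posterior predictive p-value for a discrepancy measure D(y, \theta):
    % the probability that replicated data look at least as extreme as the
    % observed data, averaged over the posterior.
    p_B = \Pr\!\left( D(y^{\mathrm{rep}}, \theta) \ge D(y, \theta) \,\middle|\, y \right)
        = \iint \mathbb{1}\!\left[ D(y^{\mathrm{rep}}, \theta) \ge D(y, \theta) \right]
          p(y^{\mathrm{rep}} \mid \theta)\, p(\theta \mid y)\; dy^{\mathrm{rep}}\, d\theta
    ```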

  14. Creep Rupture Life Prediction Based on Analysis of Large Creep Deformation

    Directory of Open Access Journals (Sweden)

    YE Wenming

    2016-08-01

    Full Text Available A creep rupture life prediction method for high-temperature components was proposed. The method was based on a true stress-strain elastoplastic creep constitutive model and the large-deformation finite element analysis method. This method firstly used the high-temperature tensile stress-strain curve, expressed in true stress and strain, and the creep curve to build the material's elastoplastic and creep constitutive models, respectively; then it used the large-deformation finite element method to calculate the deformation response of the high-temperature component under a given load curve; finally, the creep rupture life was determined according to the trend of the response curve. The method was verified by durability tests of TC11 titanium alloy notched specimens at 500 ℃, and was compared with three creep rupture life prediction methods based on small-deformation analysis. Results show that the proposed method can accurately predict the high-temperature creep response and long-term life of TC11 notched specimens, and that its accuracy is better than that of the methods based on the average effective stress of the notch ligament, the skeletal point stress, and the fracture strain of the key point, which are all based on small-deformation finite element analysis.
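
    The paper fits its constitutive model to measured true stress-strain and creep curves; for orientation, a creep law of the kind typically embedded in such elastoplastic-creep models is Norton's power law (an illustrative textbook choice, not necessarily the authors' formulation):

    ```latex
    % Norton power-law creep: steady creep strain rate as a function of
    % stress \sigma and temperature T (A, n, Q are fitted material constants)
    \dot{\varepsilon}_{\mathrm{cr}} = A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right)
    ```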

  15. A Course in... Model Predictive Control.

    Science.gov (United States)

    Arkun, Yaman; And Others

    1988-01-01

    Describes a graduate engineering course which specializes in model predictive control. Lists course outline and scope. Discusses some specific topics and teaching methods. Suggests final projects for the students. (MVL)

  16. Interior Noise Predictions in the Preliminary Design of the Large Civil Tiltrotor (LCTR2)

    Science.gov (United States)

    Grosveld, Ferdinand W.; Cabell, Randolph H.; Boyd, David D.

    2013-01-01

    A prediction scheme was established to compute sound pressure levels in the interior of a simplified cabin model of the second generation Large Civil Tiltrotor (LCTR2) during cruise conditions, while being excited by turbulent boundary layer flow over the fuselage, or by tiltrotor blade loading and thickness noise. Finite element models of the cabin structure, interior acoustic space, and acoustically absorbent (poro-elastic) materials in the fuselage were generated and combined into a coupled structural-acoustic model. Fluctuating power spectral densities were computed according to the Efimtsov turbulent boundary layer excitation model. Noise associated with the tiltrotor blades was predicted in the time domain as fluctuating surface pressures and converted to power spectral densities at the fuselage skin finite element nodes. A hybrid finite element (FE) approach was used to compute the low frequency acoustic cabin response over the frequency range 6-141 Hz with a 1 Hz bandwidth, and the Statistical Energy Analysis (SEA) approach was used to predict the interior noise for the 125-8000 Hz one-third octave bands.

  17. Large animal models for stem cell therapy.

    Science.gov (United States)

    Harding, John; Roberts, R Michael; Mirochnitchenko, Oleg

    2013-03-28

    The field of regenerative medicine is approaching translation to clinical practice, and significant safety concerns and knowledge gaps have become clear as clinical practitioners are considering the potential risks and benefits of cell-based therapy. It is necessary to understand the full spectrum of stem cell actions and the preclinical evidence for safety and therapeutic efficacy. The role of animal models for gaining this information has increased substantially. There is an urgent need for novel animal models to expand the range of current studies, most of which have been conducted in rodents. Extant models provide important information but have limitations for a variety of disease categories, and rodents can differ in size and physiology relative to humans. These differences can preclude the ability to reproduce the results of animal-based preclinical studies in human trials. Larger animal species, such as rabbits, dogs, pigs, sheep, goats, and non-human primates, are better predictors of responses in humans than are rodents, but in each case it will be necessary to choose the best model for a specific application. There is a wide spectrum of potential stem cell-based products that can be used for regenerative medicine, including embryonic and induced pluripotent stem cells, somatic stem cells, and differentiated cellular progeny. The state of knowledge and availability of these cells from large animals vary among species. In most cases, significant effort is required for establishing and characterizing cell lines, comparing behavior to human analogs, and testing potential applications. Stem cell-based therapies present significant safety challenges, which cannot be addressed by traditional procedures and require the development of new protocols and test systems, for which the rigorous use of larger animal species more closely resembling human behavior will be required. In this article, we discuss the current status and challenges of several major directions...

  18. Equivalency and unbiasedness of grey prediction models

    Institute of Scientific and Technical Information of China (English)

    Bo Zeng; Chuan Li; Guo Chen; Xianjun Long

    2015-01-01

    In order to deeply research the structural discrepancy and modeling mechanism among different grey prediction models, the equivalence and unbiasedness of grey prediction models are analyzed and verified. The results show that all the grey prediction models that are strictly derived from x(0)(k) + az(1)(k) = b have the identical model structure and simulation precision. Moreover, the unbiased simulation of a homogeneous exponential sequence can be accomplished. However, the models derived from dx(1)/dt + ax(1) = b are only close to those derived from x(0)(k) + az(1)(k) = b provided that |a| satisfies |a| < 0.1; nor can they achieve unbiased simulation of a homogeneous exponential sequence. The above conclusions are proved and verified through some theorems and examples.
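
    A minimal numpy sketch of the textbook GM(1,1) procedure built on the difference equation x(0)(k) + a z(1)(k) = b quoted above (our reading of the standard algorithm, not the authors' code):

    ```python
    import numpy as np

    def gm11_fit_predict(x0, horizon=3):
        """Fit x0(k) + a*z1(k) = b by least squares and forecast ahead."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                           # accumulated (1-AGO) series
        z1 = 0.5 * (x1[1:] + x1[:-1])                # background values z1(k)
        B = np.column_stack([-z1, np.ones_like(z1)]) # x0(k) = -a*z1(k) + b
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + horizon)             # whitened eq.: dx1/dt + a*x1 = b
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.diff(x1_hat, prepend=x1_hat[0])  # inverse AGO
        return x0_hat[len(x0):]                      # the `horizon` forecasts

    # For a homogeneous exponential input the forecasts are (near-)unbiased,
    # consistent with the unbiasedness result discussed in the abstract.
    print(gm11_fit_predict([2.0, 2.2, 2.42, 2.662], horizon=2))
    ```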

  19. Improved engine wall models for Large Eddy Simulation (LES)

    Science.gov (United States)

    Plengsaard, Chalearmpol

    Improved wall models for Large Eddy Simulation (LES) are presented in this research. The classical Werner-Wengle (WW) wall shear stress model is used along with near-wall sub-grid scale viscosity. A sub-grid scale turbulent kinetic energy is employed in a model for the eddy viscosity. To gain better heat flux results, a modified classical variable-density wall heat transfer model is also used. Because no experimental wall shear stress results are available in engines, the fully turbulent developed flow in a square duct is chosen to validate the new wall models. The model constants in the new wall models are set to 0.01 and 0.8, respectively and are kept constant throughout the investigation. The resulting time- and spatially-averaged velocity and temperature wall functions from the new wall models match well with the law-of-the-wall experimental data at Re = 50,000. In order to study the effect of hot air impinging walls, jet impingement on a flat plate is also tested with the new wall models. The jet Reynolds number is equal to 21,000 and a fixed jet-to-plate spacing of H/D = 2.0. As predicted by the new wall models, the time-averaged skin friction coefficient agrees well with experimental data, while the computed Nusselt number agrees fairly well when r/D > 2.0. Additionally, the model is validated using experimental data from a Caterpillar engine operated with conventional diesel combustion. Sixteen different operating engine conditions are simulated. The majority of the predicted heat flux results from each thermocouple location follow similar trends when compared with experimental data. The magnitude of peak heat fluxes as predicted by the new wall models is in the range of typical measured values in diesel combustion, while most heat flux results from previous LES wall models are over-predicted. The new wall models generate more accurate predictions and agree better with experimental data.

  20. Risk terrain modeling predicts child maltreatment.

    Science.gov (United States)

    Daley, Dyann; Bachmann, Michael; Bachmann, Brittany A; Pedigo, Christian; Bui, Minh-Thuy; Coffman, Jamye

    2016-12-01

    As indicated by research on the long-term effects of adverse childhood experiences (ACEs), maltreatment has far-reaching consequences for affected children. Effective prevention measures have been elusive, partly due to difficulty in identifying vulnerable children before they are harmed. This study employs Risk Terrain Modeling (RTM), an analysis of the cumulative effect of environmental factors thought to be conducive for child maltreatment, to create a highly accurate prediction model for future substantiated child maltreatment cases in the City of Fort Worth, Texas. The model is superior to commonly used hotspot predictions and more beneficial in aiding prevention efforts in a number of ways: 1) it identifies the highest risk areas for future instances of child maltreatment with improved precision and accuracy; 2) it aids the prioritization of risk-mitigating efforts by informing about the relative importance of the most significant contributing risk factors; 3) since predictions are modeled as a function of easily obtainable data, practitioners do not have to undergo the difficult process of obtaining official child maltreatment data to apply it; 4) the inclusion of a multitude of environmental risk factors creates a more robust model with higher predictive validity; and, 5) the model does not rely on a retrospective examination of past instances of child maltreatment, but adapts predictions to changing environmental conditions. The present study introduces and examines the predictive power of this new tool to aid prevention efforts seeking to improve the safety, health, and wellbeing of vulnerable children.

  1. Model for Predicting End User Web Page Response Time

    CERN Document Server

    Nagarajan, Sathya Narayanan

    2012-01-01

    Perceived responsiveness of a web page is one of the most important and least understood metrics of web page design, and is critical for attracting and maintaining a large audience. Web pages can be designed to meet performance SLAs early in the product lifecycle if there is a way to predict the apparent responsiveness of a particular page layout. Response time of a web page is largely influenced by page layout and various network characteristics. Since the network characteristics vary widely from country to country, accurately modeling and predicting the perceived responsiveness of a web page from the end user's perspective has traditionally proven very difficult. We propose a model for predicting end user web page response time based on web page, network, browser download and browser rendering characteristics. We start by understanding the key parameters that affect perceived response time. We then model each of these parameters individually using experimental tests and statistical techniques. Finally, we d...
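
    The record is truncated, but a first-order additive decomposition of the kind such response-time models build on can illustrate the idea (an assumption for exposition; the paper models each term from experiments and statistical techniques):

    ```latex
    % Perceived page response time as a sum of network and browser terms:
    % S_i = size of resource i on the critical path, BW = bandwidth,
    % RTT = round-trip time (all terms vary by country and network)
    T_{\mathrm{page}} \approx T_{\mathrm{DNS}} + T_{\mathrm{connect}}
      + \sum_{i \in \mathrm{critical\ path}} \left( \frac{S_i}{BW} + RTT \right)
      + T_{\mathrm{render}}
    ```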

  2. Forced synchronization of large-scale circulation to increase predictability of surface states

    Science.gov (United States)

    Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory

    2016-04-01

    Numerical models are key tools in the projection of future climate change. The lack of perfect initial conditions and of perfect knowledge of the laws of physics, as well as inherently chaotic behavior, limits predictions. Conceptually, the atmospheric variables can be decomposed into a predictable component (signal) and an unpredictable component (noise). In ensemble prediction, the anomaly of the ensemble mean is regarded as the signal and the ensemble spread as the noise; naturally, the prediction skill is higher when the signal-to-noise ratio (SNR) is larger. We run two ensemble experiments in order to explore a way to increase the SNR of surface winds and temperature. One ensemble experiment is an AGCM with prescribed sea surface temperature (SST); the other is an AGCM with prescribed SST that additionally nudges the upper-level temperature and winds to ERA-Interim. Each ensemble has 30 members. A larger SNR is expected and found over the tropical ocean in the first experiment, because the tropical circulation, the associated convection and the surface wind convergence are to a large extent driven by the SST. However, a small SNR is found over the high-latitude ocean and land surface due to the chaotic and non-synchronized atmospheric states. In the second experiment the upper-level temperature and winds are forced to be synchronized (nudged to reanalysis), and hence a larger SNR of surface winds and temperature is expected. Furthermore, different nudging coefficients are also tested in order to understand the limits of synchronizing both the large-scale circulation and the surface states. These experiments will be useful for developing strategies to synchronize the 3-D states of atmospheric models that can later be used to build a super model.

  3. Non-Standard Models, Solar Neutrinos, and Large \\theta_{13}

    CERN Document Server

    Bonventre, R; Klein, J R; Gann, G D Orebi; Seibert, S; Wasalski, O

    2013-01-01

    Solar neutrino experiments have yet to see directly the transition region between matter-enhanced and vacuum oscillations. The transition region is particularly sensitive to models of non-standard neutrino interactions and propagation. We examine several such non-standard models, which predict a lower-energy transition region and a flatter survival probability for the ^{8}B solar neutrinos than the standard large-mixing angle (LMA) model. We find that while some of the non-standard models provide a better fit to the solar neutrino data set, the large measured value of \\theta_{13} and the size of the experimental uncertainties lead to a low statistical significance for these fits. We have also examined whether simple changes to the solar density profile can lead to a flatter ^{8}B survival probability than the LMA prediction, but find that this is not the case for reasonable changes. We conclude that the data in this critical region is still too poor to determine whether any of these models, or LMA, is the bes...

  4. Property predictions using microstructural modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.G. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]. E-mail: wangk2@rpi.edu; Guo, Z. [Sente Software Ltd., Surrey Technology Centre, 40 Occam Road, Guildford GU2 7YG (United Kingdom)]; Sha, W. [Metals Research Group, School of Civil Engineering, Architecture and Planning, The Queen's University of Belfast, Belfast BT7 1NN (United Kingdom)]; Glicksman, M.E. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]; Rajan, K. [Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, CII 9219, 110 8th Street, Troy, NY 12180-3590 (United States)]

    2005-07-15

    Precipitation hardening in an Fe-12Ni-6Mn maraging steel during overaging is quantified. First, applying our recent kinetic model of coarsening [Phys. Rev. E, 69 (2004) 061507], and incorporating the Ashby-Orowan relationship, we link quantifiable aspects of the microstructures of these steels to their mechanical properties, including especially the hardness. Specifically, hardness measurements allow calculation of the precipitate size as a function of time and temperature through the Ashby-Orowan relationship. Second, calculated precipitate sizes and thermodynamic data determined with Thermo-Calc© are used with our recent kinetic coarsening model to extract diffusion coefficients during overaging from hardness measurements. Finally, employing more accurate diffusion parameters, we determined the hardness of these alloys independently from theory, and found agreement with experimental hardness data. Diffusion coefficients determined during overaging of these steels are notably higher than those found during aging - an observation suggesting that precipitate growth during aging and precipitate coarsening during overaging are not controlled by the same diffusion mechanism.
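
    For reference, the Ashby-Orowan relationship invoked above is commonly written in the following form (standard literature form; G shear modulus, b Burgers vector magnitude, f precipitate volume fraction, X mean particle diameter):

    ```latex
    % Ashby-Orowan strengthening increment from non-shearable precipitates
    \Delta\tau = \frac{0.538\,G\,b\,\sqrt{f}}{X}\,\ln\!\left(\frac{X}{2b}\right)
    ```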

  5. Models for short term malaria prediction in Sri Lanka

    Directory of Open Access Journals (Sweden)

    Galappaththy Gawrie NL

    2008-05-01

    Full Text Available Background: Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with the aim of developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods: Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for its ability to improve prediction of selected (seasonal) ARIMA models. Results: The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion: The heterogeneity of malaria patterns in Sri Lanka requires regionally specific prediction models. Prediction error was large, at a minimum of 22% (for one of the districts) for one-month-ahead predictions. The modest improvement made in short-term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed.
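
    A minimal statsmodels sketch of the kind of comparison described: fitting a seasonal ARIMA with rainfall as an exogenous covariate and forecasting four months ahead (synthetic data and placeholder model orders, not the per-district orders selected in the study):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Synthetic stand-ins for one district's monthly cases and rainfall
    rng = np.random.default_rng(0)
    months = pd.date_range("1995-01", periods=120, freq="MS")
    rainfall = 100 + 50 * np.sin(2 * np.pi * months.month / 12) + rng.normal(0, 10, 120)
    cases = 50 + 0.3 * rainfall + rng.normal(0, 15, 120)

    # Seasonal ARIMA with an exogenous covariate; (1,1,1)x(1,0,1,12) is a placeholder
    model = SARIMAX(pd.Series(cases, index=months), exog=rainfall,
                    order=(1, 1, 1), seasonal_order=(1, 0, 1, 12))
    res = model.fit(disp=False)

    # Forecasts need future covariate values (e.g., climatological rainfall means)
    print(res.forecast(steps=4, exog=rainfall[-4:]))
    ```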

  6. Development of Large Area Covering Height Model

    Science.gov (United States)

    Jacobsen, K.

    2014-04-01

    Height information is a basic part of topographic mapping. Frequent updating of height models is required only in special areas; usually the update cycle is much longer than for horizontal map information. Some height models are available free of charge on the internet; for commercial height models a fee has to be paid. Mostly digital surface models (DSM), giving the height of the visible surface, are available rather than the bare-ground height required for standard mapping. Nevertheless, by filtering a DSM, a digital terrain model (DTM) with the height of the bare ground can be generated, with the exception of dense forest areas where no bare-ground height is available. These height models may be better than the DTMs of some survey administrations. In addition, several DTMs from national survey administrations are classified, so the commercial or freely available information from the internet can be used as an alternative. The widely used SRTM DSM is also available as the ACE-2 GDEM, corrected by altimeter data for systematic height errors caused by vegetation and orientation errors; however, the ACE-2 GDEM did not respect neighbourhood information. With the worldwide-covering TanDEM-X height model, distributed starting in 2014 by Airbus Defence and Space (formerly ASTRIUM) as WorldDEM, a higher level of detail and accuracy is reached than with other large-area-covering height models. At first the raw version of WorldDEM will be available, followed by an edited version and finally, as WorldDEM-DTM, a height model of the bare ground. With 12 m spacing and a relative standard deviation of 1.2 m within an area of 1° x 1°, an accuracy and resolution level is reached that is satisfactory also for larger map scales. For limited areas, a height model with 6 m spacing and a relative vertical accuracy of 0.5 m can also be generated on demand with the HDEM. By bathymetric LiDAR and stereo images the height of the sea floor can also be determined if the water is sufficiently transparent. Another method of getting...

  7. Zebrafish whole-adult-organism chemogenomics for large-scale predictive and discovery chemical biology.

    Directory of Open Access Journals (Sweden)

    Siew Hong Lam

    2008-07-01

    Full Text Available The ability to perform large-scale, expression-based chemogenomics on whole adult organisms, as in invertebrate models (worm and fly), is highly desirable for a vertebrate model, but its feasibility and potential have not been demonstrated. We performed expression-based chemogenomics on the whole adult organism of a vertebrate model, the zebrafish, and demonstrated its potential for large-scale predictive and discovery chemical biology. Focusing on two classes of compounds with wide implications to human health, polycyclic (halogenated) aromatic hydrocarbons [P(H)AHs] and estrogenic compounds (ECs), we generated robust prediction models that can discriminate compounds of the same class from those of different classes in two large independent experiments. The robust expression signatures led to the identification of biomarkers for potent aryl hydrocarbon receptor (AHR) and estrogen receptor (ER) agonists, respectively, and were validated in multiple targeted tissues. Knowledge-based data mining of human homologs of zebrafish genes revealed highly conserved chemical-induced biological responses/effects, health risks, and novel biological insights associated with AHR and ER that could be inferred to humans. Thus, our study presents an effective, high-throughput strategy of capturing molecular snapshots of chemical-induced biological states of a whole adult vertebrate that provides information on biomarkers of effects, deregulated signaling pathways, and possible affected biological functions, perturbed physiological systems, and increased health risks. These findings place zebrafish in a strategic position to bridge the wide gap between cell-based and rodent models in chemogenomics research and applications, especially in preclinical drug discovery and toxicology.

  8. Modeling and Prediction Using Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Juhl, Rune; Møller, Jan Kloppenborg; Jørgensen, John Bagterp

    2016-01-01

    Pharmacokinetic/pharmacodynamic (PK/PD) modeling for a single subject is most often performed using nonlinear models based on deterministic ordinary differential equations (ODEs), and the variation between subjects in a population of subjects is described using a population (mixed effects) setup that describes the variation between subjects. The ODE setup implies that the variation for a single subject is described by a single parameter (or vector), namely the variance (covariance) of the residuals. Furthermore, the prediction of the states is given as the solution to the ODEs and is hence assumed deterministic, i.e. able to predict the future perfectly. A more realistic approach would be to allow for randomness in the model, due to, e.g., the model being too simple or errors in input. We describe a modeling and prediction setup which better reflects reality and suggests stochastic differential equations (SDEs)...
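
    Schematically, the move from the ODE setup to the SDE setup adds a system-noise term to the state equation (standard grey-box notation, assumed here for illustration):

    ```latex
    % ODE setup: deterministic states, all randomness in the residuals
    dx_t = f(x_t, u_t, t; \theta)\,dt
    % SDE setup: system noise absorbs model misspecification and input errors
    dx_t = f(x_t, u_t, t; \theta)\,dt + \sigma(u_t, t; \theta)\,d\omega_t
    % discrete-time observations with measurement error
    y_k = h(x_{t_k}, u_{t_k}, t_k; \theta) + e_k
    ```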

  9. Precision Plate Plan View Pattern Predictive Model

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yang; YANG Quan; HE An-rui; WANG Xiao-chen; ZHANG Yun

    2011-01-01

    According to the rolling features of a plate mill, a 3D elastic-plastic FEM (finite element model) based on the full restart method of ANSYS/LS-DYNA was established to study the inhomogeneous plastic deformation of multipass plate rolling. By analyzing the simulation results, the difference between the head-end and tail-end predictive models was found and corrected. According to the numerical simulation results for 120 different conditions, a precision plate plan view pattern predictive model was established. Based on these models, the sizing MAS (Mizushima automatic plan view pattern control system) method was designed and used on a 2 800 mm plate mill. Comparing the rolled plates with and without the PVPP (plan view pattern predictive) model, the reduced width deviation indicates that the plate plan view pattern predictive model is precise.

  10. Modeling, Prediction, and Control of Heating Temperature for Tube Billet

    Directory of Open Access Journals (Sweden)

    Yachun Mao

    2015-01-01

    Full Text Available Annular furnaces have multivariate, nonlinear, large-time-lag, and cross-coupling characteristics. The prediction and control of the exit temperature of a tube billet are important but difficult. We establish a prediction model for the final temperature of a tube billet through the OS-ELM-DRPLS method. We address the complex production characteristics, integrate the advantages of the PLS and ELM algorithms in establishing linear and nonlinear models, and consider model updating and data lag. Based on the proposed model, we design a predictive control algorithm for tube billet temperature. The algorithm is validated using the practical production data of Baosteel Co., Ltd. Results show that the model achieves the precision required in industrial applications. The temperature of the tube billet can be controlled within the required temperature range through a compensation control method.

  11. NBC Hazard Prediction Model Capability Analysis

    Science.gov (United States)

    1999-09-01

    Puff (SCIPUFF) Model Verification and Evaluation Study, Air Resources Laboratory, NOAA, May 1998. Based on the NOAA review, the VLSTRACK developers... TO SUBSTANTIAL DIFFERENCES IN PREDICTIONS... HPAC uses a transport and dispersion (T&D) model called SCIPUFF and an associated mean wind field model. SCIPUFF is a model for atmospheric dispersion that uses the Gaussian puff method: an arbitrary time-dependent concentration field is represented...

  12. Mathematical formulation to predict the harmonics of the superconducting Large Hadron Collider magnets

    Directory of Open Access Journals (Sweden)

    Nicholas Sammut

    2006-01-01

    Full Text Available CERN is currently assembling the LHC (Large Hadron Collider) that will accelerate and bring into collision 7 TeV protons for high energy physics. Such a superconducting magnet-based accelerator can be controlled only when the field errors of production and installation of all magnetic elements are known to the required accuracy. The ideal way to compensate the field errors obviously is to have direct diagnostics on the beam. For the LHC, however, a system solely based on beam feedback may be too demanding. The present baseline for the LHC control system hence requires an accurate forecast of the magnetic field and the multipole field errors to reduce the burden on the beam-based feedback. The field model is the core of this magnetic prediction system, which we call the field description for the LHC (FIDEL). The model will provide the forecast of the magnetic field at a given time, magnet operating current, magnet ramp rate, magnet temperature, and magnet powering history. The model is based on the identification and physical decomposition of the effects that contribute to the total field in the magnet aperture of the LHC dipoles. Each effect is quantified using data obtained from series measurements, and modeled theoretically or empirically depending on the complexity of the physical phenomena involved. This paper presents the developments of the new finely tuned magnetic field model and, using the data accumulated through series tests to date, evaluates its accuracy and predictive capabilities over a sector of the machine.

  13. Uncertainty quantification for large-scale ocean circulation predictions.

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin; Debusschere, Bert J.; Najm, Habib N.; Sargsyan, Khachik

    2010-09-01

    Uncertainty quantification in climate models is challenged by the sparsity of the available climate data due to the high computational cost of the model runs. Another feature that prevents classical uncertainty analyses from being easily applicable is the bifurcative behavior in the climate data with respect to certain parameters. A typical example is the Meridional Overturning Circulation in the Atlantic Ocean. The maximum overturning stream function exhibits a discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We develop a methodology that performs uncertainty quantification in the presence of limited data that have a discontinuous character. Our approach is two-fold. First, we detect the discontinuity location with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve location in the presence of arbitrarily distributed input parameter values. Second, we developed a spectral approach that relies on Polynomial Chaos (PC) expansions on each side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification and propagation. The methodology is tested on synthetic examples of discontinuous data with adjustable sharpness and structure.
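
    The spectral construction can be sketched as two independent PC expansions stitched at the inferred discontinuity location (a schematic form; the basis, truncation and the averaging over the discontinuity's posterior are assumptions here):

    ```latex
    % Piecewise polynomial chaos surrogate for a forward model f with a
    % discontinuity at \xi^*: separate expansions on each side
    f(\xi) \approx \mathbb{1}\!\left[\xi < \xi^{*}\right]\sum_{k=0}^{P} c_k^{-}\,\Psi_k(\xi)
                 + \mathbb{1}\!\left[\xi \ge \xi^{*}\right]\sum_{k=0}^{P} c_k^{+}\,\Psi_k(\xi)
    ```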

  14. Corporate prediction models, ratios or regression analysis?

    NARCIS (Netherlands)

    Bijnen, E.J.; Wijn, M.F.C.M.

    1994-01-01

    The models developed in the literature with respect to the prediction of a company's failure are based on ratios. It has been shown before that these models should be rejected on theoretical grounds. Our study of industrial companies in the Netherlands shows that the ratios which are used in...

  15. Modelling Chemical Reasoning to Predict Reactions

    CERN Document Server

    Segler, Marwin H S

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outperforms a rule-based expert system in the reaction prediction task for 180,000 randomly selected binary reactions. We show that our data-driven model generalises even beyond known reaction types, and is thus capable of effectively (re-) discovering novel transformations (even including transition-metal catalysed reactions). Our model enables computers to infer hypotheses about reactivity and reactions by only considering the intrinsic local structure of the graph, and because each single reaction prediction is typically ac...

  16. Modeling of large area hot embossing

    CERN Document Server

    Worgull, M; Marcotte, J-P; Hétu, J-F; Heckele, M

    2008-01-01

    Today, hot embossing and injection molding belong to the established plastic molding processes in microengineering. Based on experimental findings, a variety of microstructures have been replicated so far using these processes. However, with increasing requirements regarding the embossing surface and the simultaneous decrease of structure sizes down into the nanorange, increasing know-how is needed to adapt hot embossing to industrial standards. To reach this objective, a German-Canadian cooperation project has been launched to study hot embossing both theoretically, by process simulation, and experimentally. The present publication reports the first results of the simulation: the modeling and simulation of large-area replication based on an eight-inch microstructured mold.

  17. Precise methods for conducted EMI modeling, analysis, and prediction

    Institute of Scientific and Technical Information of China (English)

    MA WeiMing; ZHAO ZhiHua; MENG Jin; PAN QiJun; ZHANG Lei

    2008-01-01

    Focusing on the state-of-the-art conducted EMI prediction, this paper presents a noise source lumped circuit modeling and identification method, an EMI modeling method based on multiple slope approximation of switching transitions, and a double Fourier integral method modeling PWM conversion units to achieve an accurate modeling of the EMI noise source. Meanwhile, a new sensitivity analysis method, a general coupling model for steel ground loops, and a partial element equivalent circuit method are proposed to identify and characterize conducted EMI coupling paths. The EMI noise and propagation modeling provide an accurate prediction of conducted EMI in the entire frequency range (0-10 MHz) with good practicability and generality. Finally, a new measurement approach is presented to identify the surface current of a large-dimensional metal shell. The proposed analytical modeling methodology is verified by experimental results.

  18. Models for blisks with large blends and small mistuning

    Science.gov (United States)

    Tang, Weihan; Epureanu, Bogdan I.; Filippi, Sergio

    2017-03-01

    Small deviations of the structural properties of individual sectors of blisks, referred to as mistuning, can lead to localization of vibration energy and drastically increased forced responses. Similar phenomena are observed in blisks with large damage or repair blends. Such deviations are best studied statistically because they are random. In the absence of cyclic symmetry, the computational cost to predict the vibration behavior of blisks becomes prohibitively high. That has led to the development of various reduced-order models (ROMs). Existing approaches are either for small mistuning, or are computationally expensive and thus not effective for statistical analysis. This paper discusses a reduced-order modeling method for blisks with both large and small mistuning, which requires low computational effort. This method utilizes the pristine, rogue and interface modal expansion (PRIME) method to model large blends. PRIME uses only sector-level cyclic modes strategically combined together to create a reduction basis which yields ROMs that efficiently and accurately model large mistuning. To model small mistuning, nodal energy weighted transformation (NEWT) is integrated with PRIME, resulting in N-PRIME, which requires only sector-level calculations to create a ROM which captures both small and large mistuning with minimized computational effort. The combined effects of large blends and small mistuning are studied using N-PRIME for a dual flow path system and for a conventional blisk. The accuracy of the N-PRIME method is validated against full-order finite element analyses for both natural and forced response computations, including displacement amplitudes and surface stresses. Results reveal that N-PRIME is capable of accurately predicting the dynamics of a blisk with severely large mistuning, along with small random mistuning throughout each sector. Also, N-PRIME can accurately capture modes with highly localized motions. A statistical analysis is performed to...

  19. Evaluation of CASP8 model quality predictions

    KAUST Repository

    Cozzetto, Domenico

    2009-01-01

    The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.

  1. A simple formula for insertion loss prediction of large acoustical enclosures using statistical energy analysis method

    Directory of Open Access Journals (Sweden)

    Kim Hyun-Sil

    2014-12-01

    Full Text Available Insertion loss prediction of large acoustical enclosures using the Statistical Energy Analysis (SEA) method is presented. The SEA model consists of three elements: the sound field inside the enclosure, the vibration energy of the enclosure panel, and the sound field outside the enclosure. It is assumed that the space surrounding the enclosure is sufficiently large so that there is no energy flow from the outside to the wall panel or to the air cavity inside the enclosure. The comparison of the predicted insertion loss to the measured data for typical large acoustical enclosures shows good agreement. It is found that if the critical frequency of the wall panel falls above the frequency region of interest, the insertion loss is dominated by the sound transmission loss of the wall panel and the averaged sound absorption coefficient inside the enclosure. However, if the critical frequency of the wall panel falls into the frequency region of interest, the acoustic power from sound radiation by the wall panel must be added to the acoustic power transmitted through the panel.
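
    Consistent with the finding that insertion loss is dominated by the panel transmission loss and the averaged interior absorption, the textbook power-balance estimate for a sealed enclosure below the panel's critical frequency reads (a standard simplification, not the paper's full three-element SEA model):

    ```latex
    % Power balance: interior power is either absorbed (\bar{\alpha}) or
    % transmitted through the wall (transmissibility \tau = 10^{-TL/10})
    IL \approx TL + 10\log_{10}\bar{\alpha}
    ```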

  2. Synthesizing Planetary Nebulae for Large Scale Surveys: Predictions for LSST

    Science.gov (United States)

    Vejar, George; Montez, Rodolfo; Morris, Margaret; Stassun, Keivan G.

    2017-01-01

    The short-lived planetary nebula (PN) phase of stellar evolution is characterized by a hot central star and a bright, ionized nebula. The PN phase forms after a low- to intermediate-mass star stops burning hydrogen in its core, ascends the asymptotic giant branch, and expels its outer layers of material into space. The exposed hot core produces ionizing UV photons and a fast stellar wind that sweeps up the surrounding material into a dense shell of ionized gas known as the PN. This fleeting stage of stellar evolution provides insight into rare atomic processes and the nucleosynthesis of elements in stars. The inherent brightness of PNe allows them to be used to obtain distances to nearby stellar systems via the PN luminosity function and as kinematic tracers in other galaxies. However, the prevalence of non-spherical morphologies of PNe challenges the current paradigm of PN formation. The role of binarity in the shaping of the PN has recently gained traction, ultimately suggesting that single stars might not form PNe. Searches for binary central stars have increased the binary fraction, but the current PN sample is incomplete. Future wide-field, multi-epoch surveys like the Large Synoptic Survey Telescope (LSST) can impact studies of PNe and improve our understanding of their origin and formation. Using a suite of Cloudy radiative transfer calculations, we study the detectability of PNe in the proposed LSST multiband observations. We compare our synthetic PNe to common sources (stars, galaxies, quasars) and establish discrimination techniques. Finally, we discuss follow-up strategies to verify new LSST-discovered PNe and use limiting distances to estimate the potential sample of PNe enabled by LSST.

  3. Large-Scale Tests of the DGP Model

    CERN Document Server

    Song, Yong-Seon; Sawicki, Ignacy; Hu, Wayne

    2006-01-01

    The self-accelerating braneworld model (DGP) can be tested from measurements of the expansion history of the universe and the formation of structure. Current constraints on the expansion history from supernova luminosity distances, the CMB, and the Hubble constant exclude the simplest flat DGP model at about 3 sigma. The best-fit open DGP model is, however, only a marginally poorer fit to the data than flat LCDM. Its substantially different expansion history raises structure formation challenges for the model. A dark-energy model with the same expansion history would predict a highly significant discrepancy with the baryon oscillation measurement, due to the high Hubble constant required, and a large enhancement of CMB anisotropies at the lowest multipoles due to the ISW effect. For the DGP model to satisfy these constraints new gravitational phenomena would have to appear at the non-linear and cross-over scales respectively. A prediction of the DGP expansion history in a region where the phenomenology is well unde...

  4. Genetic models of homosexuality: generating testable predictions

    OpenAIRE

    Gavrilets, Sergey; Rice, William R.

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality inclu...

  5. Wind farm production prediction - The Zephyr model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark)]; Giebel, G. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark)]; Madsen, H. [IMM (DTU), Kgs. Lyngby (Denmark)]; Nielsen, T.S. [IMM (DTU), Kgs. Lyngby (Denmark)]; Joergensen, J.U. [Danish Meteorologisk Inst., Copenhagen (Denmark)]; Lauersen, L. [Danish Meteorologisk Inst., Copenhagen (Denmark)]; Toefting, J. [Elsam, Fredericia (Denmark)]; Christensen, H.S. [Eltra, Fredericia (Denmark)]; Bjerge, C. [SEAS, Haslev (Denmark)]

    2002-06-01

    This report describes a project - funded by the Danish Ministry of Energy and the Environment - which developed a next-generation prediction system called Zephyr. The Zephyr system is a merger of two state-of-the-art prediction systems: Prediktor of Risoe National Laboratory and WPPT of IMM at the Danish Technical University. The numerical weather predictions were generated by DMI's HIRLAM model. Due to technical difficulties programming the system, only the computational core and a very simple version of the originally very complex system were developed. The project partners were: Risoe, DMU, DMI, Elsam, Eltra, Elkraft System, SEAS and E2. (au)

  6. Life Prediction of Large Lithium-Ion Battery Packs with Active and Passive Balancing

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Ying [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Smith, Kandler A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Zane, Regan [Utah State University]; Anderson, Dyche [Ford Motor Company]

    2017-07-03

    Lithium-ion battery packs make up a major part of large-scale stationary energy storage systems. One challenge in reducing battery pack cost is to reduce pack size without compromising pack service performance and lifespan. A prognostic life model can be a powerful tool for state of health (SOH) estimation and can enable an active life balancing strategy to reduce cell imbalance and extend pack life. This work proposed a life model using both empirical and physics-based approaches. The life model described the compounding effect of different degradations on the entire cell with an empirical model. Its lower-level submodels then considered the complex physical links between testing statistics (state of charge level, C-rate level, duty cycles, etc.) and the degradation reaction rates with respect to specific aging mechanisms. The hybrid approach made the life model generic, robust and stable regardless of battery chemistry and application usage. The model was validated with a custom pack with both passive and active balancing systems implemented, which created four different aging paths in the pack. The life model successfully captured the aging trajectories of all four paths. The life model prediction errors on capacity fade and resistance growth were within +/-3% and +/-5% of the experimental measurements.
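
    A sketch of the generic empirical form such hybrid life models take: calendar fade scaling like the square root of time plus cycling fade scaling with throughput, with stress-dependent rate coefficients (the functional forms and constants below are illustrative assumptions, not the validated model's parameters):

    ```python
    import numpy as np

    def remaining_capacity(t_days, n_cycles, soc, temp_K, c_rate):
        """Relative capacity after combined calendar and cycling aging."""
        # Calendar-aging rate: Arrhenius in temperature, grows with SOC
        k_cal = 2e-3 * np.exp(-4000.0 * (1.0 / temp_K - 1.0 / 298.15)) * (0.5 + soc)
        # Cycling-aging rate: grows with C-rate
        k_cyc = 1e-5 * (1.0 + 0.5 * c_rate)
        return 1.0 - (k_cal * np.sqrt(t_days) + k_cyc * n_cycles)

    # One year at 35 degC, 60% average SOC, 300 cycles at 1C
    print(remaining_capacity(365, 300, soc=0.6, temp_K=308.15, c_rate=1.0))
    ```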

  7. Design and modelling of innovative machinery systems for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik

    Eighty percent of the growing global merchandise trade is transported by sea. The shipping industry is required to reduce the pollution and increase the energy efficiency of ships in the near future. There is a relatively large potential for approaching these requirements by implementing waste heat... parameters for marine WHR. Using the mentioned methodology, regression models are derived for the prediction of the maximum obtainable thermal efficiency of ORCs. A unique configuration of the Kalina cycle, the Split-cycle, is analysed to evaluate the fullest potential of the Kalina cycle for the purpose...

  8. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    ... among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in the modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609 MW utility boiler is numerically studied, in which... The simulation results show that the gray and non-gray calculations of the same oxy-fuel WSGGM make distinctly different predictions of the wall radiative heat transfer, incident radiative flux, radiative source, gas temperature and species profiles. Relative to the non-gray implementation, the gray...
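
    For reference, the weighted-sum-of-gray-gases model (WSGGM) mentioned above expresses the total gas emissivity as below; a gray implementation collapses the sum to a single effective gas, which is exactly the difference the comparison probes:

    ```latex
    % WSGGM total emissivity: N gray gases plus one clear gas (i = 0),
    % with temperature-dependent weights a_i and absorption coefficients k_i
    \varepsilon(T, pL) = \sum_{i=0}^{N} a_i(T)\left(1 - e^{-k_i\,p L}\right),
    \qquad \sum_{i=0}^{N} a_i(T) = 1
    ```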

  9. Physics-Informed Machine Learning for Predictive Turbulence Modeling: A Priori Assessment of Prediction Confidence

    CERN Document Server

    Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-01-01

    Although Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tool for engineering design and analysis applications involving turbulent flows, standard RANS models are known to be unreliable in many flows of engineering relevance, including flows with separation, strong pressure gradients or mean flow curvature. With increasing amounts of 3-dimensional experimental data and high fidelity simulation data from Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS), data-driven turbulence modeling has become a promising approach to increase the predictive capability of RANS simulations. Recently, a data-driven turbulence modeling approach via machine learning has been proposed to predict the Reynolds stress anisotropy of a given flow based on high fidelity data from closely related flows. In this work, the closeness of different flows is investigated to assess the prediction confidence a priori. Specifically, the Mahalanobis distance and the kernel density estimation (KDE) technique...
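
    A plain-numpy sketch of the a priori check described: the Mahalanobis distance of test-flow features from the training-flow feature cloud, where large distances flag extrapolation and hence low prediction confidence (data and feature choice are illustrative):

    ```python
    import numpy as np

    def mahalanobis_to_training(train_feats, test_feats):
        """Distance of each test point from the training feature distribution."""
        mu = train_feats.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(train_feats, rowvar=False))
        diff = test_feats - mu
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

    rng = np.random.default_rng(0)
    train = rng.normal(size=(500, 5))          # e.g., mean-flow feature vectors
    test = rng.normal(loc=0.5, size=(10, 5))   # features of a new flow
    print(mahalanobis_to_training(train, test))
    ```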

  10. Nonconvex model predictive control for commercial refrigeration

    Science.gov (United States)

    Gybel Hovgaard, Tobias; Boyd, Stephen; Larsen, Lars F. S.; Bagterp Jørgensen, John

    2013-08-01

    We consider the control of a commercial multi-zone refrigeration system, which consists of several cooling units that share a common compressor and is used to cool multiple areas or rooms. In each time period we choose the cooling capacity of each unit and a common evaporation temperature. The goal is to minimise the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex. The cost function, however, is nonconvex due to the temperature dependence of thermodynamic efficiency. To handle this nonconvexity we propose a sequential convex optimisation method, which typically converges in fewer than 5 iterations. We employ a fast convex quadratic programming solver to carry out the iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings, on the order of 30%, compared to a standard thermostat-based control system. Perhaps more important, we see that the method exhibits sophisticated response to real-time variations in electricity prices. This demand response is critical to help balance real-time uncertainties in generation capacity associated with large penetration of intermittent renewable energy sources in a future smart grid.
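
    A toy cvxpy sketch of one convexified step of such a scheme: linear zone dynamics, convex constraints, and the nonconvex efficiency replaced by a value linearized at the previous iterate (all dynamics, bounds and coefficients are stand-in values, not the paper's model):

    ```python
    import cvxpy as cp
    import numpy as np

    T, n_zones = 24, 3
    price = np.abs(np.random.randn(T)) + 1.0            # electricity prices
    a, eta_lin = 0.9, 0.35                              # thermal decay; efficiency
                                                        # linearized at prev. iterate
    x = cp.Variable((T + 1, n_zones))                   # zone temperatures
    u = cp.Variable((T, n_zones), nonneg=True)          # cooling capacities

    cons = [x[0] == 3.0]
    for t in range(T):
        cons += [x[t + 1] == a * x[t] + 0.1 - 0.2 * u[t],   # linear dynamics
                 x[t + 1] >= 0.0, x[t + 1] <= 5.0,          # zone temperature band
                 cp.sum(u[t]) <= 4.0]                       # shared compressor limit
    # Convexified cost: electrical power ~ total cooling load / linearized efficiency
    cost = cp.sum(cp.multiply(price, cp.sum(u, axis=1))) / eta_lin
    cp.Problem(cp.Minimize(cost), cons).solve()
    # The full method re-linearizes the efficiency around the new solution and
    # re-solves, typically converging within a handful of iterations.
    ```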

  11. Mathematical models for predicting indoor air quality from smoking activity.

    OpenAIRE

    Ott, W R

    1999-01-01

    Much progress has been made over four decades in developing, testing, and evaluating the performance of mathematical models for predicting pollutant concentrations from smoking in indoor settings. Although largely overlooked by the regulatory community, these models provide regulators and risk assessors with practical tools for quantitatively estimating the exposure level that people receive indoors for a given level of smoking activity. This article reviews the development of the mass balanc...

  12. Do land parameters matter in large-scale hydrological modelling?

    Science.gov (United States)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, the current-generation hydrological and land surface models that are used for its estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions - or whether a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis (CLPH)" assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land...
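
    A sketch of a CLPH-style benchmark: predict monthly runoff from atmospheric forcing alone, with no static land parameters among the predictors, and score it out of sample (synthetic data and model choice are illustrative assumptions, not the paper's statistical tool):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 2000
    precip = rng.gamma(2.0, 40.0, n)            # monthly precipitation [mm]
    temp = rng.normal(8.0, 10.0, n)             # monthly mean temperature [degC]
    precip_lag = np.roll(precip, 1)             # previous month's precipitation
    runoff = (0.5 * precip + 0.2 * precip_lag   # synthetic runoff signal
              - 2.0 * np.maximum(temp, 0.0) + rng.normal(0.0, 10.0, n))

    X = np.column_stack([precip, precip_lag, temp])[1:]
    y = runoff[1:]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("forcing-only out-of-sample R^2:", round(model.score(X_te, y_te), 2))
    ```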

  13. Large Scale, High Resolution, Mantle Dynamics Modeling

    Science.gov (United States)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetical and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing the small-scale processes associated with localization phenomena requires a high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom for up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D...

  14. Statistical characteristics of irreversible predictability time in regional ocean models

    Directory of Open Access Journals (Sweden)

    P. C. Chu

    2005-01-01

    Full Text Available Probabilistic aspects of regional ocean model predictability are analyzed using the probability density function (PDF) of the irreversible predictability time (IPT) (called τ-PDF), computed from an unconstrained ensemble of stochastic perturbations in initial conditions, winds, and open boundary conditions. Two attractors (a chaotic attractor and a small-amplitude stable limit cycle) are found in the wind-driven circulation. The relationship between the attractors' residence time and IPT determines the τ-PDF for short (up to several weeks) and intermediate (up to two months) predictions. The τ-PDF is usually non-Gaussian but not multi-modal for red-noise perturbations in initial conditions and perturbations in the wind and open boundary conditions. Bifurcation of the τ-PDF occurs as the tolerance level varies. Generally, extremely successful predictions (corresponding to the τ-PDF's tail toward the large-IPT domain) are not outliers and share the same statistics as the whole ensemble of predictions.
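
    A minimal sketch of how such a τ-PDF can be estimated empirically, assuming a toy chaotic system (Lorenz-63) stands in for the regional ocean model and treating the IPT as the first time after which the error never returns below the tolerance; tolerance, perturbation size and ensemble size are illustrative:

```python
# Sketch: empirical tau-PDF of the irreversible predictability time (IPT)
# from an ensemble of perturbed-initial-condition runs.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8/3):
    x, y, z = s
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

t_eval = np.linspace(0, 20, 2001)
ref = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0], t_eval=t_eval).y

rng = np.random.default_rng(0)
tol, taus = 3.0, []
for _ in range(200):                       # unconstrained ensemble of ICs
    ic = [1.0, 1.0, 1.0] + 1e-3 * rng.standard_normal(3)
    run = solve_ivp(lorenz, (0, 20), ic, t_eval=t_eval).y
    err = np.linalg.norm(run - ref, axis=0)
    below = np.nonzero(err <= tol)[0]      # times the error is within tolerance
    k = below[-1] + 1                      # irreversible: error never drops back
    taus.append(t_eval[k] if k < t_eval.size else t_eval[-1])

hist, edges = np.histogram(taus, bins=30, density=True)  # empirical tau-PDF
print("mean IPT:", round(float(np.mean(taus)), 2))
```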

  15. Prediction of Canopy Heights over a Large Region Using Heterogeneous Lidar Datasets: Efficacy and Challenges

    Directory of Open Access Journals (Sweden)

    Ranjith Gopalakrishnan

    2015-08-01

    Full Text Available Generating accurate and unbiased wall-to-wall canopy height maps from airborne lidar data for large regions is useful to forest scientists and natural resource managers. However, mapping large areas often involves using lidar data from different projects, with varying acquisition parameters. In this work, we address the important question of whether one can accurately model canopy heights over large areas of the Southeastern US using a very heterogeneous dataset of small-footprint, discrete-return airborne lidar data (76 separate lidar projects). A unique aspect of this effort is the use of nationally uniform and extensive field data (~1800 forested plots) from the Forest Inventory and Analysis (FIA) program of the US Forest Service. Preliminary results are quite promising: over all lidar projects, we observe a good correlation between the 85th percentile of lidar heights and field-measured height (r = 0.85). We construct a linear regression model to predict subplot-level dominant tree heights from distributional lidar metrics (R² = 0.74, RMSE = 3.0 m, n = 1755). We also identify and quantify the importance of several factors (heterogeneity of vegetation, point density, the predominance of hardwoods or softwoods, the average height of the forest stand, slope of the plot, and average scan angle of lidar acquisition) that influence the efficacy of predicting canopy heights from lidar data. For example, a subset of plots (coefficient of variation of vegetation heights <0.2) significantly reduces the RMSE of our model from 3.0 to 2.4 m (~20% reduction). We conclude that when all these elements are factored into consideration, combining data from disparate lidar projects does not preclude robust estimation of canopy heights.
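
    The plot-level regression step can be sketched as follows; the synthetic predictors (85th-percentile height, coefficient of variation of heights) and coefficients are assumptions for illustration, not the study's fitted model:

```python
# Sketch: regressing field-measured dominant height on lidar metrics,
# then checking the effect of restricting to homogeneous canopies.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n = 1755
p85 = rng.uniform(5, 35, n)                 # 85th percentile of lidar heights (m)
cv  = rng.uniform(0.05, 0.5, n)             # CV of vegetation heights
field_h = 1.05*p85 + 2.0 + rng.normal(0, 3.0*cv/0.3, n)  # synthetic field heights

X = np.column_stack([p85, cv])
model = LinearRegression().fit(X, field_h)
pred = model.predict(X)
print("R2 =", round(r2_score(field_h, pred), 2),
      "RMSE =", round(mean_squared_error(field_h, pred) ** 0.5, 1), "m")

# The study found CV < 0.2 cut the RMSE by ~20%
mask = cv < 0.2
print("RMSE (CV<0.2):",
      round(mean_squared_error(field_h[mask], pred[mask]) ** 0.5, 1), "m")
```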

  16. Predictive model for segmented poly(urea)

    Directory of Open Access Journals (Sweden)

    Frankl P.

    2012-08-01

    Full Text Available Segmented poly(urea) has been shown to be of significant benefit in protecting vehicles from blast and impact, and there have been several experimental studies to determine the mechanisms by which this protective function might occur. One suggested route is mechanical activation of the glass transition. In order to enable the design of protective structures using this material, a constitutive model and equation of state are needed for numerical simulation in hydrocodes. Determination of such a predictive model may also help elucidate the beneficial mechanisms that occur in polyurea during high-rate loading. The tool deployed to do this has been Group Interaction Modelling (GIM), a mean-field technique that has been shown to predict the mechanical and physical properties of polymers from their structure alone. The structure of polyurea has been used to characterise the parameters in the GIM scheme without recourse to experimental data, and the resulting equation of state and constitutive model predict the response over a wide range of temperatures and strain rates. The shock Hugoniot has been predicted and validated against existing data. The mechanical response in tensile tests has also been predicted and validated.

  17. PREDICTIVE CAPACITY OF ARCH FAMILY MODELS

    Directory of Open Access Journals (Sweden)

    Raphael Silveira Amaro

    2016-03-01

    Full Text Available In the last decades, a remarkable number of models, variants of the Autoregressive Conditional Heteroscedastic family, have been developed and empirically tested, making the process of choosing a particular model extremely complex. This research aims to compare the predictive capacity, using the Model Confidence Set procedure, of five conditional heteroskedasticity models, considering eight different statistical probability distributions. The financial series used refer to the log-returns of the Bovespa index and the Dow Jones Industrial Index in the period between 27 October 2008 and 30 December 2014. The empirical evidence showed that, in general, the competing models have great homogeneity in making predictions, whether for a stock market of a developed country or for a stock market of a developing country. An equivalent result can be inferred for the statistical probability distributions that were used.
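
    A sketch of this kind of comparison, assuming the Python `arch` package (which provides GARCH-family estimators and a Model Confidence Set implementation) and synthetic returns standing in for the Bovespa and Dow Jones series; the in-sample squared-error loss is an illustrative choice:

```python
# Sketch: comparing GARCH-family variants with the Model Confidence Set.
import numpy as np
import pandas as pd
from arch import arch_model
from arch.bootstrap import MCS

rng = np.random.default_rng(2)
r = pd.Series(rng.standard_t(df=8, size=1500))   # synthetic daily log-returns

specs = {"GARCH-normal": dict(vol="Garch", dist="normal"),
         "GARCH-t":      dict(vol="Garch", dist="t"),
         "EGARCH-t":     dict(vol="EGARCH", dist="t")}

losses = {}
for name, kw in specs.items():
    res = arch_model(r, p=1, q=1, **kw).fit(disp="off")
    sigma2 = res.conditional_volatility ** 2
    losses[name] = (r**2 - sigma2) ** 2          # squared-error variance loss

mcs = MCS(pd.DataFrame(losses), size=0.05)       # 95% model confidence set
mcs.compute()
print("models retained in the MCS:", list(mcs.included))
```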

  18. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
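
    The leave-one-out Q² protocol can be sketched with scikit-learn; the random descriptors and response below are stand-ins for the real PDE-4 inhibitor data, and Q² is computed as 1 - PRESS/SS_tot:

```python
# Sketch: a random-forest QSAR regression scored by leave-one-out Q².
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 40))                        # molecular descriptors
y = X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=120) # pIC50-like response

rf = RandomForestRegressor(n_estimators=100, random_state=0)
pred = cross_val_predict(rf, X, y, cv=LeaveOneOut())  # leave-one-out predictions
q2 = 1 - np.sum((y - pred)**2) / np.sum((y - y.mean())**2)
print("LOO Q2 =", round(q2, 2))
```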

  19. Using connectome-based predictive modeling to predict individual behavior from brain connectivity.

    Science.gov (United States)

    Shen, Xilin; Finn, Emily S; Scheinost, Dustin; Rosenberg, Monica D; Chun, Marvin M; Papademetris, Xenophon; Constable, R Todd

    2017-03-01

    Neuroimaging is a fast-developing research area in which anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale data sets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: (i) feature selection, (ii) feature summarization, (iii) model building, and (iv) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a considerable amount of the variance in these measures. It has been demonstrated that the CPM protocol performs as well as or better than many of the existing approaches in brain-behavior prediction. As CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement these protocols. Depending on the volume of data to be processed, the protocol can take 10-100 min for model building, 1-48 h for permutation testing, and 10-20 min for visualization of results.
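
    A minimal sketch of steps (i)-(iii) with leave-one-out cross-validation, assuming synthetic vectorized connectomes in place of fMRI data and a simple positive-network variant of the edge-selection step:

```python
# Sketch of CPM: (i) select edges correlated with behavior, (ii) sum them
# into a single network-strength value, (iii) fit a linear model, and
# assess prediction in left-out subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_sub, n_edge = 80, 500
conn = rng.normal(size=(n_sub, n_edge))            # vectorized connectomes
behav = conn[:, :10].sum(axis=1) + rng.normal(size=n_sub)

pred = np.empty(n_sub)
for i in range(n_sub):                             # leave one subject out
    tr = np.delete(np.arange(n_sub), i)
    rs, ps = zip(*(stats.pearsonr(conn[tr, e], behav[tr])
                   for e in range(n_edge)))
    sel = (np.array(ps) < 0.01) & (np.array(rs) > 0)  # (i) positive network
    strength = conn[:, sel].sum(axis=1)               # (ii) summary feature
    b, a = np.polyfit(strength[tr], behav[tr], 1)     # (iii) linear model
    pred[i] = b * strength[i] + a

print("cross-validated prediction r =",
      round(stats.pearsonr(pred, behav)[0], 2))
```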

  20. Outcome Prediction in Mathematical Models of Immune Response to Infection.

    Directory of Open Access Journals (Sweden)

    Manuel Mai

    Full Text Available Clinicians need to predict patient outcomes with high accuracy as early as possible after disease inception. In this manuscript, we show that patient-to-patient variability sets a fundamental limit on outcome prediction accuracy for a general class of mathematical models of the immune response to infection. However, accuracy can be increased at the expense of delayed prognosis. We investigate several systems of ordinary differential equations (ODEs) that model the host immune response to a pathogen load. Advantages of systems of ODEs for investigating the immune response to infection include the ability to collect data on large numbers of 'virtual patients', each with a given set of model parameters, and to obtain many time points during the course of the infection. We implement patient-to-patient variability v in the ODE models by randomly selecting the model parameters from distributions with coefficients of variation v that are centered on physiological values. We use logistic regression with one-versus-all classification to predict the discrete steady-state outcomes of the system. We find that the prediction algorithm achieves near 100% accuracy for v = 0, and the accuracy decreases with increasing v for all ODE models studied. The fact that multiple steady-state outcomes can be obtained for a given initial condition, i.e. the basins of attraction overlap in the space of initial conditions, limits the prediction accuracy for v > 0. Increasing the elapsed time of the variables used to train and test the classifier increases the prediction accuracy, while adding explicit external noise to the ODE models decreases the prediction accuracy. Our results quantify the competition between early prognosis and high prediction accuracy that is frequently encountered by clinicians.
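
    The virtual-patient protocol can be sketched with a caricature bistable ODE in place of the immune-response models; the dynamics, variability levels and observation window below are assumptions for illustration:

```python
# Sketch: 'virtual patients' from a bistable toy ODE with parameter
# variability v; a logistic classifier predicts the steady-state outcome
# from the early part of the trajectory only.
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def virtual_patients(v, n=200, t_obs=2.0, seed=5):
    rng = np.random.default_rng(seed)
    t_eval = np.r_[np.linspace(0.0, t_obs, 20), 40.0]
    X, y = [], []
    for _ in range(n):
        a = v * rng.standard_normal()          # patient-specific parameter
        x0 = 0.1 * rng.standard_normal()
        sol = solve_ivp(lambda t, x: [a + x[0] - x[0]**3],
                        (0, 40.0), [x0], t_eval=t_eval).y[0]
        X.append(sol[:-1])                     # early observations only
        y.append(sol[-1] > 0)                  # discrete steady-state outcome
    return np.array(X), np.array(y)

for v in (0.0, 0.2, 0.5):
    X, y = virtual_patients(v)
    acc = cross_val_score(LogisticRegression(max_iter=2000), X, y, cv=5).mean()
    print(f"v = {v:.1f}: outcome prediction accuracy {acc:.2f}")
```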

  1. Resin infusion of large composite structures modeling and manufacturing process

    Energy Technology Data Exchange (ETDEWEB)

    Loos, A.C. [Michigan State Univ., Dept. of Mechanical Engineering, East Lansing, MI (United States)

    2006-07-01

    The resin infusion processes resin transfer molding (RTM), resin film infusion (RFI) and vacuum assisted resin transfer molding (VARTM) are cost-effective techniques for the fabrication of complex shaped composite structures. The dry fibrous preform is placed in the mold, consolidated, resin impregnated and cured in a single step process. The fibrous preforms are often constructed near net shape using highly automated textile processes such as knitting, weaving and braiding. In this paper, the infusion processes RTM, RFI and VARTM are discussed along with the advantages of each technique compared with traditional composite fabrication methods such as prepreg tape lay-up and autoclave cure. The large number of processing variables and the complex material behavior during infiltration and cure make experimental optimization of the infusion processes costly and inefficient. Numerical models have been developed which can be used to simulate the resin infusion processes. The model formulation and solution procedures for the VARTM process are presented. A VARTM process simulation of a carbon fiber preform is presented to demonstrate the type of information that can be generated by the model and to compare the model predictions with experimental measurements. Overall, the predicted flow front positions, resin pressures and preform thicknesses agree well with the measured values. The results of the simulation show the potential cost and performance benefits that can be realized by using a simulation model as part of the development process. (au)
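
    Models of this kind build on Darcy's law for flow through the porous preform; a minimal sketch of the classic one-dimensional constant-pressure flow-front estimate, with illustrative (not paper-derived) material properties:

```python
# Sketch: 1D Darcy's-law flow-front progression under constant injection
# pressure: x(t) = sqrt(2*K*dP*t/(phi*mu)); fill time follows by inversion.
import numpy as np

K, phi, mu = 2e-10, 0.5, 0.2   # permeability (m^2), porosity, viscosity (Pa*s)
dP, L = 1.0e5, 0.8             # vacuum-driven pressure drop (Pa), flow length (m)

t = np.array([60.0, 600.0, 1800.0])              # seconds
front = np.sqrt(2 * K * dP * t / (phi * mu))     # flow-front position x(t), m
t_fill = phi * mu * L**2 / (2 * K * dP)          # time for the front to reach L

for ti, xi in zip(t, front):
    print(f"t = {ti/60:4.0f} min: front at {xi:.2f} m")
print(f"predicted fill time: {t_fill/60:.0f} min")
```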

  2. Modeling The Large Scale Bias of Neutral Hydrogen

    CERN Document Server

    Marin, Felipe; Seo, Hee-Jong; Vallinotto, Alberto

    2009-01-01

    We present analytical estimates of the large scale bias of neutral Hydrogen (HI) based on the Halo Occupation Distribution formalism. We use a simple, non-parametric model which monotonically relates the total mass of a halo to its HI mass at zero redshift; for earlier times we assume limiting models for the evolution of the HI density parameter, consistent with the data presently available, as well as two main scenarios for the evolution of our HI mass - halo mass relation. We find that both the linear and the first non-linear bias terms exhibit a remarkable evolution with redshift, regardless of the specific limiting model assumed for the HI evolution. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not appreciably affect the measurement of the HI Power Spectrum.

  3. Model Predictive Control for Smart Energy Systems

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus

    load shifting capabilities of the units that adapt to the given price predictions. We furthermore evaluated control performance in terms of economic savings for different control strategies and forecasts. Chapter 5 describes and compares the proposed large-scale Aggregator control strategies....... Aggregators are assumed to play an important role in the future Smart Grid and coordinate a large portfolio of units. The developed economic MPC controllers interface each unit directly to an Aggregator. We developed several MPC-based aggregation strategies that coordinate the global behavior of a portfolio

  4. Predicting the evolution of large cholera outbreaks: lessons learnt from the Haiti case study

    Science.gov (United States)

    Bertuzzo, Enrico; Mari, Lorenzo; Righetto, Lorenzo; Knox, Allyn; Finger, Flavio; Casagrandi, Renato; Gatto, Marino; Rodriguez-Iturbe, Ignacio; Rinaldo, Andrea

    2013-04-01

    Mathematical models can provide key insights into the course of an ongoing epidemic, potentially aiding real-time emergency management in allocating health care resources and possibly anticipating the impact of alternative interventions. Spatially explicit models of waterborne disease are made routinely possible by widespread data mapping of hydrology, road network, population distribution, and sanitation. Here, we study the ex-post reliability of predictions of the ongoing Haiti cholera outbreak. Our model consists of a set of dynamical equations (SIR-like, i.e. subdivided into the compartments of Susceptible, Infected and Recovered individuals) describing a connected network of human communities where the infection results from the exposure to excess concentrations of pathogens in the water, which are, in turn, driven by hydrologic transport through waterways and by mobility of susceptible and infected individuals. Following the evidence of a clear correlation between rainfall events and cholera resurgence, we test a new mechanism explicitly accounting for rainfall as a driver of enhanced disease transmission by washout of open-air defecation sites or cesspool overflows. A general model for Haitian epidemic cholera and the related uncertainty is thus proposed and applied to the dataset of reported cases now available. The model allows us to draw predictions on longer-term epidemic cholera in Haiti from multi-season Monte Carlo runs, carried out up to January 2014 by using a multivariate Poisson rainfall generator, with parameters varying in space and time. Lessons learned and open issues are discussed and placed in perspective. We conclude that, despite differences in methods that can be tested through model-guided field validation, mathematical modeling of large-scale outbreaks emerges as an essential component of future cholera epidemic control.
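
    A single-community sketch of such an SIR-plus-pathogen model with a rainfall-driven contamination term (the full model couples many communities through hydrologic and mobility networks; all parameter values here are illustrative):

```python
# Sketch: SIR-B waterborne transmission with rainfall-enhanced shedding,
# the washout mechanism tested in the study.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 1.0, 0.2         # exposure and recovery rates (1/day)
muB, p, lam = 0.25, 1e-4, 5.0  # pathogen decay, shedding, rainfall boost
K = 1.0                        # half-saturation pathogen concentration

rng = np.random.default_rng(6)
rain_days = rng.random(365) < 0.15            # synthetic rainfall occurrence
def J(t):                                     # daily rainfall indicator
    return float(rain_days[min(int(t), 364)])

def rhs(t, u):
    S, I, R, B = u
    force = beta * B / (K + B)                # dose-response exposure
    return [-force*S,
            force*S - gamma*I,
            gamma*I,
            -muB*B + p*I*(1.0 + lam*J(t))]    # washout boosts contamination

u0 = [1e5 - 10, 10, 0, 0.01]
sol = solve_ivp(rhs, (0, 365), u0, max_step=0.5, t_eval=np.arange(366))
print("peak prevalence:", int(sol.y[1].max()), "cases")
```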

  5. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI) and micro- as well as macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristic curves in examining the robustness of the predictive power of these factors.
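
    A sketch of the modelling step, assuming synthetic data in which a TCRI-like index, an asset-growth rate and a macro covariate drive default (the coefficients are invented for illustration, not estimated from the Taiwanese sample):

```python
# Sketch: logistic-regression default model with micro and macro
# covariates, scored by the ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 10349
tcri   = rng.integers(1, 10, n)           # credit risk index (1 = best)
growth = rng.normal(0.05, 0.10, n)        # asset growth rate
gdp    = rng.normal(0.03, 0.02, n)        # macroeconomic covariate
logit  = -4.0 + 0.45*tcri - 2.0*growth - 30.0*gdp
default = rng.random(n) < 1.0/(1.0 + np.exp(-logit))

X = np.column_stack([tcri, growth, gdp])
clf = LogisticRegression().fit(X, default)
auc = roc_auc_score(default, clf.predict_proba(X)[:, 1])
print("AUC =", round(auc, 3))
```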

  6. Calibrated predictions for multivariate competing risks models.

    Science.gov (United States)

    Gorfine, Malka; Hsu, Li; Zucker, David M; Parmigiani, Giovanni

    2014-04-01

    Prediction models for time-to-event data play a prominent role in assessing the individual risk of a disease, such as cancer. Accurate disease prediction models provide an efficient tool for identifying individuals at high risk, and provide the groundwork for estimating the population burden and cost of disease and for developing patient care guidelines. We focus on risk prediction of a disease in which family history is an important risk factor that reflects inherited genetic susceptibility, shared environment, and common behavior patterns. In this work family history is accommodated using frailty models, with the main novel feature being allowing for competing risks, such as other diseases or mortality. We show through a simulation study that naively treating competing risks as independent right censoring events results in non-calibrated predictions, with the expected number of events overestimated. Discrimination performance is not affected by ignoring competing risks. Our proposed prediction methodologies correctly account for competing events, are very well calibrated, and easy to implement.

  7. A Large Deformation Model for the Elastic Moduli of Two-dimensional Cellular Materials

    Institute of Scientific and Technical Information of China (English)

    HU Guoming; WAN Hui; ZHANG Youlin; BAO Wujun

    2006-01-01

    We developed a large deformation model for predicting the elastic moduli of two-dimensional cellular materials. This large deformation model was based on the large deflection of the inclined members of the cells of cellular materials. The deflection of the inclined member, the strain of the representative structure and the elastic moduli of two-dimensional cellular materials were expressed using incomplete elliptic integrals. The experimental results show that these elastic moduli are no longer constant at large deformation, but vary significantly with the strain. A comparison was made between this large deformation model and the small deformation model proposed by Gibson and Ashby.

  8. Modelling language evolution: Examples and predictions.

    Science.gov (United States)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  10. Dynamic globularization prediction during cogging process of large size TC11 titanium alloy billet with lamellar structure

    Institute of Scientific and Technical Information of China (English)

    Hong-wu SONG; Shi-hong ZHANG; Ming CHENG

    2014-01-01

    The flow behavior and dynamic globularization of TC11 titanium alloy during subtransus deformation are investigated through hot compression tests. A constitutive model is established based on a physical-based hardening model and a phenomenological softening model. Based on the recrystallization mechanisms of globularization, an Avrami-type kinetics model is established for prediction of the globularization fraction and globularized grain size under large-strain subtransus deformation of TC11 alloy. As a preliminary application of these results, the cogging process of a large size TC11 alloy billet is simulated. Based on subroutine development in the DEFORM software, a coupled simulation of the one-fire cogging process is developed. The predicted results are in good agreement with the experimental results in forging load and microstructure characteristics, which validates the reliability of the developed FEM subroutine models.
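
    Avrami-type globularization kinetics take the generic form X = 1 - exp(-k(ε - ε_c)^n) above a critical strain; a sketch with illustrative constants (the paper's fitted values are not reproduced here):

```python
# Sketch: Avrami-type law for the dynamically globularized fraction
# as a function of strain; eps_c, k and n are illustrative.
import numpy as np

def globularized_fraction(strain, eps_c=0.3, k=0.8, n=1.6):
    """X = 1 - exp(-k * (eps - eps_c)^n) for eps > eps_c, else 0."""
    eps = np.maximum(strain - eps_c, 0.0)
    return 1.0 - np.exp(-k * eps**n)

for eps in (0.5, 1.0, 1.5, 2.0):
    print(f"strain {eps:.1f}: globularized fraction "
          f"{globularized_fraction(eps):.2f}")
```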

  11. Global Solar Dynamo Models: Simulations and Predictions

    Indian Academy of Sciences (India)

    Mausumi Dikpati; Peter A. Gilman

    2008-03-01

    Flux-transport type solar dynamos have achieved considerable success in correctly simulating many solar cycle features, and are now being used for prediction of solar cycle timing and amplitude. We first define flux-transport dynamos and demonstrate how they work. The essential added ingredient in this class of models is meridional circulation, which governs the dynamo period and also plays a crucial role in determining the Sun’s memory about its past magnetic fields. We show that flux-transport dynamo models can explain many key features of solar cycles. Then we show that a predictive tool can be built from this class of dynamo that can be used to predict mean solar cycle features by assimilating magnetic field data from previous cycles.

  12. Model Predictive Control of Sewer Networks

    Science.gov (United States)

    Pedersen, Einar B.; Herbertsson, Hannes R.; Niemann, Henrik; Poulsen, Niels K.; Falk, Anne K. V.

    2017-01-01

    Developments in solutions for the management of urban drainage are of vital importance, as the amount of sewer water from urban areas continues to increase due to the growth of the world’s population and changing climate conditions. How a sewer network is structured, monitored and controlled has thus become an essential factor for efficient performance of waste water treatment plants. This paper examines methods for simplified modelling and control of a sewer network. A practical approach to the problem is taken by analysing a simplified design model, which is based on the Barcelona benchmark model. Due to the inherent constraints, the applied approach is based on Model Predictive Control.
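
    A minimal sketch of a linear MPC of this flavour for a single storage volume, using cvxpy; the dynamics, limits and weights are illustrative stand-ins for the Barcelona benchmark model:

```python
# Sketch: MPC for one storage volume with a forecast rain inflow;
# constraints cap stored volume and controlled outflow.
import numpy as np
import cvxpy as cp

N, x0 = 12, 40.0                       # horizon (steps), initial volume (m^3)
inflow = 5 + 10*np.exp(-0.5*(np.arange(N) - 4)**2)  # forecast inflow profile

x = cp.Variable(N + 1)                 # stored volume
u = cp.Variable(N)                     # controlled outflow to treatment
cons = [x[0] == x0, u >= 0, u <= 12, x >= 0, x <= 100]
for k in range(N):
    cons.append(x[k+1] == x[k] + inflow[k] - u[k])   # mass balance

# Track a low reference volume while penalizing outflow variation
obj = cp.Minimize(cp.sum_squares(x - 20) + 0.1*cp.sum_squares(cp.diff(u)))
cp.Problem(obj, cons).solve()
print("first outflow move:", round(float(u.value[0]), 2), "m^3/step")
```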

  13. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  14. Circulating Osteopontin and Prediction of Hepatocellular Carcinoma Development in a Large European Population.

    Science.gov (United States)

    Duarte-Salles, Talita; Misra, Sandeep; Stepien, Magdalena; Plymoth, Amelie; Muller, David; Overvad, Kim; Olsen, Anja; Tjønneland, Anne; Baglietto, Laura; Severi, Gianluca; Boutron-Ruault, Marie-Christine; Turzanski-Fortner, Renee; Kaaks, Rudolf; Boeing, Heiner; Aleksandrova, Krasimira; Trichopoulou, Antonia; Lagiou, Pagona; Bamia, Christina; Pala, Valeria; Palli, Domenico; Mattiello, Amalia; Tumino, Rosario; Naccarati, Alessio; Bueno-de-Mesquita, H B As; Peeters, Petra H; Weiderpass, Elisabete; Quirós, J Ramón; Agudo, Antonio; Sánchez-Cantalejo, Emilio; Ardanaz, Eva; Gavrila, Diana; Dorronsoro, Miren; Werner, Mårten; Hemmingsson, Oskar; Ohlsson, Bodil; Sjöberg, Klas; Wareham, Nicholas J; Khaw, Kay-Tee; Bradbury, Kathryn E; Gunter, Marc J; Cross, Amanda J; Riboli, Elio; Jenab, Mazda; Hainaut, Pierre; Beretta, Laura

    2016-09-01

    We previously identified osteopontin (OPN) as a promising marker for the early detection of hepatocellular carcinoma (HCC). In this study, we investigated the association between prediagnostic circulating OPN levels and HCC incidence in a large population-based cohort. A nested case-control study was conducted within the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort. During a mean follow-up of 4.8 years, 100 HCC cases were identified. Each case was matched to two controls and OPN levels were measured in baseline plasma samples. Viral hepatitis, liver function, and α-fetoprotein (AFP) tests were also conducted. Conditional logistic regression models were used to calculate multivariable odds ratio (OR) and 95% confidence intervals (95% CI) for OPN levels in relation to HCC. Receiver operating characteristic curves were constructed to determine the discriminatory accuracy of OPN alone or in combination with other liver biomarkers in the prediction of HCC. OPN levels were positively associated with HCC risk (per 10% increment, multivariable OR = 1.30; 95% CI, 1.14-1.48). The association was stronger among cases diagnosed within 2 years of follow-up. Adding liver function tests to OPN improved the discriminatory performance for subjects who developed HCC (AUC = 0.86). For cases diagnosed within 2 years, the combination of OPN and AFP was best able to predict HCC risk (AUC = 0.88). The best predictive model for HCC in this low-risk population is OPN in combination with liver function tests. Within 2 years of diagnosis, the combination of OPN and AFP best predicted HCC development, suggesting that measuring OPN and AFP could identify high-risk groups independently of a liver disease diagnosis. Cancer Prev Res; 9(9); 758-65. ©2016 AACR.

  15. Modelling Chemical Reasoning to Predict Reactions

    OpenAIRE

    Segler, Marwin H. S.; Waller, Mark P.

    2016-01-01

    The ability to reason beyond established knowledge allows Organic Chemists to solve synthetic problems and to invent novel transformations. Here, we propose a model which mimics chemical reasoning and formalises reaction prediction as finding missing links in a knowledge graph. We have constructed a knowledge graph containing 14.4 million molecules and 8.2 million binary reactions, which represents the bulk of all chemical reactions ever published in the scientific literature. Our model outpe...

  16. Predictive Modeling of the CDRA 4BMS

    Science.gov (United States)

    Coker, Robert; Knox, James

    2016-01-01

    Fully predictive models of the Four Bed Molecular Sieve of the Carbon Dioxide Removal Assembly on the International Space Station are being developed. This virtual laboratory will be used to help reduce mass, power, and volume requirements for future missions. In this paper we describe current and planned modeling developments in the area of carbon dioxide removal to support future crewed Mars missions as well as the resolution of anomalies observed in the ISS CDRA.

  17. Raman Model Predicting Hardness of Covalent Crystals

    OpenAIRE

    Zhou, Xiang-Feng; Qian, Quang-Rui; Sun, Jian; Tian, Yongjun; Wang, Hui-Tian

    2009-01-01

    Based on the fact that both hardness and vibrational Raman spectrum depend on the intrinsic property of chemical bonds, we propose a new theoretical model for predicting hardness of a covalent crystal. The quantitative relationship between hardness and vibrational Raman frequencies deduced from the typical zincblende covalent crystals is validated to be also applicable for the complex multicomponent crystals. This model enables us to nondestructively and indirectly characterize the hardness o...

  18. Predictive Modelling of Mycotoxins in Cereals

    NARCIS (Netherlands)

    Fels, van der H.J.; Liu, C.

    2015-01-01

    This article presents the summaries of the presentations given at the 30th meeting of the Werkgroep Fusarium. The topics are: Predictive Modelling of Mycotoxins in Cereals; Microbial degradation of DON; Exposure to green leaf volatiles primes wheat against FHB but boosts

  19. Unreachable Setpoints in Model Predictive Control

    DEFF Research Database (Denmark)

    Rawlings, James B.; Bonné, Dennis; Jørgensen, John Bagterp

    2008-01-01

    steady state is established for terminal constraint model predictive control (MPC). The region of attraction is the steerable set. Existing analysis methods for closed-loop properties of MPC are not applicable to this new formulation, and a new analysis method is developed. It is shown how to extend...

  1. Prediction modelling for population conviction data

    NARCIS (Netherlands)

    Tollenaar, N.

    2017-01-01

    In this thesis, the possibilities of using prediction models for judicial penal case data are investigated. The development and refinement of a risk taxation scale based on these data is discussed. When false positives are weighted as severely as false negatives, 70% can be classified correctly.

  2. A Predictive Model for MSSW Student Success

    Science.gov (United States)

    Napier, Angela Michele

    2011-01-01

    This study tested a hypothetical model for predicting both graduate GPA and graduation of University of Louisville Kent School of Social Work Master of Science in Social Work (MSSW) students entering the program during the 2001-2005 school years. The preexisting characteristics of demographics, academic preparedness and culture shock along with…

  3. A revised prediction model for natural conception

    NARCIS (Netherlands)

    Bensdorp, A.J.; Steeg, J.W. van der; Steures, P.; Habbema, J.D.; Hompes, P.G.; Bossuyt, P.M.; Veen, F. van der; Mol, B.W.; Eijkemans, M.J.; Kremer, J.A.M.; et al.,

    2017-01-01

    One of the aims in reproductive medicine is to differentiate between couples that have favourable chances of conceiving naturally and those that do not. Since the development of the prediction model of Hunault, characteristics of the subfertile population have changed. The objective of this analysis

  4. Distributed Model Predictive Control via Dual Decomposition

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Stoustrup, Jakob; Andersen, Palle

    2014-01-01

    This chapter presents dual decomposition as a means to coordinate a number of subsystems coupled by state and input constraints. Each subsystem is equipped with a local model predictive controller while a centralized entity manages the subsystems via prices associated with the coupling constraints...

  6. Leptogenesis in minimal predictive seesaw models

    CERN Document Server

    Björkeroth, Fredrik; Varzielas, Ivo de Medeiros; King, Stephen F

    2015-01-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\

  7. Dynamic Loads and Wake Prediction for Large Wind Turbines Based on Free Wake Method

    Institute of Scientific and Technical Information of China (English)

    Cao Jiufa; Wang Tongguang; Long Hui; Ke Shitang; Xu Bofeng

    2015-01-01

    With large-scale wind turbines, the issue of aeroelastic response is even more significant for the dynamic behaviour of the system. An unsteady free vortex wake method is proposed to calculate the wake shape and aerodynamic load. Considering the effects of aerodynamic load, inertial load and gravity load, the decoupled dynamic equations are established using the finite element method in conjunction with the modal method, and the equations are solved numerically by the Newmark approach. Finally, the numerical simulation of a large-scale wind turbine is performed by coupling the free vortex wake model with the structural model. The results show that this coupled model can predict the dynamic characteristics of the flexible wind turbine effectively and efficiently. Under the influence of the gravitational force, the dynamic response in the flapwise direction contributes to the dynamic behavior in the edgewise direction under the operational condition of steady wind speed. The difference in dynamic response between the flexible and rigid wind turbines manifests where the aerodynamics/structure coupling effect is of significance in both wind turbine design and performance calculation.
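
    The Newmark approach named above is a standard structural time integrator; a sketch of the constant-average-acceleration variant (beta = 1/4, gamma = 1/2) for M a + C v + K x = F(t), on a single-degree-of-freedom stand-in for the coupled turbine equations:

```python
# Sketch: implicit Newmark-beta time stepping via the effective-stiffness
# formulation; M, C, K and the forcing are illustrative.
import numpy as np

M, C, K = 1.0, 0.1, 40.0
dt, nsteps = 0.01, 2000
beta, gamma = 0.25, 0.5
F = lambda t: np.sin(2.0 * t)              # illustrative aerodynamic forcing

x, v = 0.0, 0.0
a = (F(0.0) - C*v - K*x) / M               # consistent initial acceleration
Keff = K + M/(beta*dt**2) + gamma*C/(beta*dt)
for i in range(1, nsteps + 1):
    t = i * dt
    rhs = (F(t)
           + M*(x/(beta*dt**2) + v/(beta*dt) + (0.5/beta - 1)*a)
           + C*(gamma*x/(beta*dt) + (gamma/beta - 1)*v
                + dt*(gamma/(2*beta) - 1)*a))
    x_new = rhs / Keff
    a_new = (x_new - x)/(beta*dt**2) - v/(beta*dt) - (0.5/beta - 1)*a
    v_new = v + dt*((1 - gamma)*a + gamma*a_new)
    x, v, a = x_new, v_new, a_new
print("displacement at t = 20 s:", round(x, 4))
```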

  8. Identification and Prediction of Large Pedestrian Flow in Urban Areas Based on a Hybrid Detection Approach

    Directory of Open Access Journals (Sweden)

    Kaisheng Zhang

    2016-12-01

    Full Text Available Recently, population density has grown quickly with the accelerating pace of urbanization. At the same time, overcrowded situations are more likely to occur in populous urban areas, increasing the risk of accidents. This paper proposes a synthetic approach to recognize and identify large pedestrian flows. In particular, a hybrid pedestrian flow detection model was constructed by analyzing real data from major mobile phone operators in China, including information from smartphones and base stations (BS). In the hybrid model, the Log Distance Path Loss (LDPL) model was used to estimate the pedestrian density from raw network data, and information was retrieved with a Gaussian Process (GP) through supervised learning. Temporal-spatial prediction of the pedestrian data was carried out with Machine Learning (ML) approaches. Finally, a case study of a real Central Business District (CBD) scenario in Shanghai, China, using records of millions of cell phone users, was conducted. The results showed that the new approach significantly increases the utility and capacity of the mobile network. A more reasonable overcrowding detection and alert system can be developed to improve safety in subway lines and other hotspot landmark areas, such as the Bund, People’s Square or Disneyland, where a large passenger flow generally exists.
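
    The LDPL step can be sketched as follows, assuming illustrative values for the reference power, path-loss exponent and shadowing (the operators' calibrated parameters are not public):

```python
# Sketch: invert the Log Distance Path Loss relation
# RSSI(d) = P0 - 10*n*log10(d/d0) + noise
# to place phones relative to one base station and form a density estimate.
import numpy as np

rng = np.random.default_rng(8)
P0, n_exp, d0, sigma = -40.0, 3.0, 1.0, 4.0     # LDPL parameters (dB, m)

true_d = rng.uniform(10, 500, 2000)             # phones around one BS
rssi = P0 - 10*n_exp*np.log10(true_d/d0) + rng.normal(0, sigma, true_d.size)

est_d = d0 * 10 ** ((P0 - rssi) / (10 * n_exp)) # inverted LDPL distance
ring = (est_d > 100) & (est_d < 200)            # phones in a 100-200 m annulus
area = np.pi * (200**2 - 100**2)                # annulus area, m^2
print(f"estimated density: {ring.sum()/area*1e4:.1f} persons per hectare")
```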

  9. Caries risk assessment models in caries prediction

    Directory of Open Access Journals (Sweden)

    Amila Zukanović

    2013-11-01

    Full Text Available Objective. The aim of this research was to assess the efficiency of different multifactor models in caries prediction. Material and methods. Data from the questionnaire and objective examination of 109 examinees were entered into the Cariogram, Previser and Caries-Risk Assessment Tool (CAT) multifactor risk assessment models. Caries risk was assessed with the help of all three models for each patient, classifying them as low-, medium- or high-risk patients. The development of new caries lesions over a period of three years [Decayed Missing Filled Tooth (DMFT) increment = difference between Decayed Missing Filled Tooth Surface (DMFTS) index at baseline and follow-up] provided for examination of the predictive capacity of the different multifactor models. Results. The data gathered showed that the different multifactor risk assessment models give significantly different results (Friedman test: Chi square = 100.073, p = 0.000). The Cariogram was the model which identified the majority of examinees as medium-risk patients (70%). The other two models were more radical in risk assessment, giving more unfavorable risk profiles for patients. In only 12% of the patients did the three multifactor models assess the risk in the same way. Previser and CAT gave the same results in 63% of cases – the Wilcoxon test showed that there is no statistically significant difference in caries risk assessment between these two models (Z = -1.805, p = 0.071). Conclusions. Evaluation of three different multifactor caries risk assessment models (Cariogram, PreViser and CAT) showed that only the Cariogram can successfully predict new caries development in 12-year-old Bosnian children.

  10. Disease prediction models and operational readiness.

    Directory of Open Access Journals (Sweden)

    Courtney D Corley

    Full Text Available The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event with focus on the One Health paradigm. These events are characterized by evidence of infection and/or disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplications and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of verification and validation (V&V) methods applied to each model, if any V&V method was reported. All models were classified as either having undergone some verification or validation method, or none. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness Levels.

  11. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

    We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...

  12. ENSO Prediction using Vector Autoregressive Models

    Science.gov (United States)

    Chapman, D. R.; Cane, M. A.; Henderson, N.; Lee, D.; Chen, C.

    2013-12-01

    A recent comparison (Barnston et al., 2012, BAMS) shows that the ENSO forecasting skill of dynamical models now exceeds that of statistical models, but the best statistical models are comparable to all but the very best dynamical models. In this comparison the leading statistical model is the one based on the Empirical Model Reduction (EMR) method. Here we report on experiments with multilevel Vector Autoregressive models using only sea surface temperatures (SSTs) as predictors. VAR(L) models generalize Linear Inverse Models (LIM), which are a VAR(1) method, as well as multilevel univariate autoregressive models. Optimal forecast skill is achieved using 12 to 14 months of prior state information (i.e. 12-14 levels), which allows SSTs alone to capture the effects of other variables such as heat content as well as seasonality. The use of multiple levels allows the model, advancing one month at a time, to perform at least as well for a 6-month forecast as a model constructed to explicitly forecast 6 months ahead. We infer that the multilevel model has fully captured the linear dynamics (cf. Penland and Magorian, 1993, J. Climate). Finally, while VAR(L) is equivalent to L-level EMR, we show in a 150-year cross-validated assessment that we can increase forecast skill by improving on the EMR initialization procedure. The greatest benefit of this change is in allowing the prediction to make effective use of information over many more months.
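
    A minimal sketch of fitting a VAR(L) by stacked-lag least squares and advancing it one month at a time to a 6-month lead, with a synthetic three-variable series standing in for the SST predictors:

```python
# Sketch: VAR(L) fitted by least squares on L stacked lags, then iterated
# forward month by month to produce a 6-month-lead forecast.
import numpy as np

rng = np.random.default_rng(9)
T, k, L = 1500, 3, 12
y = np.zeros((T, k))
for t in range(2, T):                        # synthetic oscillatory indices
    y[t] = 1.6*y[t-1] - 0.9*y[t-2] + 0.3*rng.standard_normal(k)

# Predictor matrix: y_t regressed on [y_{t-1}, ..., y_{t-L}]
X = np.hstack([y[L-j-1:T-j-1] for j in range(L)])
Y = y[L:]
A, *_ = np.linalg.lstsq(X, Y, rcond=None)    # (k*L, k) coefficient block

state = y[-L:][::-1].reshape(-1)             # most recent lag first
for _ in range(6):                           # advance one month at a time
    nxt = state @ A
    state = np.concatenate([nxt, state[:-k]])
print("6-month-lead forecast:", np.round(nxt, 2))
```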

  13. Electrostatic ion thrusters - towards predictive modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kalentev, O.; Matyash, K.; Duras, J.; Lueskow, K.F.; Schneider, R. [Ernst-Moritz-Arndt Universitaet Greifswald, D-17489 (Germany); Koch, N. [Technische Hochschule Nuernberg Georg Simon Ohm, Kesslerplatz 12, D-90489 Nuernberg (Germany); Schirra, M. [Thales Electronic Systems GmbH, Soeflinger Strasse 100, D-89077 Ulm (Germany)

    2014-02-15

    The development of electrostatic ion thrusters so far has mainly been based on empirical and qualitative know-how, and on evolutionary iteration steps. This resulted in considerable effort regarding prototype design, construction and testing and therefore in significant development and qualification costs and high time demands. For future developments it is anticipated to implement simulation tools which allow for quantitative prediction of ion thruster performance, long-term behavior and space craft interaction prior to hardware design and construction. Based on integrated numerical models combining self-consistent kinetic plasma models with plasma-wall interaction modules a new quality in the description of electrostatic thrusters can be reached. These open the perspective for predictive modeling in this field. This paper reviews the application of a set of predictive numerical modeling tools on an ion thruster model of the HEMP-T (High Efficiency Multi-stage Plasma Thruster) type patented by Thales Electron Devices GmbH. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  14. Gas explosion prediction using CFD models

    Energy Technology Data Exchange (ETDEWEB)

    Niemann-Delius, C.; Okafor, E. [RWTH Aachen Univ. (Germany); Buhrow, C. [TU Bergakademie Freiberg Univ. (Germany)

    2006-07-15

    A number of CFD models are currently available to model gaseous explosions in complex geometries. Some of these tools allow the representation of complex environments within hydrocarbon production plants. In certain explosion scenarios, a correction is usually made for the presence of buildings and other complexities by using crude approximations to obtain realistic estimates of explosion behaviour as can be found when predicting the strength of blast waves resulting from initial explosions. With the advance of computational technology, and greater availability of computing power, computational fluid dynamics (CFD) tools are becoming increasingly available for solving such a wide range of explosion problems. A CFD-based explosion code - FLACS can, for instance, be confidently used to understand the impact of blast overpressures in a plant environment consisting of obstacles such as buildings, structures, and pipes. With its porosity concept representing geometry details smaller than the grid, FLACS can represent geometry well, even when using coarse grid resolutions. The performance of FLACS has been evaluated using a wide range of field data. In the present paper, the concept of computational fluid dynamics (CFD) and its application to gas explosion prediction is presented. Furthermore, the predictive capabilities of CFD-based gaseous explosion simulators are demonstrated using FLACS. Details about the FLACS-code, some extensions made to FLACS, model validation exercises, application, and some results from blast load prediction within an industrial facility are presented. (orig.)

  15. Genetic models of homosexuality: generating testable predictions.

    Science.gov (United States)

    Gavrilets, Sergey; Rice, William R

    2006-12-22

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism.

  16. A Study On Distributed Model Predictive Consensus

    CERN Document Server

    Keviczky, Tamas

    2008-01-01

    We investigate convergence properties of a proposed distributed model predictive control (DMPC) scheme, where agents negotiate to compute an optimal consensus point using an incremental subgradient method based on primal decomposition as described in Johansson et al. [2006, 2007]. The objective of the distributed control strategy is to agree upon and achieve an optimal common output value for a group of agents in the presence of constraints on the agent dynamics using local predictive controllers. Stability analysis using a receding horizon implementation of the distributed optimal consensus scheme is performed. Conditions are given under which convergence can be obtained even if the negotiations do not reach full consensus.

  17. Applicative limitations of sediment transport on predictive modeling in geomorphology

    Institute of Scientific and Technical Information of China (English)

    WEI Xiang; LI Zhanbin

    2004-01-01

    Sources of uncertainty or error that arise in attempting to scale up the results of laboratory-scale sediment transport studies for predictive modeling of geomorphic systems include: (i) model imperfection, (ii) omission of important processes, (iii) lack of knowledge of initial conditions, (iv) sensitivity to initial conditions, (v) unresolved heterogeneity, (vi) occurrence of external forcing, and (vii) inapplicability of the factor of safety concept. Sources of uncertainty that are unimportant or that can be controlled at small scales and over short time scales become important in large-scale applications and over long time scales. Control and repeatability, hallmarks of laboratory-scale experiments, are usually lacking at the large scales characteristic of geomorphology. Heterogeneity is an important concomitant of size, and tends to make large systems unique. Uniqueness implies that prediction cannot be based upon first-principles quantitative modeling alone, but must be a function of system history as well. Periodic data collection, feedback, and model updating are essential where site-specific prediction is required.

  18. NONLINEAR MODEL PREDICTIVE CONTROL OF CHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    R. G. SILVA

    1999-03-01

    Full Text Available A new algorithm for model predictive control is presented. The algorithm utilizes a simultaneous solution and optimization strategy to solve the model's differential equations. The equations are discretized by equidistant collocation and, along with the algebraic model equations, are included as constraints in a nonlinear programming (NLP) problem. This algorithm is compared with the algorithm that uses orthogonal collocation on finite elements. The equidistant collocation algorithm results in simpler equations, providing a decrease in computation time for the control moves. Simulation results are presented and show a satisfactory performance of this algorithm.
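
    A sketch of the simultaneous strategy on a scalar illustrative ODE, with SciPy's SLSQP standing in for the paper's NLP solver and implicit-midpoint collocation standing in for its equidistant scheme:

```python
# Sketch: discretize the model ODE on an equidistant grid and hand the
# defect equations to an NLP solver as equality constraints.
import numpy as np
from scipy.optimize import minimize

N, dt, x0, sp = 20, 0.25, 1.0, 0.4        # horizon, step, initial state, setpoint
f = lambda x, u: -x**3 + u                # illustrative nonlinear dynamics

def unpack(z):                            # decision vector = states + controls
    return z[:N+1], z[N+1:]

def objective(z):
    x, u = unpack(z)
    return np.sum((x - sp)**2) + 0.1*np.sum(u**2)

def defects(z):                           # collocation residuals (must be zero)
    x, u = unpack(z)
    xm = 0.5*(x[:-1] + x[1:])
    return x[1:] - x[:-1] - dt*f(xm, u)

z0 = np.concatenate([np.full(N+1, x0), np.zeros(N)])
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": lambda z: z[0] - x0}],
               bounds=[(None, None)]*(N+1) + [(-1.0, 1.0)]*N)
x_opt, u_opt = unpack(res.x)
print("first control move:", round(u_opt[0], 3))
```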

  19. The Large Scale Machine Learning in an Artificial Society: Prediction of the Ebola Outbreak in Beijing

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2015-01-01

    Full Text Available Ebola virus disease (EVD) is distinguished by its high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict possible epidemic situations in practice. Fortunately, in recent years computational experiments based on artificial societies have appeared, providing a new approach to studying the propagation of EVD and analyzing the corresponding interventions. The rationality of the artificial society is therefore the key to the accuracy and reliability of the experimental results. Individuals' behaviors, along with travel mode, directly affect the propagation among individuals. First, an artificial Beijing is reconstructed based on geodemographics, and machine learning is used to optimize individuals' behaviors. Meanwhile, an Ebola disease-course model and a propagation model are built, according to the parameters observed in West Africa. Subsequently, the propagation mechanism of EVD is analyzed, the epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of the Chinese government, the conclusion is drawn that a large-scale Ebola outbreak is impossible in the city of Beijing.

  20. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Full Text Available Management by metrics is the expectation from IT service providers to stay as a differentiator. Given a project, its associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are reactive, too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt after implementation. How do we proactively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  1. Neuro-fuzzy modeling in bankruptcy prediction

    Directory of Open Access Journals (Sweden)

    Vlachos D.

    2003-01-01

    Full Text Available For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From the paper of Altman in 1968 to the recent papers in the '90s, the progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e. using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control actions in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.

  2. The Next Page Access Prediction Using Markov Model

    Directory of Open Access Journals (Sweden)

    Deepti Razdan

    2011-09-01

Predicting the next page to be accessed by web users has attracted a large amount of research. In this paper, a new web usage mining approach is proposed to predict next-page access. Similar access patterns are first identified from the web log using K-means clustering, and a Markov model is then used to predict next-page accesses. The tightness of the clusters is improved by setting a similarity threshold while forming them. In traditional recommendation models, clustering on non-sequential data decreases recommendation accuracy; this paper instead incorporates clustering into a low-order Markov model, which can improve prediction accuracy. The main focus of the paper is preprocessing and identification of useful patterns from web data using mining techniques with the help of open-source software.
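
    As a hedged sketch of the record's core mechanism (not the authors' implementation), the fragment below builds a first-order Markov transition table from clickstream sessions and predicts the most probable next page; the sessions and page names are invented for the example.

```python
from collections import defaultdict

# Toy first-order Markov model for next-page prediction (illustrative only).
sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "blog", "products", "cart"],
    ["home", "products", "reviews", "cart"],
]

# Count page-to-page transitions across all sessions.
counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for cur, nxt in zip(s, s[1:]):
        counts[cur][nxt] += 1

def predict_next(page):
    """Return the most probable next page and its estimated probability."""
    nxt = counts.get(page)
    if not nxt:
        return None, 0.0
    total = sum(nxt.values())
    best = max(nxt, key=nxt.get)
    return best, nxt[best] / total

print(predict_next("products"))  # -> ('cart', 0.666...)
```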

  3. Mathematical models for predicting indoor air quality from smoking activity.

    Science.gov (United States)

    Ott, W R

    1999-05-01

    Much progress has been made over four decades in developing, testing, and evaluating the performance of mathematical models for predicting pollutant concentrations from smoking in indoor settings. Although largely overlooked by the regulatory community, these models provide regulators and risk assessors with practical tools for quantitatively estimating the exposure level that people receive indoors for a given level of smoking activity. This article reviews the development of the mass balance model and its application to predicting indoor pollutant concentrations from cigarette smoke and derives the time-averaged version of the model from the basic laws of conservation of mass. A simple table is provided of computed respirable particulate concentrations for any indoor location for which the active smoking count, volume, and concentration decay rate (deposition rate combined with air exchange rate) are known. Using the indoor ventilatory air exchange rate causes slightly higher indoor concentrations and therefore errs on the side of protecting health, since it excludes particle deposition effects, whereas using the observed particle decay rate gives a more accurate prediction of indoor concentrations. This table permits easy comparisons of indoor concentrations with air quality guidelines and indoor standards for different combinations of active smoking counts and air exchange rates. The published literature on mathematical models of environmental tobacco smoke also is reviewed and indicates that these models generally give good agreement between predicted concentrations and actual indoor measurements.
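
    As a worked illustration of the time-averaged mass-balance relation reviewed above, the sketch below computes an average indoor particle concentration as source strength divided by volume times total removal rate. The parameter values are assumptions chosen for the example, not figures from the article's table.

```python
# Time-averaged mass-balance sketch for indoor particle concentration.
# C = S / (V * phi): source strength over (volume x total removal rate).
# All numbers below are assumed, illustrative values.

def avg_concentration(cig_per_hour, emission_mg_per_cig, volume_m3, decay_per_hour):
    """Average RSP concentration (mg/m^3) for steady smoking activity.

    decay_per_hour combines air exchange and particle deposition.
    """
    source_mg_per_hour = cig_per_hour * emission_mg_per_cig
    return source_mg_per_hour / (volume_m3 * decay_per_hour)

# Example: 2 cigarettes/h, ~14 mg RSP per cigarette (assumed),
# 50 m^3 room, 1.5 /h combined removal rate.
c = avg_concentration(2, 14.0, 50.0, 1.5)
print(f"{c * 1000:.0f} ug/m^3")  # -> ~373 ug/m^3
```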

  4. Pressure prediction model for compression garment design.

    Science.gov (United States)

    Leung, W Y; Yuen, D W; Ng, Sun Pui; Shi, S Q

    2010-01-01

Based on the application of Laplace's law to compression garments, an equation for predicting garment pressure, incorporating the body circumference, the cross-sectional area of the fabric, the applied strain (as a function of reduction factor), and the corresponding Young's modulus, is developed. Design procedures are presented for predicting garment pressure from the aforementioned parameters in clinical applications. Compression garments have been widely used in treating burn scars, and fabricating a compression garment with a required pressure is important in the healing process. A systematic and scientific design method can enable the occupational therapist and compression garment manufacturer to custom-make a compression garment with a specific pressure. The objectives of this study are 1) to develop a pressure prediction model incorporating different design factors to estimate the pressure exerted by compression garments before fabrication; and 2) to propose design procedures for clinical applications. Three kinds of fabrics cut at different bias angles were tested under uniaxial tension, as were samples made in a double-layered structure. Sets of nonlinear force-extension data were obtained for calculating the predicted pressure. Using the value at a 0° bias angle as reference, the Young's modulus can vary by as much as 29% for fabric type P11117, 43% for fabric type PN2170, and even 360% for fabric type AP85120 at a reduction factor of 20%. When comparing the predicted pressure calculated from the single-layered and double-layered fabrics, the double-layered construction provides a larger range of target pressure at a particular strain. The anisotropic and nonlinear behaviors of the fabrics have thus been determined. Compression garments can be methodically designed using the proposed analytical pressure prediction model.
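
    The underlying relation is Laplace's law, P = T/r, with the fabric tension obtained from the measured Young's modulus and applied strain. The sketch below is one plausible reading of that relation; the tension model (modulus times strain times effective fabric thickness) and all numbers are assumptions for illustration, not the paper's exact formulation.

```python
import math

# Laplace's-law sketch for compression-garment pressure (illustrative).
# P = T / r, where T is fabric tension per unit length (N/m) and
# r is the limb radius derived from the body circumference.

def garment_pressure(youngs_modulus_pa, strain, fabric_area_m2,
                     fabric_width_m, circumference_m):
    """Estimate interface pressure in mmHg.

    Tension per unit length is approximated as E * strain * (A / w),
    i.e., modulus x strain x effective fabric thickness (an assumption).
    """
    tension = youngs_modulus_pa * strain * (fabric_area_m2 / fabric_width_m)
    radius = circumference_m / (2.0 * math.pi)
    pressure_pa = tension / radius
    return pressure_pa / 133.322  # Pa -> mmHg

# Assumed values: E = 0.5 MPa, strain 0.25 (roughly a 20% reduction factor),
# 5 cm wide strip with 25 mm^2 cross-section, 30 cm limb circumference.
print(f"{garment_pressure(0.5e6, 0.25, 25e-6, 0.05, 0.30):.1f} mmHg")
```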

  5. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data

    Directory of Open Access Journals (Sweden)

    Sungho Won

    2015-01-01

Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successful findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a big hurdle in building disease prediction models. Recently, many statistical methods based on penalized regressions have been proposed to tackle the so-called “large P and small N” problem. Penalized regressions, including the least absolute selection and shrinkage operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than existing methods, at least for the diseases under consideration.
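
    A minimal sketch of the penalized-regression idea on a synthetic "large P, small N" genotype matrix, using scikit-learn; the data dimensions, effect sizes, and penalty strengths are invented for illustration and do not reproduce the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic "large P, small N" genotype data: 200 subjects, 5,000 SNPs
# coded 0/1/2, with 10 truly causal SNPs of small effect (all assumed).
n, p = 200, 5000
X = rng.integers(0, 3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[:10] = 0.3
logit = X @ beta - (X @ beta).mean()
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# L1 (LASSO-like) and L2 (ridge-like) penalized logistic regression.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ridge = LogisticRegression(penalty="l2", solver="liblinear", C=0.1)

for name, model in [("L1", lasso), ("L2", ridge)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(name, f"mean CV AUC = {auc:.2f}")
```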

  6. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help sharpen the identification of the best-fitting geophysical models.
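
    Although the abstract does not give the test statistic explicitly, a covariance-aware comparison of this kind is conventionally written in the generalized least-squares form below, where the model covariance simply adds to the data covariance (a plausible reconstruction, not a quotation from the paper):

```latex
\[
  \chi^2 = (\mathbf{d}-\mathbf{m})^{\mathsf{T}}
           \left(\mathbf{C}_d+\mathbf{C}_m\right)^{-1}
           (\mathbf{d}-\mathbf{m})
\]
% d: GPS-derived strain observations, C_d their covariance matrix
% m: model-predicted strains, C_m the empirically fitted model covariance
```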

  7. SAS-macros for estimation and prediction in a model of the electricity consumption

    DEFF Research Database (Denmark)

    1998-01-01

"SAS-macros for estimation and prediction in a model of the electricity consumption" is a large collection of SAS macros for handling a model of the electricity consumption in Eastern Denmark. The macros are installed at Elkraft, Ballerup.

  8. Seasonal Predictability in a Model Atmosphere.

    Science.gov (United States)

    Lin, Hai

    2001-07-01

The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  9. Charge transport model to predict intrinsic reliability for dielectric materials

    Energy Technology Data Exchange (ETDEWEB)

    Ogden, Sean P. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States); Borja, Juan; Plawsky, Joel L., E-mail: plawsky@rpi.edu; Gill, William N. [Howard P. Isermann Department of Chemical and Biological Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Lu, T.-M. [Department of Physics, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yeap, Kong Boon [GLOBALFOUNDRIES, 400 Stonebreak Rd. Ext., Malta, New York 12020 (United States)

    2015-09-28

    Several lifetime models, mostly empirical in nature, are used to predict reliability for low-k dielectrics used in integrated circuits. There is a dispute over which model provides the most accurate prediction for device lifetime at operating conditions. As a result, there is a need to transition from the use of these largely empirical models to one built entirely on theory. Therefore, a charge transport model was developed to predict the device lifetime of low-k interconnect systems. The model is based on electron transport and donor-type defect formation. Breakdown occurs when a critical defect concentration accumulates, resulting in electron tunneling and the emptying of positively charged traps. The enhanced local electric field lowers the barrier for electron injection into the dielectric, causing a positive feedforward failure. The charge transport model is able to replicate experimental I-V and I-t curves, capturing the current decay at early stress times and the rapid current increase at failure. The model is based on field-driven and current-driven failure mechanisms and uses a minimal number of parameters. All the parameters have some theoretical basis or have been measured experimentally and are not directly used to fit the slope of the time-to-failure versus applied field curve. Despite this simplicity, the model is able to accurately predict device lifetime for three different sources of experimental data. The simulation's predictions at low fields and very long lifetimes show that the use of a single empirical model can lead to inaccuracies in device reliability.

  10. In silico modeling to predict drug-induced phospholipidosis

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sydney S.; Kim, Jae S.; Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov; Sadrieh, Nakissa

    2013-06-01

    Drug-induced phospholipidosis (DIPL) is a preclinical finding during pharmaceutical drug development that has implications on the course of drug development and regulatory safety review. A principal characteristic of drugs inducing DIPL is known to be a cationic amphiphilic structure. This provides evidence for a structure-based explanation and opportunity to analyze properties and structures of drugs with the histopathologic findings for DIPL. In previous work from the FDA, in silico quantitative structure–activity relationship (QSAR) modeling using machine learning approaches has shown promise with a large dataset of drugs but included unconfirmed data as well. In this study, we report the construction and validation of a battery of complementary in silico QSAR models using the FDA's updated database on phospholipidosis, new algorithms and predictive technologies, and in particular, we address high performance with a high-confidence dataset. The results of our modeling for DIPL include rigorous external validation tests showing 80–81% concordance. Furthermore, the predictive performance characteristics include models with high sensitivity and specificity, in most cases above ≥ 80% leading to desired high negative and positive predictivity. These models are intended to be utilized for regulatory toxicology applied science needs in screening new drugs for DIPL. - Highlights: • New in silico models for predicting drug-induced phospholipidosis (DIPL) are described. • The training set data in the models is derived from the FDA's phospholipidosis database. • We find excellent predictivity values of the models based on external validation. • The models can support drug screening and regulatory decision-making on DIPL.

  11. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

Growing data sets and increasing analysis times are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared them with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  12. Progress and Current Challenges in Modeling Large RNAs.

    Science.gov (United States)

    Somarowthu, Srinivas

    2016-02-27

Recent breakthroughs in next-generation sequencing technologies have led to the discovery of several classes of non-coding RNAs (ncRNAs). It is now apparent that RNA molecules are not just carriers of genetic information but also key players in many cellular processes. While there has been a rapid increase in the number of ncRNA sequences deposited in various databases over the past decade, the biological functions of these ncRNAs remain poorly understood. Similar to proteins, RNA molecules carry out a function by forming specific three-dimensional structures. Understanding the function of a particular RNA therefore requires a detailed knowledge of its structure. However, determining experimental structures of RNA is extremely challenging. In fact, RNA-only structures represent just 1% of the total structures deposited in the PDB. Thus, computational methods that predict three-dimensional RNA structures are in high demand. Computational models can provide valuable insights into structure-function relationships in ncRNAs and can aid in the development of functional hypotheses and experimental designs. In recent years, a set of diverse RNA structure prediction tools have become available, which differ in computational time, input data and accuracy. This review discusses the recent progress and challenges in RNA structure prediction methods.

  13. A kinetic model for predicting biodegradation.

    Science.gov (United States)

    Dimitrov, S; Pavlov, T; Nedelcheva, D; Reuschenbach, P; Silvani, M; Bias, R; Comber, M; Low, L; Lee, C; Parkerton, T; Mekenyan, O

    2007-01-01

    Biodegradation plays a key role in the environmental risk assessment of organic chemicals. The need to assess biodegradability of a chemical for regulatory purposes supports the development of a model for predicting the extent of biodegradation at different time frames, in particular the extent of ultimate biodegradation within a '10 day window' criterion as well as estimating biodegradation half-lives. Conceptually this implies expressing the rate of catabolic transformations as a function of time. An attempt to correlate the kinetics of biodegradation with molecular structure of chemicals is presented. A simplified biodegradation kinetic model was formulated by combining the probabilistic approach of the original formulation of the CATABOL model with the assumption of first order kinetics of catabolic transformations. Nonlinear regression analysis was used to fit the model parameters to OECD 301F biodegradation kinetic data for a set of 208 chemicals. The new model allows the prediction of biodegradation multi-pathways, primary and ultimate half-lives and simulation of related kinetic biodegradation parameters such as biological oxygen demand (BOD), carbon dioxide production, and the nature and amount of metabolites as a function of time. The model may also be used for evaluating the OECD ready biodegradability potential of a chemical within the '10-day window' criterion.
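
    The first-order assumption makes the time course explicit. As a hedged sketch with invented parameter values (not CATABOL's fitted constants), the extent of ultimate degradation and the half-life follow directly from the rate constant:

```python
import math

# First-order biodegradation kinetics sketch (illustrative parameters).
def extent_degraded(t_days, k_per_day, lag_days=0.0):
    """Fraction ultimately degraded by time t under first-order kinetics."""
    if t_days <= lag_days:
        return 0.0
    return 1.0 - math.exp(-k_per_day * (t_days - lag_days))

k = 0.15          # assumed rate constant, 1/day
half_life = math.log(2) / k
print(f"half-life = {half_life:.1f} days")

# OECD-style '10-day window' check after a hypothetical 2-day lag:
print(f"degraded in window: {extent_degraded(12, k, lag_days=2):.0%}")
```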

  14. Disease Prediction Models and Operational Readiness

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Courtney D.; Pullum, Laura L.; Hartley, David M.; Benedum, Corey M.; Noonan, Christine F.; Rabinowitz, Peter M.; Lancaster, Mary J.

    2014-03-19

INTRODUCTION: The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. One of the primary goals of this research was to characterize the viability of biosurveillance models to provide operationally relevant information for decision makers, and to identify areas for future research. Two critical characteristics differentiate this work from other infectious disease modeling reviews. First, we reviewed models that attempted to predict the disease event, not merely its transmission dynamics. Second, we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). METHODS: We searched dozens of commercial and government databases and harvested Google search results for eligible models, utilizing terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. The publication dates of the search results are bounded by the dates of coverage of each database and the date on which the search was performed; all searching was completed by December 31, 2010. This returned 13,767 webpages and 12,152 citations. After de-duplication and removal of extraneous material, a core collection of 6,503 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. Next, PNNL's IN-SPIRE visual analytics software was used to cross-correlate these publications with the definition of a biosurveillance model, resulting in the selection of 54 documents that matched the criteria. Ten of these documents, however, dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens rather than predicting disease events. As a result, we systematically reviewed 44 papers.

  15. Nonlinear model predictive control theory and algorithms

    CERN Document Server

    Grüne, Lars

    2017-01-01

This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness. An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine—the core of any nonlinear model predictive controller—works. Accompanying software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the possibilities and limitations of NMPC.

  16. Boolean network model predicts knockout mutant phenotypes of fission yeast.

    Directory of Open Access Journals (Sweden)

    Maria I Davidich

Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequence of cell cycle activation patterns in yeasts S. pombe and S. cerevisiae are faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology as, for example, the cell cycle dynamics under various knockout mutants instead of the wild type dynamics, only. Here we show that a Boolean network model for the cell cycle control network of yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in the focus.
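
    As a generic illustration of the model class (a three-node toy network invented for this sketch, not the paper's fission-yeast wiring), each node is updated by a threshold rule over its weighted inputs, and a knockout mutant is simulated by clamping the deleted node to 0:

```python
# Toy Boolean network sketch (hypothetical 3-node wiring, not S. pombe's).
# Each node updates from the weighted sum of its inputs; ties keep state.
EDGES = {  # target: [(source, weight)], +1 activation / -1 inhibition
    "A": [("C", -1)],
    "B": [("A", +1)],
    "C": [("A", +1), ("B", +1)],
}

def step(state, knockout=None):
    new = {}
    for node, inputs in EDGES.items():
        total = sum(w * state[src] for src, w in inputs)
        new[node] = 1 if total > 0 else 0 if total < 0 else state[node]
    if knockout:
        new[knockout] = 0  # clamp the deleted gene off
    return new

state = {"A": 1, "B": 0, "C": 0}
for t in range(6):
    state = step(state, knockout="B")   # simulate a B-deletion mutant
    print(t, state)
```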

  17. Boolean Network Model Predicts Knockout Mutant Phenotypes of Fission Yeast

    Science.gov (United States)

    Davidich, Maria I.; Bornholdt, Stefan

    2013-01-01

    Boolean networks (or: networks of switches) are extremely simple mathematical models of biochemical signaling networks. Under certain circumstances, Boolean networks, despite their simplicity, are capable of predicting dynamical activation patterns of gene regulatory networks in living cells. For example, the temporal sequence of cell cycle activation patterns in yeasts S. pombe and S. cerevisiae are faithfully reproduced by Boolean network models. An interesting question is whether this simple model class could also predict a more complex cellular phenomenology as, for example, the cell cycle dynamics under various knockout mutants instead of the wild type dynamics, only. Here we show that a Boolean network model for the cell cycle control network of yeast S. pombe correctly predicts viability of a large number of known mutants. So far this had been left to the more detailed differential equation models of the biochemical kinetics of the yeast cell cycle network and was commonly thought to be out of reach for models as simplistic as Boolean networks. The new results support our vision that Boolean networks may complement other mathematical models in systems biology to a larger extent than expected so far, and may fill a gap where simplicity of the model and a preference for an overall dynamical blueprint of cellular regulation, instead of biochemical details, are in the focus. PMID:24069138

  18. Lepton Flavor Violation in Predictive SUSY-GUT Models

    Energy Technology Data Exchange (ETDEWEB)

    Albright, Carl H.; /Northern Illinois U. /Fermilab; Chen, Mu-Chun; /UC, Irvine

    2008-02-01

There have been many theoretical models constructed which aim to explain the neutrino masses and mixing patterns. While many of the models will be eliminated once more accurate determinations of the mixing parameters, especially sin²2θ₁₃, are obtained, charged lepton flavor violation (LFV) experiments are able to differentiate even further among the models. In this paper, they investigate various rare LFV processes, such as ℓᵢ → ℓⱼ + γ and μ–e conversion, in five predictive SUSY SO(10) models and their allowed soft SUSY-breaking parameter space in the constrained minimal SUSY standard model (CMSSM). Utilizing the WMAP dark matter constraints, they obtain lower bounds on the branching ratios of these rare processes and find that at least three of the five models they consider give rise to predictions for μ → e + γ that will be tested by the MEG collaboration at PSI. In addition, the next-generation μ–e conversion experiment has sensitivity to the predictions of all five models, making it an even more robust way to test these models. While generic studies have emphasized the dependence of the branching ratios of these rare processes on the reactor neutrino angle, θ₁₃, and the mass of the heaviest right-handed neutrino, M₃, they find that a very massive M₃ is more significant than a large θ₁₃ in leading to branching ratios near the present upper limits.

  19. Predictive Modeling in Actinide Chemistry and Catalysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-16

    These are slides from a presentation on predictive modeling in actinide chemistry and catalysis. The following topics are covered in these slides: Structures, bonding, and reactivity (bonding can be quantified by optical probes and theory, and electronic structures and reaction mechanisms of actinide complexes); Magnetic resonance properties (transition metal catalysts with multi-nuclear centers, and NMR/EPR parameters); Moving to more complex systems (surface chemistry of nanomaterials, and interactions of ligands with nanoparticles); Path forward and conclusions.

  20. PEEX Modelling Platform for Seamless Environmental Prediction

    Science.gov (United States)

    Baklanov, Alexander; Mahura, Alexander; Arnold, Stephen; Makkonen, Risto; Petäjä, Tuukka; Kerminen, Veli-Matti; Lappalainen, Hanna K.; Ezau, Igor; Nuterman, Roman; Zhang, Wen; Penenko, Alexey; Gordov, Evgeny; Zilitinkevich, Sergej; Kulmala, Markku

    2017-04-01

The Pan-Eurasian EXperiment (PEEX) is a multidisciplinary, multi-scale research programme started in 2012 and aimed at resolving the major uncertainties in Earth System Science and global sustainability issues concerning the Arctic and boreal Northern Eurasian regions and China. Such challenges include climate change, air quality, biodiversity loss, chemicalization, food supply, and the use of natural resources by mining, industry, energy production and transport. The research infrastructure introduces the current state-of-the-art modeling platform and observation systems in the Pan-Eurasian region and presents the future baselines for coherent and coordinated research infrastructures in the PEEX domain. The PEEX modeling platform is characterized by a complex, seamless, integrated Earth System Modeling (ESM) approach, in combination with specific models of different processes and elements of the system, acting on different temporal and spatial scales. An ensemble approach is taken to the integration of modeling results from different models, participants and countries. PEEX utilizes the full potential of a hierarchy of models: scenario analysis, inverse modeling, and modeling based on measurement needs and processes. The models are validated and constrained by available in-situ and remote sensing data of various spatial and temporal scales using data assimilation and top-down modeling. The analyses of the anticipated large volumes of data produced by the available models and sensors will be supported by a dedicated virtual research environment developed for these purposes.

  1. Probabilistic prediction models for aggregate quarry siting

    Science.gov (United States)

    Robinson, G.R.; Larkins, P.M.

    2007-01-01

Weights-of-evidence (WofE) and logistic regression techniques were used in a GIS framework to predict the spatial likelihood (prospectivity) of crushed-stone aggregate quarry development. The joint conditional probability models, based on geology, transportation network, and population density variables, were defined using quarry location and time of development data for the New England States, North Carolina, and South Carolina, USA. The Quarry Operation models describe the distribution of active aggregate quarries, independent of the date of opening. The New Quarry models describe the distribution of aggregate quarries when they open. Because of the small number of new quarries developed in the study areas during the last decade, independent New Quarry models have low parameter estimate reliability. The performance of parameter estimates derived for Quarry Operation models, defined by a larger number of active quarries in the study areas, was tested and evaluated to predict the spatial likelihood of new quarry development. Population density conditions at the time of new quarry development were used to modify the population density variable in the Quarry Operation models to apply to new quarry development sites. The Quarry Operation parameters derived for the New England study area, Carolina study area, and the combined New England and Carolina study areas were all similar in magnitude and relative strength. The Quarry Operation model parameters, using the modified population density variables, were found to be a good predictor of new quarry locations. Both the aggregate industry and the land management community can use the model approach to target areas for more detailed site evaluation for quarry location. The models can be revised easily to reflect actual or anticipated changes in transportation and population features. © International Association for Mathematical Geology 2007.

  2. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc., are basically stochastic, although it is quite common to adopt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, and it pinpoints which decisions to be concerned about when the goal is to predict footbridge response. The studies involve estimating footbridge responses using Monte Carlo simulations, and focus is on estimating vertical structural response to single-person loading.

  3. Nonconvex Model Predictive Control for Commercial Refrigeration

    DEFF Research Database (Denmark)

    Hovgaard, Tobias Gybel; Larsen, Lars F.S.; Jørgensen, John Bagterp

    2013-01-01

The goal is to minimize the total energy cost, using real-time electricity prices, while obeying temperature constraints on the zones. We propose a variation on model predictive control to achieve this goal. When the right variables are used, the dynamics of the system are linear and the constraints are convex; the cost, however, is nonconvex. We handle the nonconvexity with a sequential convex optimization heuristic that typically converges within a few iterations, which is more than fast enough to run in real time. We demonstrate our method on a realistic model, with a full-year simulation and 15-minute time periods, using historical electricity prices and weather data, as well as random variations in thermal load. These simulations show substantial cost savings.
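
    A simplified, fully convex variant of such a price-driven, temperature-constrained MPC problem can be written with cvxpy as below. This is a sketch for a single zone with invented linear dynamics, prices, and limits; it omits the nonconvex cost term that is the actual subject of the record.

```python
import cvxpy as cp
import numpy as np

# Simplified (convex) price-driven MPC sketch for one refrigeration zone.
# All dynamics coefficients, prices, and limits are invented for illustration.
T_horizon = 24                      # hours
price = 0.2 + 0.1 * np.sin(np.linspace(0, 2 * np.pi, T_horizon))  # $/kWh
a, b = 0.9, -0.05                   # assumed linear thermal dynamics
T_out = 15.0                        # assumed ambient heat-load term

temp = cp.Variable(T_horizon + 1)   # zone temperature (deg C)
power = cp.Variable(T_horizon)      # cooling power (kW)

constraints = [temp[0] == 4.0]
for t in range(T_horizon):
    # T[t+1] = a*T[t] + b*power[t] + (1-a)*T_out  (toy linear model)
    constraints += [temp[t + 1] == a * temp[t] + b * power[t] + (1 - a) * T_out]
constraints += [temp >= 2.0, temp <= 5.0, power >= 0, power <= 50]

cost = price @ power                # energy cost over the horizon
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(f"optimal cost = {prob.value:.2f}, peak power = {power.value.max():.1f} kW")
```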

  4. Comparing Sediment Yield Predictions from Different Hydrologic Modeling Schemes

    Science.gov (United States)

    Dahl, T. A.; Kendall, A. D.; Hyndman, D. W.

    2015-12-01

    Sediment yield, or the delivery of sediment from the landscape to a river, is a difficult process to accurately model. It is primarily a function of hydrology and climate, but influenced by landcover and the underlying soils. These additional factors make it much more difficult to accurately model than water flow alone. It is not intuitive what impact different hydrologic modeling schemes may have on the prediction of sediment yield. Here, two implementations of the Modified Universal Soil Loss Equation (MUSLE) are compared to examine the effects of hydrologic model choice. Both the Soil and Water Assessment Tool (SWAT) and the Landscape Hydrology Model (LHM) utilize the MUSLE for calculating sediment yield. SWAT is a lumped parameter hydrologic model developed by the USDA, which is commonly used for predicting sediment yield. LHM is a fully distributed hydrologic model developed primarily for integrated surface and groundwater studies at the watershed to regional scale. SWAT and LHM models were developed and tested for two large, adjacent watersheds in the Great Lakes region; the Maumee River and the St. Joseph River. The models were run using a variety of single model and ensemble downscaled climate change scenarios from the Coupled Model Intercomparison Project 5 (CMIP5). The initial results of this comparison are discussed here.
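
    For reference, MUSLE replaces USLE's rainfall-energy factor with a runoff term. In the form commonly used in SWAT (quoted from memory here, so the constants should be treated as approximate), the event sediment yield is:

```latex
\[
  \mathrm{sed} = 11.8\,\left(Q_{\mathrm{surf}}\cdot q_{\mathrm{peak}}\cdot A_{\mathrm{hru}}\right)^{0.56}
  \cdot K \cdot C \cdot P \cdot LS \cdot \mathrm{CFRG}
\]
% sed: event sediment yield (metric tons); Q_surf: surface runoff volume (mm);
% q_peak: peak runoff rate (m^3/s); A_hru: contributing area (ha); K, C, P, LS,
% CFRG: the usual USLE soil, cover, practice, topographic and coarse-fragment factors.
```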

  5. Predictive In Vivo Models for Oncology.

    Science.gov (United States)

    Behrens, Diana; Rolff, Jana; Hoffmann, Jens

    2016-01-01

Experimental oncology research and preclinical drug development both depend substantially on specific, clinically relevant in vitro and in vivo tumor models. The increasing knowledge about the heterogeneity of cancer has required a substantial restructuring of the test systems for the different stages of development. To cope with the complexity of the disease, larger panels of patient-derived tumor models have to be implemented and extensively characterized. Together with individual genetically engineered tumor models, and supported by core functions for expression profiling and data analysis, an integrated discovery process has been generated for predictive and personalized drug development. Improved “humanized” mouse models should help to overcome the current limitations imposed by the xenogeneic barrier between humans and mice. Establishment of a functional human immune system and a corresponding human microenvironment in laboratory animals will strongly support further research. Drug discovery, systems biology, and translational research are moving closer together to address all the new hallmarks of cancer, increase the success rate of drug development, and increase the predictive value of preclinical models.

  6. Constructing predictive models of human running.

    Science.gov (United States)

    Maus, Horst-Moritz; Revzen, Shai; Guckenheimer, John; Ludwig, Christian; Reger, Johann; Seyfarth, Andre

    2015-02-06

Running is an essential mode of human locomotion, during which ballistic aerial phases alternate with phases when a single foot contacts the ground. The spring-loaded inverted pendulum (SLIP) provides a starting point for modelling running, and generates ground reaction forces that resemble those of the centre of mass (CoM) of a human runner. Here, we show that while SLIP reproduces within-step kinematics of the CoM in three dimensions, it fails to reproduce stability and predict future motions. We construct SLIP control models using data-driven Floquet analysis, and show how these models may be used to obtain predictive models of human running with six additional states comprising the position and velocity of the swing-leg ankle. Our methods are general, and may be applied to any rhythmic physical system. We provide an approach for identifying an event-driven linear controller that approximates an observed stabilization strategy, and for producing a reduced-state model which closely recovers the observed dynamics.

  7. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a field widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.

  8. Online Prediction Under Model Uncertainty via Dynamic Model Averaging: Application to a Cold Rolling Mill.

    Science.gov (United States)

    Raftery, Adrian E; Kárný, Miroslav; Ettler, Pavel

    2010-02-01

    We consider the problem of online prediction when it is uncertain what the best prediction model to use is. We develop a method called Dynamic Model Averaging (DMA) in which a state space model for the parameters of each model is combined with a Markov chain model for the correct model. This allows the "correct" model to vary over time. The state space and Markov chain models are both specified in terms of forgetting, leading to a highly parsimonious representation. As a special case, when the model and parameters do not change, DMA is a recursive implementation of standard Bayesian model averaging, which we call recursive model averaging. The method is applied to the problem of predicting the output strip thickness for a cold rolling mill, where the output is measured with a time delay. We found that when only a small number of physically motivated models were considered and one was clearly best, the method quickly converged to the best model, and the cost of model uncertainty was small; indeed DMA performed slightly better than the best physical model. When model uncertainty and the number of models considered were large, our method ensured that the penalty for model uncertainty was small. At the beginning of the process, when control is most difficult, we found that DMA over a large model space led to better predictions than the single best performing physically motivated model. We also applied the method to several simulated examples, and found that it recovered both constant and time-varying regression parameters and model specifications quite well.
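
    A hedged sketch of the core DMA recursion described above, covering the model-probability update only (the per-model state-space filters are omitted, and the predictive densities are invented inputs):

```python
import numpy as np

# Dynamic Model Averaging weight recursion (sketch).
# alpha < 1 is the forgetting factor; pred_densities[k] is model k's
# one-step predictive density for the new observation (assumed given).
def dma_update(weights, pred_densities, alpha=0.99):
    """One DMA step: forget, then condition on the new observation."""
    w = weights ** alpha
    w /= w.sum()                      # "forgotten" prior over models
    w = w * pred_densities            # multiply by predictive likelihoods
    return w / w.sum()                # posterior model probabilities

w = np.array([0.5, 0.3, 0.2])         # current model probabilities
f = np.array([0.8, 1.4, 0.1])         # hypothetical predictive densities
print(dma_update(w, f))               # the second model gains weight
```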

  9. A Preliminary Evaluation of Season-ahead Flood Prediction Conditioned on Large-scale Climate Drivers

    Science.gov (United States)

    Lee, Donghoon; Ward, Philip; Block, Paul

    2016-04-01

Globally, flood disasters lead all natural hazards in terms of impacts on society, causing billions of dollars of damages each year. Typically, short-term forecasts emphasize immediate emergency actions; longer-range forecasts, on the order of months to seasons, can complement short-term forecasts by focusing on disaster preparedness. In this study, the inter-annual variability of large-scale climate drivers (e.g., ENSO) is investigated to understand the prospects for skillful season-ahead flood prediction globally using PCR-GLOBWB modeled simulations. For example, global gridded correlations between discharge and Nino 3.4 are calculated, with notably strong correlations in the northwestern (-0.4 to -0.6) and southeastern (0.4 to 0.6) United States, and the Amazon river basin (-0.6 to -0.8). Coupled interactions from multiple, simultaneous climate drivers are also evaluated. Skillful prediction has the potential to estimate season-ahead flood probabilities, flood extent, and damages, and eventually to integrate into early warning systems. This global approach is especially attractive for areas with limited observations and/or little capacity to develop early flood warning systems.

  10. Predictive modeling by the cerebellum improves proprioception.

    Science.gov (United States)

    Bhanpuri, Nasir H; Okamura, Allison M; Bastian, Amy J

    2013-09-04

    Because sensation is delayed, real-time movement control requires not just sensing, but also predicting limb position, a function hypothesized for the cerebellum. Such cerebellar predictions could contribute to perception of limb position (i.e., proprioception), particularly when a person actively moves the limb. Here we show that human cerebellar patients have proprioceptive deficits compared with controls during active movement, but not when the arm is moved passively. Furthermore, when healthy subjects move in a force field with unpredictable dynamics, they have active proprioceptive deficits similar to cerebellar patients. Therefore, muscle activity alone is likely insufficient to enhance proprioception and predictability (i.e., an internal model of the body and environment) is important for active movement to benefit proprioception. We conclude that cerebellar patients have an active proprioceptive deficit consistent with disrupted movement prediction rather than an inability to generally enhance peripheral proprioceptive signals during action and suggest that active proprioceptive deficits should be considered a fundamental cerebellar impairment of clinical importance.

  11. The unified minimal supersymmetric model with large Yukawa couplings

    CERN Document Server

    Rattazzi, Riccardo

    1996-01-01

    The consequences of assuming the third-generation Yukawa couplings are all large and comparable are studied in the context of the minimal supersymmetric extension of the standard model. General aspects of the RG evolution of the parameters, theoretical constraints needed to ensure proper electroweak symmetry breaking, and experimental and cosmological bounds on low-energy parameters are presented. We also present complete and exact semi-analytic solutions to the 1-loop RG equations. Focusing on SU(5) or SO(10) unification, we analyze the relationship between the top and bottom masses and the superspectrum, and the phenomenological implications of the GUT conditions on scalar masses. Future experimental measurements of the superspectrum and of the strong coupling will distinguish between various GUT-scale scenarios. And if present experimental knowledge is to be accounted for most naturally, a particular set of predictions is singled out.

  12. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing.

  13. RNA-Puzzles Round II: assessment of RNA structure prediction programs applied to three large RNA structures.

    Science.gov (United States)

    Miao, Zhichao; Adamiak, Ryszard W; Blanchet, Marc-Frédérick; Boniecki, Michal; Bujnicki, Janusz M; Chen, Shi-Jie; Cheng, Clarence; Chojnowski, Grzegorz; Chou, Fang-Chieh; Cordero, Pablo; Cruz, José Almeida; Ferré-D'Amaré, Adrian R; Das, Rhiju; Ding, Feng; Dokholyan, Nikolay V; Dunin-Horkawicz, Stanislaw; Kladwang, Wipapat; Krokhotin, Andrey; Lach, Grzegorz; Magnus, Marcin; Major, François; Mann, Thomas H; Masquida, Benoît; Matelska, Dorota; Meyer, Mélanie; Peselis, Alla; Popenda, Mariusz; Purzycka, Katarzyna J; Serganov, Alexander; Stasiewicz, Juliusz; Szachniuk, Marta; Tandon, Arpit; Tian, Siqi; Wang, Jian; Xiao, Yi; Xu, Xiaojun; Zhang, Jinwei; Zhao, Peinan; Zok, Tomasz; Westhof, Eric

    2015-06-01

    This paper is a report of a second round of RNA-Puzzles, a collective and blind experiment in three-dimensional (3D) RNA structure prediction. Three puzzles, Puzzles 5, 6, and 10, represented sequences of three large RNA structures with limited or no homology with previously solved RNA molecules. A lariat-capping ribozyme, as well as riboswitches complexed to adenosylcobalamin and tRNA, were predicted by seven groups using RNAComposer, ModeRNA/SimRNA, Vfold, Rosetta, DMD, MC-Fold, 3dRNA, and AMBER refinement. Some groups derived models using data from state-of-the-art chemical-mapping methods (SHAPE, DMS, CMCT, and mutate-and-map). The comparisons between the predictions and the three subsequently released crystallographic structures, solved at diffraction resolutions of 2.5-3.2 Å, were carried out automatically using various sets of quality indicators. The comparisons clearly demonstrate the state of present-day de novo prediction abilities as well as the limitations of these state-of-the-art methods. All of the best prediction models have similar topologies to the native structures, which suggests that computational methods for RNA structure prediction can already provide useful structural information for biological problems. However, the prediction accuracy for non-Watson-Crick interactions, key to proper folding of RNAs, is low and some predicted models had high Clash Scores. These two difficulties point to some of the continuing bottlenecks in RNA structure prediction. All submitted models are available for download at http://ahsoka.u-strasbg.fr/rnapuzzles/.

  15. Gamma-Ray Pulsars Models and Predictions

    CERN Document Server

    Harding, A K

    2001-01-01

Pulsed emission from gamma-ray pulsars originates inside the magnetosphere, from radiation by charged particles accelerated near the magnetic poles or in the outer gaps. In polar cap models, the high energy spectrum is cut off by magnetic pair production above an energy that is dependent on the local magnetic field strength. While most young pulsars with surface fields in the range B = 10¹²–10¹³ G are expected to have high energy cutoffs around several GeV, the gamma-ray spectra of old pulsars having lower surface fields may extend to 50 GeV. Although the gamma-ray emission of older pulsars is weaker, detecting pulsed emission at high energies from nearby sources would be an important confirmation of polar cap models. Outer gap models predict more gradual high-energy turnovers at around 10 GeV, but also predict an inverse Compton component extending to TeV energies. Detection of pulsed TeV emission, which would not survive attenuation at the polar caps, is thus an important test of outer gap models.

  16. Artificial Neural Network Model for Predicting Compressive Strength of Concrete

    Directory of Open Access Journals (Sweden)

    Salim T. Yousif

    2013-05-01

Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
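
    A minimal sketch of this kind of strength model using scikit-learn's multilayer perceptron; the feature set mirrors the abstract (mix proportions, maximum aggregate size, slump), but every number, including the synthetic target, is invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic mixes: cement, water, fine agg, coarse agg (kg/m^3), MAS (mm), slump (mm)
X = rng.uniform([250, 140, 600, 900, 10, 50],
                [500, 220, 900, 1200, 40, 200], size=(300, 6))
# Invented target: strength (MPa) rises with the cement/water ratio.
y = 5.0 + 12.0 * (X[:, 0] / X[:, 1]) + rng.normal(0, 2, 300)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X[:250], y[:250])
pred = model.predict(X[250:])
print(f"max abs error: {np.abs(pred - y[250:]).max():.1f} MPa")
```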

  17. Ground Motion Prediction Models for Caucasus Region

    Science.gov (United States)

    Jorjiashvili, Nato; Godoladze, Tea; Tvaradze, Nino; Tumanova, Nino

    2016-04-01

Ground motion prediction models (GMPMs) relate ground motion intensity measures to variables describing earthquake source, path, and site effects. Estimation of expected ground motion is fundamental to earthquake hazard assessment. The most commonly used parameters in attenuation relations are peak ground acceleration and spectral acceleration, because these parameters give useful information for seismic hazard assessment. Development of the Georgian Digital Seismic Network began in 2003. In this study, new GMP models are obtained based on new data from the Georgian seismic network and from neighboring countries. The models are estimated in the classical statistical way, by regression analysis. Site ground conditions are additionally considered, because the same earthquake recorded at the same distance may cause different damage depending on ground conditions. Empirical ground-motion prediction models (GMPMs) require adjustment to make them appropriate for site-specific scenarios. However, the process of making such adjustments remains a challenge. This work presents a holistic framework for the development of a peak ground acceleration (PGA) or spectral acceleration (SA) GMPE that is easily adjustable to different seismological conditions and does not suffer from the practical problems associated with adjustments in the response spectral domain.

  18. Modeling and Prediction of Krueger Device Noise

    Science.gov (United States)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    This paper presents the development of a noise prediction model for aircraft Krueger flap devices that are considered as alternatives to leading edge slotted slats. The prediction model decomposes the total Krueger noise into four components, generated by the unsteady flows, respectively, in the cove under the pressure side surface of the Krueger, in the gap between the Krueger trailing edge and the main wing, around the brackets supporting the Krueger device, and around the cavity on the lower side of the main wing. For each noise component, the modeling follows a physics-based approach that aims at capturing the dominant noise-generating features in the flow and developing correlations between the noise and the flow parameters that control the noise generation processes. The far field noise is modeled using each of the four noise component's respective spectral functions, far field directivities, Mach number dependencies, component amplitudes, and other parametric trends. Preliminary validations are carried out by using small scale experimental data, and two applications are discussed; one for conventional aircraft and the other for advanced configurations. The former focuses on the parametric trends of Krueger noise on design parameters, while the latter reveals its importance in relation to other airframe noise components.

  19. A generative model for predicting terrorist incidents

    Science.gov (United States)

    Verma, Dinesh C.; Verma, Archit; Felmlee, Diane; Pearson, Gavin; Whitaker, Roger

    2017-05-01

    A major concern in coalition peace-support operations is the incidence of terrorist activity. In this paper, we propose a generative model for the occurrence of terrorist incidents, and illustrate that an increase in diversity, as measured by the number of different social groups to which an individual belongs, is inversely correlated with the likelihood of a terrorist incident in the society. A generative model is one that can predict the likelihood of events in new contexts, as opposed to statistical models, which predict future incidents based on the history of incidents in an existing context. Generative models can be useful in planning for persistent Intelligence, Surveillance and Reconnaissance (ISR), since they allow an estimation of the regions in the theater of operation where terrorist incidents may arise, and thus can be used to better allocate the assignment and deployment of ISR assets. In this paper, we present a taxonomy of terrorist incidents, identify factors related to the occurrence of terrorist incidents, and provide a mathematical analysis calculating the likelihood of occurrence of terrorist incidents in three common real-life scenarios arising in peace-keeping operations.

  20. Long-Term Prediction of Large Earthquakes: When Does Quasi-Periodic Behavior Occur?

    Science.gov (United States)

    Sykes, L. R.

    2003-12-01

    every great earthquake. The 2002 Working Group on large earthquakes in the San Francisco Bay region followed Ellsworth et al. (1999) in adopting much larger values of CV for several critical fault segments, thereby underestimating their likelihood of rupture in the next 30 years. The Working Group also gives considerable weight to a Poisson model, which is in conflict both with renewal processes involving slow stress accumulation and with values of CV near 0.2. The failure of the Parkfield prediction has greatly influenced views in the U.S. about long-term forecasts. The model of the repeated breaking of a single asperity is incorrect, since past Parkfield shocks of about magnitude 6 likely did not rupture the same part of the San Andreas fault.

  1. Validating the Runoff from the PRECIS Model Using a Large-Scale Routing Model

    Institute of Scientific and Technical Information of China (English)

    CAO Lijuan; DONG Wenjie; XU Yinlong; ZHANG Yong; Michael SPARROW

    2007-01-01

    The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model, driven by 15 years (1979-1993) of ECMWF reanalysis data as the initial and lateral boundary conditions, and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers at a 1° × 1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast-to-northwest gradient of precipitation over the Yellow River basin. The PRECIS-LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow in both timing and magnitude.
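
    One conventional way to quantify "coincident in timing and magnitude" between simulated and naturalized series is the Nash-Sutcliffe efficiency; the sketch below, with placeholder numbers rather than the study's data, shows the computation.

```python
# Nash-Sutcliffe efficiency between simulated and observed streamflow.
import numpy as np

def nash_sutcliffe(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([120, 300, 850, 600, 220, 150.0])   # naturalized flow (m^3/s)
sim = np.array([100, 330, 800, 640, 250, 140.0])   # simulated flow (m^3/s)
print(f"NSE = {nash_sutcliffe(sim, obs):.2f}")     # 1.0 is a perfect match
```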

  2. Optimal feedback scheduling of model predictive controllers

    Institute of Scientific and Technical Information of China (English)

    Pingfang ZHOU; Jianying XIE; Xiaolong DENG

    2006-01-01

    Model predictive control (MPC) cannot be reliably applied to real-time control systems because its computation time is not well defined. Implemented as an anytime algorithm, an MPC task allows computation time to be traded for control performance, thus obtaining predictability in time. Optimal feedback scheduling (FS-CBS) of a set of MPC tasks is presented to maximize the global control performance subject to limited processor time. Each MPC task is assigned a constant bandwidth server (CBS), whose reserved processor time is adjusted dynamically. The constraints in the FS-CBS guarantee schedulability of the total task set and stability of each component. The FS-CBS is shown to be robust against runtime variations in the execution times of the MPC tasks. Simulation results illustrate its effectiveness.

  3. Objective calibration of numerical weather prediction models

    Science.gov (United States)

    Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.

    2017-07-01

    Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly constrained parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), previously applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, this study presents an approach for applying the methodology to an NWP model. The challenges in transferring the methodology from RCM to NWP are not restricted to the higher resolution and different time scales: the sensitivity of NWP model quality to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the computing resources required to calibrate an NWP model. Three free model parameters, mainly affecting the turbulence parameterization schemes, were selected for their influence on variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature and 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computing resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or to customize the same model implementation over different climatological areas.
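
    A minimal sketch of the quadratic meta-model idea, under the assumption of a scalar forecast-error score and three free parameters; the synthetic score function below stands in for actual model runs.

```python
# Fit a quadratic response surface to sampled (parameters, score) pairs,
# then take its minimiser as the calibrated parameter set.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def design(P):                      # quadratic basis in 3 parameters
    p1, p2, p3 = P.T
    return np.column_stack([np.ones(len(P)), p1, p2, p3,
                            p1*p1, p2*p2, p3*p3, p1*p2, p1*p3, p2*p3])

P = rng.uniform(-1, 1, size=(30, 3))          # sampled parameter sets
score = ((P - 0.3) ** 2).sum(axis=1) + rng.normal(0, 0.01, 30)  # "runs"

beta, *_ = np.linalg.lstsq(design(P), score, rcond=None)
mm = lambda p: float(design(p.reshape(1, 3)) @ beta)   # the meta-model
best = minimize(mm, x0=np.zeros(3)).x
print("calibrated parameters ~", np.round(best, 2))    # near (0.3, 0.3, 0.3)
```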

  4. Reionization on Large Scales IV: Predictions for the 21 cm signal incorporating the light cone effect

    CERN Document Server

    La Plante, Paul; Natarajan, Aravind; Peterson, Jeffrey B; Trac, Hy; Cen, Renyue; Loeb, Abraham

    2013-01-01

    We present predictions for the 21 cm brightness temperature power spectrum during the Epoch of Reionization (EoR). We discuss the implications of the "light cone" effect, which incorporates evolution of the 21 cm brightness temperature along the line of sight. Using a novel method calibrated against radiation-hydrodynamic simulations, we model the neutral hydrogen density field and 21 cm signal in large volumes (L = 2 Gpc/h). The inclusion of the light cone effect leads to a relative increase of 2-3 orders of magnitude in the 21 cm signal power spectrum on large scales (k < 0.1 h/Mpc). When we modify the power spectrum to more closely reflect real-world measurement capabilities, we find that the light cone effect leads to a relative decrease of order unity at all scales. The light cone effect also introduces an anisotropy parallel to the line of sight. By decomposing the 3D power spectrum into components perpendicular and parallel to the line of sight, we find that parallel modes contribute about an order ...

  5. Prediction models from CAD models of 3D objects

    Science.gov (United States)

    Camps, Octavia I.

    1992-11-01

    In this paper we present a probabilistic prediction based approach for CAD-based object recognition. Given a CAD model of an object, the PREMIO system combines techniques of analytic graphics and physical models of lights and sensors to predict how features of the object will appear in images. In nearly 4,000 experiments on analytically-generated and real images, we show that in a semi-controlled environment, predicting the detectability of features of the image can successfully guide a search procedure to make informed choices of model and image features in its search for correspondences that can be used to hypothesize the pose of the object. Furthermore, we provide a rigorous experimental protocol that can be used to determine the optimal number of correspondences to seek so that the probability of failing to find a pose and of finding an inaccurate pose are minimized.

  6. Model predictive control of MSMPR crystallizers

    Science.gov (United States)

    Moldoványi, Nóra; Lakatos, Béla G.; Szeifert, Ferenc

    2005-02-01

    A multi-input-multi-output (MIMO) control problem for isothermal continuous crystallizers is addressed in order to create an adequate model-based control system. The moment equation model of mixed suspension, mixed product removal (MSMPR) crystallizers, which forms a dynamical system, is used; its state is represented by a vector of six variables: the first four leading moments of the crystal size, the solute concentration, and the solvent concentration. Hence, the time evolution of the system occurs in a bounded region of the six-dimensional phase space. The controlled variables are the mean grain size and the crystal size distribution; the manipulated variables are the input concentration of the solute and the flow rate. The controllability and observability, as well as the coupling between the inputs and the outputs, were analyzed by simulation using the linearized model. It is shown that the crystallizer is a nonlinear MIMO system with strong coupling between the state variables. Considering the possibilities of model reduction, a third-order model was found quite adequate for model estimation in model predictive control (MPC). The mean crystal size and the variance of the size distribution can be nearly separately controlled by the residence time and the inlet solute concentration, respectively. By seeding, the controllability of the crystallizer increases significantly, and the overshoots and oscillations become smaller. The results of the control study show that linear MPC is an adaptable and feasible controller for continuous crystallizers.
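
    As a generic illustration of the MPC machinery invoked here (not the paper's crystallizer model), the following sketch solves an unconstrained receding-horizon problem for a toy two-input/two-output linear system.

```python
# Generic receding-horizon sketch: minimize sum ||y - r||^2 + rho*||u||^2
# over a horizon by least squares, apply the first move, repeat.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.5, 0.0], [0.0, 0.3]])
C = np.eye(2)
N, rho = 10, 0.01
r = np.array([1.0, 0.5])          # setpoint (e.g. mean size, variance)

def mpc_step(x):
    nu = B.shape[1]
    # Free response Phi x and forced-response matrix G over the horizon.
    Phi = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    G = np.zeros((2 * N, nu * N))
    for k in range(1, N + 1):
        for j in range(k):
            G[2*(k-1):2*k, nu*j:nu*(j+1)] = C @ np.linalg.matrix_power(A, k-1-j) @ B
    H = np.vstack([G, np.sqrt(rho) * np.eye(nu * N)])
    b = np.concatenate([np.tile(r, N) - Phi @ x, np.zeros(nu * N)])
    return np.linalg.lstsq(H, b, rcond=None)[0][:nu]   # first move only

x = np.zeros(2)
for _ in range(30):
    x = A @ x + B @ mpc_step(x)
print("steady output ~", np.round(C @ x, 3))
```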

  7. An Anisotropic Hardening Model for Springback Prediction

    Science.gov (United States)

    Zeng, Danielle; Xia, Z. Cedric

    2005-08-01

    As more Advanced High-Strength Steels (AHSS) are heavily used for automotive body structures and closure panels, accurate springback prediction for these components becomes more challenging because of their rapid hardening characteristics and ability to sustain even higher stresses. In this paper, a modified Mroz hardening model is proposed to capture the realistic Bauschinger effect at reverse loading, such as when material passes through die radii or a drawbead during the sheet metal forming process. The model accounts for an anisotropic material yield surface and nonlinear isotropic/kinematic hardening behavior. Material tension/compression test data are used to accurately represent the Bauschinger effect. The effectiveness of the model is demonstrated by comparing numerical and experimental springback results for a DP600 straight U-channel test.

  8. Thermal Storage Power Balancing with Model Predictive Control

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Poulsen, Niels Kjølstad; Madsen, Henrik

    2013-01-01

    The method described in this paper balances power production and consumption with a large number of thermal loads. Linear controllers are used for the loads to track a temperature set point, while Model Predictive Control (MPC) and model estimation of the load behavior are used for coordination. The total power consumption of all loads is controlled indirectly through a real-time price. The MPC incorporates forecasts of the power production and disturbances that influence the loads, e.g. time-varying weather forecasts, in order to react ahead of time. A simulation scenario demonstrates…

  9. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Directory of Open Access Journals (Sweden)

    Göran Ståhl

    2016-02-01

    Full Text Available This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where models play a core role: model-assisted, model-based, and hybrid estimation. The first two are well known, whereas the third has only recently been introduced in forest surveys. Hybrid inference mixes design-based and model-based inference, since it relies on a probability sample of auxiliary data and a model predicting the target variable from the auxiliary data. We review studies on large-area forest surveys based on model-assisted, model-based, and hybrid estimation, and discuss advantages and disadvantages of the approaches. We conclude that no general recommendations can be made about whether model-assisted, model-based, or hybrid estimation should be preferred. The choice depends on the objective of the survey and the possibilities to acquire appropriate field and remotely sensed data. We also conclude that modelling approaches can only be successfully applied for estimating target variables such as growing stock volume or biomass, which are adequately related to commonly available remotely sensed data; thus purely field-based surveys remain important for several important forest parameters. Keywords: Design-based inference, Model-assisted estimation, Model-based inference, Hybrid inference, National forest inventory, Remote sensing, Sampling
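
    The model-assisted idea can be made concrete with a toy difference estimator, in which model predictions are summed over the whole population and a design-weighted sum of sampled residuals corrects their bias; all numbers below are synthetic.

```python
# Toy model-assisted (generalized difference) estimator of a population
# total, with a deliberately biased "remote sensing" model.
import numpy as np

rng = np.random.default_rng(3)
N = 10_000                                  # population of plots
true_vol = rng.gamma(4.0, 50.0, N)          # "true" growing-stock volume
model_pred = 0.8 * true_vol + 20            # biased model prediction

n = 200                                     # simple random sample, pi = n/N
s = rng.choice(N, n, replace=False)

# Sum of predictions everywhere + design-weighted sum of sampled residuals.
t_ma = model_pred.sum() + (N / n) * (true_vol[s] - model_pred[s]).sum()
print(f"truth {true_vol.sum():.3e}  model-only {model_pred.sum():.3e}  "
      f"model-assisted {t_ma:.3e}")
```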

  10. Improved residue contact prediction using support vector machines and a large feature set

    Directory of Open Access Journals (Sweden)

    Baldi Pierre

    2007-04-01

    Full Text Available Abstract Background Predicting protein residue-residue contacts is an important 2D prediction task. It is useful for ab initio structure prediction and for understanding protein folding. In spite of steady progress over the past decade, contact prediction remains largely unsolved. Results Here we develop a new contact map predictor (SVMcon) that uses support vector machines to predict medium- and long-range contacts. SVMcon integrates profiles, secondary structure, relative solvent accessibility, contact potentials, and other useful features. On the same test data set, SVMcon's accuracy is 4% higher than the latest version of the CMAPpro contact map predictor. SVMcon recently participated in the seventh edition of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7) experiment and was evaluated along with seven other contact map predictors. SVMcon was ranked as one of the top predictors, yielding the second best coverage and accuracy for contacts with sequence separation >= 12 on 13 de novo domains. Conclusion We describe SVMcon, a new contact map predictor that uses SVMs and a large set of informative features. SVMcon yields good performance on medium- to long-range contact predictions and can be modularly incorporated into a structure prediction pipeline.
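
    Schematically, the core of such a predictor is a binary SVM over per-residue-pair feature vectors; in the sketch below the real SVMcon features (profiles, secondary structure, solvent accessibility, contact potentials) are replaced by random stand-ins.

```python
# Schematic binary SVM for contact/no-contact classification on
# synthetic feature vectors (not the SVMcon feature set).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n, d = 500, 40                       # residue-pair examples, feature length
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = (X @ w + rng.normal(0, 0.5, n) > 0).astype(int)   # 1 = in contact

clf = SVC(kernel="rbf", C=1.0).fit(X[:400], y[:400])
print(f"held-out accuracy: {clf.score(X[400:], y[400:]):.2f}")
```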

  11. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  12. Ontology-based tools to expedite predictive model construction.

    Science.gov (United States)

    Haug, Peter; Holmen, John; Wu, Xinzi; Mynam, Kumar; Ebert, Matthew; Ferraro, Jeffrey

    2014-01-01

    Large amounts of medical data are collected electronically during the course of caring for patients using modern medical information systems. This data presents an opportunity to develop clinically useful tools through data mining and observational research studies. However, the work necessary to make sense of this data and to integrate it into a research initiative can require substantial effort from medical experts as well as from experts in medical terminology, data extraction, and data analysis. This slows the process of medical research. To reduce the effort required for the construction of computable, diagnostic predictive models, we have developed a system that hybridizes a medical ontology with a large clinical data warehouse. Here we describe components of this system designed to automate the development of preliminary diagnostic models and to provide visual clues that can assist the researcher in planning for further analysis of the data behind these models.

  13. Advances in large-scale crop modeling

    Science.gov (United States)

    Scholze, Marko; Bondeau, Alberte; Ewert, Frank; Kucharik, Chris; Priess, Jörg; Smith, Pascalle

    Intensified human activity and a growing population have changed the climate and the land biosphere. One of the most widely recognized human perturbations is the emission of carbon dioxide (CO2) by fossil fuel burning and land-use change. As the terrestrial biosphere is an active player in the global carbon cycle, changes in land use feed back to the climate of the Earth through regulation of the content of atmospheric CO2, the most important greenhouse gas, and through changing albedo (e.g., energy partitioning). Recently, the climate modeling community has started to develop more complex Earth system models that include marine and terrestrial biogeochemical processes in addition to the representation of atmospheric and oceanic circulation. However, most terrestrial biosphere models simulate only natural, or so-called potential, vegetation and do not account for managed ecosystems such as croplands and pastures, which make up nearly one-third of the Earth's land surface.

  14. Flood management: prediction of microbial contamination in large-scale floods in urban environments.

    Science.gov (United States)

    Taylor, Jonathon; Lai, Ka Man; Davies, Mike; Clifton, David; Ridley, Ian; Biddulph, Phillip

    2011-07-01

    With a changing climate and increased urbanisation, the occurrence and the impact of flooding are expected to increase significantly. Floods can bring pathogens into homes and cause lingering damp and microbial growth in buildings, with the level of growth and persistence dependent on the volume and the chemical and biological content of the flood water, the properties of the contaminating microbes, and the surrounding environmental conditions, including the restoration time and methods, the heat and moisture transport properties of the envelope design, and the ability of the construction material to sustain microbial growth. The public health risk will depend on the interaction of these complex processes and on the vulnerability and susceptibility of occupants in the affected areas. After the 2007 floods in the UK, the Pitt review noted a lack of relevant scientific evidence and consistency with regard to the management and treatment of flooded homes, which not only put the local population at risk but also caused unnecessary delays in the restoration effort. Understanding the drying behaviour of flooded buildings in the UK building stock under different scenarios, and the ability of microbial contaminants to grow, persist, and produce toxins within these buildings, can help inform recovery efforts. To contribute to future flood management, this paper proposes the use of building simulations and biological models to predict the risk of microbial contamination in typical UK buildings. We review the state of the art with regard to biological contamination following flooding, relevant building simulation, simulation-linked microbial modelling, and current practical considerations in flood remediation. Using the city of London as an example, a methodology is proposed that uses GIS as a platform to integrate drying models and microbial risk models with the local building stock and flood models. The integrated tool will help local governments, health authorities

  15. Constraining models with a large scalar multiplet

    CERN Document Server

    Earl, Kevin; Logan, Heather E; Pilkington, Terry

    2013-01-01

    Models in which the Higgs sector is extended by a single electroweak scalar multiplet X can possess an accidental global U(1) symmetry at the renormalizable level if X has isospin T greater or equal to 2. We show that all such U(1)-symmetric models are excluded by the interplay of the cosmological relic density of the lightest (neutral) component of X and its direct detection cross section via Z exchange. The sole exception is the T=2 multiplet, whose lightest member decays on a few-day to few-year timescale via a Planck-suppressed dimension-5 operator.

  16. Simple predictions from multifield inflationary models.

    Science.gov (United States)

    Easther, Richard; Frazer, Jonathan; Peiris, Hiranya V; Price, Layne C

    2014-04-25

    We explore whether multifield inflationary models make unambiguous predictions for fundamental cosmological observables. Focusing on N-quadratic inflation, we numerically evaluate the full perturbation equations for models with 2, 3, and O(100) fields, using several distinct methods for specifying the initial values of the background fields. All scenarios are highly predictive, with the probability distribution functions of the cosmological observables becoming more sharply peaked as N increases. For N=100 fields, 95% of our Monte Carlo samples fall in the ranges ns∈(0.9455,0.9534), α∈(-9.741,-7.047)×10-4, r∈(0.1445,0.1449), and riso∈(0.02137,3.510)×10-3 for the spectral index, running, tensor-to-scalar ratio, and isocurvature-to-adiabatic ratio, respectively. The expected amplitude of isocurvature perturbations grows with N, raising the possibility that many-field models may be sensitive to postinflationary physics and suggesting new avenues for testing these scenarios.

  17. COST MODEL FOR LARGE URBAN SCHOOLS.

    Science.gov (United States)

    O'BRIEN, RICHARD J.

    THIS DOCUMENT CONTAINS A COST SUBMODEL OF AN URBAN EDUCATIONAL SYSTEM. THIS MODEL REQUIRES THAT PUPIL POPULATION AND PROPOSED SCHOOL BUILDING ARE KNOWN. THE COST ELEMENTS ARE--(1) CONSTRUCTION COSTS OF NEW PLANTS, (2) ACQUISITION AND DEVELOPMENT COSTS OF BUILDING SITES, (3) CURRENT OPERATING EXPENSES OF THE PROPOSED SCHOOL, (4) PUPIL…

  18. Predictions of models for environmental radiological assessment

    Energy Technology Data Exchange (ETDEWEB)

    Peres, Sueli da Silva; Lauria, Dejanira da Costa, E-mail: suelip@ird.gov.br, E-mail: dejanira@irg.gov.br [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Servico de Avaliacao de Impacto Ambiental, Rio de Janeiro, RJ (Brazil); Mahler, Claudio Fernando [Coppe. Instituto Alberto Luiz Coimbra de Pos-Graduacao e Pesquisa de Engenharia, Universidade Federal do Rio de Janeiro (UFRJ) - Programa de Engenharia Civil, RJ (Brazil)

    2011-07-01

    In the field of environmental impact assessment, models are used for estimating the source term, environmental dispersion and transfer of radionuclides, exposure pathways, radiation dose, and the risk to human beings. Although it is recognized that site-specific local data are important for improving the quality of dose assessment results, obtaining such data can be very difficult and expensive. Sources of uncertainty are numerous, among which we can cite the subjectivity of modelers, the exposure scenarios and pathways, the codes used, and general parameters. The various models available use different mathematical approaches of different complexity, which can lead to different predictions: for the same inputs, different models can produce very different outputs. This paper briefly presents the main advances in the field of environmental radiological assessment that aim to improve the reliability of the models used in assessing environmental radiological impact. A model intercomparison exercise supplied incompatible results for ¹³⁷Cs and ⁶⁰Co, underlining the need to develop reference methodologies for environmental radiological assessment that allow dose estimates to be compared on a common basis. The results of the intercomparison exercise are presented briefly. (author)

  19. Predicting Protein Secondary Structure with Markov Models

    DEFF Research Database (Denmark)

    Fischer, Paul; Larsen, Simon; Thomsen, Claus

    2004-01-01

    The primary structure of a protein is the sequence of its amino acids. The secondary structure describes structural properties of the molecule, such as which parts of it form sheets, helices or coils. Spatial and other properties are described by the higher order structures. The classification task we are considering here is to predict the secondary structure from the primary one. To this end we train a Markov model on training data and then use it to classify parts of unknown protein sequences as sheets, helices or coils. We show how to exploit the directional information contained…
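
    A minimal sketch of this classification scheme, assuming one first-order Markov chain over amino-acid transitions per class; the training segments below are invented, whereas a real run would use labelled protein fragments.

```python
# Per-class first-order Markov chains; classify by maximum log-likelihood.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AA)}

def train(segments):
    counts = np.ones((20, 20))               # add-one smoothing
    for seg in segments:
        for a, b in zip(seg, seg[1:]):
            counts[IDX[a], IDX[b]] += 1
    return np.log(counts / counts.sum(axis=1, keepdims=True))

def loglik(seg, logP):
    return sum(logP[IDX[a], IDX[b]] for a, b in zip(seg, seg[1:]))

models = {name: train(segs) for name, segs in {
    "helix": ["AALLKKAAEE", "LLAAEEKKLL"],
    "sheet": ["VTVTVIVYVT", "TVIVTVYVTV"],
    "coil":  ["GPGSGNGPDG", "SGPGNGDGSG"],
}.items()}

query = "VTVIVTV"
print(max(models, key=lambda m: loglik(query, models[m])))   # -> 'sheet'
```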

  20. A Modified Model Predictive Control Scheme

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Hu; Wen-Hua Chen

    2005-01-01

    In implementations of MPC (Model Predictive Control) schemes, two issues need to be addressed. One is how to enlarge the stability region as much as possible. The other is how to guarantee stability when a computational time limitation exists. In this paper, a modified MPC scheme for constrained linear systems is described. An offline LMI-based iteration process is introduced to expand the stability region. At the same time, a database of feasible control sequences is generated offline so that stability can still be guaranteed in the case of computational time limitations. Simulation results illustrate the effectiveness of this new approach.

  1. Hierarchical Model Predictive Control for Resource Distribution

    DEFF Research Database (Denmark)

    Bendtsen, Jan Dimon; Trangbæk, K; Stoustrup, Jakob

    2010-01-01

    This paper deals with hierarchical model predictive control (MPC) of distributed systems. A three-level hierarchical approach is proposed, consisting of a high-level MPC controller, a second level of so-called aggregators controlled by an online MPC-like algorithm, and a lower level of autonomous … facilitates plug-and-play addition of subsystems without redesign of any controllers. The method is supported by a number of simulations featuring a three-level smart-grid power control system for a small isolated power grid.

  2. Explicit model predictive control accuracy analysis

    OpenAIRE

    Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano

    2015-01-01

    Model Predictive Control (MPC) can efficiently control constrained systems in real-time applications. The MPC feedback law for a linear system with linear inequality constraints can be explicitly computed off-line, which results in an off-line partition of the state space into non-overlapping convex regions, with affine control laws associated with each region of the partition. An actual implementation of this explicit MPC in low-cost micro-controllers requires the data to be "quantized", i.e. repre...

  3. Which measures of time preference best predict outcomes? Evidence from a large-scale field experiment

    OpenAIRE

    Burks, Stephen V.; Carpenter, Jeffrey P.; Goette, Lorenz; Rustichini, Aldo

    2011-01-01

    Economists and psychologists have devised numerous instruments to measure time preferences and have generated a rich literature examining the extent to which time preferences predict important outcomes; however, we still do not know which measures work best. With the help of a large sample of non-student participants (truck driver trainees) and administrative data on outcomes, we gather four different time preference measures and test the extent to which they predict both on their own and whe...

  4. Critical conceptualism in environmental modeling and prediction.

    Science.gov (United States)

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river.

  5. Improving the local relevance of large scale water demand predictions: the way forward

    Science.gov (United States)

    Bernhard, Jeroen; Reynaud, Arnaud; de Roo, Ad

    2016-04-01

    use and water prices. Subsequently, econometric estimates allow us to make a monetary valuation of water and to identify the dominant drivers of domestic and industrial water demand per country. Combined with socio-economic, demographic and climate scenarios, we made predictions for future Europe. Since this is a first attempt, we obtained mixed results across countries in terms of data availability and therefore model uncertainty. For some countries we have been able to develop robust predictions based on vast amounts of data, while other countries proved more challenging. We do feel, however, that large-scale predictions based on regional data are the way forward for providing relevant scientific policy support. In order to improve on our work it is imperative to further expand our database of consistent regional data. We look forward to any kind of input and would be very interested in sharing our data to collaborate towards a better understanding of the water use system.

  6. Modeling and Control of Large Flexible Structures.

    Science.gov (United States)

    1984-07-31

    …systems with hybrid (lumped and distributed) structure. 3. Development of stabilizing control strategies for nonlinear distributed models, including … process, but much more needs to be done. Part I: Wiener-Hopf Methods for Design of Stabilizing Control Systems.

  7. Modeling Social Influence in Large Populations

    Science.gov (United States)

    2010-07-13

    feature selection,” The Journal of Machine Learning Research, vol. 3, 2003, pp. 1157– 1182. I. Ajzen , “The theory of planned behavior ,” Organizational...ANSI Std Z39-18 Theory and Introduction 2 Theory : human collectivities are composed of individuals with different meaningful identities, and these...Intrinsically represent a window of time (e.g., before, after, or during a simulation event) Constructed via a theory to model translation of a

  8. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based both on the authors' experience and on their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution of partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements of M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies specified application requirements.

  9. Application of a predictive Bayesian model to environmental accounting.

    Science.gov (United States)

    Anex, R P; Englehardt, J D

    2001-03-30

    Environmental accounting techniques are intended to capture important environmental costs and benefits that are often overlooked in standard accounting practices. Environmental accounting methods themselves often ignore or inadequately represent large but highly uncertain environmental costs and costs conditioned by specific prior events. Use of a predictive Bayesian model is demonstrated for the assessment of such highly uncertain environmental and contingent costs. The predictive Bayesian approach presented generates probability distributions for the quantity of interest (rather than parameters thereof). A spreadsheet implementation of a previously proposed predictive Bayesian model, extended to represent contingent costs, is described and used to evaluate whether a firm should undertake an accelerated phase-out of its PCB containing transformers. Variability and uncertainty (due to lack of information) in transformer accident frequency and severity are assessed simultaneously using a combination of historical accident data, engineering model-based cost estimates, and subjective judgement. Model results are compared using several different risk measures. Use of the model for incorporation of environmental risk management into a company's overall risk management strategy is discussed.
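
    The flavour of such an assessment can be sketched as a Monte Carlo over uncertain accident frequency and severity; the distributions and parameters below are illustrative assumptions, not the paper's model.

```python
# Monte Carlo predictive distribution of contingent costs: gamma-uncertain
# accident rate, Poisson accident counts, lognormal severities (all assumed).
import numpy as np

rng = np.random.default_rng(7)
n_sims, years = 20_000, 10
lam = rng.gamma(2.0, 0.05, n_sims)          # uncertain accidents per year
n_acc = rng.poisson(lam * years)            # accidents over the horizon
cost = np.array([rng.lognormal(13.0, 1.0, k).sum() for k in n_acc])

print(f"mean cost ${cost.mean():,.0f};  "
      f"95th percentile ${np.percentile(cost, 95):,.0f}")
```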

  10. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    Science.gov (United States)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying on established lightweight mirror materials and on accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in the XRCF thermal-vacuum tests, static-load gravity deformations were measured and compared to model predictions, and the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT and its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  11. A Predictive Model for Wind Farms Using Dynamic Mode Decomposition

    Science.gov (United States)

    Thomas, Vaughan; Meneveau, Charles; Gayme, Dennice

    2016-11-01

    In this work we extend traditional dynamic mode decomposition (DMD) to develop a linear predictive model for the time evolution of the velocity field for a multiple-turbine wind farm. Traditional DMD identifies a set of DMD modes which can be used to produce a linear system that approximates the dynamics of the original system. Typically, these DMD modes consist of those that both grow and decay, but in order to develop a predictive model we need a system that evolves along a manifold that neither grows nor decays. Here we modify the DMD calculation to build such a model. We then apply this method to three dimensional large eddy simulations (LES) of a multi-turbine wind farm. Our predictive wind farm model is initialized with a small time series of data independent of the original data used to create the system. When initialized in this manner our DMD based model can reproduce the subsequent time evolution of the velocity field over ten inter-turbine convective timescales with a gradual falloff in performance. This work is supported by the National Science Foundation (Grants ECCS-1230788 and OISE-1243482, the WINDINSPIRE project).
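
    For reference, the standard exact-DMD computation that the authors modify looks roughly like this on synthetic snapshot data; the rank truncation and the test signal are our choices, not the paper's.

```python
# Bare-bones exact DMD: snapshots -> reduced operator -> modes ->
# linear one-step prediction. Data are synthetic standing-wave patterns.
import numpy as np

t = np.linspace(0, 8 * np.pi, 200)
space = np.linspace(0, 1, 64)[:, None]
data = (np.sin(2 * np.pi * space) * np.cos(t)
        + 0.5 * np.cos(4 * np.pi * space) * np.sin(2 * t))

X, Y = data[:, :-1], data[:, 1:]                   # snapshot pairs
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 6                                              # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.T @ Y @ Vh.T.conj() @ np.diag(1 / s)    # reduced operator
evals, W = np.linalg.eig(Atilde)
modes = Y @ Vh.T.conj() @ np.diag(1 / s) @ W       # exact DMD modes

# Project the second-to-last snapshot onto the modes, predict the last one.
b = np.linalg.lstsq(modes, data[:, -2], rcond=None)[0]
pred = (modes @ (evals * b)).real
print("one-step prediction error:", np.linalg.norm(pred - data[:, -1]))
```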

  12. A transport model for prediction of wildfire behavior

    Energy Technology Data Exchange (ETDEWEB)

    Linn, R.R.

    1997-07-01

    Wildfires are a threat to human life and property, yet they are an unavoidable part of nature. In the past people have tried to predict wildfire behavior through the use of point functional models but have been unsuccessful at adequately predicting the gross behavior of the broad spectrum of fires that occur in nature. The majority of previous models do not have self-determining propagation rates. The author uses a transport approach to represent this complicated problem and produce a model that utilizes a self-determining propagation rate. The transport approach allows one to represent a large number of environments including transition regions such as those with nonhomogeneous vegetation and terrain. Some of the most difficult features to treat are the imperfectly known boundary conditions and the fine scale structure that is unresolvable, such as the specific location of the fuel or the precise incoming winds. The author accounts for the microscopic details of a fire with macroscopic resolution by dividing quantities into mean and fluctuating parts similar to what is done in traditional turbulence modelling. The author develops a complicated model that includes the transport of multiple gas species, such as oxygen and volatile hydrocarbons, and tracks the depletion of various fuels and other stationary solids and liquids. From this model the author also forms a simplified local burning model with which he performs a number of simulations for the purpose of demonstrating the properties of a self-determining transport-based wildfire model.

  13. Development of a Generic Creep-Fatigue Life Prediction Model

    Science.gov (United States)

    Goswami, Tarun

    2002-01-01

    The objective of this research proposal is to further compile creep-fatigue data for steel alloys and superalloys used in military aircraft engines and/or rocket engines and to develop a statistical multivariate equation. The newly derived model will be a probabilistic fit to all the data compiled from various sources. Attempts will be made to procure creep-fatigue data from NASA Glenn Research Center and other sources to further develop life prediction models for specific alloy groups. In a previous effort [1-3], a bank of creep-fatigue data was compiled and tabulated under a range of known test parameters. These test parameters are called independent variables, namely: total strain range, strain rate, hold time, and temperature. The present research attempts to use these variables to develop a multivariate equation, a probabilistic equation fitted to a large database. The lives predicted by the new model will be analyzed using normal distribution fits; the closer the predicted lives are to the experimental lives (a 1:1 fit), the better the prediction. The fit will also be evaluated in terms of a coefficient of correlation, R². A multivariate equation developed earlier [3] has the following form, where S, R, T, and H have specific meanings discussed later.
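
    A hedged sketch of fitting such a multivariate life equation in log space; the power-law form N_f = A·(Δε)^a·(rate)^b·(1+hold)^c·e^(dT) is assumed here purely for illustration (it is not the author's equation), as is the synthetic data.

```python
# Log-space least-squares fit of an assumed multivariate life equation.
import numpy as np

rng = np.random.default_rng(6)
n = 120
strain = rng.uniform(0.004, 0.02, n)       # total strain range
rate = rng.uniform(1e-4, 1e-2, n)          # strain rate (1/s)
hold = rng.uniform(0.0, 600.0, n)          # hold time (s)
T = rng.uniform(750.0, 1050.0, n)          # temperature (K)

log_Nf = (10 - 1.8 * np.log(strain) + 0.2 * np.log(rate)
          - 0.3 * np.log1p(hold) - 0.004 * T + rng.normal(0, 0.2, n))

X = np.column_stack([np.ones(n), np.log(strain), np.log(rate),
                     np.log1p(hold), T])
coef, *_ = np.linalg.lstsq(X, log_Nf, rcond=None)
pred = X @ coef
R2 = 1 - np.sum((log_Nf - pred) ** 2) / np.sum((log_Nf - log_Nf.mean()) ** 2)
print(f"R^2 of log-life fit: {R2:.3f}")
```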

  14. Robust Continuous-time Generalized Predictive Control for Large Time-delay System

    Institute of Scientific and Technical Information of China (English)

    WEI Huan; PAN Li-deng; ZHEN Xin-ping

    2008-01-01

    A simple delay-predictive continuous-time generalized predictive controller with filter (F-SDCGPC) is proposed. By using a modified predictive output signal and cost function, the delay compensator is incorporated into the control law with an observer structure, and a filter is added to enhance robustness. The design of the filter does not affect the nominal set-point response, and it is more flexible than the design of the observer polynomial. Analysis and simulation results show that the F-SDCGPC has better robustness than the observer structure without a filter when large time-delay errors are considered.

  15. Optimality principles for model-based prediction of human gait.

    Science.gov (United States)

    Ackermann, Marko; van den Bogert, Antonie J

    2010-04-19

    Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient's gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait.
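
    The direct collocation method mentioned here can be illustrated on a toy problem: a double integrator with a minimum-effort cost, discretized by trapezoidal collocation. This is not the paper's musculoskeletal model, only a sketch of the numerical machinery.

```python
# Trapezoidal direct collocation: move a double integrator 0 -> 1 in 1 s
# while minimizing integrated squared effort; solved with SLSQP.
import numpy as np
from scipy.optimize import minimize

N = 21
h = 1.0 / (N - 1)

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]          # position, velocity, control

def objective(z):
    _, _, u = unpack(z)
    return h * np.sum((u[:-1] ** 2 + u[1:] ** 2) / 2)   # trapezoid rule

def defects(z):                              # dynamics + boundary conditions
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * (v[1:] + v[:-1]) / 2
    dv = v[1:] - v[:-1] - h * (u[1:] + u[:-1]) / 2
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]
    return np.concatenate([dx, dv, bc])

sol = minimize(objective, np.zeros(3 * N), method="SLSQP",
               constraints={"type": "eq", "fun": defects})
x, v, u = unpack(sol.x)
print("final position:", round(x[-1], 3), " peak |u|:", round(np.abs(u).max(), 2))
```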

  16. A Predictive Maintenance Model for Railway Tracks

    DEFF Research Database (Denmark)

    Li, Rui; Wen, Min; Salling, Kim Bang

    2015-01-01

    For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euro per km per year [1]. Aiming to reduce such maintenance expenditure, this paper presents a mathematical model based on Mixed Integer Programming (MIP) which is designed to optimize the predictive railway tamping activities for ballasted track over a time horizon of up to four years. The objective function is set up to minimize the actual costs for the tamping machine (measured by time) … recovery of the track quality after the tamping operation and (5) tamping machine operation factors. A Danish railway track between Odense and Fredericia, 57.2 km in length, is used over a time period of two to four years in the proposed maintenance model. The total cost can be reduced by up to 50…

  17. Application of the conditional nonlinear optimal perturbation method to the predictability study of the Kuroshio large meander

    Science.gov (United States)

    Wang, Qiang; Mu, Mu; Dijkstra, Henk A.

    2012-01-01

    A reduced-gravity barotropic shallow-water model was used to simulate the Kuroshio path variations. The results show that the model was able to capture the essential features of these path variations. We used one simulation of the model as the reference state and investigated the effects of errors in model parameters on the prediction of the transition to the Kuroshio large meander (KLM) state using the conditional nonlinear optimal parameter perturbation (CNOP-P) method. Because of their relatively large uncertainties, three model parameters were considered: the interfacial friction coefficient, the wind-stress amplitude, and the lateral friction coefficient. We determined the CNOP-Ps optimized for each of these three parameters independently, and we optimized all three parameters simultaneously using the Spectral Projected Gradient 2 (SPG2) algorithm. Similarly, the impacts caused by errors in initial conditions were examined using the conditional nonlinear optimal initial perturbation (CNOP-I) method. Both the CNOP-I and the CNOP-Ps can result in significant prediction errors of the KLM over a lead time of 240 days, but the prediction error caused by the CNOP-I is greater than that caused by the CNOP-P. The results of this study indicate not only that initial condition errors have greater effects on the prediction of the KLM than errors in model parameters but also that the latter cannot be ignored. Hence, to enhance the forecast skill of the KLM in this model, the initial conditions should first be improved, and the best possible estimates of the model parameters should be used.


  19. A predictive fitness model for influenza

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael

    2014-03-01

    The seasonal human influenza A/H3N2 virus undergoes rapid evolution, which produces significant year-to-year sequence turnover in the population of circulating strains. Adaptive mutations respond to human immune challenge and occur primarily in antigenic epitopes, the antibody-binding domains of the viral surface protein haemagglutinin. Here we develop a fitness model for haemagglutinin that predicts the evolution of the viral population from one year to the next. Two factors are shown to determine the fitness of a strain: adaptive epitope changes and deleterious mutations outside the epitopes. We infer both fitness components for the strains circulating in a given year, using population-genetic data of all previous strains. From fitness and frequency of each strain, we predict the frequency of its descendent strains in the following year. This fitness model maps the adaptive history of influenza A and suggests a principled method for vaccine selection. Our results call for a more comprehensive epidemiology of influenza and other fast-evolving pathogens that integrates antigenic phenotypes with other viral functions coupled by genetic linkage.
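
    The core bookkeeping of such a fitness model, on our reading and heavily simplified, is a discrete-generation frequency update; the selection weights and mutation counts below are invented for illustration.

```python
# Next-year strain frequencies proportional to current frequency times
# exp(fitness), with fitness = epitope gain minus mutational load (assumed).
import numpy as np

freq = np.array([0.50, 0.30, 0.15, 0.05])        # this year's strains
epitope_muts = np.array([0, 1, 2, 3])            # adaptive epitope changes
other_muts = np.array([0, 1, 0, 4])              # deleterious non-epitope
s_epi, s_del = 0.8, 0.5                          # assumed selection weights

fitness = s_epi * epitope_muts - s_del * other_muts
next_freq = freq * np.exp(fitness)
next_freq /= next_freq.sum()
print(np.round(next_freq, 3))   # third strain (2 epitope, 0 deleterious) rises
```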

  20. Predictive Model of Radiative Neutrino Masses

    CERN Document Server

    Babu, K S

    2013-01-01

    We present a simple and predictive model of radiative neutrino masses. It is a special case of the Zee model which introduces two Higgs doublets and a charged singlet. We impose a family-dependent Z_4 symmetry acting on the leptons, which reduces the number of parameters describing neutrino oscillations to four. A variety of predictions follow: The hierarchy of neutrino masses must be inverted; the lightest neutrino mass is extremely small and calculable; one of the neutrino mixing angles is determined in terms of the other two; the phase parameters take CP-conserving values with \\delta_{CP} = \\pi; and the effective mass in neutrinoless double beta decay lies in a narrow range, m_{\\beta \\beta} = (17.6 - 18.5) meV. The ratio of vacuum expectation values of the two Higgs doublets, tan\\beta, is determined to be either 1.9 or 0.19 from neutrino oscillation data. Flavor-conserving and flavor-changing couplings of the Higgs doublets are also determined from neutrino data. The non-standard neutral Higgs bosons, if t...

  1. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  2. Effect on Prediction when Modeling Covariates in Bayesian Nonparametric Models.

    Science.gov (United States)

    Cruz-Marcelo, Alejandro; Rosner, Gary L; Müller, Peter; Stewart, Clinton F

    2013-04-01

    In biomedical research, it is often of interest to characterize biologic processes giving rise to observations and to make predictions of future observations. Bayesian nonparametric methods provide a means for carrying out Bayesian inference making as few assumptions about restrictive parametric models as possible. There are several proposals in the literature for extending Bayesian nonparametric models to include dependence on covariates. Limited attention, however, has been directed to the following two aspects. In this article, we examine the effect on fitting and predictive performance of incorporating covariates in a class of Bayesian nonparametric models by one of two primary ways: either in the weights or in the locations of a discrete random probability measure. We show that different strategies for incorporating continuous covariates in Bayesian nonparametric models can result in big differences when used for prediction, even though they lead to otherwise similar posterior inferences. When one needs the predictive density, as in optimal design, and this density is a mixture, it is better to make the weights depend on the covariates. We demonstrate these points via a simulated data example and in an application in which one wants to determine the optimal dose of an anticancer drug used in pediatric oncology.

  3. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state-space model. The linear discrete-time stochastic state-space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model…

  4. Using Flume Experiments to Model Large Woody Debris Transport Dynamics

    Science.gov (United States)

    Braudrick, C. A.; Grant, G. E.

    2001-05-01

    In the last decade there has been increasing interest in quantifying the transport dynamics of large woody debris in a variety of stream types. We used flume experiments to test theoretical models of wood entrainment, transport, and deposition in streams. Because wood moves infrequently and during high flows, when direct measurement and observation can be difficult and dangerous, flume experiments provide an excellent setting to study wood dynamics: channel type, flow, log size, and other parameters can be varied relatively easily, and extensive data can be collected over a short time period. Our flume experiments verified theoretical model predictions that piece movement depends on the diameter of the log and its orientation in large rivers (where piece length is less than channel width). Piece length, often reported as the most important factor determining piece movement in field studies, was not a factor in these simulated large channels. This is likely due to the importance of banks and vegetation in inhibiting log movement in the field, particularly for pieces longer than the channel width: logs are often at least partially lodged on the banks, sometimes upstream of vegetation or other logs, which anchors the piece and increases the force required for entrainment. Rootwads also increased the flow depth required to move individual logs: by raising logs off the channel bed, rootwads decrease the buoyant and drag forces acting on the log. We also developed a theoretical model of wood transport and deposition based upon the ratios of piece length to channel width, piece length to the radius of curvature of the channel, and piece diameter to water depth. In these experiments we noted that individual logs tend to move down the channel parallel to the channel margin, and deposit on the outside of bends, at the heads of shallow and exposed bars, and at bar crossovers. Our theoretical model was not borne out by the experiments, likely because there were few potential

  5. Predicting aquifer response time for application in catchment modeling.

    Science.gov (United States)

    Walker, Glen R; Gilfedder, Mat; Dawes, Warrick R; Rassam, David W

    2015-01-01

    It is well established that changes in catchment land use can lead to significant impacts on water resources. Where land-use changes increase evapotranspiration, there is a resultant decrease in groundwater recharge, which in turn decreases groundwater discharge to streams. The response time of changes in groundwater discharge to a change in recharge is a key aspect of predicting impacts of land-use change on catchment water yield. Predicting these impacts across the large catchments relevant to water resource planning can require the estimation of groundwater response times for hundreds of aquifers. At this scale, detailed site-specific measured data are often absent, and available spatial data are limited. While numerical models can be applied, there is little advantage if there are no detailed data to parameterize them. Simple analytical methods are useful in this situation, as they allow the variability in groundwater response to be incorporated into catchment hydrological models with minimal modeling overhead. This paper describes an analytical model which has been developed to capture some of the features of real, sloping aquifer systems. The derived groundwater response timescale can be used to parameterize a groundwater discharge function, allowing groundwater response to be predicted in relation to different broad catchment characteristics at a level of complexity which matches the available data. The results from the analytical model are compared to published field data and numerical model results, and provide an approach with broad application to inform water resource planning in other large, data-scarce catchments. © 2014, Commonwealth of Australia. Groundwater © 2014, National Ground Water Association.
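
    As a hedged sketch of how such a response timescale can feed a catchment model: a linear-reservoir (exponential) discharge response to a step change in recharge, with the timescale supplied by an analytical model. The exponential form and the scaling of tau below are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def discharge_after_step(t, q_old, q_new, tau):
    """Groundwater discharge t days after a step change in recharge,
    for a linear reservoir with response timescale tau (days)."""
    return q_old + (q_new - q_old) * (1.0 - np.exp(-t / tau))

# Illustrative response time for an idealized aquifer (assumed scaling):
# tau ~ S * L**2 / T with storativity S, flow length L (m), transmissivity T (m^2/day).
tau = 0.1 * 2000.0**2 / 500.0  # hypothetical values -> 800 days
```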

  6. Forced versus coupled dynamics in Earth system modelling and prediction

    Directory of Open Access Journals (Sweden)

    B. Knopf

    2005-01-01

    Full Text Available We compare coupled nonlinear climate models and their simplified forced counterparts with respect to predictability and phase-space topology. Various types of uncertainty plague climate change simulation, which is, in turn, a crucial element of Earth System modelling. Since the currently preferred strategy for simulating the climate system, or the Earth System at large, is the coupling of sub-system modules (representing, e.g., atmosphere, oceans, global vegetation), this paper explicitly addresses the errors and indeterminacies generated by the coupling procedure. The focus is on a comparison of forced dynamics as opposed to fully, i.e. intrinsically, coupled dynamics. The former represents a particular type of simulation, where the time behaviour of one component of the complex system is prescribed by data or some other external information source. Such a simplifying technique is often employed in Earth System models in order to save computing resources, in particular when massive model inter-comparisons need to be carried out. Our contribution to the debate is based on the investigation of two representative model examples, namely (i) a low-dimensional coupled atmosphere-ocean simulator, and (ii) a replica-like simulator embracing corresponding components. Whereas in general the forced version (ii) is able to mimic its fully coupled counterpart (i), we show in this paper that for a considerable fraction of parameter and state space, the two approaches qualitatively differ. Here we take up a phenomenon concerning the predictability of coupled versus forced models that was reported earlier in this journal: the observation that the time series of the forced version display artificial predictive skill. We present an explanation in terms of nonlinear dynamical theory. In particular we observe an intermittent version of artificial predictive skill, which we call on-off synchronization, and trace it back to the appearance of unstable periodic orbits. We also

  7. Two criteria for evaluating risk prediction models.

    Science.gov (United States)

    Pfeiffer, R M; Gail, M H

    2011-09-01

    We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF (q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF (p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF (q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF (p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
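
    Both criteria can be computed directly from risk scores and observed outcomes. A minimal sketch (numpy 0/1 outcome arrays assumed; this implements the definitions quoted above, not the authors' influence-function inference):

```python
import numpy as np

def pcf(risk, case, q):
    """PCF(q): share of eventual cases captured when following the
    fraction q of the population at highest predicted risk."""
    order = np.argsort(-risk)                 # highest risk first
    n_follow = int(np.ceil(q * len(risk)))
    return case[order][:n_follow].sum() / case.sum()

def pnf(risk, case, p):
    """PNF(p): smallest population fraction (taken from the top of the
    risk distribution) needed so that a proportion p of cases is followed."""
    order = np.argsort(-risk)
    cum = np.cumsum(case[order]) / case.sum()
    return (int(np.searchsorted(cum, p)) + 1) / len(risk)
```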

  8. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  9. Data-Driven Modeling and Prediction of Arctic Sea Ice

    Science.gov (United States)

    Kondrashov, Dmitri; Chekroun, Mickael; Ghil, Michael

    2016-04-01

    We present results of data-driven predictive analyses of sea ice over the main Arctic regions. Our approach relies on the Multilayer Stochastic Modeling (MSM) framework of Kondrashov, Chekroun and Ghil [Physica D, 2015] and it leads to probabilistic prognostic models of sea ice concentration (SIC) anomalies on seasonal time scales. This approach is applied to monthly time series of state-of-the-art data-adaptive decompositions of SIC and selected climate variables over the Arctic. We evaluate the predictive skill of MSM models by performing retrospective forecasts with "no-look ahead" for up to 6-months ahead. It will be shown in particular that the memory effects included intrinsically in the formulation of our non-Markovian MSM models allow for improvements of the prediction skill of large-amplitude SIC anomalies in certain Arctic regions on the one hand, and of September Sea Ice Extent, on the other. Further improvements allowed by the MSM framework will adopt a nonlinear formulation and explore next-generation data-adaptive decompositions, namely modification of Principal Oscillation Patterns (POPs) and rotated Multichannel Singular Spectrum Analysis (M-SSA).

  10. Economic decision making and the application of nonparametric prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.

  11. Influence of Deterministic Attachments for Large Unifying Hybrid Network Model

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The large unifying hybrid network model (LUHPM) introduces the deterministic mixing ratio fd, on the basis of the harmonious unification hybrid preferential model, to describe the influence of deterministic attachment on the network topology characteristics,

  12. A statistically predictive model for future monsoon failure in India

    Science.gov (United States)

    Schewe, Jacob; Levermann, Anders

    2012-12-01

    Indian monsoon rainfall is vital for a large share of the world’s population. Both reliably projecting India’s future precipitation and unraveling abrupt cessations of monsoon rainfall found in paleorecords require improved understanding of its stability properties. While details of monsoon circulations and the associated rainfall are complex, full-season failure is dominated by large-scale positive feedbacks within the region. Here we find that in a comprehensive climate model, monsoon failure is possible but very rare under pre-industrial conditions, while under future warming it becomes much more frequent. We identify the fundamental intraseasonal feedbacks that are responsible for monsoon failure in the climate model, relate these to observational data, and build a statistically predictive model for such failure. This model provides a simple dynamical explanation for future changes in the frequency distribution of seasonal mean all-Indian rainfall. Forced only by global mean temperature and the strength of the Pacific Walker circulation in spring, it reproduces the trend as well as the multidecadal variability in the mean and skewness of the distribution, as found in the climate model. The approach offers an alternative perspective on large-scale monsoon variability as the result of internal instabilities modulated by pre-seasonal ambient climate conditions.

  13. Structural subgrid-scale modeling for large-eddy simulation: A review

    Institute of Scientific and Technical Information of China (English)

    Hao Lu; Christopher J Rutland

    2016-01-01

    Accurately modeling nonlinear interactions in turbulence is one of the key challenges for large-eddy simulation (LES) of turbulence. In this article, we review recent studies on structural subgrid-scale modeling, focusing on evaluating how well these models predict the effects of small scales. The article discusses a priori and a posteriori test results. Other nonlinear models are briefly discussed, and future prospects are noted.

  14. Earthquake behaviour and large-event predictability in a sheared granular stick-slip system

    CERN Document Server

    Dalton, F; Dalton, Fergal; Corcoran, David

    2002-01-01

    We present results from a physical experiment which demonstrates that a sheared granular medium behaves in a manner analogous to earthquake activity. The device consists of an annular plate rotating over a granular medium in a stick-slip fashion. Our previous observations include a bounded critical state with a power-law distribution of event energy consistent with the Gutenberg-Richter law; here we also reveal staircase seismicity, clustering, foreshocks, aftershocks and seismic quiescence. Subcritical and supercritical regimes have also been observed, depending on the system configuration. We investigate the predictability of large events. Using the quiescence between 'shock' events as an alarm condition, it is found that large events are respectively unpredictable, marginally predictable and highly predictable in the subcritical, critical and supercritical states.

  15. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short fragment length (Sanger sequencing, for example, yields on average 700 bp fragments) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion Large scale machine learning methods are well-suited for gene
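
    A toy version of the two-stage idea, assuming sequences over the ACGT alphabet: stage one extracts a codon-usage feature vector from an ORF, and stage two combines it with ORF length and GC content in a logistic scorer standing in for the paper's neural network. All weights and the 700 bp normalization are placeholders.

```python
import numpy as np

CODONS = [a + b + c for a in "ACGT" for b in "ACGT" for c in "ACGT"]

def monocodon_freq(orf):
    """Stage 1 feature: 64-dimensional codon-usage vector of an ORF."""
    counts = np.zeros(len(CODONS))
    for i in range(0, len(orf) - 2, 3):
        counts[CODONS.index(orf[i:i + 3])] += 1
    return counts / max(counts.sum(), 1.0)

def gene_probability(orf, w_codon, w_len, w_gc, bias):
    """Stage 2 (stand-in for the ANN): logistic combination of codon usage,
    ORF length, and GC content; weights would come from large-scale training."""
    f = monocodon_freq(orf)
    gc = (orf.count("G") + orf.count("C")) / len(orf)
    z = w_codon @ f + w_len * (len(orf) / 700.0) + w_gc * gc + bias
    return 1.0 / (1.0 + np.exp(-z))
```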

  16. EXO-ZODI MODELING FOR THE LARGE BINOCULAR TELESCOPE INTERFEROMETER

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, Grant M.; Wyatt, Mark C.; Panić, Olja; Shannon, Andrew [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Bailey, Vanessa; Defrère, Denis; Hinz, Philip M.; Rieke, George H.; Skemer, Andrew J.; Su, Katherine Y. L. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Bryden, Geoffrey; Mennesson, Bertrand; Morales, Farisa; Serabyn, Eugene [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States); Danchi, William C.; Roberge, Aki; Stapelfeldt, Karl R. [NASA Goddard Space Flight Center, Exoplanets and Stellar Astrophysics, Code 667, Greenbelt, MD 20771 (United States); Haniff, Chris [Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Lebreton, Jérémy [Infrared Processing and Analysis Center, MS 100-22, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); Millan-Gabet, Rafael [NASA Exoplanet Science Institute, California Institute of Technology, 770 South Wilson Avenue, Pasadena, CA 91125 (United States); and others

    2015-02-01

    Habitable zone dust levels are a key unknown that must be understood to ensure the success of future space missions to image Earth analogs around nearby stars. Current detection limits are several orders of magnitude above the level of the solar system's zodiacal cloud, so characterization of the brightness distribution of exo-zodi down to much fainter levels is needed. To this end, the Large Binocular Telescope Interferometer (LBTI) will detect thermal emission from habitable zone exo-zodi a few times brighter than solar system levels. Here we present a modeling framework for interpreting LBTI observations, which yields dust levels from detections and upper limits that are then converted into predictions and upper limits for the scattered light surface brightness. We apply this model to the HOSTS survey sample of nearby stars; assuming a null depth uncertainty of 10^-4, the LBTI will be sensitive to dust a few times above the solar system level around Sun-like stars, and to even lower dust levels for more massive stars.

  17. Estimating the magnitude of prediction uncertainties for the APLE model

    Science.gov (United States)

    Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study, we conduct an uncertainty analysis for the Annual P ...

  18. Predictive modeling of low solubility semiconductor alloys

    Science.gov (United States)

    Rodriguez, Garrett V.; Millunchick, Joanna M.

    2016-09-01

    GaAsBi is of great interest for applications in high efficiency optoelectronic devices due to its highly tunable bandgap. However, the experimental growth of high Bi content films has proven difficult. Here, we model GaAsBi film growth using a kinetic Monte Carlo simulation that explicitly takes cation and anion reactions into account. The unique behavior of Bi droplets is explored, and a sharp decrease in Bi content upon Bi droplet formation is demonstrated. The high mobility of simulated Bi droplets on GaAsBi surfaces is shown to produce phase separated Ga-Bi droplets as well as depressions on the film surface. A phase diagram for a range of growth rates that predicts both Bi content and droplet formation is presented to guide the experimental growth of high Bi content GaAsBi films.

  19. Leptogenesis in minimal predictive seesaw models

    Science.gov (United States)

    Björkeroth, Fredrik; de Anda, Francisco J.; de Medeiros Varzielas, Ivo; King, Stephen F.

    2015-10-01

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0, 1, 1) and (1, n, n - 2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n = 3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.
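
    The Yukawa structure stated in the abstract can be written out explicitly; the columns correspond to the "atmospheric" and "solar" right-handed neutrinos, and a, b are the two proportionality constants whose relative phase carries the leptogenesis-PMNS link:

```latex
Y_\nu \;=\;
\begin{pmatrix}
  0 & b \\
  a & n\,b \\
  a & (n-2)\,b
\end{pmatrix},
\qquad n \in \mathbb{Z}^{+}, \quad n = 3 \ \text{in the } SU(5) \ \text{example}.
```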

  20. Non-gaussian Test Models for Prediction and State Estimation with Model Errors

    Institute of Scientific and Technical Information of China (English)

    Michal BRANICKI; Nan CHEN; Andrew J.MAJDA

    2013-01-01

    Turbulent dynamical systems involve dynamics with both a large dimensional phase space and a large number of positive Lyapunov exponents. Such systems are ubiquitous in applications in contemporary science and engineering, where statistical ensemble prediction and real-time filtering/state estimation are needed despite the underlying complexity of the system. Statistically exactly solvable test models have a crucial role in providing firm mathematical underpinning or new algorithms for vastly more complex scientific phenomena. Here, a class of statistically exactly solvable non-Gaussian test models is introduced, where a generalized Feynman-Kac formulation reduces the exact behavior of conditional statistical moments to the solution of inhomogeneous Fokker-Planck equations modified by linear lower-order coupling and source terms. This procedure is applied to a test model with hidden instabilities and is combined with information theory to address two important issues in the contemporary statistical prediction of turbulent dynamical systems: coarse-grained ensemble prediction in a perfect model and improved long-range forecasting in imperfect models. The models discussed here should be useful for many other applications and algorithms for real-time prediction and state estimation.

  1. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    1999-01-01

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  3. Flux balance analysis of plant metabolism: the effect of biomass composition and model structure on model predictions

    Directory of Open Access Journals (Sweden)

    Huili eYuan

    2016-04-01

    Full Text Available The biomass composition represented in constraint-based metabolic models is a key component for predicting cellular metabolism using flux balance analysis (FBA. Despite major advances in analytical technologies, it is often challenging to obtain a detailed composition of all major biomass components experimentally. Studies examining the influence of the biomass composition on the predictions of metabolic models have so far mostly been done on models of microorganisms. Little is known about the impact of varying biomass composition on flux prediction in FBA models of plants, whose metabolism is very versatile and complex because of the presence of multiple subcellular compartments. Also, the published metabolic models of plants differ in size and complexity. In this study, we examined the sensitivity of the predicted fluxes of plant metabolic models to biomass composition and model structure. These questions were addressed by evaluating the sensitivity of predictions of growth rates and central carbon metabolic fluxes to varying biomass compositions in three different genome-/large-scale metabolic models of Arabidopsis thaliana. Our results showed that fluxes through the central carbon metabolism were robust to changes in biomass composition. Nevertheless, comparisons between the predictions from three models using identical modelling constraints and objective function showed that model predictions were sensitive to the structure of the models, highlighting large discrepancies between the published models.
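
    For readers unfamiliar with FBA, the underlying computation is a linear program: maximize a biomass objective subject to the steady-state constraint S v = 0 and flux bounds. A minimal sketch with scipy (the stoichiometric matrix, bounds, and biomass index are user inputs; real plant models add compartments and many more constraints):

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, biomass_idx):
    """Flux balance analysis: maximize v[biomass_idx] s.t. S v = 0, lb <= v <= ub."""
    c = np.zeros(S.shape[1])
    c[biomass_idx] = -1.0                    # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x, -res.fun                   # optimal fluxes, biomass flux
```

    The sensitivity analysis described in the abstract then amounts to re-running such a program with perturbed biomass coefficients in S and comparing the resulting flux distributions.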

  4. Predicting diabetic nephropathy using a multifactorial genetic model.

    Directory of Open Access Journals (Sweden)

    Ilana Blech

    Full Text Available AIMS: The tendency to develop diabetic nephropathy is, in part, genetically determined; however, this genetic risk is largely undefined. In this proof-of-concept study, we tested the hypothesis that combined analysis of multiple genetic variants can improve prediction. METHODS: Based on previous reports, we selected 27 SNPs in 15 genes from metabolic pathways involved in the pathogenesis of diabetic nephropathy and genotyped them in 1274 Ashkenazi or Sephardic Jewish patients with Type 1 or Type 2 diabetes of >10 years duration. A logistic regression model was built using a backward selection algorithm and SNPs nominally associated with nephropathy in our population. The model was validated by using random "training" (75%) and "test" (25%) subgroups of the original population and by applying the model to an independent dataset of 848 Ashkenazi patients. RESULTS: The logistic model based on 5 SNPs in 5 genes (HSPG2, NOS3, ADIPOR2, AGER, and CCL5) and 5 conventional variables (age, sex, ethnicity, diabetes type and duration), and allowing for all possible two-way interactions, predicted nephropathy in our initial population (C-statistic = 0.672) better than a model based on conventional variables only (C = 0.569). In the independent replication dataset, although the C-statistic of the genetic model decreased (0.576), it remained highly associated with diabetic nephropathy (χ² = 17.79, p < 0.0001). In the replication dataset, the model based on conventional variables only was not associated with nephropathy (χ² = 3.2673, p = 0.07). CONCLUSION: In this proof-of-concept study, we developed and validated a genetic model in the Ashkenazi/Sephardic population predicting nephropathy more effectively than a similarly constructed non-genetic model. Further testing is required to determine if this modeling approach, using an optimally selected panel of genetic markers, can provide clinically useful prediction and if generic models can be

  5. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ either computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, using clusters; these clusters provide high-performance computing to run the programs in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  6. Large-scale prediction of microRNA-disease associations by combinatorial prioritization algorithm

    Science.gov (United States)

    Yu, Hua; Chen, Xiaojun; Lu, Lu

    2017-03-01

    Identification of the associations between microRNA molecules and human diseases from large-scale heterogeneous biological data is an important step toward understanding the pathogenesis of diseases at the microRNA level. However, experimental verification of microRNA-disease associations is expensive and time-consuming. To overcome the drawbacks of conventional experimental methods, we present a combinatorial prioritization algorithm to predict microRNA-disease associations. Importantly, our method can be used to predict microRNAs (diseases) associated with diseases (microRNAs) that have no known associated microRNAs (diseases). The predictive performance of our proposed approach was evaluated and verified by internal cross-validations and external independent validations based on standard association datasets. The results demonstrate that our proposed method achieves impressive performance for predicting microRNA-disease associations, with an Area Under the receiver operating characteristic Curve (AUC) of 86.93%, which indeed outperforms previous prediction methods. In particular, we observed that an ensemble-based method integrating the predictions of multiple algorithms gives more reliable and robust predictions than any single algorithm, with the AUC score improved to 92.26%. We applied our combinatorial prioritization algorithm to lung neoplasms and breast neoplasms, and revealed their top 30 microRNA candidates, which are consistent with the published literature and databases.

  8. Quantitative Prediction of Beef Quality Using Visible and NIR Spectroscopy with Large Data Samples Under Industry Conditions

    Science.gov (United States)

    Qiao, T.; Ren, J.; Craigie, C.; Zabalza, J.; Maltin, Ch.; Marshall, S.

    2015-03-01

    It is well known that the eating quality of beef has a significant influence on the repurchase behavior of consumers. There are several key factors that affect the perception of quality, including color, tenderness, juiciness, and flavor. To support consumer repurchase choices, there is a need for an objective measurement of quality that could be applied to meat prior to its sale. Objective approaches such as those offered by spectral technologies may be useful, but the analytical algorithms used remain to be optimized. For visible and near-infrared (VISNIR) spectroscopy, Partial Least Squares Regression (PLSR) is a widely used technique for meat-related quality modeling and prediction. In this paper, a Support Vector Machine (SVM) based machine learning approach is presented to predict beef eating quality traits. Although SVM has been successfully used in various disciplines, it has not been applied extensively to the analysis of meat quality parameters. To this end, the performance of PLSR and SVM as tools for the analysis of meat tenderness is evaluated, using a large dataset acquired under industrial conditions. The spectral dataset was collected using VISNIR spectroscopy with wavelengths ranging from 350 to 1800 nm on 234 beef M. longissimus thoracis steaks from heifers, steers, and young bulls. As the dimensionality of the VISNIR data is very high (over 1600 spectral bands), the Principal Component Analysis (PCA) technique was applied for feature extraction and data reduction. The extracted principal components (fewer than 100) were then used for data modeling and prediction. The prediction results showed that SVM has a greater potential to predict beef eating quality than PLSR, especially for the prediction of tenderness. The influence of animal gender on beef quality prediction was also investigated, and it was found that beef quality traits were predicted most accurately in beef from young bulls.
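
    The PCA-then-SVM workflow described above is straightforward to reproduce with scikit-learn; the component count and SVR hyperparameters below are placeholders, not the values used in the study:

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Reduce ~1600 VISNIR bands to fewer than 100 principal components, then
# regress a quality trait (e.g., tenderness) with a support vector machine.
model = make_pipeline(PCA(n_components=50), SVR(kernel="rbf", C=10.0))
# model.fit(spectra_train, tenderness_train)
# predictions = model.predict(spectra_test)
```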

  9. COGNITIVE MODELS OF PREDICTION THE DEVELOPMENT OF A DIVERSIFIED CORPORATION

    Directory of Open Access Journals (Sweden)

    Baranovskaya T. P.

    2016-10-01

    Full Text Available The application of classical forecasting methods to a diversified corporation faces certain difficulties due to its economic nature. Unlike other businesses, diversified corporations are characterized by multidimensional arrays of data with a high degree of distortion and fragmentation of information, due to the cumulative effect of the incompleteness and distortion of accounting information from the enterprises within it. Under these conditions, the applied methods and tools must have high resolution, work effectively with large databases with incomplete information, and ensure correct, comparable quantitative processing of heterogeneous factors measured in different units. It is therefore necessary to select or develop methods that can work with complex, poorly formalized tasks. This fact substantiates the relevance of the problem of developing models, methods and tools for forecasting the development of diversified corporations, which is the subject of this work. The work aims to: (1) analyze forecasting methods to justify the choice of system-cognitive analysis as an effective method for the prediction of semi-structured tasks; (2) adapt and develop the method of system-cognitive analysis for forecasting the dynamics of development of the corporation subject to the scenario approach; (3) develop predictive model scenarios of changes in basic economic indicators of development of the corporation and assess their credibility; (4) determine the analytical form of the dependence between past and future scenarios of various economic indicators; (5) develop analytical models weighing predictable scenarios, taking into account all prediction results with positive levels of similarity, to increase the level of reliability of forecasts; (6) develop a calculation procedure to assess the strength of influence on the corporation (sensitivity of its

  10. Predicting life satisfaction of the Angolan elderly: a structural model.

    Science.gov (United States)

    Gutiérrez, M; Tomás, J M; Galiana, L; Sancho, P; Cebrià, M A

    2013-01-01

    Satisfaction with life is of particular interest in the study of old-age well-being because it has emerged as an important component of old age. A considerable amount of research has been done to explain life satisfaction in the elderly, and there is growing empirical evidence on the best predictors of life satisfaction. This research evaluates the predictive power of several aging-process variables on Angolan elderly people's life satisfaction, while including perceived health in the model. Data for this research come from a cross-sectional survey of elderly people living in the capital of Angola, Luanda. A total of 1003 Angolan elderly were surveyed on socio-demographic information, perceived health, active engagement, generativity, and life satisfaction. A Multiple Indicators Multiple Causes model was built to test the variables' predictive power on life satisfaction. The estimated theoretical model fitted the data well. The main predictors were those related to active engagement with others. Perceived health also had a significant and positive effect on life satisfaction. Several processes together may predict life satisfaction in the elderly population of Angola, and the variance accounted for is large enough to be considered relevant. The key factor associated with life satisfaction seems to be active engagement with others.

  11. Improving active space telescope wavefront control using predictive thermal modeling

    Science.gov (United States)

    Gersh-Range, Jessica; Perrin, Marshall D.

    2015-01-01

    Active control algorithms for space telescopes are less mature than those for large ground telescopes due to differences in the wavefront control problems. Active wavefront control for space telescopes at L2, such as the James Webb Space Telescope (JWST), requires weighing control costs against the benefits of correcting wavefront perturbations that are a predictable byproduct of the observing schedule, which is known and determined in advance. To improve the control algorithms for these telescopes, we have developed a model that calculates the temperature and wavefront evolution during a hypothetical mission, assuming the dominant wavefront perturbations are due to changes in the spacecraft attitude with respect to the sun. Using this model, we show that the wavefront can be controlled passively by introducing scheduling constraints that limit the allowable attitudes for an observation based on the observation duration and the mean telescope temperature. We also describe the implementation of a predictive controller designed to prevent the wavefront error (WFE) from exceeding a desired threshold. This controller outperforms simpler algorithms even with substantial model error, achieving a lower WFE without requiring significantly more corrections. Consequently, predictive wavefront control based on known spacecraft attitude plans is a promising approach for JWST and other future active space observatories.
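
    A hedged sketch of the scheduling idea: a first-order thermal model predicts the telescope temperature drift for a candidate attitude, and an observation is allowed only if the implied wavefront error stays under a threshold. The model form, the linear WFE-temperature sensitivity, and all names are assumptions, not the authors' simulation.

```python
import numpy as np

def temperature_step(T, T_eq, dt, tau):
    """First-order relaxation toward the attitude-dependent equilibrium T_eq."""
    return T_eq + (T - T_eq) * np.exp(-dt / tau)

def observation_allowed(T, T_eq, dt, tau, sens, wfe_max, T_ref):
    """Passive control: permit the observation only if the predicted
    wavefront error (linear in temperature offset) stays bounded."""
    T_end = temperature_step(T, T_eq, dt, tau)
    return sens * abs(T_end - T_ref) <= wfe_max
```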

  12. Modeling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  14. Comparing model predictions for ecosystem-based management

    DEFF Research Database (Denmark)

    Jacobsen, Nis Sand; Essington, Timothy E.; Andersen, Ken Haste

    2016-01-01

    Ecosystem modeling is becoming an integral part of fisheries management, but there is a need to identify differences between predictions derived from models employed for scientific and management purposes. Here, we compared two models: a biomass-based food-web model (Ecopath with Ecosim (EwE)) and a size-structured fish community model. The models were compared with respect to predicted ecological consequences of fishing, to identify commonalities and differences in model predictions for the California Current fish community. We compared the models regarding direct and indirect responses to fishing on one or more species. The size-based model predicted a higher fishing mortality needed to reach maximum sustainable yield than EwE for most species. The size-based model also predicted stronger top-down effects of predator removals than EwE. In contrast, EwE predicted stronger bottom-up effects ...

  15. Doppler assessment of hepatic venous waves for predicting large varices in cirrhotic patients

    Directory of Open Access Journals (Sweden)

    Thomas Joseph

    2011-01-01

    Full Text Available Background/Aim: Color Doppler examination of changes in hepatic venous waveforms is being evaluated as a means of predicting the severity of portal hypertension and the presence of esophageal varices. The normal hepatic venous waveform shows a triphasic pattern. In cirrhosis, this pattern changes to a biphasic or monophasic pattern. We aimed to study the sensitivity of loss of the normal hepatic venous waveform in predicting large varices in a cross-sectional analysis. Materials and Methods: All patients, admitted or attending the outpatient department, with a diagnosis of cirrhosis were included in the study. All patients underwent oesophagogastroduodenoscopy and Color Doppler examination, and waveform patterns in the hepatic vein were recorded. The sensitivity and specificity of changes in waveform in detecting large varices were studied. Results: A total of 51 cases were examined. Triphasic waves were seen in 4 (7.8%) cases, biphasic in 26 (51%) cases, and monophasic in 21 (41.2%) cases. Small varices were seen in 30 (58.8%) cases and large varices in 21 (41.2%) cases. The sensitivity of loss of the triphasic wave pattern in detecting significant varices (Grade 3 or 4) was very high (95.23%), and the negative predictive value was also high (75%). Severity of liver disease, as indicated by Child-Pugh and MELD scores, did not correlate with changes in hepatic venous waveforms. Conclusion: Loss of the triphasic hepatic venous waveform is highly sensitive in predicting significant varices in patients with cirrhosis.
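
    The reported figures follow from the standard 2x2-table definitions; a small helper (illustrative only) makes the arithmetic explicit:

```python
def sensitivity_and_npv(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); negative predictive value = TN/(TN+FN)."""
    return tp / (tp + fn), tn / (tn + fn)

# e.g., 20 of the 21 large-varix cases showing waveform loss gives
# sensitivity 20/21 ~ 95.2%, matching the abstract's 95.23%.
```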

  16. Prediction of process induced shape distortions and residual stresses in large fibre reinforced composite laminates

    DEFF Research Database (Denmark)

    Nielsen, Michael Wenani

    The present thesis is devoted to numerical modelling of thermomechanical phenomena occurring during curing in the manufacture of large fibre reinforced polymer matrix composites with thick laminate sections using vacuum assisted resin transfer moulding (VARTM). The main application of interest...

  17. Wind Turbine Large-Eddy Simulations on Very Coarse Grid Resolutions using an Actuator Line Model

    CERN Document Server

    Tossas, Luis A Martínez; Meneveau, Charles

    2016-01-01

    In this work the accuracy of the Actuator Line Model (ALM) in Large Eddy Simulations of wind turbine flow is studied under the specific conditions of very coarse spatial resolutions. For finely-resolved conditions, it is known that ALM provides better accuracy compared to the standard Actuator Disk Model (ADM) without rotation. However, we show here that on very coarse resolutions, flow induction occurring at rotor scales can affect the predicted inflow angle and can adversely affect the ALM predictions. We first provide an illustration of coarse LES to reproduce wind tunnel measurements. The resulting flow predictions are good, but the challenges in predicting power outputs from the detailed ALM motivate more detailed analysis on a case with uniform inflow. We present a theoretical framework to compare the filtered quantities that enter the Large-Eddy Simulation equations as body forces with a scaling relation between the filtered and unfiltered quantities. The study aims to apply the theoretical derivation ...

  18. Prediction of Fecal Nitrogen and Fecal Phosphorus Content for Lactating Dairy Cows in Large-scale Dairy Farms

    Directory of Open Access Journals (Sweden)

    QU Qing-bo

    2017-05-01

    Full Text Available To facilitate efficient and sustainable manure management and reduce potential pollution, it's necessary for precise prediction of fecal nutrient content. The aim of this study is to build prediction models of fecal nitrogen and phosphorus content by the factors of dietary nutrient composition, days in milk, milk yield and body weight of Chinese Holstein lactating dairy cows. 20 kinds of dietary nutrient composition and 60 feces samples were collected from lactating dairy cows from 7 large-scale dairy farms in Tianjin City; The fecal nitrogen and phosphorus content were analyzed. The whole data set was divided into training data set and testing data set. The training data set, including 14 kinds of dietary nutrient composition and 48 feces samples, was used to develop prediction models. The relationship between fecal nitrogen or phosphorus content and dietary nutrient composition was illustrated by means of correlation and regression analysis using SAS software. The results showed that fecal nitrogen(FN content was highly positively correlated with organic matter intake(OMI and crude fat intake(CFi, and correlation coefficients were 0. 836 and 0. 705, respectively. Negative correlation coefficient was found between fecal phosphorus(FP content and body weight(BW, and the correlation coefficient was -0.525. Among different approaches to develop prediction models, the results indicated that determination coefficients of multiple linear regression equations were higher than those of simple linear regression equations. Specially, fecal nitrogen content was excellently predicted by milk yield(MY, days in milk(DIM, organic matter intake(OMI and nitrogen intake(NI, and the model was as follows:y=0.43+0.29×MY+0.02×DIM+0.92×OMI-13.01×NI (R2=0.96. Accordingly, the highest determination coefficient of prediction equation of FP content was 0.62, when body weight(BW, phosphorus intake(PI and nitrogen intake(NI were combined as predictors. The prediction

  19. Large-scale structural and textual similarity-based mining of knowledge graph to predict drug-drug interactions

    KAUST Repository

    Abdelaziz, Ibrahim

    2017-06-12

    Drug-Drug Interactions (DDIs) are a major cause of preventable Adverse Drug Reactions (ADRs), causing a significant burden on the patients’ health and the healthcare system. It is widely known that clinical studies cannot sufficiently and accurately identify DDIs for new drugs before they are made available on the market. In addition, existing public and proprietary sources of DDI information are known to be incomplete and/or inaccurate and so not reliable. As a result, there is an emerging body of research on in-silico prediction of drug-drug interactions. In this paper, we present Tiresias, a large-scale similarity-based framework that predicts DDIs through link prediction. Tiresias takes in various sources of drug-related data and knowledge as inputs, and provides DDI predictions as outputs. The process starts with semantic integration of the input data that results in a knowledge graph describing drug attributes and relationships with various related entities such as enzymes, chemical structures, and pathways. The knowledge graph is then used to compute several similarity measures between all the drugs in a scalable and distributed framework. In particular, Tiresias utilizes two classes of features in a knowledge graph: local and global features. Local features are derived from the information directly associated to each drug (i.e., one hop away) while global features are learnt by minimizing a global loss function that considers the complete structure of the knowledge graph. The resulting similarity metrics are used to build features for a large-scale logistic regression model to predict potential DDIs. We highlight the novelty of our proposed Tiresias and perform thorough evaluation of the quality of the predictions. The results show the effectiveness of Tiresias in both predicting new interactions among existing drugs as well as newly developed drugs.
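
    A minimal sketch of the similarity-plus-logistic-regression pattern the abstract describes: a local feature such as the Jaccard overlap of two drugs' knowledge-graph neighbors feeds a logistic scorer. The feature choice and weights here are illustrative assumptions, not Tiresias' actual feature set.

```python
import numpy as np

def jaccard(neigh_a, neigh_b):
    """Local similarity: overlap of two drugs' neighbor sets in the graph."""
    a, b = set(neigh_a), set(neigh_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def ddi_score(sims, weights, bias):
    """Logistic link-prediction score from a vector of drug-drug similarity
    features (enzymes, chemical structures, pathways, ...)."""
    z = float(np.dot(weights, sims)) + bias
    return 1.0 / (1.0 + np.exp(-z))
```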

  20. Remaining Useful Lifetime (RUL) - Probabilistic Predictive Model

    Directory of Open Access Journals (Sweden)

    Ephraim Suhir

    2011-01-01

    Full Text Available Reliability evaluations and assurances cannot be delayed until the device (system) is fabricated and put into operation. Reliability of an electronic product should be conceived at the early stages of its design; implemented during manufacturing; evaluated (considering customer requirements and the existing specifications) by electrical, optical and mechanical measurements and testing; checked (screened) during manufacturing (fabrication); and, if necessary and appropriate, maintained in the field during the product's operation. A simple and physically meaningful probabilistic predictive model is suggested for the evaluation of the remaining useful lifetime (RUL) of an electronic device (system) after an appreciable deviation from its normal operation conditions has been detected, and the increase in the failure rate and the change in the configuration of the wear-out portion of the bathtub curve have been assessed. The general concepts are illustrated by numerical examples. The model can be employed, along with other PHM forecasting and interfering tools and means, to evaluate and to maintain a high level of reliability (probability of non-failure) of a device (system) at the operation stage of its lifetime.
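
    Under the simplest reading of this idea, with a constant elevated failure rate λ after the deviation is detected, the RUL is the time for which the probability of non-failure stays above a required level (a minimal sketch; the paper's bathtub-based model is richer):

```python
import math

def rul(failure_rate, p_required):
    """Largest t with exp(-failure_rate * t) >= p_required, i.e. the
    remaining useful lifetime at the required reliability level."""
    return -math.log(p_required) / failure_rate

# e.g., rul(1e-4, 0.99) -> ~100.5 hours at lambda = 1e-4 per hour (hypothetical)
```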

  1. Predictive modeling for EBPC in EBDW

    Science.gov (United States)

    Zimmermann, Rainer; Schulz, Martin; Hoppe, Wolfgang; Stock, Hans-Jürgen; Demmerle, Wolfgang; Zepka, Alex; Isoyan, Artak; Bomholt, Lars; Manakli, Serdar; Pain, Laurent

    2009-10-01

    We demonstrate a flow for e-beam proximity correction (EBPC) to e-beam direct write (EBDW) wafer manufacturing processes, demonstrating a solution that covers all steps from the generation of a test pattern for (experimental or virtual) measurement data creation, over e-beam model fitting, proximity effect correction (PEC), and verification of the results. We base our approach on a predictive, physical e-beam simulation tool, with the possibility to complement this with experimental data, and the goal of preparing the EBPC methods for the advent of high-volume EBDW tools. As an example, we apply and compare dose correction and geometric correction for low and high electron energies on 1D and 2D test patterns. In particular, we show some results of model-based geometric correction as it is typical for the optical case, but enhanced for the particularities of e-beam technology. The results are used to discuss PEC strategies, with respect to short and long range effects.

  2. Comparison of mixed layer models predictions with experimental data

    Energy Technology Data Exchange (ETDEWEB)

    Faggian, P.; Riva, G.M. [CISE Spa, Divisione Ambiente, Segrate (Italy); Brusasca, G. [ENEL Spa, CRAM, Milano (Italy)

    1997-10-01

    The temporal evolution of the PBL vertical structure for a North Italian rural site, situated within relatively large agricultural fields on almost flat terrain, was investigated during the period 22-28 June 1993 from both experimental and modelling points of view. In particular, the results for a sunny day (June 22) and a cloudy day (June 25) are presented in this paper. Three schemes to estimate mixing-layer depth have been compared, i.e. the Holzworth (1967), Carson (1973) and Gryning-Batchvarova (1990) models, which use standard meteorological observations. To estimate their degree of accuracy, model outputs were analyzed against radio-sounding meteorological profiles and atmospheric stability classification criteria. In addition, the mixed-layer depth predictions were compared with the values estimated by a simple box model, whose input requires hourly measurements of air concentrations and ground flux of 222Rn. (LN)

  3. A SUSY SO(10) model with large tan$\\beta$

    CERN Document Server

    Lazarides, G

    1994-01-01

    We construct a supersymmetric SO(10) model with the asymptotic relation tan\\beta \\simeq m_t/m_b automatically arising from its structure. The model retains the significant Minimal Supersymmetric Standard Model predictions for sin^2 \\theta_w and \\alpha_s and contains an automatic Z_2 matter parity. Proton decay through d=5 operators is sufficiently suppressed. It is remarkable that no global symmetries need to be imposed on the model.

  4. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low-frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as: the implementation of ...

  5. Predicting macrobending loss for large-mode area photonic crystal fibres

    DEFF Research Database (Denmark)

    Nielsen, Martin D.; Mortensen, Niels Asger; Albertsen, Maja

    2004-01-01

    We report on an easy-to-evaluate expression for the prediction of the bend loss of a large-mode-area photonic crystal fiber (PCF) with a triangular air-hole lattice. The expression is based on a recently proposed formulation of the V-parameter for a PCF and contains no free parameters. The validity of the expression is verified experimentally for varying fiber parameters as well as bend radius. The typical deviation between the position of the measured and the predicted bend-loss edge is within measurement uncertainty.

  6. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    Small- and large-scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large-scale model, no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects are present, as the small-scale model underpredicts the overtopping discharge.

  7. QSAR prediction of estrogen activity for a large set of diverse chemicals under the guidance of OECD principles.

    Science.gov (United States)

    Liu, Huanxiang; Papa, Ester; Gramatica, Paola

    2006-11-01

    A large number of environmental chemicals, known as endocrine-disrupting chemicals, are suspected of disrupting endocrine functions by mimicking or antagonizing natural hormones, and such chemicals may pose a serious threat to the health of humans and wildlife. They are thought to act through a variety of mechanisms, mainly estrogen-receptor-mediated mechanisms of toxicity. However, it is practically impossible to perform thorough toxicological tests on all potential xenoestrogens, and thus the quantitative structure-activity relationship (QSAR) provides a promising method for the estimation of a compound's estrogenic activity. Here, QSAR models of the estrogen receptor binding affinity of a large data set of heterogeneous chemicals have been built using theoretical molecular descriptors, giving full consideration, during model construction and assessment, to the new OECD principles for regulatory acceptability of QSARs. An unambiguous multiple linear regression (MLR) algorithm was used to build the models, and model predictive ability was validated by both internal and external validation. The applicability domain was checked by the leverage approach to verify prediction reliability. The results obtained using several validation paths indicate that the proposed QSAR model is robust and satisfactory, and can provide a feasible and practical tool for the rapid screening of the estrogen activity of organic compounds.
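
    Both pieces mentioned here, the MLR fit and the leverage-based applicability domain, are short computations. A sketch (the 3p/n warning threshold is the common convention; descriptor matrices are assumed to include a constant column):

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least squares for the QSAR model y = X b."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

def leverages(X):
    """Diagonal of the hat matrix H = X (X'X)^-1 X'. Compounds with
    leverage above h* = 3p/n fall outside the applicability domain."""
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    return np.diag(H)
```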

  8. Mixing height computation from a numerical weather prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Jericevic, A. [Croatian Meteorological and Hydrological Service, Zagreb (Croatia); Grisogono, B. [Univ. of Zagreb, Zagreb (Croatia). Andrija Mohorovicic Geophysical Inst., Faculty of Science

    2004-07-01

    Dispersion models require hourly values of the mixing height, H, that indicates the existence of turbulent mixing. The aim of this study was to investigate the model's ability and characteristics in the prediction of H. The ALADIN limited-area numerical weather prediction (NWP) model for short-range 48-hour forecasts was used. The bulk Richardson number (R_iB) method was applied to determine the height of the atmospheric boundary layer at one grid point nearest to Zagreb, Croatia. This specific location was selected because radio soundings were available there, so the model could be verified. A critical value of the bulk Richardson number, R_iBc = 0.3, was used. The values of H, modelled and measured, for 219 days at 12 UTC are compared, and a correlation coefficient of 0.62 is obtained. This indicates that ALADIN can be used for the calculation of H in the convective boundary layer. For the stable boundary layer (SBL), the model underestimated H systematically. Results showed that R_iBc evidently increases with the increase of stability. Decoupling from the surface in the very SBL was detected, which is a consequence of the flow easing, resulting in R_iB becoming very large. Verification of the practical usage of the R_iB method for H calculations from an NWP model was performed. The necessity of including other stability parameters (e.g., surface roughness length) became evident. Since the ALADIN model is in operational use in many European countries, this study should help others in pre-processing NWP data for input to dispersion models. (orig.)
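
    A rough illustration of the bulk Richardson number method, assuming the standard textbook definition Ri_B(z) = g·z·(θv(z) − θv(s)) / (θv(s)·(u² + v²)) and the critical value 0.3 quoted above; the sounding values are invented.

```python
# Sketch of the bulk Richardson number method for the mixing height H:
# scan a sounding upward and take the first level where Ri_B exceeds the
# critical value. Profile values below are synthetic, not ALADIN output.
import numpy as np

g = 9.81
z       = np.array([10., 100., 300., 600., 1000., 1500., 2000.])   # m
theta_v = np.array([295., 295.5, 296., 296.2, 297.5, 300., 303.])  # K
u       = np.array([2., 4., 6., 7., 8., 9., 10.])                  # m/s
v       = np.zeros_like(u)

def bulk_richardson(z, th, u, v):
    """Ri_B(z) referenced to the lowest level of the sounding."""
    return g * z * (th - th[0]) / (th[0] * (u**2 + v**2 + 1e-9))

ri = bulk_richardson(z, theta_v, u, v)
ri_crit = 0.3
above = np.where(ri > ri_crit)[0]
H = z[above[0]] if above.size else z[-1]
print("mixing height H ≈", H, "m")
```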

  9. RFI modeling and prediction approach for SATOP applications: RFI prediction models

    Science.gov (United States)

    Nguyen, Tien M.; Tran, Hien T.; Wang, Zhonghai; Coons, Amanda; Nguyen, Charles C.; Lane, Steven A.; Pham, Khanh D.; Chen, Genshe; Wang, Gang

    2016-05-01

    This paper describes a technical approach for the development of RFI prediction models using the carrier synchronization loop when calculating Bit or Carrier SNR degradation due to interference, for (i) detecting narrow-band and wideband RFI signals, and (ii) estimating and predicting the behavior of the RFI signals. The paper presents analytical and simulation models and provides both analytical and simulation results on the performance of USB (Unified S-Band) waveforms in the presence of narrow-band and wideband RFI signals. The models presented in this paper will allow the future USB command systems to detect the RFI presence, estimate the RFI characteristics and predict the RFI behavior in real-time for accurate assessment of the impacts of RFI on the command Bit Error Rate (BER) performance. The command BER degradation model presented in this paper also allows the ground system operator to estimate the optimum transmitted SNR to maintain a required command BER level in the presence of both friendly and unfriendly RFI sources.
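
    As a hedged illustration of SNR degradation by RFI (a textbook approximation, not the paper's carrier-loop model), one can treat a narrow-band interferer as extra in-band noise and compare BPSK bit error rates:

```python
# Treat interference power I as additional noise: S/(N+I) = (S/N)/(1 + I/N).
# BPSK BER then follows from the usual erfc formula. Numbers are invented.
import math

def ber_bpsk(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def degraded_snr_db(snr_db, inr_db):
    """Effective SNR when an interferer with I/N = INR adds to the noise."""
    snr, inr = 10 ** (snr_db / 10), 10 ** (inr_db / 10)
    return 10 * math.log10(snr / (1 + inr))

clean, jammed = 9.6, degraded_snr_db(9.6, 3.0)
print(f"BER clean  : {ber_bpsk(clean):.2e}")
print(f"BER w/ RFI : {ber_bpsk(jammed):.2e}  (SNR degraded to {jammed:.1f} dB)")
```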

  10. Using dynamical uncertainty models estimating uncertainty bounds on power plant performance prediction

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Mataji, B.

    2007-01-01

    Predicting the performance of large scale plants can be difficult due to model uncertainties etc., meaning that one can be almost certain that the prediction will diverge from the plant performance with time. In this paper output multiplicative uncertainty models are used as dynamical models of the prediction error. These proposed dynamical uncertainty models result in an upper and lower bound on the predicted performance of the plant. The dynamical uncertainty models are used to estimate the uncertainty of the predicted performance of a coal-fired power plant. The proposed scheme, which uses dynamical uncertainty models, is applied to two different sets of measured plant data. The computed uncertainty bounds cover the measured plant output, while the nominal prediction is outside these uncertainty bounds for some samples in these examples.

  11. Modelling Large sound sources in a room acoustical calculation program

    DEFF Research Database (Denmark)

    Christensen, Claus Lynge

    1999-01-01

    A room acoustical model capable of modelling point, line and surface sources is presented. Line and surface sources are modelled using a special ray-tracing algorithm detecting the radiation pattern of the surfaces in the room. Point sources are modelled using a hybrid calculation method combining this ray-tracing method with image source modelling. With these three source types, it is possible to model large and complex sound sources in workrooms.

  13. Prediction models : the right tool for the right problem

    NARCIS (Netherlands)

    Kappen, Teus H.; Peelen, Linda M.

    2016-01-01

    PURPOSE OF REVIEW: Perioperative prediction models can help to improve personalized patient care by providing individual risk predictions to both patients and providers. However, the scientific literature on prediction model development and validation can be quite technical and challenging to understand.

  14. An approach to model validation and model-based prediction -- polyurethane foam case study.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Rutherford, Brian Milne

    2003-07-01

    Enhanced software methodology and improved computing hardware have advanced the state of simulation technology to a point where large physics-based codes can be a major contributor in many systems analyses. This shift toward the use of computational methods has brought with it new research challenges in a number of areas including characterization of uncertainty, model validation, and the analysis of computer output. It is these challenges that have motivated the work described in this report. Approaches to and methods for model validation and (model-based) prediction have been developed recently in the engineering, mathematics and statistical literatures. In this report we have provided a fairly detailed account of one approach to model validation and prediction applied to an analysis investigating thermal decomposition of polyurethane foam. A model simulates the evolution of the foam in a high temperature environment as it transforms from a solid to a gas phase. The available modeling and experimental results serve as data for a case study focusing our model validation and prediction developmental efforts on this specific thermal application. We discuss several elements of the "philosophy" behind the validation and prediction approach: (1) We view the validation process as an activity applying to the use of a specific computational model for a specific application. We do acknowledge, however, that an important part of the overall development of a computational simulation initiative is the feedback provided to model developers and analysts associated with the application. (2) We utilize information obtained for the calibration of model parameters to estimate the parameters and quantify uncertainty in the estimates. We rely, however, on validation data (or data from similar analyses) to measure the variability that contributes to the uncertainty in predictions for specific systems or units (unit-to-unit variability). (3) We perform statistical ...

  15. Predicting the Probability of Lightning Occurrence with Generalized Additive Models

    Science.gov (United States)

    Fabsic, Peter; Mayr, Georg; Simon, Thorsten; Zeileis, Achim

    2017-04-01

    This study investigates the predictability of lightning in complex terrain. The main objective is to estimate the probability of lightning occurrence in the Alpine region during summertime afternoons (12-18 UTC) at a spatial resolution of 64 × 64 km2. Lightning observations are obtained from the ALDIS lightning detection network. The probability of lightning occurrence is estimated using generalized additive models (GAM). GAMs provide a flexible modelling framework to estimate the relationship between covariates and the observations. The covariates, besides spatial and temporal effects, include numerous meteorological fields from the ECMWF ensemble system. The optimal model is chosen based on a forward selection procedure with out-of-sample mean squared error as a performance criterion. Our investigation shows that convective precipitation and mid-layer stability are the most influential meteorological predictors. Both exhibit intuitive, non-linear trends: higher values of convective precipitation indicate higher probability of lightning, and large values of the mid-layer stability measure imply low lightning potential. The performance of the model was evaluated against a climatology model containing both spatial and temporal effects. Taking the climatology model as a reference forecast, our model attains a Brier Skill Score of approximately 46%. The model's performance can be further enhanced by incorporating the information about lightning activity from the previous time step, which yields a Brier Skill Score of 48%. These scores show that the method is able to extract valuable information from the ensemble to produce reliable spatial forecasts of the lightning potential in the Alps.
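
    A sketch of the modelling idea with off-the-shelf parts, assuming synthetic data: additive spline (GAM-like) effects of two covariates in a logistic model, scored with a Brier skill score against a climatological base rate.

```python
# GAM-style additive spline effects via SplineTransformer + logistic
# regression, evaluated with a Brier skill score. Covariates stand in for
# convective precipitation and mid-layer stability; data are synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(5000, 2))         # two meteorological covariates
p_true = 1 / (1 + np.exp(-(4 * X[:, 0] - 3 * X[:, 1] - 1)))
y = rng.binomial(1, p_true)                   # lightning yes/no

model = make_pipeline(SplineTransformer(n_knots=6, degree=3),
                      LogisticRegression(max_iter=1000))
model.fit(X[:4000], y[:4000])
p_hat = model.predict_proba(X[4000:])[:, 1]

bs      = brier_score_loss(y[4000:], p_hat)
bs_clim = brier_score_loss(y[4000:], np.full(1000, y[:4000].mean()))
print("Brier skill score:", 1 - bs / bs_clim)
```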

  16. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model builders as Python functions. The software comes with Python framework classes providing control flow for spatio-temporal modelling, Monte Carlo simulation, and data assimilation (Ensemble Kalman Filter and Particle Filter). Models are built by combining the spatial operations in these framework classes. This approach enables modellers without specialist programming experience to construct large, rather complicated models, as many technical details of modelling (e.g., data storage, solving spatial operations, data assimilation algorithms) are taken care of by the PCRaster toolbox. Exploratory modelling is supported by routines for prompt, interactive visualisation of stochastic spatio-temporal data generated by the models. The high computational requirements for stochastic spatio-temporal modelling, and an increasing demand to run models over large areas at high resolution, e.g. in global hydrological modelling, require an optimal use of available, heterogeneous computing resources by the modelling framework. Current work in the context of the eWaterCycle project is on a parallel implementation of the modelling engine, capable of running on a high-performance computing infrastructure such as clusters and supercomputers. Model runs will be distributed over multiple compute nodes and multiple processors (GPUs and CPUs). Parallelization will be done by parallel execution of Monte Carlo realizations and subregions of the modelling domain. In our approach we use multiple levels of parallelism, improving scalability considerably. On the node level we will use OpenCL, the industry standard for low-level high performance computing kernels. To combine multiple nodes we will use ...
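
    For illustration only, here is the Monte-Carlo-over-a-dynamic-raster-model pattern that such frameworks automate, written in bare NumPy; this is not the PCRaster API, and the toy storage model and its parameters are invented.

```python
# Monte Carlo over a trivial dynamic raster model: each realization draws an
# uncertain loss coefficient and steps a linear-reservoir storage model over
# a small raster; the ensemble gives a per-cell mean and spread.
import numpy as np

rng = np.random.default_rng(42)
shape, n_real, n_steps = (50, 50), 100, 24
rain = 2.0                                        # mm per time step

ensemble = np.empty((n_real,) + shape)
for r in range(n_real):
    k = rng.normal(0.1, 0.02)                     # uncertain loss coefficient
    storage = np.zeros(shape)
    for t in range(n_steps):
        storage += rain                           # uniform forcing
        storage -= k * storage                    # linear-reservoir loss
    ensemble[r] = storage

print("cell-mean storage:", ensemble.mean(axis=0).mean())
print("cell-std  storage:", ensemble.std(axis=0).mean())
```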

  17. Predicting lower mantle heterogeneity from 4-D Earth models

    Science.gov (United States)

    Flament, Nicolas; Williams, Simon; Müller, Dietmar; Gurnis, Michael; Bower, Dan J.

    2016-04-01

    The Earth's lower mantle is characterized by two large low-shear-velocity provinces (LLSVPs), approximately 15,000 km in diameter and 500-1000 km high, located under Africa and the Pacific Ocean. The spatial stability and chemical nature of these LLSVPs are debated. Here, we compare the lower mantle structure predicted by forward global mantle flow models constrained by tectonic reconstructions (Bower et al., 2015) to an analysis of five global tomography models. In the dynamic models, spanning 230 million years, slabs subducting deep into the mantle deform an initially uniform basal layer containing 2% of the volume of the mantle. Basal density, convective vigour (Rayleigh number Ra), mantle viscosity, absolute plate motions, and relative plate motions are varied in a series of model cases. We use cluster analysis to classify a set of equally-spaced points (average separation ~0.45°) on the Earth's surface into two groups of points with similar variations in present-day temperature between 1000-2800 km depth, for each model case. Below ~2400 km depth, this procedure reveals a high-temperature cluster in which mantle temperature is significantly larger than ambient and a low-temperature cluster in which mantle temperature is lower than ambient. The spatial extent of the high-temperature cluster is in first-order agreement with the outlines of the African and Pacific LLSVPs revealed by a similar cluster analysis of five tomography models (Lekic et al., 2012). Model success is quantified by computing the accuracy and sensitivity of the predicted temperature clusters in predicting the low-velocity cluster obtained from tomography (Lekic et al., 2012). In these cases, the accuracy varies between 0.61-0.80, where a value of 0.5 represents the random case, and the sensitivity ranges between 0.18-0.83. The largest accuracies and sensitivities are obtained for models with Ra ≈ 5 × 10^7, no asthenosphere (or an asthenosphere restricted to the oceanic domain), and a ...
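
    The clustering step can be sketched generically, assuming synthetic depth profiles in place of the mantle-flow output: k-means splits surface points into two clusters, and accuracy and sensitivity are then computed against a reference classification.

```python
# Two-cluster classification of temperature-anomaly depth profiles with
# k-means, scored with accuracy/sensitivity. Profiles are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_pts, n_depths = 2000, 18                     # surface points x depth samples
profiles = rng.normal(size=(n_pts, n_depths))
hot = rng.random(n_pts) < 0.3                  # reference "LLSVP-like" points
profiles[hot, 12:] += 2.0                      # hotter in the deepest samples

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
# Align label 1 with the hotter cluster before scoring.
if profiles[labels == 1, 12:].mean() < profiles[labels == 0, 12:].mean():
    labels = 1 - labels

tp = np.sum((labels == 1) & hot);  fn = np.sum((labels == 0) & hot)
tn = np.sum((labels == 0) & ~hot); fp = np.sum((labels == 1) & ~hot)
print("accuracy   :", (tp + tn) / n_pts)
print("sensitivity:", tp / (tp + fn))
```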

  18. Electric vehicle charge planning using Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Poulsen, Niels K.; Madsen, Henrik

    2012-01-01

    Economic Model Predictive Control (MPC) is very well suited for controlling smart energy systems since electricity price and demand forecasts are easily integrated in the controller. Electric vehicles (EVs) are expected to play a large role in the future Smart Grid. They are expected to provide grid services, both for peak reduction and for ancillary services, by absorbing short term variations in the electricity production. Electricity should be consumed as soon as it is produced to avoid the need for energy storage, as this is expensive, limited and introduces efficiency losses. In this paper the Economic MPC minimizes the cost of electricity consumption for a single EV. Simulations show savings of 50–60% of the electricity costs compared ... The Economic MPC for EVs described in this paper may contribute to facilitating the transition to a fossil free energy system.
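
    A minimal sketch of the economic objective, assuming invented price and battery numbers: a single linear program (standing in for the receding-horizon loop) picks the cheapest hours to charge subject to power and energy limits.

```python
# Economic-MPC-style charge plan for one EV over 24 hours: minimise
# sum(price * power) subject to a power limit and an energy target.
import numpy as np
from scipy.optimize import linprog

price = np.array([30, 28, 25, 22, 20, 21, 35, 45,      # price forecast
                  50, 48, 42, 38, 33, 30, 29, 27,
                  26, 31, 44, 52, 47, 40, 34, 31], dtype=float)
T, p_max, e_need = len(price), 7.0, 40.0            # kW limit, kWh target

res = linprog(c=price,                              # minimise cost
              A_ub=[[-1.0] * T], b_ub=[-e_need],    # total energy >= e_need
              bounds=[(0.0, p_max)] * T)
plan = res.x
print("charge in hours :", np.nonzero(plan > 1e-6)[0])
print("total cost index:", float(price @ plan))
```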

  19. Foundation Settlement Prediction Based on a Novel NGM Model

    Directory of Open Access Journals (Sweden)

    Peng-Yu Chen

    2014-01-01

    Prediction of foundation or subgrade settlement is very important during engineering construction. Given that many settlement-time sequences exhibit a non-homogeneous index trend, a novel grey forecasting model called the NGM(1,1,k,c) model is proposed in this paper. With an optimized whitenization differential equation, the proposed NGM(1,1,k,c) model has the property of white exponential law coincidence and can predict a pure non-homogeneous index sequence precisely. We used two case studies to verify the predictive effect of the NGM(1,1,k,c) model for settlement prediction. The results show that this model can achieve excellent prediction accuracy; thus, the model is quite suitable for simulation and prediction of approximately non-homogeneous index sequences and has excellent application value in settlement prediction.
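
    For orientation, the classic GM(1,1) grey model that NGM(1,1,k,c) generalises can be sketched as below; the settlement series is made up, and the code follows the standard GM(1,1) recipe, not the paper's optimized whitenization equation.

```python
# Classic GM(1,1): accumulate the series (1-AGO), fit the whitening equation
# dx1/dt + a*x1 = b by least squares, then predict and difference back.
import numpy as np

x0 = np.array([12.1, 13.4, 14.9, 16.8, 18.9, 21.4])   # observed settlements
x1 = np.cumsum(x0)                                    # 1-AGO series
z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values

# Least squares for a, b in x0(k) + a*z1(k) = b.
B = np.column_stack([-z1, np.ones(len(z1))])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

def gm11(k):                                          # predicted x1 at step k
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

pred_x1 = np.array([gm11(k) for k in range(len(x0) + 2)])
pred_x0 = np.diff(pred_x1, prepend=0.0)               # back to original scale
print("next two predictions:", pred_x0[-2:])
```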

  20. Analysis and Prediction of Rural Residents’ Living Consumption Growth in Sichuan Province Based on Markov Prediction and ARMA Model

    Institute of Scientific and Technical Information of China (English)

    LU Xiao-li

    2012-01-01

    I select 32 samples concerning per capita living consumption of rural residents in Sichuan Province during the period 1978-2009. First, using the Markov prediction method, the growth rate of the living consumption level in the future is predicted to range largely from 10% to 20%. Then, in order to improve the prediction accuracy, a time variable t is added into the traditional ARMA model for modeling and prediction. The prediction results show that the average relative error rate is 1.56%, and the absolute value of the relative error during the period 2006-2009 is less than 0.5%. Finally, I compare the prediction results during the period 2010-2012 by the Markov prediction method and the ARMA model, respectively, finding that the two are consistent in terms of the growth rate of living consumption and that the prediction results are reliable. The results show that under similar policies, rural residents' consumer demand in Sichuan Province will continue to grow in the short term, so it is necessary to further expand the consumer market.
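
    The "ARMA plus time trend" part can be sketched with statsmodels, assuming a synthetic consumption series in place of the Sichuan data:

```python
# Fit an ARMA(1,1) with a linear deterministic time trend and forecast
# three years ahead. The 32-point series below is synthetic.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
t = np.arange(32)
y = 200 + 18 * t + rng.normal(scale=12, size=32)    # trending consumption

model = ARIMA(y, order=(1, 0, 1), trend="t")        # ARMA(1,1) + linear trend
fit = model.fit()
print(fit.forecast(steps=3))                        # next three periods
```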

  1. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model builders as Python functions.

  3. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  4. Aero-acoustic modeling using large eddy simulation

    DEFF Research Database (Denmark)

    Shen, Wen Zhong; Sørensen, Jens Nørkær

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar...

  5. Predictability of the Indian Ocean Dipole in the coupled models

    Science.gov (United States)

    Liu, Huafeng; Tang, Youmin; Chen, Dake; Lian, Tao

    2017-03-01

    In this study, the Indian Ocean Dipole (IOD) predictability, measured by the Indian Dipole Mode Index (DMI), is comprehensively examined at the seasonal time scale, including its actual prediction skill and potential predictability, using the ENSEMBLES multiple model ensembles and the recently developed information-based theoretical framework of predictability. It was found that all model predictions have useful skill, normally defined as an anomaly correlation coefficient larger than 0.5, only at around 2-3 month leads. This is mainly because there are more false alarms in predictions as lead time increases. The DMI predictability has significant seasonal variation, and the predictions whose target seasons are boreal summer (JJA) and autumn (SON) are more reliable than those for other seasons. All of the models fail to predict the IOD onset before May and suffer from the winter (DJF) predictability barrier. The potential predictability study indicates that, with model development and initialization improvement, the prediction of IOD onset is likely to be improved, but the winter barrier cannot be overcome. The IOD predictability also has decadal variation, with high skill during the 1960s and the early 1990s, and low skill during the early 1970s and early 1980s, which is very consistent with the potential predictability. The main factors controlling the IOD predictability, including its seasonal and decadal variations, are also analyzed in this study.
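
    The skill measure quoted above is easy to reproduce; below is a small helper for the anomaly correlation coefficient (ACC) with the usual 0.5 "useful skill" cut-off, applied to placeholder series:

```python
# Anomaly correlation coefficient between predicted and observed indices,
# with the conventional "useful skill" threshold of 0.5. Data are synthetic.
import numpy as np

def acc(forecast, observed, climatology=0.0):
    """Anomaly correlation of two series about a climatological mean."""
    f, o = forecast - climatology, observed - climatology
    return (f * o).sum() / np.sqrt((f**2).sum() * (o**2).sum())

rng = np.random.default_rng(5)
obs = rng.normal(size=120)
fcst = 0.6 * obs + 0.8 * rng.normal(size=120)       # imperfect prediction
print("ACC:", acc(fcst, obs), "useful:", acc(fcst, obs) > 0.5)
```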

  6. Reynolds-stress model prediction of 3-D duct flows

    CERN Document Server

    Gerolymos, G A

    2014-01-01

    The paper examines the impact of different modelling choices in second-moment closures by assessing model performance in predicting 3-D duct flows. The test cases (developing flow in a square duct [Gessner F.B., Emery A.F.: ASME J. Fluids Eng. 103 (1981) 445-455], circular-to-rectangular transition duct [Davis D.O., Gessner F.B.: AIAA J. 30 (1992) 367-375], and S-duct with large separation [Wellborn S.R., Reichert B.A., Okiishi T.H.: J. Prop. Power 10 (1994) 668-675]) include progressively more complex strains. Comparison of experimental data with selected 7-equation models (6 Reynolds-stress-transport and 1 scale-determining equations), which differ in the closure of the velocity/pressure-gradient tensor Π_ij, suggests that rapid redistribution controls separation and secondary-flow prediction, whereas inclusion of pressure-diffusion modelling improves reattachment and relaxation behaviour.

  7. The pig as a large preclinical model for therapeutic human anti-cancer vaccine development

    DEFF Research Database (Denmark)

    Overgaard, Nana Haahr; Frøsig, Thomas Mørch; Welner, Simon

    2016-01-01

    Development of therapeutic cancer vaccines has largely been based on rodent models, and the majority failed to establish therapeutic responses in clinical trials. We therefore used pigs as a large animal model for human cancer vaccine development, due to the large similarity between the porcine and human immunome. We administered peptides derived from porcine IDO, a cancer antigen important in human disease, formulated in Th1-inducing adjuvants, to outbred pigs. By in silico prediction, 136 candidate IDO-derived peptides were identified, and peptide-SLA class I complex stability measurements revealed ...

  8. Leptogenesis in minimal predictive seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Björkeroth, Fredrik [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom); Anda, Francisco J. de [Departamento de Física, CUCEI, Universidad de Guadalajara,Guadalajara (Mexico); Varzielas, Ivo de Medeiros; King, Stephen F. [School of Physics and Astronomy, University of Southampton,Southampton, SO17 1BJ (United Kingdom)

    2015-10-15

    We estimate the Baryon Asymmetry of the Universe (BAU) arising from leptogenesis within a class of minimal predictive seesaw models involving two right-handed neutrinos and simple Yukawa structures with one texture zero. The two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses, with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n−2), respectively, where n is a positive integer. The neutrino Yukawa matrix is therefore characterised by two proportionality constants, with their relative phase providing a leptogenesis-PMNS link, enabling the lightest right-handed neutrino mass to be determined from neutrino data and the observed BAU. We discuss an SU(5) SUSY GUT example, where A_4 vacuum alignment provides the required Yukawa structures with n=3, while a Z_9 symmetry fixes the relative phase to be a ninth root of unity.

  9. QSPR Models for Octane Number Prediction

    Directory of Open Access Journals (Sweden)

    Jabir H. Al-Fahemi

    2014-01-01

    Quantitative structure-property relationship (QSPR) modelling is performed as a means to predict the octane number of hydrocarbons by correlating the property to parameters calculated from molecular structure; such parameters are molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between these parameters and the octane number of hydrocarbons; correlation coefficients were calculated using MS Excel. The results of the PCA explain the interrelationships between octane number and the different variables. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination (R2 = 0.932), statistical significance (F = 53.21), and standard error (s = 7.7). The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving RCV2 = 0.942 and s = 6.328.

  10. A Comparison Between Measured and Predicted Hydrodynamic Damping for a Jack-Up Rig Model

    DEFF Research Database (Denmark)

    Laursen, Thomas; Rohbock, Lars; Jensen, Jørgen Juncher

    1996-01-01

    ... methods. In the comparison between the model test results and the theoretical predictions, the hydrodynamic damping proves to be the most important uncertain parameter. It is shown that a relatively large hydrodynamic damping must be assumed in the theoretical calculations in order to predict the measured ...

  11. Large field excursions from a few site relaxion model

    Science.gov (United States)

    Fonseca, N.; de Lima, L.; Machado, C. S.; Matheus, R. D.

    2016-07-01

    Relaxion models are an interesting new avenue to explain the radiative stability of the Standard Model scalar sector. They require very large field excursions, which are difficult to generate in a consistent UV completion and to reconcile with the compact field space of the relaxion. We propose an N-site model which naturally generates the large decay constant needed to address these issues. Our model offers distinct advantages with respect to previous proposals: the construction involves non-Abelian fields, allowing for controlled high-energy behavior and more model building possibilities, both in particle physics and inflationary models, and also admits a continuum limit when the number of sites is large, which may be interpreted as a warped extra dimension.

  12. Modelling Morphological Response of Large Tidal Inlet Systems to Sea Level Rise

    NARCIS (Netherlands)

    Dissanayake, P.K.

    2011-01-01

    This dissertation qualitatively investigates the morphodynamic response of a large inlet system to IPCC-projected relative sea level rise (RSLR). The adopted numerical approach (Delft3D) used a highly schematised model domain analogous to the Ameland inlet in the Dutch Wadden Sea. Predicted inlet evolution ...

  13. Supersymmetry and large-N limit in a zero-dimensional two-matrix model

    Energy Technology Data Exchange (ETDEWEB)

    Alfaro, J.; Retamal, J.C.

    1989-05-25

    We study the zero-dimensional two-hermitean-matrix model, by using a new method to obtain the large-N limit of a quantum field theory. This method predicts a closed system of integral equations that gives the solution in a closed form.

  14. Exchange Rate Prediction using Neural – Genetic Model

    Directory of Open Access Journals (Sweden)

    MECHGOUG Raihane

    2012-10-01

    Neural networks have been successfully used for exchange rate forecasting. However, due to the large number of parameters to be estimated empirically, it is not a simple task to select the appropriate neural network architecture for an exchange rate forecasting problem. Researchers often overlook the effect of neural network parameters on the performance of neural network forecasting. The performance of a neural network is critically dependent on the learning algorithm, the network architecture and the choice of the control parameters. Even when a suitable setting of parameters (weights) can be found, the ability of the resulting network to generalize to data not seen during learning may be far from optimal. For these reasons it seems logical and attractive to apply genetic algorithms. Genetic algorithms may provide a useful tool for automating the design of neural networks. The empirical results on foreign exchange rate prediction indicate that the proposed hybrid model exhibits effectively improved accuracy when compared with some other time series forecasting models.
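
    A toy neural-genetic loop in the spirit described, assuming a synthetic exchange-rate series and a deliberately tiny genetic algorithm over two hyperparameters (hidden-layer size and learning rate); this is illustrative, not the paper's hybrid model.

```python
# Evolve MLP hyperparameters with a small genetic algorithm: rank by
# held-out MSE, keep the best genomes, and mutate them. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
r = 1.2 + np.cumsum(rng.normal(scale=0.01, size=300))   # synthetic rate
X = np.array([r[i:i + 5] for i in range(len(r) - 5)])   # 5 lagged inputs
y = r[5:]
X_tr, X_te, y_tr, y_te = X[:250], X[250:], y[:250], y[250:]

def fitness(genome):
    hidden, lr = genome
    net = MLPRegressor(hidden_layer_sizes=(int(hidden),),
                       learning_rate_init=lr, max_iter=500, random_state=0)
    net.fit(X_tr, y_tr)
    return -np.mean((net.predict(X_te) - y_te) ** 2)    # higher is better

pop = [(rng.integers(2, 30), 10 ** rng.uniform(-4, -1)) for _ in range(8)]
for gen in range(5):                                    # evolve 5 generations
    parents = sorted(pop, key=fitness, reverse=True)[:4]
    pop = parents + [(max(2, p[0] + rng.integers(-3, 4)),   # mutate size
                      p[1] * 10 ** rng.normal(0, 0.2))      # mutate lr
                     for p in parents]
print("best genome (hidden units, lr):", max(pop, key=fitness))
```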

  15. Predictability in models of the atmospheric circulation.

    NARCIS (Netherlands)

    Houtekamer, P.L.

    1992-01-01

    It will be clear from the above discussions that skill forecasts are still in their infancy. Operational skill predictions do not exist. One is still struggling to prove that skill predictions, at any range, have any quality at all. It is not clear what the statistics of the analysis error are. ...

  16. Large field inflation models from higher-dimensional gauge theories

    Science.gov (United States)

    Furuuchi, Kazuyuki; Koyama, Yoji

    2015-02-01

    Motivated by the recent detection of B-mode polarization of CMB by BICEP2 which is possibly of primordial origin, we study large field inflation models which can be obtained from higher-dimensional gauge theories. The constraints from CMB observations on the gauge theory parameters are given, and their naturalness are discussed. Among the models analyzed, Dante's Inferno model turns out to be the most preferred model in this framework.

  17. Large field inflation models from higher-dimensional gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Furuuchi, Kazuyuki [Manipal Centre for Natural Sciences, Manipal University, Manipal, Karnataka 576104 (India); Koyama, Yoji [Department of Physics, National Tsing-Hua University, Hsinchu 30013, Taiwan R.O.C. (China)

    2015-02-23

    Motivated by the recent detection of B-mode polarization of CMB by BICEP2 which is possibly of primordial origin, we study large field inflation models which can be obtained from higher-dimensional gauge theories. The constraints from CMB observations on the gauge theory parameters are given, and their naturalness are discussed. Among the models analyzed, Dante’s Inferno model turns out to be the most preferred model in this framework.

  18. Standardizing the performance evaluation of short-term wind prediction models

    DEFF Research Database (Denmark)

    Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.

    2005-01-01

    Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results ...

  19. Machine learning models in breast cancer survival prediction.

    Science.gov (United States)

    Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin

    2016-01-01

    Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis of breast cancer, survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is a combination of rules and different machine learning techniques. Machine learning models can help physicians to reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with 10-fold cross-validation were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN technique provided poor performance (accuracy 91%, sensitivity 91% and area under the ROC curve 78%). This study demonstrates that the Trees Random Forest (TRF) model, which is a rule-based classification model, was the best model, with the highest level of ...
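
    The winning setup can be approximated with standard tooling, assuming synthetic data shaped like the 900-patient cohort (make_classification stands in for the real records):

```python
# Random forest evaluated with 10-fold cross-validation on accuracy and
# ROC AUC, on a synthetic, imbalanced stand-in for the survival dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=900, n_features=8, weights=[0.89],
                           random_state=0)          # imbalanced like 803/97
rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("accuracy:", cross_val_score(rf, X, y, cv=10).mean())
print("ROC AUC :", cross_val_score(rf, X, y, cv=10, scoring="roc_auc").mean())
```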

  20. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    Science.gov (United States)

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches ...

  1. Allostasis: a model of predictive regulation.

    Science.gov (United States)

    Sterling, Peter

    2012-04-12

    The premise of the standard regulatory model, "homeostasis", is flawed: the goal of regulation is not to preserve constancy of the internal milieu. Rather, it is to continually adjust the milieu to promote survival and reproduction. Regulatory mechanisms need to be efficient, but homeostasis (error-correction by feedback) is inherently inefficient. Thus, although feedbacks are certainly ubiquitous, they could not possibly serve as the primary regulatory mechanism. A newer model, "allostasis", proposes that efficient regulation requires anticipating needs and preparing to satisfy them before they arise. The advantages: (i) errors are reduced in magnitude and frequency; (ii) response capacities of different components are matched, to prevent bottlenecks and reduce safety factors; (iii) resources are shared between systems to minimize reserve capacities; (iv) errors are remembered and used to reduce future errors. This regulatory strategy requires a dedicated organ, the brain. The brain tracks multitudinous variables and integrates their values with prior knowledge to predict needs and set priorities. The brain coordinates effectors to mobilize resources from modest bodily stores and enforces a system of flexible trade-offs: from each organ according to its ability, to each organ according to its need. The brain also helps regulate the internal milieu by governing anticipatory behavior. Thus, an animal conserves energy by moving to a warmer place before it cools, and it conserves salt and water by moving to a cooler one before it sweats. The behavioral strategy requires continuously updating a set of specific "shopping lists" that document the growing need for each key component (warmth, food, salt, water). These appetites funnel into a common pathway that employs a "stick" to drive the organism toward filling the need, plus a "carrot" to relax the organism when the need is satisfied. The stick corresponds broadly to the sense of anxiety, and the carrot broadly to ...

  2. REALIGNED MODEL PREDICTIVE CONTROL OF A PROPYLENE DISTILLATION COLUMN

    Directory of Open Access Journals (Sweden)

    A. I. Hinojosa

    In the process industry, advanced controllers usually aim at an economic objective, which usually requires closed-loop stability and constraint satisfaction. In this paper, the application of MPC in the optimization structure of an industrial propylene/propane (PP) splitter is tested with a controller based on a state-space model, which is suitable for heavily disturbed environments. The simulation platform is based on the integration of the commercial dynamic simulator Dynsim® and the rigorous steady-state optimizer ROMeo® with the real-time facilities of Matlab. The predictive controller is the Infinite Horizon Model Predictive Control (IHMPC), based on a state-space model that does not require the use of a state observer because the non-minimal state is built from past inputs and outputs. The controller considers the existence of zone control of the outputs and optimizing targets for the inputs. We verify that the controller is efficient in controlling the propylene distillation system in a disturbed scenario when compared with a conventional controller based on a state observer. The simulation results show good performance in terms of stability of the controller and rejection of large disturbances in the composition of the feed of the propylene distillation column.

  3. Improved survival prediction from lung function data in a large population sample

    DEFF Research Database (Denmark)

    Miller, M.R.; Pedersen, O.F.; Lange, P.

    2008-01-01

    ... mortality in the Copenhagen City Heart Study data. Cox regression models were derived for survival over 25 years in 13,900 subjects. Age on entry, sex, smoking status, body mass index, previous myocardial infarction and diabetes were putative predictors, together with FEV1 either as raw data or standardised. In univariate predictions of all-cause mortality, the HR for FEV1/ht² categories was 2-4 times higher than that for FEV1PP, and 3-10 times higher for airway-related lung disease mortality. We conclude that FEV1/ht² is superior to FEV1PP for predicting survival in a general population, and this method ...

  4. Required Collaborative Work in Online Courses: A Predictive Modeling Approach

    Science.gov (United States)

    Smith, Marlene A.; Kellogg, Deborah L.

    2015-01-01

    This article describes a predictive model that assesses whether a student will have greater perceived learning in group assignments or in individual work. The model produces correct classifications 87.5% of the time. The research is notable in that it is the first in the education literature to adopt a predictive modeling methodology using data…

  5. A prediction model for assessing residential radon concentration in Switzerland

    NARCIS (Netherlands)

    Hauri, D.D.; Huss, A.; Zimmermann, F.; Kuehni, C.E.; Roosli, M.

    2012-01-01

    Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the ...

  6. Multi-center MRI prediction models : Predicting sex and illness course in first episode psychosis patients

    NARCIS (Netherlands)

    Nieuwenhuis, Mireille; Schnack, Hugo G.; van Haren, Neeltje E.; Kahn, René S.; Lappin, Julia; Dazzan, Paola; Morgan, Craig; Reinders, Antje A.; Gutierrez-Tordesillas, Diana; Roiz-Santiañez, Roberto; Crespo-Facorro, Benedicto; Schaufelberger, Maristela S.; Rosa, Pedro G.; Zanetti, Marcus V.; Busatto, Geraldo F.; McGorry, Patrick D.; Velakoulis, Dennis; Pantelis, Christos; Wood, Stephen J.; Mourao-Miranda, Janaina

    2017-01-01

    Structural Magnetic Resonance Imaging (MRI) studies have attempted to use brain measures obtained at the first episode of psychosis to predict subsequent outcome, with inconsistent results. Thus, there is a real need to validate the utility of brain measures in the prediction of outcome using large ...

  7. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  8. Predicting and adapting to the agricultural impacts of large-scale drought (Invited)

    Science.gov (United States)

    Elliott, J. W.; Glotter, M.; Best, N.; Ruane, A. C.; Boote, K.; Hatfield, J.; Jones, J.; Rosenzweig, C.; Smith, L. A.; Foster, I.

    2013-12-01

    The impact of drought on agriculture is an important socioeconomic consequence of climate extremes. Drought affects millions of people globally each year, causing an average of $6-8 billion of damage annually in the U.S. alone. The 1988 U.S. drought is estimated to have cost $79 billion in 2013 dollars, behind only Hurricane Katrina as the most costly U.S. climate-related disaster in recent decades. The 2012 U.S. drought is expected to cost about $30 billion. Droughts and heat waves accounted for 12% of all billion-dollar disaster events in the U.S. from 1980-2011 but almost one quarter of total monetary damages. To make matters worse, the frequency and severity of large-scale droughts in important agricultural regions are expected to increase as temperatures rise and precipitation patterns shift, leading some researchers to suggest that extended drought will harm more people than any other climate-related impact, specifically in the area of food security. Improved understanding and forecasts of drought would have both immediate and long-term implications for the global economy and food security. We show that mechanistic agricultural models, applied in novel ways, can reproduce historical crop yield anomalies, especially in seasons for which drought is the overriding factor. With more accurate observations and forecasts for temperature and precipitation, the accuracy and lead times of drought impact predictions could be improved further. We provide evidence that changes in agricultural technologies and management have reduced system-level drought sensitivity in US maize production in recent decades, adaptations that could be applied elsewhere. This work suggests a new approach to modeling, monitoring, and forecasting drought impacts on agriculture. [Figure: simulated (dashed line), observed (solid line), and observed linear trend (dashed straight green line) of national average maize yield in tonnes per hectare from 1979-2012; the red dot indicates the USDA estimate for 2012.]

  9. Distributional Analysis for Model Predictive Deferrable Load Control

    OpenAIRE

    Chen, Niangjun; Gan, Lingwen; Low, Steven H.; Wierman, Adam

    2014-01-01

    Deferrable load control is essential for handling the uncertainties associated with the increasing penetration of renewable generation. Model predictive control has emerged as an effective approach for deferrable load control, and has received considerable attention. In particular, previous work has analyzed the average-case performance of model predictive deferrable load control. However, to this point, distributional analysis of model predictive deferrable load control has been elusive. In ...

  10. Large N Scalars: From Glueballs to Dynamical Higgs Models

    CERN Document Server

    Sannino, Francesco

    2015-01-01

    We construct effective Lagrangians, and corresponding counting schemes, valid to describe the dynamics of the lowest lying large N stable massive composite state emerging in strongly coupled theories. The large N counting rules can now be employed when computing quantum corrections via an effective Lagrangian description. The framework allows for systematic investigations of composite dynamics of non-Goldstone nature. Relevant examples are the lightest glueball states emerging in any Yang-Mills theory. We further apply the effective approach and associated counting scheme to composite models at the electroweak scale. To illustrate the formalism we consider the possibility that the Higgs emerges as: the lightest glueball of a new composite theory; the large N scalar meson in models of dynamical electroweak symmetry breaking; the large N pseudodilaton useful also for models of near-conformal dynamics. For each of these realisations we determine the leading N corrections to the electroweak precision parameters. ...

  11. Curved Displacement Transfer Functions for Geometric Nonlinear Large Deformation Structure Shape Predictions

    Science.gov (United States)

    Ko, William L.; Fleischer, Van Tran; Lung, Shun-Fat

    2017-01-01

    For shape predictions of structures under large geometrically nonlinear deformations, Curved Displacement Transfer Functions were formulated based on a curved displacement, traced by a material point from the undeformed position to the deformed position. The embedded beam (depth-wise cross section of a structure along a surface strain-sensing line) was discretized into multiple small domains, with domain junctures matching the strain-sensing stations. Thus, the surface strain distribution could be described with a piecewise linear or a piecewise nonlinear function. The discretization approach enabled piecewise integrations of the embedded-beam curvature equations to yield the Curved Displacement Transfer Functions, expressed in terms of embedded beam geometrical parameters and surface strains. By entering the surface strain data into the Displacement Transfer Functions, deflections along each embedded beam can be calculated at multiple points for mapping the overall structural deformed shapes. Finite-element linear and nonlinear analyses of a tapered cantilever tubular beam were performed to generate linear and nonlinear surface strains and the associated deflections to be used for validation. The shape prediction accuracies were then determined by comparing the theoretical deflections with the finite-element-generated deflections. The results show that the newly developed Curved Displacement Transfer Functions are very accurate for shape predictions of structures under large geometrically nonlinear deformations.
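
    A hedged numeric sketch of the underlying strain-to-deflection idea, using the classical small-deformation route (curvature from surface strain, integrated twice) rather than NASA's closed-form Curved Displacement Transfer Functions; the geometry and strains below are invented.

```python
# Surface strain on a bending beam gives curvature kappa(x) = eps(x)/c;
# two cumulative integrations of curvature give slope and deflection
# (clamped root: w(0) = w'(0) = 0). Classical small-strain approximation.
import numpy as np
from scipy.integrate import cumulative_trapezoid

L, c = 1.0, 0.02                         # beam length (m), half-depth (m)
x = np.linspace(0.0, L, 41)              # strain-sensing stations
eps = 1e-3 * (1 - x / L)                 # measured surface strain (synthetic)

kappa = eps / c                          # curvature from surface strain
slope = cumulative_trapezoid(kappa, x, initial=0.0)   # w'(x)
w = cumulative_trapezoid(slope, x, initial=0.0)       # w(x)
print(f"tip deflection ≈ {w[-1] * 1000:.2f} mm")
```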

  12. Prediction for Major Adverse Outcomes in Cardiac Surgery: Comparison of Three Prediction Models

    Directory of Open Access Journals (Sweden)

    Cheng-Hung Hsieh

    2007-09-01

    Conclusion: The Parsonnet score performed as well as the logistic regression models in predicting major adverse outcomes. The Parsonnet score appears to be a very suitable model for clinicians to use in risk stratification of cardiac surgery.

  13. Effects of large volcanic eruptions on Eurasian climate and societies: unravelling past evidence to predict future impacts

    Science.gov (United States)

    Churakova Sidorova, Olga; Guillet, Sébastien; Corona, Christophe; Khodri, Myriam; Vaganov, Eugene; Siegwolf, Rolf; Bryukhanova, Marina; Naumova, Oksana; Kirdyanov, Aleksander; Myglan, Vladimir; Sviderskaya, Irina; Pyzhev, Anton; Grachev, Alexei; Saurer, Matthias; Beniston, Martin; Stoffel, Markus

    2016-04-01

    Substantial evidence exists for sulphur deposition in ice cores of Greenland and Antarctica after major volcanic eruptions, but their impacts have not been documented in sufficient detail so far. This is true for temperature, for which eruption-induced cooling has been vividly debated in recent years, but even more so for precipitation. In the Era.Net RUS Plus project ELVECS, we are currently quantifying the climate disturbance induced by major Common Era eruptions, the persistence of changes and their impact on short- to mid-term temperature and precipitation anomalies by using an unprecedented dataset of tree-ring records across Eurasia and a large body of recently unearthed historical archives. We will compile a comprehensive database of tree-ring proxies and historical archives; quantify temperature and precipitation impacts of large eruptions; simulate on a case-by-case basis volcanic microphysical processes and radiative forcing induced by the eruptions, as well as evaluate results against tree-ring records; quantify impacts of large volcanic eruptions on atmospheric and oceanic circulations and feedbacks; and assess impacts of possible future eruptions. The new and diversified proxy data sources and more sophisticated modelling are expected to reduce discrepancies and uncertainties related to climatic responses to some of the largest eruptions. We expect to capture the persistence of anomalies correctly with climate models, even more so if they are evaluated against highly resolved proxy data of past events. This will increase our confidence in the overall reliability of climate models and help to correctly capture, and therefore predict, the cooling and precipitation anomalies of possible future, large eruptions. These predictions of climatic anomalies will then be used to quantify their likely impacts on the economy and society, including food security, migration and air traffic. Acknowledgements: Era.Net RUS Plus ELVECS project № 122

  14. Elastodynamic modeling and joint reaction prediction for 3-PRS PKM

    Institute of Scientific and Technical Information of China (English)

    张俊; 赵艳芹

    2015-01-01

    To gain a thorough understanding of the load state of parallel kinematic machines (PKMs), a methodology of elastodynamic modeling and joint reaction prediction is proposed. For this purpose, a Sprint Z3 model is used as a case study to illustrate the process of joint reaction analysis. The substructure synthesis method is applied to deriving an analytical elastodynamic model for the 3-PRS PKM device, in which the compliances of limbs and joints are considered. Each limb assembly is modeled as a spatial beam with non-uniform cross-section supported by lumped virtual springs at the centers of the revolute and spherical joints. By introducing deformation compatibility conditions between the limbs and the platform, the governing equations of motion of the system are obtained. After degenerating the governing equations into quasi-static equations, the effects of gravity on system deflections and joint reactions are investigated, with the purpose of providing useful information for kinematic calibration and component strength calculations, as well as structural optimization of the 3-PRS PKM module. The simulation results indicate that the gravity-induced elastic deformation of the moving platform in the direction of gravity is quite large and cannot be ignored. Meanwhile, the distributions of joint reactions are axisymmetric and position-dependent. It is worth noting that the proposed elastodynamic modeling method combines the benefits of the accuracy of the finite element method and the concision of the analytical method, so that it can be used to predict the stiffness characteristics and joint reactions of a PKM throughout its entire workspace in a quick and accurate manner. Moreover, the present model can also be easily applied to evaluating the overall rigidity performance as well as the statics of other PKMs with high efficiency after minor modifications.

  15. Comprehensive model for predicting elemental composition of coal pyrolysis products

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Andrew P. [Brigham Young Univ., Provo, UT (United States); Shutt, Tim [Brigham Young Univ., Provo, UT (United States); Fletcher, Thomas H. [Brigham Young Univ., Provo, UT (United States)

    2017-04-23

    Large-scale coal combustion simulations depend highly on the accuracy and utility of the physical submodels used to describe the various physical behaviors of the system. Coal combustion simulations depend on the particle physics to predict product compositions, temperatures, energy outputs, and other useful information. The focus of this paper is to improve the accuracy of devolatilization submodels, to be used in conjunction with other particle physics models. Many large simulations today rely on inaccurate assumptions about particle compositions, including that the volatiles that are released during pyrolysis are of the same elemental composition as the char particle. Another common assumption is that the char particle can be approximated by pure carbon. These assumptions will lead to inaccuracies in the overall simulation. There are many factors that influence pyrolysis product composition, including parent coal composition, pyrolysis conditions (including particle temperature history and heating rate), and others. All of these factors are incorporated into the correlations to predict the elemental composition of the major pyrolysis products, including coal tar, char, and light gases.

  16. On hydrological model complexity, its geometrical interpretations and prediction uncertainty

    NARCIS (Netherlands)

    Arkesteijn, E.C.M.M.; Pande, S.

    2013-01-01

    Knowledge of hydrological model complexity can aid selection of an optimal prediction model out of a set of available models. Optimal model selection is formalized as selection of the least complex model out of a subset of models that have lower empirical risk. This may be considered equivalent to

  17. Severity Prediction of Drought in A Large Geographical Area Using Distributed Wireless Sensor Networks

    CERN Document Server

    Dappin, Satish G; Nair, G Nithya; Nair, T R Gopalakrishnan

    2010-01-01

    In this paper, the severity prediction of drought through the implementation of modern sensor networks is discussed. We describe how to design a drought prediction system using wireless sensor networks. The paper describes a terrestrial interconnected wireless sensor network paradigm for predicting the severity of drought over a vast area of 10,000 sq km. The communication architecture for the sensor network is outlined and the protocols developed for each layer are explored. The data integration model and sensor data analysis at the central computer are explained. The advantages and limitations of the applicable wireless standards are discussed and analyzed for their relevance. Finally, a conclusion is presented along with open research issues.

  18. Towards a self-consistent halo model for the nonlinear large-scale structure

    CERN Document Server

    Schmidt, Fabian

    2015-01-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: $(i)$ they do not enforce the stress-energy conservation of matter; $(ii)$ they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model ("EHM") that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed, and results of perturbation theory and the effective field theory can in principle be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written he...

  19. Probabilistic Modeling and Visualization for Bankruptcy Prediction

    DEFF Research Database (Denmark)

    Antunes, Francisco; Ribeiro, Bernardete; Pereira, Francisco Camara

    2017-01-01

    In accounting and finance domains, bankruptcy prediction is of great utility for all of the economic stakeholders. The challenge of accurately assessing business failure prediction, especially under scenarios of financial crisis, is known to be complicated. Although there have been many successful studies on bankruptcy detection, probabilistic approaches have seldom been carried out. In this paper we assume a probabilistic point-of-view by applying Gaussian Processes (GP) in the context of bankruptcy prediction, comparing it against the Support Vector Machines (SVM) and the Logistic Regression (LR). Using real-world bankruptcy data, an in-depth analysis is conducted showing that, in addition to a probabilistic interpretation, the GP can effectively improve the bankruptcy prediction performance with high accuracy when compared to the other approaches. We additionally generate a complete graphical...

  20. Predictive modeling of dental pain using neural network.

    Science.gov (United States)

    Kim, Eun Yeob; Lim, Kun Ok; Rhee, Hyun Sill

    2009-01-01

    The mouth, used for ingesting food, is one of the most basic and important parts of the body. In this study, dental pain was predicted with a neural network model. The fitness of the resulting predictive model of dental pain factors was 80.0%. For people predicted by the neural network model to be likely to experience dental pain, preventive measures including proper eating habits, education on oral hygiene, and stress release should precede any dental treatment.

  1. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs;

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve...

  3. Predictive modeling of reactive wetting and metal joining.

    Energy Technology Data Exchange (ETDEWEB)

    van Swol, Frank B.

    2013-09-01

    The performance, reproducibility and reliability of metal joints are complex functions of the detailed history of physical processes involved in their creation. Prediction and control of these processes constitute an intrinsically challenging multi-physics problem involving heating and melting a metal alloy and reactive wetting. Understanding this process requires coupling strong molecular-scale chemistry at the interface with microscopic (diffusion) and macroscopic mass transport (flow) inside the liquid, followed by subsequent cooling and solidification of the new metal mixture. The final joint displays compositional heterogeneity and its resulting microstructure largely determines the success or failure of the entire component. At present there exists no computational tool at Sandia that can predict the formation and success of a braze joint, as current capabilities lack the ability to capture surface/interface reactions and their effect on interface properties. This situation precludes us from implementing a proactive strategy to deal with joining problems. Here, we describe what is needed to arrive at a predictive modeling and simulation capability for multicomponent metals with complicated phase diagrams for melting and solidification, incorporating dissolutive and composition-dependent wetting.

  4. Sediment Yield Modeling in a Large Scale Drainage Basin

    Science.gov (United States)

    Ali, K.; de Boer, D. H.

    2009-05-01

    This paper presents the findings of spatially distributed sediment yield modeling in the upper Indus River basin. Spatial erosion rates calculated by using the Thornes model at 1-kilometre spatial resolution and monthly time scale indicate that 87 % of the annual gross erosion takes place in the three summer months. The model predicts a total annual erosion rate of 868 million tons, which is approximately 4.5 times the long-term observed annual sediment yield of the basin. Sediment delivery ratios (SDR) are hypothesized to be a function of the travel time of surface runoff from catchment cells to the nearest downstream channel. Model results indicate that higher delivery ratios (SDR > 0.6) are found in 18 % of the basin area, mostly located in the high-relief sub-basins and in the areas around the Nanga Parbat Massif. The sediment delivery ratio is lower than 0.2 in 70 % of the basin area, predominantly in the low-relief sub-basins like the Shyok on the Tibetan Plateau. The predicted annual basin sediment yield is 244 million tons which compares reasonably to the measured value of 192.5 million tons. The average annual specific sediment yield in the basin is predicted as 1110 tons per square kilometre. Model evaluation based on accuracy statistics shows very good to satisfactory performance ratings for predicted monthly basin sediment yields and for mean annual sediment yields of 17 sub-basins. This modeling framework mainly requires global datasets, and hence can be used to predict erosion and sediment yield in other ungauged drainage basins.
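
    As an illustrative sketch of the travel-time hypothesis above, one common functional form (an assumption here, not necessarily the exact form used in the study) lets the delivery ratio of each cell decay exponentially with the travel time of surface runoff to the nearest channel:

    import numpy as np

    # Illustrative decay constant and cell values; not calibrated to the Indus basin.
    beta = 0.4                                               # per hour
    travel_time_h = np.array([0.2, 1.0, 3.0, 8.0])           # cell-to-channel travel times
    gross_erosion_t = np.array([120.0, 300.0, 250.0, 200.0]) # gross erosion per cell, tons

    sdr = np.exp(-beta * travel_time_h)          # sediment delivery ratio per cell
    sediment_yield = (sdr * gross_erosion_t).sum()
    print(sdr.round(2), f"basin yield = {sediment_yield:.0f} t")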

  5. Extreme value prediction of the wave-induced vertical bending moment in large container ships

    DEFF Research Database (Denmark)

    Andersen, Ingrid Marie Vincent; Jensen, Jørgen Juncher

    2015-01-01

    Focus in the present paper is on the influence of the hull girder flexibility on the extreme response amidships, namely the wave-induced vertical bending moment (VBM) in hogging, and the prediction of the extreme value of the same; the hull girder flexibility can increase the extreme hull girder response significantly. The analysis in the present paper is based on time series of full scale measurements from three large container ships of 8600, 9400 and 14000 TEU. When carrying out the extreme value estimation the peak-over-threshold (POT) method combined with an appropriate extreme value distribution is applied. The choice of a proper threshold level as well as the statistical correlation between clustered peaks influence the extreme value prediction and are taken into consideration in the present paper.
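
    A minimal sketch of the POT procedure the abstract describes, using synthetic stand-in data (the measured VBM series is not reproduced here); the threshold quantile and return period are illustrative assumptions:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    vbm = rng.gumbel(loc=1.0, scale=0.3, size=50_000)   # synthetic stand-in VBM series

    threshold = np.quantile(vbm, 0.98)                  # a common heuristic starting point
    exceedances = vbm[vbm > threshold] - threshold

    # Fit a Generalized Pareto Distribution to the exceedances (floc=0 pins
    # the lower bound of the excesses at zero, as is standard for POT).
    shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

    # Return level: the VBM exceeded on average once every n_return samples.
    rate = len(exceedances) / len(vbm)                  # exceedance rate per sample
    n_return = 100_000
    return_level = threshold + stats.genpareto.ppf(
        1 - 1 / (n_return * rate), shape, loc=0, scale=scale)
    print(f"estimated {n_return}-sample return level: {return_level:.3f}")

    In practice the threshold choice and the declustering of correlated peaks, the two issues the paper highlights, dominate the quality of the estimate.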

  6. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  7. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that the way forward for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
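
    A minimal sketch of the MC approach favored above for limited data, assuming illustrative (uncalibrated) transition probabilities between faulting condition states:

    import numpy as np

    # States: 0 = low faulting, 1 = moderate, 2 = severe. Values are assumptions.
    P = np.array([
        [0.85, 0.13, 0.02],   # low faulting mostly stays low
        [0.00, 0.80, 0.20],   # deterioration is one-directional (no self-repair)
        [0.00, 0.00, 1.00],   # severe is absorbing until maintenance intervenes
    ])

    state = np.array([1.0, 0.0, 0.0])   # a newly built jointed concrete pavement
    for year in range(1, 11):
        state = state @ P               # propagate the state distribution one year
        print(f"year {year:2d}: P(severe faulting) = {state[2]:.3f}")

    The attraction for sparse survey data is clear from the sketch: the entire model is one small matrix, but, as the paper notes, its states come from visual inspection rather than physical parameters.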

  8. Materials of large wind turbine blades: Recent results in testing and modeling

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon; Brøndsted, Povl; Nijssen, Rogier

    2012-01-01

    The reliability of rotor blades is the pre-condition for the development and wide use of large wind turbines. In order to accurately predict and improve the wind turbine blade behavior, three main aspects of the reliability and strength of rotor blades were considered: (i) development of methods for the experimental determination of reliable material properties used in the design of wind turbine blades and experimental validation of design models, (ii) development of predictive models for the life prediction, prediction of residual strength and failure probability of the blades, and (iii) analysis of the effect of the microstructure of wind turbine blade composites on their strength and ways of microstructural optimization of the materials. By testing reference coupons, the effect of testing parameters (temperature and frequency) on the lifetime of blade composites was investigated, and the input data...

  9. Predictive Modeling of Defibrillation utilizing Hexahedral and Tetrahedral Finite Element Models: Recent Advances

    Science.gov (United States)

    Triedman, John K.; Jolley, Matthew; Stinstra, Jeroen; Brooks, Dana H.; MacLeod, Rob

    2008-01-01

    ICD implants may be complicated by body size and anatomy. One approach to this problem has been the adoption of creative, extracardiac implant strategies using standard ICD components. Because data on the safety or efficacy of such ad hoc implant strategies are lacking, we have developed image-based finite element models (FEMs) to compare electric fields and expected defibrillation thresholds (DFTs) using standard and novel electrode locations. In this paper, we review recently published studies by our group using such models, and progress in meshing strategies to improve efficiency and visualization. Our preliminary observations predict that there may be large changes in DFTs with clinically relevant variations of electrode placement. Extracardiac ICDs of various lead configurations are predicted to be effective in both children and adults. This approach may aid both ICD development and patient-specific optimization of electrode placement, but the simplified nature of current models dictates further development and validation prior to clinical or industrial utilization. PMID:18817926

  10. Large scale experiments as a tool for numerical model development

    DEFF Research Database (Denmark)

    Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper;

    2003-01-01

    Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models, and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied for improvement of the reliability of physical model results. This paper demonstrates by examples that numerical modelling benefits in various ways from experimental studies (in large and small laboratory facilities). The examples range from very general hydrodynamic descriptions of wave phenomena to specific hydrodynamic interaction with structures. The examples also show that numerical model development benefits from international co-operation and sharing of high quality results.

  11. LARGE SIGNAL DISCRETE-TIME MODEL FOR PARALLELED BUCK CONVERTERS

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    As a number of switch combinations are involved in the operation of a multi-converter system, conventional methods for obtaining a discrete-time large-signal model of these converter systems result in a very complex solution. A simple sampled-data technique for modeling a distributed dc-dc PWM converter system (DCS) is proposed. The resulting model is nonlinear and can be linearized for the analysis and design of a DCS. These models are also suitable for fast simulation of such networks. As the inputs and outputs of the dc-dc converters are slowly varying, a suitable model for the DCS was obtained in terms of a finite-order input/output approximation.

  12. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  13. Large-Deformation Displacement Transfer Functions for Shape Predictions of Highly Flexible Slender Aerospace Structures

    Science.gov (United States)

    Ko, William L.; Fleischer, Van Tran

    2013-01-01

    Large deformation displacement transfer functions were formulated for deformed shape predictions of highly flexible slender structures like aircraft wings. In the formulation, the embedded beam (the depth-wise cross section of the structure along the surface strain-sensing line) was first evenly discretized into multiple small domains, with surface strain-sensing stations located at the domain junctures. Thus, the surface strain (bending strain) variation within each domain could be expressed with a linear or nonlinear function. Such a piecewise approach enabled piecewise integration of the embedded beam curvature equations [classical (Eulerian), physical (Lagrangian), and shifted curvature equations] to yield closed-form slope and deflection equations in recursive form.

  14. Prediction using patient comparison vs. modeling: a case study for mortality prediction.

    Science.gov (United States)

    Hoogendoorn, Mark; El Hassouni, Ali; Mok, Kwongyen; Ghassemi, Marzyeh; Szolovits, Peter

    2016-08-01

    Information in Electronic Medical Records (EMRs) can be used to generate accurate predictions for the occurrence of a variety of health states, which can contribute to more pro-active interventions. The very nature of EMRs does make the application of off-the-shelf machine learning techniques difficult. In this paper, we study two approaches to making predictions that have hardly been compared in the past: (1) extracting high-level (temporal) features from EMRs and building a predictive model, and (2) defining a patient similarity metric and predicting based on the outcome observed for similar patients. We analyze and compare both approaches on the MIMIC-II ICU dataset to predict patient mortality and find that the patient similarity approach does not scale well and results in a less accurate model (AUC of 0.68) compared to the modeling approach (0.84). We also show that mortality can be predicted within a median of 72 hours.
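
    A minimal sketch of the comparison the paper reports, on synthetic data standing in for the MIMIC-II variables (feature construction and hyperparameters here are illustrative assumptions): approach (1) is a fitted model over extracted features, approach (2) a patient-similarity lookup.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(42)
    X = rng.normal(size=(2000, 10))                   # stand-in high-level EMR features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # (1) predictive model
    similar = KNeighborsClassifier(n_neighbors=25).fit(X_tr, y_tr)  # (2) similar patients

    for name, clf in [("modeling", model), ("patient similarity", similar)]:
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{name}: AUC = {auc:.2f}")

    The similarity approach also pays a prediction-time cost proportional to the training set size, which matches the scaling problem the authors observed.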

  15. Handling a Small Dataset Problem in Prediction Model by employ Artificial Data Generation Approach: A Review

    Science.gov (United States)

    Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd

    2017-09-01

    The emerging era of big data over the past few years has led to large and complex data that demand faster and better decision making. However, small-dataset problems still arise in certain areas, making analysis and decision making difficult. In order to build a prediction model, a large sample is required for training the model; a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one of the solutions to the small dataset problem.
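
    A minimal sketch of one simple artificial-data-generation scheme for small datasets (noise-based virtual samples); the review covers several such approaches, and this particular recipe is only an illustrative assumption:

    import numpy as np

    def augment(X, y, n_virtual, noise=0.05, seed=0):
        """Create virtual samples by jittering real ones with Gaussian noise
        scaled to each feature's standard deviation."""
        rng = np.random.default_rng(seed)
        idx = rng.integers(0, len(X), size=n_virtual)
        jitter = rng.normal(scale=noise * X.std(axis=0),
                            size=(n_virtual, X.shape[1]))
        return np.vstack([X, X[idx] + jitter]), np.concatenate([y, y[idx]])

    X_small = np.random.default_rng(1).normal(size=(20, 3))   # 20 real samples
    y_small = (X_small.sum(axis=1) > 0).astype(int)
    X_big, y_big = augment(X_small, y_small, n_virtual=200)
    print(X_big.shape, y_big.shape)   # (220, 3) (220,)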

  16. Predicting hydrological signatures in ungauged catchments using spatial interpolation, index model, and rainfall-runoff modelling

    Science.gov (United States)

    Zhang, Yongqiang; Vaze, Jai; Chiew, Francis H. S.; Teng, Jin; Li, Ming

    2014-09-01

    Understanding a catchment's behaviours in terms of its underlying hydrological signatures is a fundamental task in surface water hydrology. It can help in water resource management, catchment classification, and prediction of runoff time series. This study investigated three approaches for predicting six hydrological signatures in southeastern Australia. These approaches were (1) spatial interpolation with three weighting schemes, (2) index model that estimates hydrological signatures using catchment characteristics, and (3) classical rainfall-runoff modelling. The six hydrological signatures fell into two categories: (1) long-term aggregated signatures - annual runoff coefficient, mean of log-transformed daily runoff, and zero flow ratio, and (2) signatures obtained from daily flow metrics - concavity index, seasonality ratio of runoff, and standard deviation of log-transformed daily flow. A total of 228 unregulated catchments were selected, with half the catchments randomly selected as gauged (or donors) for model building and the rest considered as ungauged (or receivers) to evaluate performance of the three approaches. The results showed that for two long-term aggregated signatures - the log-transformed daily runoff and runoff coefficient, the index model and rainfall-runoff modelling performed similarly, and were better than the spatial interpolation methods. For the zero flow ratio, the index model was best and the rainfall-runoff modelling performed worst. The other three signatures, derived from daily flow metrics and considered to be salient flow characteristics, were best predicted by the spatial interpolation methods of inverse distance weighting (IDW) and kriging. Comparison of flow duration curves predicted by the three approaches showed that the IDW method was best. The results found here provide guidelines for choosing the most appropriate approach for predicting hydrological behaviours at large scales.
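
    A minimal sketch of inverse distance weighting (IDW), the interpolation scheme the study found best for the daily-flow-metric signatures; the donor catchment coordinates and signature values below are illustrative assumptions:

    import numpy as np

    def idw(donor_xy, donor_vals, target_xy, power=2.0):
        """Predict a hydrological signature at an ungauged catchment as the
        inverse-distance-weighted mean of donor catchment signatures."""
        d = np.linalg.norm(donor_xy - target_xy, axis=1)
        if np.any(d == 0):                  # target coincides with a donor
            return float(donor_vals[d == 0][0])
        w = 1.0 / d**power
        return float(np.sum(w * donor_vals) / np.sum(w))

    donors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # donor centroids
    runoff_coeff = np.array([0.35, 0.50, 0.42])                 # donor signatures
    print(idw(donors, runoff_coeff, np.array([2.0, 3.0])))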

  17. Fuzzy predictive filtering in nonlinear economic model predictive control for demand response

    DEFF Research Database (Denmark)

    Santos, Rui Mirra; Zong, Yi; Sousa, Joao M. C.;

    2016-01-01

    The performance of a model predictive controller (MPC) is highly correlated with the model's accuracy. This paper introduces an economic model predictive control (EMPC) scheme based on a nonlinear model, which uses a branch-and-bound tree search for solving the inherent non-convex optimization problem. Moreover, to reduce the computation time and improve the controller's performance, a fuzzy predictive filter is introduced. With the purpose of testing the developed EMPC, a simulation controlling the temperature levels of an intelligent office building (PowerFlexHouse), with and without fuzzy...

  18. Prediction of insulin resistance with anthropometric measures: lessons from a large adolescent population

    Directory of Open Access Journals (Sweden)

    Wedin WK

    2012-07-01

    William K Wedin,1 Lizmer Diaz-Gimenez,1 Antonio J Convit1,2; 1Department of Psychiatry, NYU School of Medicine, New York, NY, USA; 2Nathan Kline Institute, Orangeburg, NY, USA. Objective: The aim of this study was to describe the minimum number of anthropometric measures that will optimally predict insulin resistance (IR) and to characterize the utility of these measures among obese and nonobese adolescents. Research design and methods: Six anthropometric measures (selected from three categories: central adiposity, weight, and body composition) were measured in 1298 adolescents attending two New York City public high schools. Body composition was determined by bioelectric impedance analysis (BIA). The homeostatic model assessment of IR (HOMA-IR), based on fasting glucose and insulin concentrations, was used to estimate IR. Stepwise linear regression analyses were performed to predict HOMA-IR based on the six selected measures, while controlling for age. Results: The stepwise regression retained both waist circumference (WC) and percentage of body fat (BF%). Notably, BMI was not retained. WC was a stronger predictor of HOMA-IR than BMI was. A regression model using solely WC performed best among the obese II group, while a model using solely BF% performed best among the lean group. Receiver operator characteristic curves showed the WC and BF% model to be more sensitive in detecting IR than BMI, but with less specificity. Conclusion: WC combined with BF% was the best predictor of HOMA-IR. This finding can be attributed partly to the ability of BF% to model HOMA-IR among leaner participants and to the ability of WC to model HOMA-IR among participants who are more obese. BMI was comparatively weak in predicting IR, suggesting that assessments that are more comprehensive and include body composition analysis could increase detection of IR during adolescence, especially among those who are lean, yet insulin-resistant. Keywords: BMI, bioelectrical impedance
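
    For reference, the HOMA-IR outcome used in the study is computed from fasting glucose and insulin with a standard formula; a minimal sketch follows (the example values are illustrative, and IR cutoffs for adolescents vary across studies):

    def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
        """Homeostatic model assessment of insulin resistance.

        Standard formula: glucose [mg/dL] x insulin [uU/mL] / 405
        (equivalently glucose [mmol/L] x insulin [uU/mL] / 22.5).
        """
        return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

    print(homa_ir(90, 15))   # ~3.33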

  19. Modelling and Measuring Flow and Wind Turbine Wakes in Large Wind Farms Offshore

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Hansen, Kurt Schaldemose; Frandsen, Sten Tronæs

    2009-01-01

    The research presented is part of the EC-funded UpWind project, which aims to radically improve wind turbine and wind farm models in order to continue to improve the costs of wind energy. Reducing wake losses, or even reducing uncertainties in predicting power losses from wakes, contributes to the overall goal of reduced costs. Here, we assess the state of the art in wake and flow modelling for offshore wind farms; the focus so far has been cases at the Horns Rev wind farm, which indicate that wind farm models require modification to reduce under-prediction of wake... of models from computational fluid dynamics (CFD) to wind farm models in terms of how accurately they represent wake losses when compared with measurements from offshore wind farms. The ultimate objective is to improve modelling of flow for large wind farms in order to optimize wind farm layouts to reduce power losses due to wakes and loads.

  20. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  1. Modeling the Effect of Climate Change on Large Fire Size, Counts, and Intensities Using the Large Fire Simulator (FSim)

    Science.gov (United States)

    Riley, K. L.; Haas, J. R.; Finney, M.; Abatzoglou, J. T.

    2013-12-01

    Changes in climate can be expected to cause changes in wildfire activity due to a combination of shifts in weather (temperature, precipitation, relative humidity, wind speed and direction) and vegetation. Changes in vegetation could include type conversions, altered forest structure, and shifts in species composition, the effects of which could be mitigated or exacerbated by management activities. Further, changes in suppression response and effectiveness may alter potential wildfire activity, as well as the consequences of wildfire. Feedbacks among these factors are extremely complex and uncertain. The ability to anticipate changes driven by fire weather (largely outside of human control) can lead to development of fire and fuel management strategies aimed at mitigating current and future risk. Therefore, in this study we focus on isolating the effects of climate-induced changes in weather on wildfire activity. Specifically, we investigated the effect of changes in weather on fire activity in the Canadian Rockies ecoregion, which encompasses Glacier National Park and several large wilderness areas to the south. To model the ignition, growth, and containment of wildfires, we used the Large Fire Simulator (FSim), which we coupled with current and projected future climatic conditions. Weather streams were based on data from 14 downscaled Global Circulation Models (GCMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) using the Representative Concentration Pathways (RCPs) 4.5 and 8.5 for the years 2040-2060. While all GCMs indicate increases in temperature for this area, which would be expected to exacerbate fire activity, precipitation predictions for the summer wildfire season are more variable, ranging from a decrease of approximately 50 mm to an increase of approximately 50 mm. Wind speeds are generally predicted to decrease, which would reduce rates of spread and fire intensity. The net effect of these weather changes on the size, number, and intensity...

  2. Collaborative Research: Separating Forced and Unforced Decadal Predictability in Models and Observations

    Energy Technology Data Exchange (ETDEWEB)

    Tippett, Michael K. [Columbia University

    2014-04-09

    This report is a progress report of the accomplishments of the research grant “Collaborative Research: Separating Forced and Unforced Decadal Predictability in Models and Observations” during the period 1 May 2011 - 31 August 2013. This project is a collaborative one between Columbia University and George Mason University. George Mason University will submit a final technical report at the conclusion of their no-cost extension. The purpose of the proposed research is to identify unforced predictable components on decadal time scales, distinguish these components from forced predictable components, and to assess the reliability of model predictions of these components. Components of unforced decadal predictability will be isolated by maximizing the Average Predictability Time (APT) in long, multimodel control runs from state-of-the-art climate models. Components with decadal predictability have large APT, so maximizing APT ensures that components with decadal predictability will be detected. Optimal fingerprinting techniques, as used in detection and attribution analysis, will be used to separate variations due to natural and anthropogenic forcing from those due to unforced decadal predictability. This methodology will be applied to the decadal hindcasts generated by the CMIP5 project to assess the reliability of model projections. The question of whether anthropogenic forcing changes decadal predictability, or gives rise to new forms of decadal predictability, also will be investigated.

  3. Toy Model for Large Non-Symmetric Random Matrices

    CERN Document Server

    Snarska, Małgorzata

    2010-01-01

    Non-symmetric rectangular correlation matrices occur in many problems in economics. We test a method of extracting statistically meaningful correlations between input and output variables of large dimensionality and build a toy model for artificially included correlations in large random time series. The results are then applied to the analysis of Polish macroeconomic data and can be used as an alternative to the classical cointegration approach.

  4. Intelligent predictive model of ventilating capacity of imperial smelt furnace

    Institute of Scientific and Technical Information of China (English)

    唐朝晖; 胡燕瑜; 桂卫华; 吴敏

    2003-01-01

    In order to determine the ventilating capacity of the imperial smelt furnace (ISF) and increase the output of lead, an intelligent modeling method based on gray theory and artificial neural networks (ANN) is proposed, in which the weight values in the integrated model can be adjusted automatically. An intelligent predictive model of the ventilating capacity of the ISF is established and analyzed with this method. The simulation results and industrial applications demonstrate that the predictive model is close to the real plant, with a relative predictive error of 0.72%, 50% less than that of a single model, leading to a notable increase in the output of lead.

  5. A Prediction Model of the Capillary Pressure J-Function

    Science.gov (United States)

    Xu, W. S.; Luo, P. Y.; Sun, L.; Lin, N.

    2016-01-01

    The capillary pressure J-function is a dimensionless measure of the capillary pressure of a fluid in a porous medium. The function was derived based on a capillary bundle model. However, the dependence of the J-function on the saturation Sw is not well understood. A prediction model for it is presented based on a capillary pressure model, and the J-function prediction model is a power function instead of an exponential or polynomial function. Relative permeability is calculated with the J-function prediction model, resulting in an easier calculation and results that are more representative. PMID:27603701
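
    For reference, the standard Leverett definition underlying the abstract, together with the power-law dependence on Sw that the paper proposes; the constants a and b are generic fitting parameters, not values from the paper:

    % Dimensionless capillary pressure J-function and the proposed power form.
    J(S_w) = \frac{P_c(S_w)}{\sigma\cos\theta}\sqrt{\frac{k}{\phi}},
    \qquad
    J(S_w) \approx a\,S_w^{-b}

    Here Pc is the capillary pressure, σ the interfacial tension, θ the contact angle, k the permeability, and φ the porosity.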

  6. Adaptation of Predictive Models to PDA Hand-Held Devices

    Directory of Open Access Journals (Sweden)

    Lin, Edward J

    2008-01-01

    Prediction models using multiple logistic regression are appearing with increasing frequency in the medical literature. Problems associated with these models include the complexity of computations when applied in their pure form, and lack of availability at the bedside. Personal digital assistant (PDA) hand-held devices equipped with spreadsheet software offer the clinician a readily available and easily applied means of applying predictive models at the bedside. The purposes of this article are to briefly review regression as a means of creating predictive models and to describe a method of choosing and adapting logistic regression models to emergency department (ED) clinical practice.
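
    The spreadsheet arithmetic the article refers to reduces, at the bedside, to one weighted sum and a logistic transform; a minimal sketch follows, with hypothetical coefficients and predictors standing in for a fitted model:

    import math

    def predicted_probability(intercept, coefs, predictors):
        """p = 1 / (1 + exp(-(b0 + b1*x1 + ... + bk*xk)))"""
        logit = intercept + sum(b * x for b, x in zip(coefs, predictors))
        return 1.0 / (1.0 + math.exp(-logit))

    # e.g. a hypothetical two-predictor model: age in years, binary risk flag
    print(predicted_probability(-3.2, [0.04, 1.1], [65, 1]))   # ~0.62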

  7. A model to predict the power output from wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L. [Risø National Lab., Roskilde (Denmark)

    1997-12-31

    This paper will describe a model that can predict the power output from wind farms. To give examples of input, the model is applied to a wind farm in Texas. The predictions are generated from forecasts from the NGM model of NCEP. These predictions are made valid at individual sites (wind farms) by applying a matrix calculated by the sub-models of WAsP (Wind Atlas Analysis and Application Program). The actual wind farm production is calculated using the Risø PARK model. Because of the preliminary nature of the results, they will not be given; however, similar results from Europe will be given.

  8. Modelling microbial interactions and food structure in predictive microbiology

    NARCIS (Netherlands)

    Malakar, P.K.

    2002-01-01

    Keywords: modelling, dynamic models, microbial interactions, diffusion, microgradients, colony growth, predictive microbiology.

    Growth response of microorganisms in foods is a complex process. Innovations in food production and preservation techniques have resulted in adoption of new technologies...

  10. Statistical Modeling of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., the explosion of a star). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
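
    A minimal sketch of the simplest of the three techniques named above, the univariate mean modeler: systematic partitions of a variable are summarized by their unbiased means, and range queries are answered from the summaries instead of the raw data. The 1-D blocking used here is an illustrative assumption; AQSim itself partitions over the spatio-temporal dimensions.

    import numpy as np

    data = np.random.default_rng(0).normal(size=1_000_000)   # one simulation field
    block = 1_000
    means = data.reshape(-1, block).mean(axis=1)              # the stored model
    counts = np.full(means.shape, block)

    def range_mean(lo_block, hi_block):
        """Answer a range query over blocks [lo_block, hi_block) from the model."""
        n = counts[lo_block:hi_block].sum()
        return float((means[lo_block:hi_block] * counts[lo_block:hi_block]).sum() / n)

    print(range_mean(10, 20))   # answered from 10 summaries, not 10,000 raw values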

  11. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
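
    The split the abstract invokes can be written compactly; this is a sketch of the standard bias-variance decomposition, with notation assumed rather than copied from the paper:

    % MSEP over uncertain structure, inputs and parameters X splits into a
    % squared bias term (estimable from hindcasts) and a model variance
    % term (estimable from a simulation experiment).
    \mathrm{MSEP}_{\mathrm{uncertain}}(X)
      = \bigl(\mathbb{E}[\hat{y}(X)] - y\bigr)^2
      + \operatorname{Var}\bigl[\hat{y}(X)\bigr]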

  12. Satellite image collection modeling for large area hazard emergency response

    Science.gov (United States)

    Liu, Shufan; Hodgson, Michael E.

    2016-08-01

    Timely collection of critical hazard information is the key to intelligent and effective hazard emergency response decisions. Satellite remote sensing imagery provides an effective way to collect critical information. Natural hazards, however, often have large impact areas - larger than a single satellite scene. Additionally, the hazard impact area may be discontinuous, particularly in flooding or tornado hazard events. In this paper, a spatial optimization model is proposed to solve the large area satellite image acquisition planning problem in the context of hazard emergency response. In the model, a large hazard impact area is represented as multiple polygons and image collection priorities for different portion of impact area are addressed. The optimization problem is solved with an exact algorithm. Application results demonstrate that the proposed method can address the satellite image acquisition planning problem. A spatial decision support system supporting the optimization model was developed. Several examples of image acquisition problems are used to demonstrate the complexity of the problem and derive optimized solutions.

  13. Modelling and transient stability of large wind farms

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Knudsen, Hans; Nielsen, Arne Hejde

    2003-01-01

    The paper deals with modelling and short-term voltage stability considerations of large wind farms. A physical model of a large offshore wind farm consisting of a large number of windmills is implemented in the dynamic simulation tool PSS/E. Each windmill in the wind farm is represented by a physical model of grid-connected windmills. The windmill generators are conventional induction generators and the wind farm is ac-connected to the power system. Improvements of short-term voltage stability in case of failure events in the external power system are treated with use of conventional generator... of dynamic reactive compensation demands. In case of blade angle control applied at failure events, dynamic reactive compensation is not necessary for maintaining the voltage stability.

  14. Dualities in 3D large N vector models

    Science.gov (United States)

    Muteeb, Nouman; Zayas, Leopoldo A. Pando; Quevedo, Fernando

    2016-05-01

    Using an explicit path integral approach we derive non-abelian bosonization and duality of 3D systems in the large N limit. We first consider a fermionic U(N) vector model coupled to level k Chern-Simons theory; following standard techniques we gauge the original global symmetry and impose the corresponding field strength F_{μν} to vanish by introducing a Lagrange multiplier Λ. Exchanging the order of integrations we obtain the bosonized theory with Λ as the propagating field, using the large N rather than the previously used large mass limit. Next we follow the same procedure to dualize the scalar U(N) vector model coupled to Chern-Simons theory and find its corresponding dual theory. Finally, we compare the partition functions of the two resulting theories and find that they agree in the large N limit, including a level/rank duality. This provides constructive evidence for previous proposals on level/rank duality of 3D vector models in the large N limit. We also present a partial analysis at subleading order in large N and find that the duality does not generically hold at this level.

  15. Dualities in 3D large N vector models

    Energy Technology Data Exchange (ETDEWEB)

    Muteeb, Nouman [The Abdus Salam International Centre for Theoretical Physics, ICTP,Strada Costiera 11, 34014 Trieste (Italy); SISSA,Via Bonomea 265, 34136 Trieste (Italy); Zayas, Leopoldo A. Pando [The Abdus Salam International Centre for Theoretical Physics, ICTP,Strada Costiera 11, 34014 Trieste (Italy); Michigan Center for Theoretical Physics, Department of Physics,University of Michigan, Ann Arbor, MI 48109 (United States); Quevedo, Fernando [The Abdus Salam International Centre for Theoretical Physics, ICTP,Strada Costiera 11, 34014 Trieste (Italy); DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0WA (United Kingdom)

    2016-05-09

    Using an explicit path integral approach we derive non-abelian bosonization and duality of 3D systems in the large N limit. We first consider a fermionic U(N) vector model coupled to level k Chern-Simons theory; following standard techniques we gauge the original global symmetry and impose the corresponding field strength F_{μν} to vanish by introducing a Lagrange multiplier Λ. Exchanging the order of integrations we obtain the bosonized theory with Λ as the propagating field, using the large N rather than the previously used large mass limit. Next we follow the same procedure to dualize the scalar U(N) vector model coupled to Chern-Simons theory and find its corresponding dual theory. Finally, we compare the partition functions of the two resulting theories and find that they agree in the large N limit, including a level/rank duality. This provides constructive evidence for previous proposals on level/rank duality of 3D vector models in the large N limit. We also present a partial analysis at subleading order in large N and find that the duality does not generically hold at this level.

  16. Modelling the spreading of large-scale wildland fires

    CERN Document Server

    Drissi, Mohamed

    2014-01-01

    The objective of the present study is twofold. First, the latest developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning cells that strongly depends on local conditions of wind, topography, and vegetation. Radiation and convection from the flaming zone, and radiative heat loss to the ambient are considered in the preheating process of unburned cells. Second, the model is applied to an Australian grassland fire experiment as well as to a real fire that took place in Corsica in 2009. Predictions compare favorably to experiments in terms of rate of spread, area and shape of the burn. Finally, the sensitivity of the model outcomes (here the rate of spread) to six input parameters is studied using a two-level full factorial design.

  17. Predicting Career Advancement with Structural Equation Modelling

    Science.gov (United States)

    Heimler, Ronald; Rosenberg, Stuart; Morote, Elsa-Sofia

    2012-01-01

    Purpose: The purpose of this paper is to use the authors' prior findings concerning basic employability skills in order to determine which skills best predict career advancement potential. Design/methodology/approach: Utilizing survey responses of human resource managers, the employability skills showing the largest relationships to career…

  19. Modeling and prediction of surgical procedure times

    NARCIS (Netherlands)

    P.S. Stepaniak (Pieter); C. Heij (Christiaan); G. de Vries (Guus)

    2009-01-01

    Accurate prediction of medical operation times is of crucial importance for cost-efficient operating room planning in hospitals. This paper investigates the possible dependence of procedure times on surgeon factors like age, experience, gender, and team composition. The effect of these f...

  20. Prediction Model of Sewing Technical Condition by Grey Neural Network

    Institute of Scientific and Technical Information of China (English)

    DONG Ying; FANG Fang; ZHANG Wei-yuan

    2007-01-01

    The grey system theory and artificial neural network technology were applied to predict the sewing technical condition. Representative parameters, such as needle and stitch, were selected. The prediction model was established based on the mechanical properties of different fabrics, measured by the KES instrument. Grey relational degree analysis was applied to choose the input parameters of the neural network. The results showed that the prediction model has good precision: the average relative error was 4.08% for needle and 4.25% for stitch.
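
    A minimal sketch of the grey-theory half of the combined model above: a GM(1,1) grey model fitted to a short series of a sewing parameter (the series values are illustrative). The ANN half would then be trained on the inputs selected by the grey relational analysis.

    import numpy as np

    def gm11_forecast(x0, steps=1):
        """Fit a GM(1,1) grey model to the series x0 and forecast `steps` ahead."""
        x1 = np.cumsum(x0)                                   # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters
        k = np.arange(1, len(x0) + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # whitened solution
        x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))  # de-accumulate
        return x0_hat[-steps:]

    needle_wear = np.array([2.1, 2.3, 2.6, 3.0, 3.5])        # illustrative series
    print(gm11_forecast(needle_wear, steps=2))

    GM(1,1) needs only a handful of observations, which is why grey methods pair well with neural networks when, as here, the measured fabric dataset is small.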