WorldWideScience

Sample records for model input variables

  1. Variable Input Power Supply.

    Science.gov (United States)

    An electronic power supply using pulse width modulated (PWM) voltage regulation provides a regulated output for a wide range of input voltages. Thus...switch to change the level of voltage regulation and the turns ratio of the primary winding of the power supply output transformer, thereby obtaining increased tolerance to input voltage change. (Author)

  2. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables to serve as the model inputs; at the same time, it simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models that measure liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and through sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
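
    As a minimal illustration of the idea behind mutual-information-based input selection (PMI itself additionally conditions each candidate on the inputs already chosen), the sketch below ranks three synthetic candidates by a plain histogram estimate of mutual information with the target. All variable names and data are invented for the example.

```python
import numpy as np

def hist_mutual_info(x, y, bins=8):
    """Histogram estimate of the mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n = 5000
relevant = rng.normal(size=n)                    # truly drives the target
redundant = relevant + 0.1 * rng.normal(size=n)  # near-copy of `relevant`
noise = rng.normal(size=n)                       # irrelevant candidate
target = relevant**2 + 0.1 * rng.normal(size=n)

candidates = {"relevant": relevant, "redundant": redundant, "noise": noise}
scores = {k: hist_mutual_info(v, target) for k, v in candidates.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

    Note that the plain ranking scores `redundant` nearly as high as `relevant`; it is exactly this failure mode that the conditional (partial) form of mutual information addresses.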

  3. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    Science.gov (United States)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables to serve as the model inputs; at the same time, it simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models that measure liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and through sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.

  4. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

    Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.

  5. Why inputs matter: Selection of climatic variables for species distribution modelling in the Himalayan region

    Science.gov (United States)

    Bobrowski, Maria; Schickhoff, Udo

    2017-04-01

    Betula utilis is a major constituent of alpine treeline ecotones in the western and central Himalayan region. The objective of this study is to provide a first analysis of the potential distribution of Betula utilis in the subalpine and alpine belts of the Himalayan region using species distribution modelling. Using Generalized Linear Models (GLM), we aim to examine the climatic factors controlling the species distribution under current climate conditions. Furthermore, we evaluate the predictive ability of climate data derived from different statistical methods. GLMs were created using the least correlated bioclimatic variables derived from two different climate models: 1) interpolated climate data (i.e. Worldclim, Hijmans et al., 2005) and 2) quasi-mechanistic statistical downscaling (i.e. Chelsa; Karger et al., 2016). Model accuracy was evaluated by the ability to predict the potential species distribution range. We found that models based on variables of Chelsa climate data had higher predictive power, whereas models using Worldclim climate data consistently overpredicted the potential suitable habitat for Betula utilis. Although the climatic variables of Worldclim are widely used in species distribution modelling, our results suggest treating them with caution when remote regions such as the Himalayan mountains are the focus. Unmindful use of climatic variables in species distribution models may produce misleading projections and lead to wrong implications and recommendations for nature conservation. References: Hijmans, R.J., Cameron, S.E., Parra, J.L., Jones, P.G. & Jarvis, A. (2005) Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology, 25, 1965-1978. Karger, D.N., Conrad, O., Böhner, J., Kawohl, T., Kreft, H., Soria-Auza, R.W., Zimmermann, N., Linder, H.P. & Kessler, M. (2016) Climatologies at high resolution for the earth land surface areas. arXiv:1607.00217 [physics].
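
    The GLM workflow described above can be sketched in a few lines: a presence/absence response is modelled as a logistic function of bioclimatic predictors and fitted by Newton-Raphson (iteratively reweighted least squares). The predictors, coefficients and data below are synthetic stand-ins, not the study's variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Hypothetical standardized bioclimatic predictors (not the study's variables)
temp = rng.normal(size=n)
precip = rng.normal(size=n)
X = np.column_stack([np.ones(n), temp, precip])     # design matrix with intercept
true_beta = np.array([-0.5, 1.5, 0.8])
presence = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

# Fit the logistic GLM by Newton-Raphson (equivalently, IRLS)
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (presence - p)                     # score vector
    H = X.T @ (X * (p * (1 - p))[:, None])          # Fisher information
    beta += np.linalg.solve(H, grad)
```

    With enough observations, the fitted `beta` recovers the coefficients used to simulate the data; in practice the predictors would be the least correlated bioclimatic variables.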

  6. Optimization modeling of U.S. renewable electricity deployment using local input variables

    Science.gov (United States)

    Bernstein, Adam

    For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends now developing, namely (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) the challenge of balancing intermittent RE within the U.S. transmission regions, and (iv) fewer economical sites for RE development, may limit the efficacy of RPS laws over the remainder of the current RPS statutes' lifetime. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters. Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter
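
    A full ReEDS-style LP includes transmission, balancing and capacity-value constraints; with a single annual RPS energy constraint and per-site capacity limits, however, the LP reduces to a merit-order allocation, which the sketch below implements. All site data and the target are invented for illustration.

```python
# Hypothetical sites: (name, levelized cost $/MWh, capacity factor, buildable MW)
sites = [
    ("wind_A", 38.0, 0.42, 500),
    ("wind_B", 45.0, 0.35, 800),
    ("solar_C", 52.0, 0.26, 600),
]
target_gwh = 3000.0   # assumed annual RPS energy requirement
HOURS = 8760

build = {}
remaining = target_gwh
for name, cost, cf, max_mw in sorted(sites, key=lambda s: s[1]):  # cheapest first
    gwh_per_mw = cf * HOURS / 1000.0        # annual energy per MW built
    mw = min(max_mw, remaining / gwh_per_mw)
    build[name] = round(mw, 1)
    remaining -= mw * gwh_per_mw
    if remaining <= 1e-9:                   # target met, stop building
        break
```

    Greedy merit-order dispatch is optimal here only because there is a single energy constraint; adding transmission or balancing constraints is what forces the problem back into a genuine LP.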

  7. Latitudinal and seasonal variability of the micrometeor input function: A study using model predictions and observations from Arecibo and PFISR

    Science.gov (United States)

    Fentzke, J. T.; Janches, D.; Sparks, J. J.

    2009-05-01

    In this work, we use a semi-empirical model of the micrometeor input function (MIF) together with meteor head-echo observations obtained with two high power and large aperture (HPLA) radars, the 430 MHz Arecibo Observatory (AO) radar in Puerto Rico (18°N, 67°W) and the 450 MHz Poker Flat Incoherent Scatter Radar (PFISR) in Alaska (65°N, 147°W), to study the seasonal and geographical dependence of the meteoric flux in the upper atmosphere. The model, recently developed by Janches et al. [2006a. Modeling the global micrometeor input function in the upper atmosphere observed by high power and large aperture radars. Journal of Geophysical Research 111] and Fentzke and Janches [2008. A semi-empirical model of the contribution from sporadic meteoroid sources on the meteor input function observed at Arecibo. Journal of Geophysical Research (Space Physics) 113 (A03304)], includes an initial mass flux that is provided by the six known meteor sources (i.e. orbital families of dust) as well as detailed modeling of meteoroid atmospheric entry and ablation physics. In addition, we use a simple ionization model to treat radar sensitivity issues by defining minimum electron volume density production thresholds required in the meteor head-echo plasma for detection. This simplified approach works well because we use observations from two radars with similar frequencies, but different sensitivities and locations. This methodology allows us to explore the initial input of particles and how it manifests in different parts of the MLT as observed by these instruments, without the need to invoke more sophisticated plasma models, which are under current development. The comparisons between model predictions and radar observations show excellent agreement between diurnal, seasonal, and latitudinal variability of the detected meteor rate and radial velocity distributions, allowing us to understand how individual meteoroid populations contribute to the overall flux at a particular

  8. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    Science.gov (United States)

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
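
    A stepwise procedure of the kind used to build the model can be sketched as forward selection on R² improvement. The candidate variables and data below are synthetic, chosen only to mimic the structure of the problem (two informative predictors among several candidates), not the study's measurements.

```python
import numpy as np

def r_squared(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(2)
n = 1000
# Hypothetical candidates for predicting hip impact force (synthetic data)
cand = {
    "body_mass": rng.normal(size=n),
    "upper_body_decel": rng.normal(size=n),
    "shoulder_angle": rng.normal(size=n),
    "hip_decel": rng.normal(size=n),
}
y = 3.0 * cand["body_mass"] + 2.0 * cand["upper_body_decel"] + 0.5 * rng.normal(size=n)

selected, pool = [], list(cand)
design = np.ones((n, 1))          # start from an intercept-only model
best_r2 = 0.0
while pool:
    gains = {v: r_squared(np.column_stack([design, cand[v]]), y) for v in pool}
    v = max(gains, key=gains.get)
    if gains[v] - best_r2 < 0.01: # stop when the R^2 improvement is negligible
        break
    selected.append(v)
    pool.remove(v)
    best_r2 = gains[v]
    design = np.column_stack([design, cand[v]])
```

    Real stepwise procedures usually use F-tests or information criteria rather than a fixed R² threshold; the threshold here only keeps the sketch short.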

  9. Statistical identification of effective input variables. [SCREEN

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. The efficiency has been demonstrated by several examples and applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.

  10. Wind Power Curve Modeling Using Statistical Models: An Investigation of Atmospheric Input Variables at a Flat and Complex Terrain Wind Farm

    Energy Technology Data Exchange (ETDEWEB)

    Wharton, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Irons, Z. [Enel Green Power North America, Andover, MA (United States); Qualley, G. [Infigen Energy, Dallas, TX (United States); Newman, J. F. [Univ. of Oklahoma, Norman, OK (United States); Miller, W. O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-28

    The goal of our FY15 project was to explore the use of statistical models and high-resolution atmospheric input data to develop more accurate prediction models for turbine power generation. We modeled power for two operational wind farms in two regions of the country. The first site is a 235 MW wind farm in Northern Oklahoma with 140 GE 1.68 MW turbines. Our second site is a 38 MW wind farm in the Altamont Pass Region of Northern California with 38 Mitsubishi 1 MW turbines. The farms are very different in topography, climatology, and turbine technology; however, both occupy high wind resource areas in the U.S. and are representative of typical wind farms found in their respective areas.

  11. Stochastic weather inputs for improved urban water demand forecasting: application of nonlinear input variable selection and machine learning methods

    Science.gov (United States)

    Quilty, J.; Adamowski, J. F.

    2015-12-01

    Urban water supply systems are often stressed during seasonal outdoor water use, as climate-related water demands are variable in nature, making it difficult to optimize the operation of the water supply system. Urban water demand (UWD) forecasts that fail to include meteorological conditions as inputs to the forecast model may perform poorly, because they cannot account for the increases and decreases in demand related to those conditions. Meteorological records stochastically simulated into the future can be used as inputs to data-driven UWD forecasts, generally resulting in improved forecast accuracy. This study aims to produce data-driven UWD forecasts for two different Canadian water utilities (Montreal and Victoria) using machine learning methods, by first selecting historical UWD and meteorological records derived from a stochastic weather generator using nonlinear input variable selection. The nonlinear input variable selection methods considered in this work are derived from the concept of conditional mutual information, a nonlinear dependency measure based on (multivariate) probability density functions that accounts for relevance, conditional relevance, and redundancy in a potential set of input variables. The results of our study indicate that stochastic weather inputs can improve UWD forecast accuracy for the two sites considered in this work. Nonlinear input variable selection is suggested as a means to identify which meteorological conditions should be utilized in the forecast.
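
    Stochastic weather generators of the kind referred to above are often built from a deterministic seasonal cycle plus autocorrelated random anomalies. A minimal single-variable sketch, with all parameter values assumed rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(3)
days, mean_temp, phi, sigma = 365, 12.0, 0.8, 2.5   # assumed parameters
# Daily temperature = mean + seasonal cycle + AR(1) anomalies a_t = phi*a_{t-1} + eps_t
season = 8.0 * np.sin(2 * np.pi * np.arange(days) / 365)
anom = np.zeros(days)
for t in range(1, days):
    anom[t] = phi * anom[t - 1] + rng.normal(0.0, sigma)
temps = mean_temp + season + anom
```

    Resampling `anom` with new random draws yields an ensemble of plausible future weather traces that can be fed to a demand forecaster; production generators also couple temperature to precipitation occurrence and amount.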

  12. Modeling and generating input processes

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M.E.

    1987-01-01

    This tutorial paper provides information relevant to the selection and generation of stochastic inputs to simulation studies. The primary area considered is multivariate but much of the philosophy at least is relevant to univariate inputs as well. 14 refs.

  13. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
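
    The core first-order computation behind such analyses is Var(y) ≈ gradᵀ Σ grad, evaluated at the input means, where the covariance matrix Σ carries the input correlations. A toy check against Monte Carlo, with the model and all numbers invented for illustration:

```python
import numpy as np

# Toy model y = g(x1, x2) = x1**2 + 3*x2 with correlated normal inputs
mu = np.array([1.0, 2.0])
sd = np.array([0.1, 0.2])
rho = 0.5
cov = np.array([[sd[0]**2,            rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2           ]])

grad = np.array([2.0 * mu[0], 3.0])      # gradient of g at the mean
var_analytic = float(grad @ cov @ grad)  # first-order variance, correlations included

# Monte Carlo check
rng = np.random.default_rng(4)
x = rng.multivariate_normal(mu, cov, size=200_000)
y = x[:, 0]**2 + 3.0 * x[:, 1]
```

    Setting `rho = 0` in the same sketch quantifies how much of the output variance is attributable to the correlation term alone, which is exactly the comparison the analytic method formalizes.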

  14. Input-variable sensitivity assessment for sediment transport relations

    Science.gov (United States)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
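
    For a power-law transport relation, first-order (MVFOSM-style) propagation of the log-output takes a particularly simple form: each input's share of Var(ln qs) is (exponent × coefficient of variation)². The relation and numbers below are hypothetical, chosen so that grain size dominates, as the study found:

```python
# Hypothetical power-law transport relation: qs = k * Q**1.2 * S**1.4 * D**-0.9
exponents = {"discharge_Q": 1.2, "slope_S": 1.4, "grain_size_D": -0.9}
cv = {"discharge_Q": 0.10, "slope_S": 0.05, "grain_size_D": 0.40}  # assumed CVs

# Each input's first-order contribution to Var(ln qs) is (b_i * cv_i)**2
contrib = {v: (exponents[v] * cv[v]) ** 2 for v in exponents}
total = sum(contrib.values())
share = {v: c / total for v, c in contrib.items()}
ranking = sorted(share, key=share.get, reverse=True)
```

    The ranking shows why a moderate exponent combined with a large input CV (grain size here) can outweigh a larger exponent on a well-constrained input, which is the practical message for designing field campaigns.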

  15. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  16. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
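
    The conditional-independence tests underlying the graphical-model construction can be illustrated with a linear-Gaussian example: two variables that are marginally correlated become (partially) uncorrelated once their common driver is regressed out. Everything below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=n)               # common driver
x = z + 0.5 * rng.normal(size=n)     # x and y are marginally correlated ...
y = z + 0.5 * rng.normal(size=n)     # ... but conditionally independent given z

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c from each."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return float(np.corrcoef(ra, rb)[0, 1])

marginal = float(np.corrcoef(x, y)[0, 1])   # clearly nonzero
conditional = partial_corr(x, y, z)         # close to zero
```

    A structure-learning algorithm would run many such tests to decide which edges between reduced random variables can be dropped, factorizing the joint PDF accordingly.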

  17. Variable Input and the Acquisition of Plural Morphology

    Science.gov (United States)

    Miller, Karen L.; Schmitt, Cristina

    2012-01-01

    The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…

  18. Sensitivity of a juvenile subject-specific musculoskeletal model of the ankle joint to the variability of operator-dependent input.

    Science.gov (United States)

    Hannah, Iain; Montefiori, Erica; Modenese, Luca; Prinold, Joe; Viceconti, Marco; Mazzà, Claudia

    2017-05-01

    Subject-specific musculoskeletal modelling is especially useful in the study of juvenile and pathological subjects. However, such methodologies typically require a human operator to identify key landmarks from medical imaging data and are thus affected by unavoidable variability in the parameters defined and in subsequent model predictions. The aim of this study was therefore to quantify the inter- and intra-operator repeatability of a subject-specific modelling methodology developed for the analysis of subjects with juvenile idiopathic arthritis. Three operators each created subject-specific musculoskeletal foot and ankle models via palpation of bony landmarks, adjustment of geometrical muscle points and definition of joint coordinate systems. These models were then fused to a generic Arnold lower limb model for each of three modelled patients. The repeatability of each modelling operation was found to be comparable to those previously reported for the modelling of healthy, adult subjects. However, the inter-operator repeatability of muscle point definition was significantly greater than the intra-operator repeatability, making the definition of muscle geometry the most significant source of output uncertainty. The development of automated procedures to prevent the misplacement of crucial muscle points should therefore be considered a particular priority for those developing subject-specific models.

  19. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Directory of Open Access Journals (Sweden)

    Luca Caricchi

    2016-04-01

    Magma fluxes in the Earth’s crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at a temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of the mode, median and standard deviation of calculated populations of zircon ages, to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.
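
    The forward calculation of a synthetic zircon-age population can be sketched as sampling crystallisation times in proportion to the time spent inside an assumed zircon crystallisation window during cooling (here a single cooling body, so time in the window stands in for magma volume within the window). The cooling history and temperatures below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# Assumed cooling history: exponential decay from 900 toward 650 degC over 1 Myr
t = np.linspace(0.0, 1.0, 1000)            # time after emplacement, Myr
T = 650.0 + 250.0 * np.exp(-t / 0.3)
in_window = (T > 700.0) & (T < 820.0)      # assumed zircon crystallisation range

# Zircons crystallise in proportion to time spent inside the window
w = in_window.astype(float)
ages = rng.choice(t, size=5000, p=w / w.sum())
median_age = float(np.median(ages))
```

    Repeating this for incrementally assembled intrusions (summing volumes in the window across increments) produces the age populations whose mode, median and spread the study relates to magma flux and final volume.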

  20. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which require a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' is used to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' is used to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
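
    For scalar inputs, the Sobol indices mentioned above can be estimated by the standard pick-freeze Monte Carlo scheme. The toy additive model below has known first-order indices S1 = 0.2 and S2 = 0.8, which the estimator should roughly recover; the functional-input case handled by the paper requires the metamodeling step instead.

```python
import numpy as np

def model(x):
    """Toy model with known first-order Sobol indices S1 = 0.2, S2 = 0.8."""
    return x[:, 0] + 2.0 * x[:, 1]

rng = np.random.default_rng(5)
n = 100_000
A = rng.random((n, 2))          # two independent U(0,1) input sample matrices
B = rng.random((n, 2))
y_A, y_B = model(A), model(B)
var_y = y_A.var()

S = []
for i in range(2):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]        # "pick-freeze": replace only column i
    # first-order index estimator: Cov(y_B, y_AB_i) / Var(y)
    S.append(float(np.mean(y_B * (model(AB_i) - y_A)) / var_y))
```

    The estimator works because `y_B` and `model(AB_i)` share only input i, so their covariance isolates Var(E[y | x_i]).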

  1. Determination of the optimal training principle and input variables in artificial neural network model for the biweekly chlorophyll-a prediction: a case study of the Yuqiao Reservoir, China.

    Science.gov (United States)

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang

    2015-01-01

    Predicting the levels of chlorophyll-a (Chl-a) is a vital component of water quality management, which ensures that urban drinking water is safe from harmful algal blooms. This study developed a model to predict Chl-a levels in the Yuqiao Reservoir (Tianjin, China) biweekly, using water quality and meteorological data from 1999-2012. First, six artificial neural networks (ANNs) and two non-ANN methods (principal component analysis and the support vector regression model) were compared to determine the appropriate training principle. Subsequently, three predictors with different input variables were developed to examine the feasibility of incorporating meteorological factors into Chl-a prediction, which usually relies only on water quality data. Finally, a sensitivity analysis was performed to examine how the Chl-a predictor reacts to changes in input variables. The results were as follows. First, the ANN is a powerful predictive alternative to the traditional modeling techniques used for Chl-a prediction. The back-propagation (BP) model yields slightly better results than all other ANNs, with the normalized mean square error (NMSE), the correlation coefficient (Corr), and the Nash-Sutcliffe coefficient of efficiency (NSE) at 0.003 mg/l, 0.880 and 0.754, respectively, in the testing period. Second, the incorporation of meteorological data greatly improved Chl-a prediction compared to models solely using water quality factors or meteorological data; the correlation coefficient increased from 0.574-0.686 to 0.880 when meteorological data were included. Finally, the Chl-a predictor is more sensitive to air pressure and pH than to the other water quality and meteorological variables.
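
    The sensitivity analysis step can be sketched as a one-at-a-time perturbation of each input of a fitted predictor. The linear stand-in function and its coefficients below are invented so that air pressure and pH dominate, mirroring the study's conclusion; the real predictor would be the trained ANN.

```python
# Stand-in for the trained predictor; coefficients are invented for illustration
def chl_a(air_pressure, pH, water_temp):
    return 0.05 * air_pressure + 2.0 * pH - 0.3 * water_temp

base = {"air_pressure": 1010.0, "pH": 8.0, "water_temp": 18.0}
y0 = chl_a(**base)

# One-at-a-time sensitivity: perturb each input by +10%, record the response change
sens = {}
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.10
    sens[name] = abs(chl_a(**bumped) - y0)
ranking = sorted(sens, key=sens.get, reverse=True)
```

    One-at-a-time perturbation ignores interactions between inputs; variance-based indices (as in the Sobol approach) are the usual remedy when interactions matter.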

  2. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  3. Generalized instrumental variable models

    OpenAIRE

    Andrew Chesher; Adam Rosen

    2014-01-01

    This paper develops characterizations of identified sets of structures and structural features for complete and incomplete models involving continuous or discrete variables. Multiple values of unobserved variables can be associated with particular combinations of observed variables. This can arise when there are multiple sources of heterogeneity, censored or discrete endogenous variables, or inequality restrictions on functions of observed and unobserved variables. The models g...

  4. Classifying variability modeling techniques

    NARCIS (Netherlands)

    Sinnema, Marco; Deelstra, Sybren

    Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The

  5. A Choice of Input Variables for a Multilayer Perceptron

    International Nuclear Information System (INIS)

    Bonyushkina, A.Yu.; Zrelov, P.V.; Ivanov, V.V.

    1994-01-01

    In this paper, some aspects of applying a multilayer perceptron (MLP) to the problem of classifying events represented by empirical samples of finite volume are considered. The results of MLP learning for various forms of the input data are analyzed, and the reasons leading to the effect of instantaneous learning of the MLP and the rise of the neural network are investigated for the case when the input data are presented in the form of a variational series. The problem of reducing the number of hidden-layer neurons without increasing the recognition error is discussed. (author). 13 refs., 6 figs., 1 tab.

  6. PM(10) emission forecasting using artificial neural networks and genetic algorithm input variable optimization.

    Science.gov (United States)

    Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A

    2013-01-15

    This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM(10) emissions at the national level, using widely available sustainability and economic/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm, and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM(10) emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM(10) emissions up to two years ahead can be made successfully and accurately. The mean absolute error for the two-year PM(10) emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables. Copyright © 2012 Elsevier B.V. All rights reserved.
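
    A minimal sketch of genetic-algorithm input selection in the spirit of this record. The data are synthetic and a least-squares linear model stands in for the ANN to keep the example short; the feature set, the fitness penalty and all GA settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 10 candidate inputs, only 3 are informative.
# (The paper's inputs -- GDP, energy consumption, etc. -- are replaced here
# by random features; a linear surrogate replaces the ANN for brevity.)
n_samples, n_features = 200, 10
X = rng.normal(size=(n_samples, n_features))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.8 * X[:, 7] + rng.normal(0.0, 0.1, n_samples)

def fitness(mask):
    """Negative validation MSE of a least-squares model on the selected inputs."""
    if not mask.any():
        return -np.inf
    train, val = slice(0, 150), slice(150, None)
    Xs = X[:, mask]
    coef, *_ = np.linalg.lstsq(Xs[train], y[train], rcond=None)
    resid = y[val] - Xs[val] @ coef
    # Penalize subset size to favour parsimonious input sets.
    return -np.mean(resid ** 2) - 0.01 * mask.sum()

# Tiny genetic algorithm over binary input masks.
pop = rng.random((30, n_features)) < 0.5
for _ in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # elitist selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        children.append(child ^ (rng.random(n_features) < 0.05))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected inputs:", np.flatnonzero(best))
```

    The elitism (keeping the ten best masks each generation) ensures the best subset found so far is never lost.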

  7. Handwriting generates variable visual input to facilitate symbol learning

    Science.gov (United States)

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then change neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception of both variable and similar forms. Comparisons across the six conditions (N=72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  8. Remote sensing inputs to water demand modeling

    Science.gov (United States)

    Estes, J. E.; Jensen, J. R.; Tinney, L. R.; Rector, M.

    1975-01-01

    In an attempt to determine the ability of remote sensing techniques to economically generate data required by water demand models, the Geography Remote Sensing Unit, in conjunction with the Kern County Water Agency of California, developed an analysis model. It was determined that agricultural cropland inventories utilizing both high-altitude photography and LANDSAT imagery can be conducted cost-effectively. In addition, by using average irrigation application rates in conjunction with cropland data, estimates of agricultural water demand can be generated. More accurate estimates are possible, however, if crop type, acreage, and crop-specific application rates are employed. The effect of saline-alkali soils on water demand in the study area is also examined. Finally, reference is made to the detection and delineation of water tables that are perched near the surface by semi-permeable clay layers. Soil salinity prediction, automated crop identification on a by-field basis, and a potential input to the determination of zones of equal-benefit taxation are briefly touched upon.

  9. Multiobjective reservoir operating rules based on cascade reservoir input variable selection method

    Science.gov (United States)

    Yang, Guang; Guo, Shenglian; Liu, Pan; Li, Liping; Xu, Chongyu

    2017-04-01

    The input variable selection in multiobjective cascade reservoir operation is an important and difficult task. To address this problem, this study proposes the cascade reservoir input variable selection (CIS) method, which searches for the most valuable input variables for decision making in multiobjective cascade reservoir operation. In a case study of the Hanjiang cascade reservoirs in China, we derive reservoir operating rules based on the combination of the CIS and Gaussian radial basis function (RBF) methods and optimize the rules through Pareto-archived dynamically dimensioned search (PA-DDS) with two objectives: to maximize both power generation and water supply. We select the most effective input variables and evaluate their impacts on cascade reservoir operations. From the simulated trajectories of reservoir water level, power generation, and water supply, we analyze the multiobjective operating rules with several input variables. The results demonstrate that the CIS method performs well in the selection of input variables for the cascade reservoir operation, and the RBF method can fully express the nonlinear operating rules for cascade reservoirs. We conclude that the CIS method is an effective and stable approach to identifying the most valuable information from a large number of candidate input variables. While the reservoir storage state is the most valuable information for the Hanjiang cascade reservoir multiobjective operation, the reservoir inflow is the most effective input variable for the single-objective operation of Danjiangkou.

  10. A stochastic model of input effectiveness during irregular gamma rhythms.

    Science.gov (United States)

    Dumont, Grégory; Northoff, Georg; Longtin, André

    2016-02-01

    Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How does the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al. Nature, 459(7247), 663-667 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient model E-I network of stochastic two-state neurons exhibits finite-size fluctuations. Using the Hilbert transform and Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between the phase selectivity and depth of rate modulation. We also show that the CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by the finite-size noise retain the basic CTC properties. Finally a resonance analysis confirms the relative importance of the I cell pacing for rhythm generation. 
Analysis of whole network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of
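
    The degree of entrainment that this record reports via the Kuramoto index can be illustrated with a toy computation on synthetic phases — just the order parameter itself, not the authors' stochastic E-I network model.

```python
import numpy as np

def kuramoto_index(phases):
    """Kuramoto order parameter r in [0, 1]: 1 = perfect phase locking."""
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
n = 500

# Strongly entrained population: phases tightly clustered around 0.
locked = rng.normal(0.0, 0.1, n)
# Unentrained population: phases uniform on the circle.
free = rng.uniform(-np.pi, np.pi, n)

r_locked, r_free = kuramoto_index(locked), kuramoto_index(free)
print(f"locked: r = {r_locked:.2f}, free: r = {r_free:.2f}")
```

    In the paper's setting the phases would come from the Hilbert transform of the network's gamma-band activity across trials.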

  11. Study of input variables in group method of data handling methodology

    International Nuclear Information System (INIS)

    Pereira, Iraci Martinez; Bueno, Elaine Inacio

    2013-01-01

    The Group Method of Data Handling - GMDH is a combinatorial multi-layer algorithm in which a network of layers and nodes is generated using a number of inputs from the data stream being evaluated. The GMDH network topology has been traditionally determined using a layer-by-layer pruning process based on a pre-selected criterion of what constitutes the best nodes at each level. The traditional GMDH method is based on an underlying assumption that the data can be modeled by an approximation of the Volterra series or Kolmogorov-Gabor polynomial. A Monitoring and Diagnosis System was developed based on the GMDH and ANN methodologies, and applied to the IPEN research reactor IEA-R1. The system performs the monitoring by comparing the GMDH- and ANN-calculated values with the measured ones. As GMDH is a self-organizing methodology, the choice of input variables is made automatically. On the other hand, the results of the ANN methodology are strongly dependent on which variables are used as neural network input. (author)
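
    The core GMDH step described here — building a layer of pairwise Kolmogorov-Gabor polynomial nodes and ranking them by an external (hold-out) criterion — can be sketched as follows. The data are synthetic and the single-layer setup is a deliberate simplification of the full multi-layer algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monitoring data: the target depends nonlinearly on 2 of 4 variables.
n = 300
X = rng.normal(size=(n, 4))
y = 1.0 + X[:, 0] * X[:, 1] + 0.5 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, n)
train, val = slice(0, 200), slice(200, None)

def design(a, b):
    # Quadratic Kolmogorov-Gabor polynomial terms of two inputs.
    return np.column_stack([np.ones_like(a), a, b, a * b, a ** 2, b ** 2])

# One GMDH layer: fit a node for every input pair and rank the nodes by
# external (hold-out) error -- the layer-by-layer selection criterion.
candidates = []
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        A = design(X[train, i], X[train, j])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = design(X[val, i], X[val, j]) @ coef
        candidates.append((float(np.mean((y[val] - pred) ** 2)), (i, j)))

best_err, best_pair = min(candidates)
print("best input pair:", best_pair, f"(hold-out MSE {best_err:.4f})")
```

    In the full algorithm the surviving node outputs become the inputs of the next layer, and layers are added until the external criterion stops improving.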

  12. Soil-related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    A. J. Smith

    2003-01-01

    This analysis is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the geologic repository at Yucca Mountain. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN biosphere model is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003 [163602]). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. ''The Biosphere Model Report'' (BSC 2003 [160699]) describes in detail the conceptual model as well as the mathematical model and its input parameters. The purpose of this analysis was to develop the biosphere model parameters needed to evaluate doses from pathways associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation and ash

  13. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing dynamic models of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In optimal input design, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Effects of Input Power Factor Correction on Variable Speed Drive Systems

    OpenAIRE

    Lee, Shiyoung

    1999-01-01

    The use of variable speed drive (VSD) systems in the appliance industry is growing due to emerging high volume of fractional horsepower VSD applications. Almost all of the appliance VSDs have no input power factor correction (PFC) circuits. This results in harmonic pollution of the utility supply which could be avoided. The impact of the PFC circuit in the overall drive system efficiency, harmonic content, magnitude of the system input current and input power factor is particularly address...

  15. Speaker Input Variability Does Not Explain Why Larger Populations Have Simpler Languages.

    Science.gov (United States)

    Atkinson, Mark; Kirby, Simon; Smith, Kenny

    2015-01-01

    A learner's linguistic input is more variable if it comes from a greater number of speakers. Higher speaker input variability has been shown to facilitate the acquisition of phonemic boundaries, since data drawn from multiple speakers provides more information about the distribution of phonemes in a speech community. It has also been proposed that speaker input variability may have a systematic influence on individual-level learning of morphology, which can in turn influence the group-level characteristics of a language. Languages spoken by larger groups of people have less complex morphology than those spoken in smaller communities. While a mechanism by which the number of speakers could have such an effect is yet to be convincingly identified, differences in speaker input variability, which is thought to be larger in larger groups, may provide an explanation. By hindering the acquisition, and hence faithful cross-generational transfer, of complex morphology, higher speaker input variability may result in structural simplification. We assess this claim in two experiments which investigate the effect of such variability on language learning, considering its influence on a learner's ability to segment a continuous speech stream and acquire a morphologically complex miniature language. We ultimately find no evidence to support the proposal that speaker input variability influences language learning and so cannot support the hypothesis that it explains how population size determines the structural properties of language.

  17. Soil-Related Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Smith, A. J.

    2004-01-01

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure was defined as AP-SIII.9Q, ''Scientific Analyses''. This

  18. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  19. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design that minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
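
    The paper's conclusion — an FIR model probed with an impulse at the start of the observation interval — has a simple interpretation: the measured output then directly exposes the channel taps. A toy sketch with hypothetical taps and noise level (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4-tap FIR communication channel (unknown to the identifier).
h_true = np.array([0.9, 0.4, -0.2, 0.1])
T = 32  # observation interval length

def channel(u, noise=0.01):
    # FIR filtering plus measurement noise, truncated to the observation window.
    return np.convolve(u, h_true)[:len(u)] + rng.normal(0.0, noise, len(u))

# Impulse at the start of the observation interval, as the paper concludes.
u = np.zeros(T)
u[0] = 1.0
y = channel(u)

# For an impulse input, the output IS the impulse response, so the FIR
# estimate is simply the first taps of y (least squares reduces to this).
h_est = y[:len(h_true)]
print("estimated taps:", np.round(h_est, 2))
```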

  20. Input Selection for Return Temperature Estimation in Mixing Loops using Partial Mutual Information with Flow Variable Delay

    DEFF Research Database (Denmark)

    Overgaard, Anders; Kallesøe, Carsten Skovmose; Bendtsen, Jan Dimon

    2017-01-01

    access to data, there is a desire to create a data-driven model for control. Due to the large amount of available data, an input selection method called "Partial Mutual Information" (PMI) is applied. This paper introduces a method for including flow-variable delays in PMI. Data from a
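
    A rough sketch of PMI-style greedy input selection. Histogram MI estimation and linear residuals are simplifications (PMI is usually formulated with kernel density estimates and nonparametric regression), and the data are synthetic — none of it comes from this record's mixing-loop application.

```python
import numpy as np

rng = np.random.default_rng(4)

def mutual_info(a, b, bins=12):
    """Histogram estimate of mutual information (nats) between two samples."""
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def residual(v, basis):
    """Remove the (linear, for simplicity) influence of selected inputs from v."""
    if basis.shape[1] == 0:
        return v
    Z = np.column_stack([np.ones(len(v)), basis])
    coef, *_ = np.linalg.lstsq(Z, v, rcond=None)
    return v - Z @ coef

# Synthetic data: the output y depends on inputs 0 and 2 only;
# input 1 is a noisy copy of input 0 (redundant).
n = 2000
X = rng.normal(size=(n, 4))
X[:, 1] = X[:, 0] + rng.normal(0.0, 0.5, n)
y = X[:, 0] + 0.8 * X[:, 2] + rng.normal(0.0, 0.1, n)

selected = []
for _ in range(2):  # two rounds of greedy PMI forward selection
    S = X[:, selected]
    ry = residual(y, S)
    scores = [mutual_info(residual(X[:, k], S), ry) if k not in selected else -1.0
              for k in range(X.shape[1])]
    selected.append(int(np.argmax(scores)))

print("selected inputs:", selected)
```

    Because candidate and output are both residualized against the inputs already chosen, the redundant input 1 scores near zero in the second round and is skipped.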

  1. Variance-based sensitivity indices for models with dependent inputs

    International Nuclear Information System (INIS)

    Mara, Thierry A.; Tarantola, Stefano

    2012-01-01

    Computational models are intensively used in engineering for risk analysis and prediction of future outcomes. Uncertainty and sensitivity analyses are of great help for these purposes. Although several methods exist to perform variance-based sensitivity analysis of model output with independent inputs, only a few have been proposed in the literature for the case of dependent inputs. This is explained by the fact that the theoretical framework for the independent case is settled and a univocal set of variance-based sensitivity indices is defined. In the present work, we propose a set of variance-based sensitivity indices to perform sensitivity analysis of models with dependent inputs. These measures allow us to distinguish between the mutually dependent contribution and the independent contribution of an input to the model response variance. Their definition relies on a specific orthogonalisation of the inputs and on ANOVA representations of the model output. In the applications, we show the interest of the new sensitivity indices in a model simplification setting. - Highlights: ► Uncertainty and sensitivity analyses are of great help in engineering. ► Several methods exist to perform variance-based sensitivity analysis of model output with independent inputs. ► We define a set of variance-based sensitivity indices for models with dependent inputs. ► Inputs' mutual contributions are distinguished from their independent contributions. ► Analytical and computational tests are performed and discussed.
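
    For the independent-input baseline that this record generalizes, first-order variance-based (Sobol') indices can be estimated with a standard pick-freeze scheme. The test model and sample sizes below are illustrative assumptions, chosen so the exact indices are known.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(X):
    # Simple additive test model with known variance decomposition:
    # Var(Y) = 1^2 + 2^2 = 5, so S_0 = 0.2 and S_1 = 0.8 exactly.
    return X[:, 0] + 2.0 * X[:, 1]

d, n = 2, 100_000
A = rng.normal(size=(n, d))
B = rng.normal(size=(n, d))
yA, yB = model(A), model(B)

# Pick-freeze estimate of first-order indices for INDEPENDENT inputs;
# the paper's contribution is extending such indices to dependent
# inputs via an orthogonalisation of the input set.
var = yA.var()
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]  # freeze input i at its value from sample A
    S.append(float(np.mean(yA * (model(ABi) - yB)) / var))

print("first-order Sobol' indices:", np.round(S, 2))
```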

  2. Variable importance in latent variable regression models

    NARCIS (Netherlands)

    Kvalheim, O.M.; Arneberg, R.; Bleie, O.; Rajalahti, T.; Smilde, A.K.; Westerhuis, J.A.

    2014-01-01

    The quality and practical usefulness of a regression model are a function of both interpretability and prediction performance. This work presents some new graphical tools for improved interpretation of latent variable regression models that can also assist in improved algorithms for variable

  3. Analysis of reactor capital costs and correlated sampling of economic input variables - 15342

    International Nuclear Information System (INIS)

    Ganda, F.; Kim, T.K.; Taiwo, T.A.; Wigeland, R.

    2015-01-01

    In this paper we present work aimed at enhancing the capability to perform nuclear fuel cycle cost estimates and evaluation of financial risk. Reactor capital costs are of particular relevance, since they typically comprise about 60% to 70% of the calculated Levelized Cost of Electricity at Equilibrium (LCAE). The work starts with the collection of historical construction cost and construction duration of nuclear plants in the U.S. and France, as well as forecasted costs of nuclear plants currently under construction in the U.S. This data has the primary goal of supporting the introduction of an appropriate framework, supported in this paper by two case studies with historical data, which allows the development of solid and defensible assumptions on nuclear reactor capital costs. Work is also presented on the enhancement of the capability to model interdependence of cost estimates between facilities and uncertainties. The correlated sampling capabilities in the nuclear economic code NECOST have been expanded to include partial correlations between input variables, according to a given correlation matrix. Accounting for partial correlations correctly allows a narrowing, where appropriate, of the probability density function of the difference in the LCAE between alternative, but correlated, fuel cycles. It also allows the correct calculation of the standard deviation of the LCAE of multistage systems, which appears smaller than the correct value if correlated input costs are treated as uncorrelated. (authors)
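
    Correlated sampling with a given correlation matrix, as described here for NECOST, is commonly implemented by transforming independent standard normals with a Cholesky factor. A generic sketch — the cost figures and correlations below are invented for illustration, not NECOST data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical correlation matrix between three cost inputs, e.g.
# overnight capital cost, construction duration, and O&M cost.
corr = np.array([[1.0, 0.7, 0.3],
                 [0.7, 1.0, 0.2],
                 [0.3, 0.2, 1.0]])
means = np.array([5000.0, 7.0, 100.0])  # $/kW, years, $/kW-yr (illustrative)
sds = np.array([1000.0, 1.5, 15.0])

# Induce the target correlations on independent standard normals
# via the Cholesky factor of the correlation matrix.
L = np.linalg.cholesky(corr)
z = rng.normal(size=(100_000, 3))
samples = means + (z @ L.T) * sds

print("sample correlation:\n", np.round(np.corrcoef(samples.T), 2))
```

    Sampling the inputs jointly like this, instead of independently, is what narrows (or widens) the resulting LCAE distribution in the way the abstract describes.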

  4. A didactic Input-Output model for territorial ecology analyses

    OpenAIRE

    Garry Mcdonald

    2010-01-01

    This report describes a didactic input-output modelling framework created jointly by the team at REEDS, Universite de Versailles, and Dr Garry McDonald, Director, Market Economics Ltd. There are three key outputs associated with this framework: (i) a suite of didactic input-output models developed in Microsoft Excel, (ii) a technical report (this report) which describes the framework and the suite of models, and (iii) a two-week intensive workshop dedicated to the training of REEDS researcher
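
    The classic didactic computation in any input-output framework is the Leontief quantity model, solving x = Ax + d for total output. A minimal sketch with invented coefficients (not the REEDS workbook):

```python
import numpy as np

# Didactic 3-sector input-output model (illustrative coefficients).
# A[i, j] = input from sector i needed per unit of output of sector j.
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.10, 0.20],
              [0.10, 0.05, 0.10]])
final_demand = np.array([100.0, 150.0, 80.0])

# Leontief quantity model: total output x solves x = A x + d.
x = np.linalg.solve(np.eye(3) - A, final_demand)
print("total output by sector:", np.round(x, 1))

# Type-I output multipliers: column sums of the Leontief inverse.
multipliers = np.linalg.inv(np.eye(3) - A).sum(axis=0)
print("output multipliers:", np.round(multipliers, 2))
```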

  5. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, G. A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
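
    The semi-variogram analysis used to find the characteristic length can be sketched on a synthetic 1-D transect; the moving-average correlation structure and all parameters below are illustrative assumptions, not the Upper Sheep Creek data.

```python
import numpy as np

rng = np.random.default_rng(7)

def empirical_semivariogram(z, lags):
    """gamma(h) = mean of 0.5 * (z[i+h] - z[i])^2 over a 1-D transect."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

# Synthetic 1-D transect (e.g. surface temperature samples) with
# short-range spatial correlation imposed by a moving-average filter.
raw = rng.normal(size=600)
z = np.convolve(raw, np.ones(6) / 6, mode="valid")  # correlation length ~6 cells

lags = np.arange(1, 20)
gamma = empirical_semivariogram(z, lags)

# The lag at which gamma levels off at the sill (the sample variance)
# indicates the characteristic length scale of the input data.
print("semivariance at lag 1 and lag 15:", round(float(gamma[0]), 3),
      round(float(gamma[14]), 3))
```

    Reading off where the curve flattens gives the range, and aggregating the input data to grid cells coarser than that range discards the spatial variability the model needs — the paper's argument for the ~10 m grid.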

  6. Characteristic length scale of input data in distributed models: implications for modeling grain size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The correlation (agreement) between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed to be the best compromise to achieve: (a) a reduction of computation time and of the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.

  7. Computer supported estimation of input data for transportation models

    OpenAIRE

    Cenek, Petr; Tarábek, Peter; Kopf, Marija

    2010-01-01

    Control and management of transportation systems frequently rely on optimization or simulation methods based on a suitable model. Such a model uses optimization or simulation procedures and correct input data. The input data define transportation infrastructure and transportation flows. Data acquisition is a costly process and so an efficient approach is highly desirable. The infrastructure can be recognized from drawn maps using segmentation, thinning and vectorization. The accurate definiti...

  8. Applications of flocking algorithms to input modeling for agent movement

    OpenAIRE

    Singham, Dashi; Therkildsen, Meredith; Schruben, Lee

    2011-01-01

    Refereed Conference Paper. The article of record as published can be found at http://dx.doi.org/10.1109/WSC.2011.6147953. Simulation flocking has been introduced as a method for generating simulation input from multivariate dependent time series for sensitivity and risk analysis. It can be applied to data for which a parametric model is not readily available or imposes too many restrictions on the possible inputs. This method uses techniques from agent-based modeling to generate ...

  9. Space market model space industry input-output model

    Science.gov (United States)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An Input-Output model of market activity is proposed which is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.

  10. Graphical user interface for input output characterization of single variable and multivariable highly nonlinear systems

    Directory of Open Access Journals (Sweden)

    Shahrukh Adnan Khan M. D.

    2017-01-01

    This paper presents a Graphical User Interface (GUI) software utility for the input/output characterization of single-variable and multivariable nonlinear systems by obtaining the sinusoidal input describing function (SIDF) of the plant. The software utility is developed in the MATLAB R2011a environment. The developed GUI places no restriction on the nonlinearity type, arrangement, or system order, provided that the output(s) of the system are obtainable either through simulation or experiments. An insight into the GUI and its features is presented in this paper, and example problems from both single-variable and multivariable cases are demonstrated. The formulation of the input/output behavior of the system is discussed, and the nucleus of the MATLAB commands underlying the user interface is outlined. The industries that would benefit from this software utility include, but are not limited to, aerospace, defense technology, robotics, and automotive.
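
    As an illustration of what such a utility computes: the SIDF of a static nonlinearity can be estimated numerically by driving it with A·sin(ωt) and extracting the first Fourier harmonic of the output. The sketch below is Python rather than the record's MATLAB, and the ideal relay is a stand-in example; the classical result N(A) = 4M/(πA) for a relay of output level M provides a sanity check.

    ```python
    import math

    def sidf_gain(nonlinearity, amplitude, n=1000):
        """First-harmonic (describing-function) gain N(A) of a static
        nonlinearity driven by A*sin(t), estimated by numerical quadrature."""
        a1 = b1 = 0.0
        for k in range(n):
            t = 2 * math.pi * k / n
            y = nonlinearity(amplitude * math.sin(t))
            b1 += y * math.sin(t)  # in-phase component
            a1 += y * math.cos(t)  # quadrature component (~0 for odd memoryless NLs)
        return complex(2 * b1 / n, 2 * a1 / n) / amplitude

    # Ideal relay with output level M = 1
    relay = lambda x: 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
    ```

    The amplitude dependence of N(A) is what distinguishes describing-function analysis of nonlinear plants from linear frequency response.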

  11. Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment

    Science.gov (United States)

    Alejo, Rafael; Piquer-Píriz, Ana

    2016-01-01

    The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…

  12. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez

    2011-01-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants and process industries are very important, because an early diagnosis allows the fault to be corrected, preventing production stoppages, improving operator safety, and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, a set of input variables for an artificial neural network that monitors the greatest possible number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, the variables nuclear power, primary circuit flow rate, control/safety rod position, and differential pressure in the reactor core were selected for the network's input set, because almost all of the monitored variables are related to these variables, or their behavior results from the interaction of two or more of them. The nuclear power is related to the rise and fall of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the rise and fall of temperatures; and the primary circuit flow rate transports energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables. (author)
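
    The selection idea can be sketched as follows, under stated assumptions: the variable names and the 0.9 threshold below are hypothetical, and the sketch uses only the best bivariate correlation with a member of the base set, whereas the paper additionally uses the multiple/canonical correlation with the whole base set.

    ```python
    import math

    def pearson(x, y):
        """Sample Pearson correlation coefficient of two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sx = math.sqrt(sum((v - mx) ** 2 for v in x))
        sy = math.sqrt(sum((v - my) ** 2 for v in y))
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

    def monitorable(candidates, base_set, threshold=0.9):
        """Candidate variables deemed predictable from the base set B, here
        approximated by the strongest single correlation with any member of B."""
        selected = {}
        for name, series in candidates.items():
            r = max(abs(pearson(series, b)) for b in base_set.values())
            if r >= threshold:
                selected[name] = r
        return selected
    ```

    A variable that correlates strongly with nothing in B would, as in the paper, prompt adding a further variable to B rather than being dropped.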

  13. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez, E-mail: valter.costa@usp.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants and process industries are very important, because an early diagnosis allows the fault to be corrected, preventing production stoppages, improving operator safety, and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, a set of input variables for an artificial neural network that monitors the greatest possible number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, the variables nuclear power, primary circuit flow rate, control/safety rod position, and differential pressure in the reactor core were selected for the network's input set, because almost all of the monitored variables are related to these variables, or their behavior results from the interaction of two or more of them. The nuclear power is related to the rise and fall of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the rise and fall of temperatures; and the primary circuit flow rate transports energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables. (author)

  14. Effect of variable heat input on the heat transfer characteristics in an Organic Rankine Cycle system

    Directory of Open Access Journals (Sweden)

    Aboaltabooq Mahdi Hatf Kadhum

    2016-01-01

    This paper analyzes the heat transfer characteristics of an ORC evaporator applied to a diesel engine, using measured data from experimental work such as flue gas mass flow rate and flue gas temperature. A mathematical model was developed for the preheater, boiler, and superheater zones of a counter-flow evaporator. Each of these zones was subdivided into a number of cells. The hot source of the ORC cycle was modeled. The study examines the dependence of the ORC system's heat transfer characteristics on variable heat input, with special emphasis on the evaporator. The results show that the refrigerant's heat transfer coefficient has its highest value at 100% diesel engine load and decreases as the load decreases. On the exhaust gas side, the heat transfer coefficient likewise decreases with decreasing load. The refrigerant's heat transfer coefficient increases gradually with the evaporator's tube length in the preheater zone, then increases rapidly in the boiler zone, followed by a decrease in the superheater zone. The exhaust gas heat transfer coefficient increases with the evaporator's tube length in all zones. The results were compared with results by other authors and were found to be in agreement.

  15. Bayesian tsunami fragility modeling considering input data uncertainty

    OpenAIRE

    De Risi, Raffaele; Goda, Katsu; Mori, Nobuhito; Yasuda, Tomohiro

    2017-01-01

    Empirical tsunami fragility curves are developed based on a Bayesian framework by accounting for uncertainty of input tsunami hazard data in a systematic and comprehensive manner. Three fragility modeling approaches, i.e. lognormal method, binomial logistic method, and multinomial logistic method, are considered, and are applied to extensive tsunami damage data for the 2011 Tohoku earthquake. A unique aspect of this study is that uncertainty of tsunami inundation data (i.e. input hazard data ...
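
    As a minimal sketch of the lognormal method named in the record (not the authors' Bayesian implementation): a lognormal fragility curve P(damage | depth h) = Φ((ln h − μ)/σ) can be fit to binary damage data. The crude grid-search maximum-likelihood fit below stands in for the posterior exploration, and the data in the usage example are hypothetical.

    ```python
    import math

    def norm_cdf(z):
        """Standard normal CDF via the error function."""
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    def lognormal_fragility(h, mu, sigma):
        """P(damage | tsunami inundation depth h) under the lognormal method."""
        return norm_cdf((math.log(h) - mu) / sigma)

    def fit_by_grid(depths, damaged, mus, sigmas):
        """Pick (mu, sigma) maximizing the Bernoulli log-likelihood over a
        coarse parameter grid; a stand-in for proper Bayesian estimation."""
        best, best_ll = None, -float("inf")
        for mu in mus:
            for sg in sigmas:
                ll = 0.0
                for h, d in zip(depths, damaged):
                    p = min(max(lognormal_fragility(h, mu, sg), 1e-12), 1 - 1e-12)
                    ll += math.log(p) if d else math.log(1 - p)
                if ll > best_ll:
                    best, best_ll = (mu, sg), ll
        return best
    ```

    A Bayesian treatment as in the record would additionally propagate the uncertainty of the depth observations themselves into the fitted curve.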

  16. Agricultural and Environmental Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rasmuson; K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters

  17. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds are weather related. Rarely, however, are meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  18. Wide Input Range Power Converters Using a Variable Turns Ratio Transformer

    DEFF Research Database (Denmark)

    Ouyang, Ziwei; Andersen, Michael A. E.

    2016-01-01

    A new integrated transformer with variable turns ratio is proposed to enable dc-dc converters to operate over a wide input voltage range. The integrated transformer employs a new magnetic core geometry with “four legs”, two primary windings in an orthogonal arrangement, and an “8”-shape connection of diagonal secondary windings, in order to make the transformer turns ratio adjustable by controlling the phase between the two current excitations applied to the two primary windings. A full-bridge boost dc-dc converter is employed with the proposed transformer to demonstrate the feasibility of the variable turns ratio. A 1-kW experimental prototype targeting a PV standalone system has been built to demonstrate wide input voltage operation with high efficiencies.

  19. Variable input observer for state estimation of high-rate dynamics

    Science.gov (United States)

    Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob

    2017-04-01

    High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds, since systems experiencing high-rate dynamics undergo rapid changes. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than a traditional fixed input space strategy.
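
    A deliberately simple sketch of the VIO idea, with hypothetical thresholds and window lengths: the observer's input space (here just the number of lagged measurements fed to a trivial averaging estimator) is switched as the event unfolds instead of being fixed, which is the paper's departure from traditional observers. The actual VIO is a neuro-observer trained on impact data; this is only an illustration of the switching mechanism.

    ```python
    def select_lag(recent, quiet_lag=2, impact_lag=5, threshold=1.0):
        """Pick a larger input space when recent signal energy indicates
        an impact; a small one during quiet operation."""
        energy = sum(v * v for v in recent) / len(recent)
        return impact_lag if energy > threshold else quiet_lag

    def vio_estimate(history, threshold=1.0):
        """One-step estimate from a variable-length input window chosen
        from the last two samples' energy."""
        lag = select_lag(history[-2:], threshold=threshold)
        window = history[-lag:]
        return sum(window) / len(window)
    ```

    In the paper the switch is over which lagged inputs feed the neuro-observer, and the estimator is learned rather than an average; the fixed-input baseline corresponds to always using the same lag.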

  20. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes

    2011-01-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants and process industries are very important, because an early diagnosis allows the fault to be corrected without interrupting production, improving operator safety and avoiding economic losses. The objective of this work is to build, from the whole set of monitored variables of a nuclear power plant, a subset, not necessarily minimal, to serve as the input of an artificial neural network and, in this way, to monitor the largest possible number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. The variables power, primary circuit flow rate, control/safety rod position, and differential pressure in the reactor core (ΔP) were grouped because, by hypothesis, almost all of the monitored variables are related to them, or their behavior results from the interaction of two or more of them. The power is related to the rise and fall of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the rise and fall of temperatures; and the primary circuit flow rate transports energy by removing heat from the core. Thus, labeling B = {power, primary circuit flow rate, control/safety rod position, ΔP}, the correlation between B and every other monitored variable (the coefficient of multiple correlation, a tool from the theory of canonical correlations) was computed; that is, it was possible to quantify how well the set B predicts each variable. When B could not satisfactorily predict a given variable, one or more variables highly correlated with it were added to improve the quality of the prediction. In this work an artificial neural network

  1. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    K. Rautenstrauch

    2004-01-01

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception

  2. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  3. Investigation of RADTRAN Stop Model input parameters for truck stops

    International Nuclear Information System (INIS)

    Griego, N.R.; Smith, J.D.; Neuhauser, K.S.

    1996-01-01

    RADTRAN is a computer code for estimating the risks and consequences associated with the transport of radioactive materials (RAM). RADTRAN was developed and is maintained by Sandia National Laboratories for the US Department of Energy (DOE). For incident-free transportation, the dose to persons exposed while the shipment is stopped is frequently a major percentage of the overall dose. This dose is referred to as Stop Dose and is calculated by the Stop Model. Because stop dose is a significant portion of the overall dose associated with RAM transport, the values used as input for the Stop Model are important. Therefore, an investigation of typical values for RADTRAN Stop Model parameters for truck stops was performed. The resulting data from these investigations were analyzed to provide mean values, standard deviations, and histograms. Hence, the mean values can be used when an analyst does not have a basis for selecting other input values for the Stop Model. In addition, the histograms and their characteristics can be used to guide statistical sampling techniques to measure the sensitivity of the RADTRAN-calculated Stop Dose to uncertainties in the Stop Model input parameters. This paper discusses the details and presents the results of the investigation of Stop Model input parameters at truck stops

  4. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain a detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573])

  5. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain a detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  6. Modeling Shared Variables in VHDL

    DEFF Research Database (Denmark)

    Madsen, Jan; Brage, Jens P.

    1994-01-01

    A set of concurrent processes communicating through shared variables is an often used model for hardware systems. This paper presents three modeling techniques for representing such shared variables in VHDL, depending on the acceptable constraints on accesses to the variables. Also a set of guide...

  7. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Kaylie Rasmuson; Kurt Rautenstrauch

    2003-06-20

    This analysis is one of nine technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. It documents input parameters for the biosphere model, and supports the use of the model to develop Biosphere Dose Conversion Factors (BDCF). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in the biosphere Technical Work Plan (TWP, BSC 2003a). It should be noted that some documents identified in Figure 1-1 may be under development and therefore not available at the time this document is issued. The ''Biosphere Model Report'' (BSC 2003b) describes the ERMYN and its input parameters. This analysis report, ANL-MGR-MD-000006, ''Agricultural and Environmental Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. This report defines and justifies values for twelve parameters required in the biosphere model. These parameters are related to use of contaminated groundwater to grow crops. The parameter values recommended in this report are used in the soil, plant, and carbon-14 submodels of the ERMYN.

  8. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.

  9. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. Wasiolek

    2006-01-01

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. 
This report is concerned primarily with the

  10. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  11. 'Quantization' of stochastic variables: description and effects on the input noise sources in a BWR

    International Nuclear Information System (INIS)

    Matthey, M.

    1979-01-01

    A set of macrostochastic and discrete variables, with Markovian properties, is used to characterize the state of a BWR, whose input noise sources are of interest. The ratio between the auto-power spectral density (APSD) of the neutron noise fluctuations and the square modulus of the transfer function (SMTF) defines 'the total input noise source' (TINS), the components of which are the different noise sources corresponding to the relevant variables. A white contribution to TINS arises from the birth and death processes of neutrons in the reactor and corresponds to a 'shot noise' (SN). Non-white contributions arise from fluctuations of the neutron cross-sections caused by fuel temperature and steam content variations. These terms, called 'Flicker noises' (FN), are characterized by cut-off frequencies related to the time constants of reactivity feedback effects. The respective magnitudes of the shot and flicker noises depend not only on the frequency, the feedback reactivity coefficients and the power of the reactor, but also on the 'quantization' of the continuous variables introduced, such as fuel temperature and steam content. The effects of this 'quantization' on the shapes of the noise sources and their sum are presented in this paper. (author)
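
    In symbols, the quantities described in the abstract relate as follows. The decomposition into a white shot-noise term plus Lorentzian flicker terms with cut-off frequencies is an assumption consistent with the stated description, and the S symbols are notation introduced here, not taken from the paper:

```latex
% TINS: ratio of the neutron-noise APSD to the squared modulus of the
% transfer function, split into a white (shot noise) part and non-white
% (flicker noise) parts with feedback-related cut-off frequencies.
\mathrm{TINS}(\omega)
  = \frac{\mathrm{APSD}(\omega)}{\lvert H(\omega)\rvert^{2}}
  = S_{\mathrm{SN}} + \sum_{k} S_{\mathrm{FN},k}(\omega),
\qquad
S_{\mathrm{FN},k}(\omega) \propto \frac{1}{1 + (\omega/\omega_{c,k})^{2}}
```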

  12. Input point distribution for regular stem form spline modeling

    Directory of Open Access Journals (Sweden)

    Karel Kuželka

    2015-04-01

    Full Text Available Aim of study: To optimize an interpolation method and distribution of measured diameters to represent the regular stem form of coniferous trees using a set of discrete points. Area of study: Central-Bohemian highlands, Czech Republic; a region that represents average stand conditions of production forests of Norway spruce (Picea abies [L.] Karst.) in central Europe. Material and methods: The accuracy of stem curves modeled using natural cubic splines from a set of measured diameters was evaluated for 85 closely measured stems of Norway spruce using five statistical indicators and compared to the accuracy of three additional models based on different spline types selected for their ability to represent stem curves. The optimal positions to measure diameters were identified using an aggregate objective function approach. Main results: The optimal positions of the input points vary depending on the properties of each spline type. If the optimal input points for each spline are used, then all spline types are able to give reasonable results with higher numbers of input points. The commonly used natural cubic spline was outperformed by other spline types. The lowest errors occur when interpolating the points using the Catmull-Rom spline, which gives accurate and unbiased volume estimates, even with only five input points. Research highlights: The study contributes to more accurate representation of stem form and therefore more accurate estimation of stem volume using data obtained from terrestrial imagery or other close-range remote sensing methods.
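
    As a rough illustration of the approach (not the study's code), the baseline natural cubic spline can be fitted to a handful of measured diameters and the stem volume obtained by integrating the cross-sectional area along the stem. All measurements and the tree height below are hypothetical:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical measurements: relative heights (0 = base, 1 = top) and diameters (cm)
heights = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])
diams_cm = np.array([32.0, 28.5, 25.0, 20.5, 14.0, 6.0, 0.0])

# Natural cubic spline through the measured diameters (the study's baseline model)
taper = CubicSpline(heights, diams_cm, bc_type='natural')

# Stem volume by trapezoidal integration of the cross-sectional area,
# assuming a 25 m tall tree: V = integral of pi * (d/2)^2 dh
tree_height_m = 25.0
h = np.linspace(0.0, 1.0, 1001)
radius_m = taper(h) / 200.0                      # cm diameter -> m radius
area = np.pi * radius_m**2                       # cross-sectional area (m^2)
dh = (h[1] - h[0]) * tree_height_m               # step in metres
volume_m3 = float(np.sum((area[:-1] + area[1:]) / 2.0) * dh)
```

    Swapping `CubicSpline` for a Catmull-Rom implementation would reproduce the comparison the study reports; scipy itself has no Catmull-Rom spline, so that part is left out here.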

  13. Inhalation Exposure Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air inhaled by a receptor. Concentrations in air to which the

  14. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-09-24

    This analysis is one of the nine reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2003a) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents a set of input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for a Yucca Mountain repository. This report, ''Inhalation Exposure Input Parameters for the Biosphere Model'', is one of the five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (BSC 2003b). It should be noted that some documents identified in Figure 1-1 may be under development at the time this report is issued and therefore not available at that time. This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this analysis report. This analysis report defines and justifies values of mass loading, which is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Measurements of mass loading are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air surrounding crops and concentrations in air

  15. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  16. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.
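
    The relax-then-round idea behind the virtual control inputs can be sketched on a toy one-step problem. The paper's actual method optimizes over a prediction horizon and decentralizes the rounding step; the dynamics and cost weights here are invented purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy scalar system x+ = a*x + b*u + c*v, with u continuous in [-1, 1]
# and v originally a discrete-valued input in {0, 1}.
a, b, c = 0.9, 0.5, 0.3
x_init, x_ref = 2.0, 0.0

def cost(z):
    u, v = z                          # v is the "virtual" (relaxed) discrete input
    x_next = a * x_init + b * u + c * v
    return (x_next - x_ref)**2 + 0.1 * u**2 + 0.1 * v**2

# Step 1: solve the relaxed problem, with v in [0, 1] instead of {0, 1}
res = minimize(cost, x0=[0.0, 0.5], bounds=[(-1.0, 1.0), (0.0, 1.0)])
u_opt, v_virtual = res.x

# Step 2: recover a feasible discrete-valued input by rounding the virtual input
v_discrete = round(v_virtual)
```

    For this toy instance, driving x toward 0 pulls u to its lower bound and keeps the virtual input at 0, so rounding introduces no error; in general the paper's quantization-error analysis covers the cases where it does.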

  17. Applications of Flocking Algorithms to Input Modeling for Agent Movement

    Science.gov (United States)

    2011-12-01

    We apply the following flocking algorithm to this leading boid to generate followers, who will then be mapped ... due to the paths crossing. ... Figure 2: Plot of the path of a boid generated by the Group 4 flocking algorithm ... on the possible inputs. This method uses techniques from agent-based modeling to generate a flock of boids that follow the data. In this paper, we

  18. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    OpenAIRE

    Wang, Ziyun; Wang, Yan; Ji, Zhicheng

    2014-01-01

    This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational...
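
    The core recursive least squares update on which such algorithms build (without the paper's noise-filtering and key-term separation steps) can be sketched as follows; the FIR coefficients and noise level are illustrative:

```python
import numpy as np

# Minimal RLS sketch for a linear-in-parameters FIR model
# y(t) = theta^T phi(t) + noise.
rng = np.random.default_rng(0)
theta_true = np.array([0.8, -0.4, 0.2])      # FIR coefficients to recover
n, d = 500, 3

theta = np.zeros(d)
P = 1e4 * np.eye(d)                          # large initial covariance
u = rng.standard_normal(n + d)               # input sequence

for t in range(n):
    phi = u[t:t + d][::-1]                   # regressor of past inputs
    y = theta_true @ phi + 0.01 * rng.standard_normal()
    K = P @ phi / (1.0 + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # parameter update
    P = P - np.outer(K, phi @ P)             # covariance update
```

    The filtering step in the paper transforms the FIR-MA model into a controlled autoregressive form before applying an update of this shape, which is what yields the reported accuracy gain.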

  19. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    Science.gov (United States)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems solved by the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  20. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

    Full Text Available In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing the different variables affecting supply chain performance, and causal relationships among these variables have been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of the variables over a period of 10 years on the performance of a case supply chain in the auto business.

  1. Remotely sensed soil moisture input to a hydrologic model

    Science.gov (United States)

    Engman, E. T.; Kustas, W. P.; Wang, J. R.

    1989-01-01

    The possibility of using detailed spatial soil moisture maps as input to a runoff model was investigated. The water balance of a small drainage basin was simulated using a simple storage model. Aircraft microwave measurements of soil moisture were used to construct two-dimensional maps of the spatial distribution of the soil moisture. Data from overflights on different dates provided the temporal changes resulting from soil drainage and evapotranspiration. The study site and data collection are described, and the soil measurement data are given. The model selection is discussed, and the simulation results are summarized. It is concluded that a time series of soil moisture is a valuable new type of data for verifying model performance and for updating and correcting simulated streamflow.
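
    A "simple storage model" of the kind described can be sketched as a single linear reservoir updated by precipitation and evapotranspiration; the daily values and the storage coefficient below are purely illustrative, not data from the study:

```python
# Daily precipitation and evapotranspiration (mm), hypothetical
precip = [0, 12, 30, 5, 0, 0, 8, 0, 0, 0]
et     = [3, 2, 1, 2, 3, 3, 2, 3, 3, 3]

k = 0.2          # linear-reservoir coefficient (fraction drained per day)
storage = 10.0   # initial storage (mm); remotely sensed soil moisture
                 # would set or update this state
runoff = []

for p, e in zip(precip, et):
    storage = max(storage + p - e, 0.0)   # add rain, remove evapotranspiration
    q = k * storage                       # linear-reservoir outflow
    storage -= q
    runoff.append(q)
```

    The role of the aircraft microwave soil moisture maps in the study is exactly to constrain the storage state of such a model in space and time, rather than leaving it as a free initial condition.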

  2. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
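
    The dimensionality-reduction step can be illustrated with a one-level Haar DWT written out by hand (the study uses deeper decompositions and compares several wavelets); the short rainfall series here is invented:

```python
import numpy as np

# Hypothetical rainfall series of length 16 (mm per interval)
rain = np.array([0., 0., 2., 5., 9., 4., 1., 0., 0., 3., 7., 6., 2., 0., 0., 0.])

# One-level Haar DWT: split into approximation (low-pass) and detail (high-pass)
pairs = rain.reshape(-1, 2)
approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)   # approximation coefficients
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)   # detail coefficients

# Keeping only the 8 approximation coefficients halves the number of unknowns
# the inversion must estimate; reconstructing from them alone smooths the series
# but preserves total rainfall mass.
recon = np.repeat(approx / np.sqrt(2), 2)

# With both coefficient sets the transform is exactly invertible:
full = np.empty_like(rain)
full[0::2] = (approx + detail) / np.sqrt(2)
full[1::2] = (approx - detail) / np.sqrt(2)
```

    In the study, coefficients like `approx` are the quantities sampled by DREAMZS alongside the model parameters, which is what makes joint rainfall-parameter estimation tractable.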

  3. Comprehensive Information Retrieval and Model Input Sequence (CIRMIS)

    Energy Technology Data Exchange (ETDEWEB)

    Friedrichs, D.R.

    1977-04-01

    The Comprehensive Information Retrieval and Model Input Sequence (CIRMIS) was developed to provide the research scientist with man--machine interactive capabilities in a real-time environment, and thereby produce results more quickly and efficiently. The CIRMIS system was originally developed to increase data storage and retrieval capabilities and ground-water model control for the Hanford site. The overall configuration, however, can be used in other areas. The CIRMIS system provides the user with three major functions: retrieval of well-based data, special application for manipulating surface data or background maps, and the manipulation and control of ground-water models. These programs comprise only a portion of the entire CIRMIS system. A complete description of the CIRMIS system is given in this report. 25 figures, 7 tables. (RWR)

  4. Remote Sensing Analysis of Malawi's Agricultural Inputs Subsidy and Climate Variability Impacts on Productivity

    Science.gov (United States)

    Galford, G. L.; Fiske, G. J.; Sedano, F.; Michelson, H.

    2016-12-01

    Agriculture in sub-Saharan Africa is characterized by smallholder production and low yields ( 1 ton ha-1 year-1 since records began in 1961) for staple food crops such as maize (Zea mays). Many years of low-input farming have depleted much of the region's agricultural land of critical soil carbon and nitrogen, further reducing yield potentials. Malawi is a 98,000 km2 subtropical nation with a short rainy season from November to May, with most rainfall occurring between December and mid-April. This short growing season supports the cultivation of one primary crop, maize. In Malawi, many smallholder farmers face annual nutrient deficits as nutrients removed as grain harvest and residues are beyond replenishment levels. As a result, Malawi has had stagnant maize yields averaging 1.2 ton ha-1 year-1 for decades. After multiple years of drought and widespread hunger in the early 2000s, Malawi introduced an agricultural input support program (fertilizer and seed subsidy) in time for the 2006 harvest that was designed to restore soil nutrients, improve maize production, and decrease dependence on food aid. Malawi's subsidy program targets 50-67% of smallholder farmers who cultivate half a hectare or less, yet collectively supply 80% of the country's maize. The country has achieved significant increases in crop yields (now 2 tons/ha/year) and, as our analysis shows, benefited from a new resilience against drought. We utilized Landsat time series to determine cropland extent from 2000-present and identify areas of marginal and/or intermittent production. We found a strong latitudinal gradient of precipitation variability from north to south in CHIRPS data. We used the precipitation variability to normalize trends in a productivity proxy derived from MODIS EVI. After normalization of productivity to precipitation variability, we found significant productivity trends correlated to subsidy distribution. 
This work was conducted with Google's Earth Engine, a cloud-based platform

  5. Environmental Transport Input Parameters for the Biosphere Model

    International Nuclear Information System (INIS)

    Wasiolek, M. A.

    2003-01-01

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699], Section 6.2). Parameter values

  6. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  7. Measurement of Laser Weld Temperatures for 3D Model Input

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grossetete, Grant [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maccallum, Danny O. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  8. The definition of input parameters for modelling of energetic subsystems

    Directory of Open Access Journals (Sweden)

    Ptacek M.

    2013-06-01

    Full Text Available This paper is a short review and basic description of mathematical models of renewable energy sources that represent the individual subsystems of a system created in Matlab/Simulink. It works out the physical and mathematical relationships of photovoltaic and wind energy sources, which are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. Therefore, the paper presents a new dynamic model of the low-temperature fuel cell subsystem and defines its main input parameters. Finally, the main graphic results achieved for the suggested parameters of all the individual subsystems mentioned above are shown.

  9. The definition of input parameters for modelling of energetic subsystems

    Science.gov (United States)

    Ptacek, M.

    2013-06-01

    This paper is a short review and basic description of mathematical models of renewable energy sources that represent the individual subsystems of a system created in Matlab/Simulink. It works out the physical and mathematical relationships of photovoltaic and wind energy sources, which are often connected to distribution networks. Fuel cell technology is much less commonly connected to distribution networks, but it could be promising in the near future. Therefore, the paper presents a new dynamic model of the low-temperature fuel cell subsystem and defines its main input parameters. Finally, the main graphic results achieved for the suggested parameters of all the individual subsystems mentioned above are shown.

  10. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-117 prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from use of data in two models are presented

  11. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
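
    A minimal numerical sketch of the equal input model on four states, assuming the common normalization in which the off-diagonal substitution rate into state j equals its stationary frequency pi_j (the paper treats the model in full generality):

```python
import numpy as np
from scipy.linalg import expm

# Stationary distribution over 4 states (F81-style special case of equal input)
pi = np.array([0.1, 0.2, 0.3, 0.4])
k = pi.size

# Rate matrix: Q[i, j] = pi[j] for i != j, rows summing to zero
Q = np.tile(pi, (k, 1))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

# Transition matrix at branch length t = 2
P = expm(Q * 2.0)

# Under this normalization the equal input model has the closed form
# P_ij(t) = e^{-t} * I_ij + (1 - e^{-t}) * pi_j,
# i.e. the "random cluster" description: with prob. 1 - e^{-t} the state
# is redrawn from pi, otherwise it is retained.
P_closed = np.exp(-2.0) * np.eye(k) + (1 - np.exp(-2.0)) * np.tile(pi, (k, 1))
```

    The fact that every row of P is the same affine combination of the identity and pi is what produces the linear invariants the paper studies once pi is fixed.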

  12. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
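
    As a rough illustration of the distribution-fitting step, maximum likelihood estimation for a lognormal model reduces to moment estimates computed on the log-data; the lognormal choice and the synthetic data below are hypothetical, not taken from the Yucca Mountain study.

```python
import math
import random
import statistics

# Hypothetical example: fit a lognormal distribution to positive-valued
# data by maximum likelihood. For the lognormal, the MLE is simply the
# sample mean and (population) standard deviation of the log-data.
random.seed(1)
data = [math.exp(random.gauss(0.5, 0.8)) for _ in range(500)]

log_data = [math.log(x) for x in data]
mu_hat = statistics.fmean(log_data)       # MLE of the log-mean
sigma_hat = statistics.pstdev(log_data)   # MLE of the log-std (1/n form)

print(round(mu_hat, 2), round(sigma_hat, 2))
```

    With 500 samples the estimates land close to the generating parameters (0.5 and 0.8); a goodness-of-fit check, such as a probability plot of the log-data, would follow in practice.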

  13. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  14. Metocean input data for drift models applications: Loustic study

    International Nuclear Information System (INIS)

    Michon, P.; Bossart, C.; Cabioc'h, M.

    1995-01-01

    Real-time monitoring and crisis management of oil slicks and drifting floating structures require a good knowledge of local winds, waves and currents used as input data for operational drift models. Fortunately, thanks to world-wide and all-weather coverage, satellite measurements have recently enabled the introduction of new methods for the remote sensing of the marine environment. Within a French joint industry project, a procedure has been developed that combines satellite measurements with metocean models in order to provide marine operators' drift models with reliable wind, wave and current analyses and short-term forecasts. In particular, a model now allows the calculation of the drift current under the joint action of wind and sea state, thus radically improving on the classical laws. This global procedure either uses satellite wind and wave measurements directly (if available on the study area) or uses them indirectly to calibrate metocean model results, which are then brought to the oil slick or floating structure location. The operational use of this procedure is reported here with an example of floating structure drift offshore of the Brittany coast.

  15. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model-based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...

  16. Precipitation forecasts and their uncertainty as input into hydrological models

    Directory of Open Access Journals (Sweden)

    M. Kobold

    2005-01-01

    Full Text Available Torrential streams and fast runoff are characteristic of most Slovenian rivers and extensive damage is caused almost every year by rainstorms affecting different regions of Slovenia. Rainfall-runoff models, which are tools for runoff calculation, can be used for flood forecasting. In Slovenia, the lag time between rainfall and runoff is only a few hours and on-line data are used only for now-casting. Predicted precipitation is necessary for flood forecasting some days ahead. The ECMWF (European Centre for Medium-Range Weather Forecasts) model gives general forecasts several days ahead, while more detailed precipitation data from the ALADIN/SI model are available for two days ahead. Combining the weather forecasts with the information on catchment conditions and a hydrological forecasting model can give advance warning of potential flooding, notwithstanding a certain degree of uncertainty in using precipitation forecasts based on meteorological models. Analysis of the sensitivity of the hydrological model to the rainfall error has shown that the deviation in runoff is much larger than the rainfall deviation. Therefore, verification of predicted precipitation for large precipitation events was performed with the ECMWF model. Measured precipitation data were interpolated on a regular grid and compared with the results from the ECMWF model. The deviation of predicted precipitation from interpolated measurements is expressed as a model bias, which arises from the inability of the model to predict the precipitation correctly, from the model's horizontal resolution, and from the natural variability of precipitation.
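
    The verification step described in the abstract amounts to comparing predicted totals against grid-interpolated measurements; one simple summary is a bias ratio, sketched below with entirely hypothetical values (the event totals and the ratio form are illustrative, not the paper's exact score).

```python
# Minimal sketch: bias as the ratio of predicted to measured
# (grid-interpolated) precipitation totals for one large event.
measured = [42.0, 55.5, 31.2, 60.1]   # interpolated gauge totals, mm
predicted = [35.0, 48.0, 30.5, 44.0]  # model grid-point totals, mm

bias = sum(predicted) / sum(measured)
print(f"bias = {bias:.2f}")  # a value below 1 means the model under-predicts
```

    Because runoff deviations amplify rainfall deviations, even a modest under-prediction like this one matters for the downstream hydrological forecast.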

  17. A Study on the Fuzzy-Logic-Based Solar Power MPPT Algorithms Using Different Fuzzy Input Variables

    Directory of Open Access Journals (Sweden)

    Jaw-Kuen Shiau

    2015-04-01

    Full Text Available Maximum power point tracking (MPPT) is one of the key functions of the solar power management system in solar energy deployment. This paper investigates the design of fuzzy-logic-based solar power MPPT algorithms using different fuzzy input variables. Six fuzzy MPPT algorithms, based on different input variables, were considered in this study, namely (i) slope (of solar power versus solar voltage) and changes of the slope; (ii) slope and variation of the power; (iii) variation of power and variation of voltage; (iv) variation of power and variation of current; (v) sum of conductance and increment of the conductance; and (vi) sum of angles of arctangent of the conductance and arctangent of increment of the conductance. Algorithms (i)-(iv) have two input variables each while algorithms (v) and (vi) use a single input variable. The fuzzy logic MPPT function is deployed using a buck-boost power converter. This paper presents the details of the determinations, considerations of the fuzzy rules, as well as advantages and disadvantages of each MPPT algorithm based upon photovoltaic (PV) cell properties. The range of the input variable of algorithm (vi) is finite and the maximum power point condition is well defined in steady condition and, therefore, it can be used for multipurpose controller design. Computer simulations are conducted to verify the design.
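
    The slope input variable of algorithm (i) can be sketched crisply before any fuzzification: the sign of dP/dV tells the tracker which side of the maximum power point it is on. The function name, voltages, powers and step size below are hypothetical.

```python
# Sketch of the slope input variable: dP/dV > 0 left of the MPP,
# dP/dV < 0 right of it; step uphill on the P-V curve accordingly.
def mppt_step(v_prev, p_prev, v_now, p_now, dv=0.1):
    """Return the next voltage command from the power-voltage slope
    (assumes a non-zero voltage perturbation between samples)."""
    slope = (p_now - p_prev) / (v_now - v_prev)
    return v_now + dv if slope > 0 else v_now - dv

# Left of the MPP power rises with voltage, so the tracker steps up:
print(mppt_step(17.0, 80.0, 17.5, 84.0))  # slope > 0 -> 17.6
```

    A fuzzy controller replaces the hard sign test with membership functions over the slope (and its change), which smooths the step size near the maximum power point.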

  18. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.
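
    Stride-to-stride fluctuations of the kind discussed in the series are commonly summarized by the coefficient of variation (CV) of the stride-time series; the stride times below are hypothetical.

```python
import statistics

# Gait variability summarized as the coefficient of variation (CV)
# of stride times, in percent: 100 * SD / mean.
stride_times = [1.02, 1.05, 0.98, 1.10, 1.01, 1.04, 0.99, 1.06]  # seconds

mean_stride = statistics.fmean(stride_times)
sd_stride = statistics.stdev(stride_times)   # sample SD (n-1)
cv_percent = 100 * sd_stride / mean_stride
print(f"stride-time CV = {cv_percent:.1f}%")
```

    Multifractal and other dynamical analyses mentioned in the series go beyond this single number by examining how the fluctuations are ordered in time, not just their magnitude.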

  19. A Markovian model of evolving world input-output network.

    Directory of Open Access Journals (Sweden)

    Vahid Moosavi

    Full Text Available The initial theoretical connections between Leontief input-output models and Markov chains were established back in 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.

  20. A Markovian model of evolving world input-output network.

    Science.gov (United States)

    Moosavi, Vahid; Isacchini, Giulio

    2017-01-01

    The initial theoretical connections between Leontief input-output models and Markov chains were established back in 1950s. However, considering the wide variety of mathematical properties of Markov chains, so far there has not been a full investigation of evolving world economic networks with Markov chain formalism. In this work, using the recently available world input-output database, we investigated the evolution of the world economic network from 1995 to 2011 through analysis of a time series of finite Markov chains. We assessed different aspects of this evolving system via different known properties of the Markov chains such as mixing time, Kemeny constant, steady state probabilities and perturbation analysis of the transition matrices. First, we showed how the time series of mixing times and Kemeny constants could be used as an aggregate index of globalization. Next, we focused on the steady state probabilities as a measure of structural power of the economies that are comparable to GDP shares of economies as the traditional index of economies welfare. Further, we introduced two measures of systemic risk, called systemic influence and systemic fragility, where the former is the ratio of number of influenced nodes to the total number of nodes, caused by a shock in the activity of a node, and the latter is based on the number of times a specific economic node is affected by a shock in the activity of any of the other nodes. Finally, focusing on Kemeny constant as a global indicator of monetary flow across the network, we showed that there is a paradoxical effect of a change in activity levels of economic nodes on the overall flow of the world economic network. While the economic slowdown of the majority of nodes with high structural power results in a slower average monetary flow over the network, there are some nodes whose slowdowns improve the overall quality of the network in terms of connectivity and the average flow of money.
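
    The chain-level quantities used in the study can be computed directly from a transition matrix. The 3-economy matrix below is a hypothetical toy, and the eigenvalue formula for the Kemeny constant assumes an ergodic chain (with the convention that the diagonal mean first-passage terms are zero).

```python
import numpy as np

# Toy transition matrix for three "economies" (values hypothetical).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.7, 0.1],
              [0.3, 0.3, 0.4]])

# Steady-state probabilities: left eigenvector of P for eigenvalue 1,
# normalized to sum to one. These measure "structural power".
w, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Kemeny constant from the non-unit eigenvalues: K = sum 1/(1 - lambda_i).
eigvals = np.linalg.eigvals(P)
kemeny = sum(1.0 / (1.0 - lam)
             for lam in sorted(eigvals, key=lambda x: -np.real(x))[1:])
print(pi, np.real(kemeny))
```

    For this toy matrix the eigenvalues are 1, 0.4 and 0.3, so the Kemeny constant is 1/0.6 + 1/0.7; a larger constant corresponds to slower mixing of flows across the network.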

  1. Filtering Based Recursive Least Squares Algorithm for Multi-Input Multioutput Hammerstein Models

    Directory of Open Access Journals (Sweden)

    Ziyun Wang

    2014-01-01

    Full Text Available This paper considers the parameter estimation problem for Hammerstein multi-input multioutput finite impulse response (FIR-MA) systems. Filtered by the noise transfer function, the FIR-MA model is transformed into a controlled autoregressive model. The key-term variable separation principle is used to derive a data filtering based recursive least squares algorithm. The numerical examples confirm that the proposed algorithm can estimate parameters more accurately and has a higher computational efficiency compared with the recursive least squares algorithm.
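
    The baseline the filtering-based algorithm is compared against is ordinary recursive least squares. The sketch below is a minimal single-output RLS on a hypothetical two-parameter linear regression, not the Hammerstein FIR-MA structure itself.

```python
import numpy as np

# Recursive least squares for y = phi^T theta + noise.
rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7])   # hypothetical true parameters

theta = np.zeros(2)                  # parameter estimate
P = np.eye(2) * 1000.0               # covariance (large -> weak prior)

for _ in range(200):
    phi = rng.standard_normal(2)             # regressor vector
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    K = P @ phi / (1.0 + phi @ P @ phi)      # gain vector
    theta = theta + K * (y - phi @ theta)    # innovation update
    P = P - np.outer(K, phi) @ P             # covariance update

print(np.round(theta, 2))
```

    The filtering-based variant in the paper prefilters the data with the noise transfer function before running updates of this form, which is what improves accuracy under correlated (MA) noise.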

  2. Optimal input design for model discrimination using Pontryagin's maximum principle: Application to kinetic model structures

    NARCIS (Netherlands)

    Keesman, K.J.; Walter, E.

    2014-01-01

    The paper presents a methodology for an optimal input design for model discrimination. To allow analytical solutions, the method, using Pontryagin’s maximum principle, is developed for non-linear single-state systems that are affine in their joint input. The method is demonstrated on a fed-batch

  3. Regulation of Wnt signaling by nociceptive input in animal models

    Directory of Open Access Journals (Sweden)

    Shi Yuqiang

    2012-06-01

    Full Text Available Abstract Background Central sensitization-associated synaptic plasticity in the spinal cord dorsal horn (SCDH) critically contributes to the development of chronic pain, but understanding of the underlying molecular pathways is still incomplete. Emerging evidence suggests that Wnt signaling plays a crucial role in regulation of synaptic plasticity. Little is known about the potential function of the Wnt signaling cascades in chronic pain development. Results Fluorescent immunostaining results indicate that β-catenin, an essential protein in the canonical Wnt signaling pathway, is expressed in the superficial layers of the mouse SCDH with enrichment at synapses in lamina II. In addition, Wnt3a, a prototypic Wnt ligand that activates the canonical pathway, is also enriched in the superficial layers. Immunoblotting analysis indicates that both Wnt3a and β-catenin are up-regulated in the SCDH of various mouse pain models created by hind-paw injection of capsaicin, intrathecal (i.t.) injection of HIV-gp120 protein or spinal nerve ligation (SNL). Furthermore, Wnt5a, a prototypic Wnt ligand for non-canonical pathways, and its receptor Ror2 are also up-regulated in the SCDH of these models. Conclusion Our results suggest that Wnt signaling pathways are regulated by nociceptive input. The activation of Wnt signaling may regulate the expression of spinal central sensitization during the development of acute and chronic pain.

  4. ON MODELING METHODS OF REPRODUCTION OF FIXED ASSETS IN DYNAMIC INPUT - OUTPUT MODELS

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2014-12-01

    Full Text Available The article presents a comparative study of methods for modeling the reproduction of fixed assets in various types of dynamic input-output models, which have been developed at the Novosibirsk State University and at the Institute of Economics and Industrial Engineering of the Siberian Division of the Russian Academy of Sciences. The study compares the techniques of information support for the investment blocks of the models. The mathematical description of the fixed assets reproduction block is considered in detail for the Dynamic Input-Output Model included in the KAMIN system and for the optimization interregional input-output model, and the peculiarities of the information support of the investment and fixed assets blocks of the two models are analyzed. In conclusion, the article provides suggestions for the joint use of the analyzed models in forecasting the development of the Russian economy: the KAMIN system's models for short-term and middle-term forecasting, and the optimization interregional input-output model for long-term forecasts based on the spatial structure of the economy.

  5. Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata

    Directory of Open Access Journals (Sweden)

    Marta Capiluppi

    2012-10-01

    Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems and their implicit communication through perturbation of the environment, like localization of objects or radio signal diffusion and detection. To this end we decided to specialize some variables of the HIOAs whose values are functions both of time and space. We call them world variables. Basically they are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment, hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent to the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of agent systems.

  6. Variability of energy input into selected subsystems of the human-glove-tool system: a theoretical study.

    Science.gov (United States)

    Hermann, Tomasz; Dobry, Marian Witalis

    2017-05-31

    This article presents an application of the energy method to assess the energy input introduced into two subsystems of the human-glove-tool system. To achieve this aim, a physical model of the system was developed. This consists of dynamic models of the human body and the glove described in Standard No. ISO 10068:2012, and a model of a hand-held power tool. The energy input introduced into the subsystems, i.e., the human body and the glove, was analysed in the domain of energy and involved calculating three component energy inputs of forces. The energy model was solved using numerical simulation implemented in the MATLAB/Simulink environment. This procedure demonstrates that the vibration energy was distributed quite differently in the internal structure of the two subsystems. The results suggest that the operating frequency of the tool has a significant impact on the level of energy inputs transmitted into both subsystems.
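
    The "energy input of a force" used in the energy method is the time integral of force times velocity. A hypothetical single-degree-of-freedom sketch (parameter values illustrative, not the ISO 10068:2012 human-body model) shows the damping-force energy input over one vibration cycle against its closed form.

```python
import math

# Hypothetical 1-DOF parameters for a damped oscillator driven at the
# tool's operating frequency (not the standardized hand-arm model).
m, c, k = 1.0, 20.0, 5.0e4          # kg, N*s/m, N/m
f, X = 100.0, 1e-3                  # drive frequency (Hz), displacement amp (m)
w = 2 * math.pi * f

# Numerically integrate force * velocity for the damping force over one
# cycle of the steady-state motion x = X * sin(w t).
T = 1.0 / f
n = 100_000
dt = T / n
E_damp = 0.0
for i in range(n):
    t = i * dt
    v = X * w * math.cos(w * t)     # velocity
    E_damp += c * v * v * dt        # damping force (c*v) times velocity

# Closed form for one cycle: E = pi * c * w * X^2
print(round(E_damp, 6), round(math.pi * c * w * X**2, 6))
```

    Because the dissipated energy scales with the drive frequency, this toy calculation already hints at why the tool's operating frequency strongly affects the energy input to each subsystem.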

  7. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems. This suggests the identification and the characterization of homogeneous regions of production, with consequent implementation of actions to achieve its sustainability. In this paper we measured the performance of 21 modal livestock production systems in their cow-calf phase, considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA). We used a unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five typologies of modal production systems, using the isoefficiency layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.

  8. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as for the VSOP codes, often are very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task, of for instance nuclear regulators, to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file that permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  9. The stability of input structures in a supply-driven input-output model: A regional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Allison, T.

    1994-06-01

    Disruptions in the supply of strategic resources or other crucial factor inputs often present significant problems for planners and policymakers. The problem may be particularly significant at the regional level where higher levels of product specialization mean supply restrictions are more likely to affect leading regional industries. To maintain economic stability in the event of a supply restriction, regional planners may therefore need to evaluate the importance of market versus non-market systems for allocating the remaining supply of the disrupted resource to the region's leading consuming industries. This paper reports on research that has attempted to show that large short term changes on the supply side do not lead to substantial changes in input coefficients and do not therefore mean the abandonment of the concept of the production function as has been suggested (Oosterhaven, 1988). The supply-driven model was tested for six sectors of the economy of Washington State and found to yield new input coefficients whose values were in most cases close approximations of their original values, even with substantial changes in supply. Average coefficient changes from a 50% output reduction in these six sectors were in the vast majority of cases (297 from a total of 315) less than +2.0% of their original values, excluding coefficient changes for the restricted input. Given these small changes, the most important issue for the validity of the supply-driven input-output model may therefore be the empirical question of the extent to which these coefficient changes are acceptable as being within the limits of approximation.
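
    The supply-driven accounting being tested is the Ghosh model, in which allocation coefficients distribute each sector's output among purchasers. A two-sector sketch with hypothetical flows shows the coefficients and the supply-side output identity that the stability experiment perturbs.

```python
import numpy as np

# Hypothetical 2-sector interindustry flows z_ij and total outputs x_i.
Z = np.array([[50.0, 30.0],
              [20.0, 40.0]])
x = np.array([150.0, 120.0])

B = Z / x[:, None]                # allocation coefficients b_ij = z_ij / x_i
G = np.linalg.inv(np.eye(2) - B)  # Ghosh inverse

v = x - Z.sum(axis=0)             # primary inputs (value added), by balance
x_supply = v @ G                  # supply-side identity: recovers x
print(np.round(B, 3), np.round(x_supply, 1))
```

    A supply restriction enters by cutting an element of v (or of a sector's output); the paper's experiment then asks how far the recomputed coefficients in B move from their original values.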

  10. Effective Moisture Penetration Depth Model for Residential Buildings: Sensitivity Analysis and Guidance on Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, Jason D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Winkler, Jonathan M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-31

    Moisture buffering of building materials has a significant impact on the building's indoor humidity, and building energy simulations need to model this buffering to accurately predict the humidity. Researchers requiring a simple moisture-buffering approach typically rely on the effective-capacitance model, which has been shown to be a poor predictor of actual indoor humidity. This paper describes an alternative two-layer effective moisture penetration depth (EMPD) model and its inputs. While this model has been used previously, there is a need to understand the sensitivity of this model to uncertain inputs. In this paper, we consider three moisture-adsorbent materials exposed to the interior air: drywall, wood, and carpet. We use a global sensitivity analysis to determine which inputs are most influential and how the model's prediction capability degrades due to uncertainty in these inputs. We then compare the model's humidity prediction with measured data from five houses, which shows that this model, and a set of simple inputs, can give reasonable prediction of the indoor humidity.

  11. Boolean modeling of neural systems with point-process inputs and outputs. Part I: theory and simulations.

    Science.gov (United States)

    Marmarelis, Vasilis Z; Zanos, Theodoros P; Berger, Theodore W

    2009-08-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a "Boolean-Volterra" model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II).
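
    The form of a Boolean-Volterra output rule can be sketched for a single input: the output spike at time n is the logical OR of first-order lag terms and second-order AND interaction terms. The lag sets and the input sequence below are hypothetical.

```python
# Sketch of a second-order Boolean-Volterra rule with binary coefficients:
# a term is present (coefficient 1) or absent (coefficient 0), and present
# terms are combined by OR; second-order terms are ANDs of two lags.
def boolean_volterra(x, first=(0,), second=((1, 2),)):
    """Binary output sequence from a binary input sequence x."""
    y = []
    for n in range(len(x)):
        terms = [x[n - j] if n - j >= 0 else 0 for j in first]
        terms += [
            (x[n - j] & x[n - k]) if min(n - j, n - k) >= 0 else 0
            for j, k in second
        ]
        y.append(1 if any(terms) else 0)
    return y

# y[n] = x[n] OR (x[n-1] AND x[n-2]) with the default lag sets:
print(boolean_volterra([1, 0, 1, 1, 0, 0, 1]))  # -> [1, 0, 1, 1, 1, 0, 1]
```

    Estimating such a model from spike data amounts to deciding which binary coefficients (terms) are present, which is what the paper evaluates under spurious-spike noise; an inhibitory AND_NOT term would enter the same way with a negated lag value.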

  12. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    Science.gov (United States)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-06-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependant variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low
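
    The graph-theoretic construction of the mapping F is in the spirit of Isomap: build a nearest-neighbour graph on the samples, approximate geodesic distances by shortest paths, and embed with classical MDS. A self-contained toy sketch on a 1-D manifold (an arc in R^2) follows; all data and parameters are illustrative.

```python
import numpy as np

# Toy manifold: 60 samples on a unit-circle arc, intrinsic coordinate t.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, np.pi, 60))
X = np.column_stack([np.cos(t), np.sin(t)])

# (1) Symmetric k-nearest-neighbour graph with Euclidean edge weights.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
k = 5
G = np.full_like(D, np.inf)
for i in range(len(X)):
    nbrs = np.argsort(D[i])[1:k + 1]
    G[i, nbrs] = D[i, nbrs]
    G[nbrs, i] = D[i, nbrs]
np.fill_diagonal(G, 0.0)

# (2) Geodesic distances by Floyd-Warshall on the graph.
for m in range(len(X)):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])

# (3) Classical MDS: double-center squared distances, take top eigenvector.
n = len(X)
J = np.eye(n) - 1.0 / n
B = -0.5 * J @ (G ** 2) @ J
w, vecs = np.linalg.eigh(B)
coord = vecs[:, -1] * np.sqrt(w[-1])   # 1-D data-driven representation

# The recovered coordinate should track the intrinsic parameter t closely.
r = abs(np.corrcoef(coord, t)[0, 1])
print(round(r, 3))
```

    In the paper's setting the samples are microstructures in R^n rather than points in the plane, but the same three steps yield the low-dimensional set A that feeds the stochastic PDE solver.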

  13. A non-linear dimension reduction methodology for generating data-driven stochastic input models

    International Nuclear Information System (INIS)

    Ganapathysubramanian, Baskar; Zabaras, Nicholas

    2008-01-01

    Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space Rn. An isometric mapping F from M to a low-dimensional, compact, connected set A⊂Rd(d≪n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F:M→A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology
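
    The isometric transformation F:M→A built from graph theory and geodesic distances follows the same recipe as the classic Isomap algorithm. A minimal sketch under that assumption (illustrative only, not the authors' implementation) is:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=6, d=2):
    """Map m samples in R^n (rows of X) to R^d, approximately
    preserving geodesic distances along the sampled manifold."""
    m = X.shape[0]
    D = cdist(X, X)                          # Euclidean distances in R^n
    # k-nearest-neighbour graph: keep edges only to each point's neighbours
    G = np.full((m, m), np.inf)
    idx = np.argsort(D, axis=1)[:, 1:n_neighbors + 1]
    for i in range(m):
        G[i, idx[i]] = D[i, idx[i]]
    G = np.minimum(G, G.T)                   # symmetrise the graph
    geo = shortest_path(G, method='D')       # graph geodesics approximate manifold distances
    # classical MDS on the squared geodesic distance matrix
    H = np.eye(m) - np.ones((m, m)) / m      # double-centering matrix
    B = -0.5 * H @ (geo ** 2) @ H
    B = (B + B.T) / 2                        # enforce symmetry against round-off
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:d]          # top-d eigenpairs give the embedding
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

The low-dimensional coordinates returned here would play the role of A, the reduced stochastic input space.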

  14. Visual Predictive Check in Models with Time-Varying Input Function.

    Science.gov (United States)

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

    Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example, when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with each set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
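
    The matching step can be sketched in a few lines (hypothetical code, not from the paper): each simulated parameter vector is paired with the individual whose estimated parameters are nearest in Mahalanobis distance, and that individual's IF is then used for the simulation.

```python
import numpy as np

def match_if(sim_params, indiv_params, cov):
    """For each simulated parameter vector, return the index of the
    individual (hence input function) with the smallest Mahalanobis
    distance, computed with the population covariance `cov`."""
    VI = np.linalg.inv(cov)
    matches = []
    for p in sim_params:
        d = indiv_params - p                         # differences to every individual
        dist2 = np.einsum('ij,jk,ik->i', d, VI, d)   # squared Mahalanobis distances
        matches.append(int(np.argmin(dist2)))
    return matches
```

With `cov` set to the identity this reduces to the normalized Euclidean variant mentioned in the abstract.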

  15. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among

  16. Development of ANFIS models for air quality forecasting and input optimization for reducing the computational cost and time

    Science.gov (United States)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-03-01

    This study aims to develop an adaptive neuro-fuzzy inference system (ANFIS) for forecasting daily air pollution concentrations of five air pollutants [sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3) and particulate matter (PM10)] in the atmosphere of a megacity (Howrah). Air pollution in the city is rising in parallel with economic growth, and thus observing, forecasting and controlling air pollution becomes increasingly important due to its health impact. ANFIS serves as a basis for constructing a set of fuzzy IF-THEN rules, with appropriate membership functions to generate the stipulated input-output pairs. The ANFIS model predictor considers the values of meteorological factors (pressure, temperature, relative humidity, dew point, visibility, wind speed, and precipitation) and the previous day's pollutant concentration in different combinations as the inputs to predict the 1-day-advance and same-day air pollution concentrations. The concentration values of the five air pollutants and seven meteorological parameters of Howrah city during the period 2009 to 2011 were used for development of the ANFIS model. Collinearity tests were conducted to eliminate the redundant input variables. A forward selection (FS) method is used for selecting the different subsets of input variables. Application of collinearity tests and FS techniques reduces the number of input variables and subsets, which helps reduce the computational cost and time. The performances of the models were evaluated on the basis of four statistical indices (coefficient of determination, normalized mean square error, index of agreement, and fractional bias).
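
    The two-stage input screening described above (collinearity filtering, then forward selection) can be sketched as follows; a plain least-squares model stands in for the ANFIS predictor here, and all names and thresholds are illustrative assumptions:

```python
import numpy as np

def drop_collinear(X, thresh=0.9):
    """Scan columns in order, keeping a variable only if its absolute
    correlation with every already-kept variable is below `thresh`."""
    keep = []
    for j in range(X.shape[1]):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < thresh for k in keep):
            keep.append(j)
    return keep

def forward_select(X, y, max_vars=3):
    """Greedy forward selection: at each step add the variable that most
    reduces the least-squares RMSE of a linear stand-in model."""
    def rmse(cols):
        A = np.c_[np.ones(len(y)), X[:, cols]]
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sqrt(np.mean((y - A @ beta) ** 2))
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_vars:
        best = min(remaining, key=lambda j: rmse(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the study, each candidate subset would instead be scored by training an ANFIS model and evaluating the four statistical indices.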

  17. A DNA-based system for selecting and displaying the combined result of two input variables

    DEFF Research Database (Denmark)

    Liu, Huajie; Wang, Jianbang; Song, S

    2015-01-01

    demonstrate this capability in a DNA-based system that takes two input numbers, represented in DNA strands, and returns the result of their multiplication, writing this as a number in a display. Unlike a conventional calculator, this system operates by selecting the result from a library of solutions rather...

  18. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  19. Getting innovation out of suppliers? A conceptual model for characterizing supplier inputs to new product development

    OpenAIRE

    Lakemond, Nicolette; Rosell, David T.

    2011-01-01

    There are many studies on supplier collaborations in NPD. However, there is not much written about what suppliers actually contribute to innovation. Based on a literature review focusing on 80 articles we develop a conceptual framework categorizing different supplier inputs to innovation. This model is formulated by characterizing supplier inputs related to the component level and architectural level, and inputs that are incremental or radical in nature. On a component level, supplier inputs ...

  20. Synchronization Model for Pulsating Variables

    Science.gov (United States)

    Takahashi, S.; Morikawa, M.

    2013-12-01

    A simple model is proposed, which describes the variety of stellar pulsations. In this model, a star is described as an integration of independent elements which interact with each other. This interaction, which may be gravitational or hydrodynamic, promotes the synchronization of elements to yield a coherent mean field pulsation provided some conditions are satisfied. In the case of opacity-driven pulsations, the whole star is described as a coupling of many heat engines. In the case of stochastic oscillation, the whole star is described as a coupling of convection cells, interacting through their flow patterns. Convection cells are described by the Lorenz model. In both models, interactions of elements lead to various pulsations, from irregular to regular. The coupled Lorenz model also describes a light curve which shows a semi-regular variability and also shows a low-frequency enhancement proportional to 1/f in its power spectrum. This is in agreement with observations (Kiss et al. 2006). This new modeling method of ‘coupled elements’ may provide a powerful description for a variety of stellar pulsations.

  1. Continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics

    International Nuclear Information System (INIS)

    Chen, Haixia; Zhang, Jing

    2007-01-01

    We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning

  2. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    Science.gov (United States)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
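
    The idea of sampling input-error parameters alongside model parameters can be illustrated with a toy sketch (entirely hypothetical, not SRH-1D): a linear model y = a·u_true with unit output noise, where the observed input u_obs = u_true + e carries a Gaussian error e ~ N(mu, sd), and Metropolis sampling targets (a, mu, log sd) jointly.

```python
import numpy as np

def sample_with_input_error(y_obs, u_obs, n_iter=3000, seed=0):
    """Toy Metropolis sampler for y = a * u_true with observed input
    u_obs = u_true + e, e ~ N(mu, sd). The input-error mean and (log)
    standard deviation are treated as uncertain parameters."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 0.0, -1.0])       # starting [a, mu, log_sd]

    def log_post(a, mu, log_sd):
        sd = np.exp(log_sd)
        var = a ** 2 * sd ** 2 + 1.0         # input error propagated + unit output noise
        resid = y_obs - a * (u_obs - mu)     # bias-corrected input
        return (-0.5 * np.sum(resid ** 2) / var
                - 0.5 * len(y_obs) * np.log(var)
                - 0.5 * (mu ** 2 + log_sd ** 2))   # weak priors on error parameters

    lp = log_post(*theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + 0.05 * rng.normal(size=3)   # random-walk proposal
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```

In the actual study the likelihood evaluation involves running SRH-1D, and the uncertain inputs are boundary flowrates and water surface elevations rather than a scalar regressor.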

  3. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    Science.gov (United States)

    Kawabata, Takeshi

    2018-03-06

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepted a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem, as it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We also have introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
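
    The key covariance adjustment can be illustrated with a hedged sketch (not the published code): when every input point is itself an isotropic Gaussian of variance sigma2, the M-step adds that variance to the responsibility-weighted scatter, so the fitted mixture retains the input's spread and no component can collapse to zero variance.

```python
import numpy as np

def mstep_gaussian_input(X, resp, sigma2):
    """One M-step for a mixture whose N input samples (rows of X) are
    themselves Gaussians with isotropic variance sigma2. `resp` is the
    (N x K) responsibility matrix from the E-step."""
    Nk = resp.sum(axis=0)                     # effective counts per component
    means = (resp.T @ X) / Nk[:, None]
    covs = []
    for k in range(resp.shape[1]):
        d = X - means[k]
        S = (resp[:, k, None, None] * np.einsum('ni,nj->nij', d, d)).sum(0) / Nk[k]
        covs.append(S + sigma2 * np.eye(X.shape[1]))  # input width broadens the component
    return means, np.array(covs)
```

With a single component, the returned covariance is exactly the point scatter plus sigma2·I, which is what keeps the radius of gyration matched to the input.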

  4. Behavioral and electrophysiological evidence for early and automatic detection of phonological equivalence in variable speech inputs.

    Science.gov (United States)

    Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina

    2011-11-01

    Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that are illicit according to the native phonotactic restrictions on sound co-occurrences. The present study with Russian and Canadian English speakers shows that listeners may perceive phonetically distinct and licit sound sequences as equivalent when the native language system provides robust evidence for mapping multiple phonetic forms onto a single phonological representation. In Russian, due to an optional but productive t-deletion process that affects /stn/ clusters, the surface forms [sn] and [stn] may be phonologically equivalent and map to a single phonological form /stn/. In contrast, [sn] and [stn] clusters are usually phonologically distinct in (Canadian) English. Behavioral data from identification and discrimination tasks indicated that [sn] and [stn] clusters were more confusable for Russian than for English speakers. The EEG experiment employed an oddball paradigm with nonwords [asna] and [astna] used as the standard and deviant stimuli. A reliable mismatch negativity response was elicited approximately 100 msec postchange in the English group but not in the Russian group. These findings point to a perceptual repair mechanism that is engaged automatically at a prelexical level to ensure immediate encoding of speech inputs in phonological terms, which in turn enables efficient access to the meaning of a spoken utterance.

  5. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    Science.gov (United States)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA) has been used extensively as a tool for obtaining performance indices in many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable and, in certain situations, produce weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input-oriented (CCR-I) and output-oriented (CCR-O) DEA models and the duality formulation with vector average input and output. Three input variables (collection length in metres, collection frequency per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1, while the remaining 20 roads are managed inefficiently.
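
    The input-oriented CCR envelopment problem behind such efficiency scores is a small linear program: minimise theta subject to the peer combination X·lam using no more than theta times DMU j0's inputs while producing at least its outputs. A generic sketch (illustrative, not the study's code):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam_1..lam_n]
    A_ub = np.block([[-X[:, [j0]], X],             # X lam - theta x0 <= 0
                     [np.zeros((s, 1)), -Y]])      # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]
```

An efficient road scores theta = 1; a dominated road scores theta < 1, the fraction of its inputs that would suffice at best-practice performance.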

  6. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal

  7. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients

  8. Combining predictions from linear models when training and test inputs differ

    NARCIS (Netherlands)

    T. van Ommen (Thijs); N.L. Zhang (Nevin); J. Tian (Jin)

    2014-01-01

    textabstractMethods for combining predictions from different models in a supervised learning setting must somehow estimate/predict the quality of a model's predictions at unknown future inputs. Many of these methods (often implicitly) make the assumption that the test inputs are identical to the

  9. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight components, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
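
    As a purely hypothetical illustration of the contrast (the presentation itself gives no formula), the conservative minimum rule can be compared with a sensitivity-weighted pedigree score, where an input the output barely responds to contributes little to the overall score:

```python
def pedigree_score(pedigrees, sensitivities):
    """Compare NASA-STD-7009's conservative rule (worst input pedigree)
    with a hypothetical sensitivity-weighted score. `pedigrees` are the
    per-input pedigree levels; `sensitivities` are nonnegative measures
    of how strongly the output responds to each input."""
    conservative = min(pedigrees)
    total = sum(sensitivities)
    weighted = sum(p * s / total for p, s in zip(pedigrees, sensitivities))
    return conservative, weighted
```

Here a low-pedigree input with negligible sensitivity no longer drags the whole score to its level, which is the kind of refinement the presentation argues for.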

  10. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  11. Variable Selection in Model-based Clustering: A General Variable Role Modeling

    OpenAIRE

    Maugis, Cathy; Celeux, Gilles; Martin-Magniette, Marie-Laure

    2008-01-01

    The currently available variable selection procedures in model-based clustering assume that the irrelevant clustering variables are all independent or are all linked with the relevant clustering variables. We propose a more versatile variable selection model which describes three possible roles for each variable: The relevant clustering variables, the irrelevant clustering variables dependent on a part of the relevant clustering variables and the irrelevant clustering variables totally indepe...

  12. The Modulated-Input Modulated-Output Model

    National Research Council Canada - National Science Library

    Moskowitz, Ira S; Kang, Myong H

    1995-01-01

    .... The data replication problem in database systems is our motivation. We introduce a new queueing theoretic model, the MIMO model, that incorporates burstiness in the sending side and busy periods in the receiving side...

  13. Modeling and Control of a Dual-Input Isolated Full-Bridge Boost Converter

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2012-01-01

    In this paper, a steady-state model, a large-signal (LS) model and an ac small-signal (SS) model for a recently proposed dual-input transformer-isolated boost converter are derived respectively by the switching flow-graph (SFG) nonlinear modeling technique. Based upon the converter’s model, the c....... The measured experimental results match the simulation results fairly well on both input source dynamic and step load transient responses....

  14. Experimental demonstration of continuous variable cloning with phase-conjugate inputs

    DEFF Research Database (Denmark)

    Sabuncu, Metin; Andersen, Ulrik Lund; Leuchs, G.

    2007-01-01

    We report the first experimental demonstration of continuous variable cloning of phase-conjugate coherent states as proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)]. In contrast to this proposal, the cloning transformation is accomplished using only linear optical components......, homodyne detection, and feedforward. As a result of combining phase conjugation with a joint measurement strategy, superior cloning is demonstrated with cloning fidelities reaching 89%....

  15. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    OpenAIRE

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA Nutrient Model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for...

  16. Fresh carbon inputs to seagrass sediments induce variable microbial priming responses.

    Science.gov (United States)

    Trevathan-Tackett, Stacey M; Thomson, Alexandra C G; Ralph, Peter J; Macreadie, Peter I

    2018-04-15

    Microbes are the 'gatekeepers' of the marine carbon cycle, yet the mechanisms for how microbial metabolism drives carbon sequestration in coastal ecosystems are still being defined. The proximity of coastal habitats to runoff and disturbance creates ideal conditions for microbial priming, i.e., the enhanced remineralisation of stored carbon in response to fresh substrate availability and oxygen introduction. Microbial priming, therefore, poses a risk for enhanced CO2 release in these carbon sequestration hotspots. Here we quantified the existence of priming in seagrass sediments and showed that the addition of fresh carbon stimulated a 1.7- to 2.7-fold increase in CO2 release from recent and accumulated carbon deposits. We propose that priming taking place at the sediment surface is a natural occurrence and can be minimised by the recalcitrant components of the fresh inputs (i.e., lignocellulose) and by reduced metabolism in low oxygen and high burial rate conditions. Conversely, priming of deep sediments after the reintroduction to the water column through physical disturbances (e.g., dredging, boat scars) would cause rapid remineralisation of previously preserved carbon. Microbial priming is identified as a process that weakens sediment carbon storage capacity and is a pathway to CO2 release in disturbed or degraded seagrass ecosystems; however, increased management and restoration practices can reduce these anthropogenic disturbances and enhance carbon sequestration capacity. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    Full Text Available Topic models explain a collection of documents with a small set of distributions over terms. These distributions over terms define the topics. Topic models ignore the structure of documents and use a bag-of-words approach which relies solely...

  18. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
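
    The Pareto-dominance criterion described here is simple to state in code. A generic sketch (not the authors' implementation) over a matrix of per-target calibration errors, where lower is better:

```python
import numpy as np

def pareto_frontier(errors):
    """Return indices of input sets that are not dominated: no other set
    fits every calibration target at least as well and at least one
    target strictly better. `errors` is (n_sets x n_targets)."""
    n = errors.shape[0]
    frontier = []
    for i in range(n):
        dominated = any(
            j != i
            and np.all(errors[j] <= errors[i])
            and np.any(errors[j] < errors[i])
            for j in range(n)
        )
        if not dominated:
            frontier.append(i)
    return frontier
```

Unlike a weighted-sum GOF score, this keeps every input set that represents a defensible trade-off among targets, which is why the frontier spans the regions produced by different weighting schemes.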

  19. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observations. This has resulted in improved model evaluations of simulated isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA's mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  20. Using the Iterative Input variable Selection (IIS) algorithm to assess the relevance of ENSO teleconnections patterns on hydro-meteorological processes at the catchment scale

    Science.gov (United States)

    Beltrame, Ludovica; Carbonin, Daniele; Galelli, Stefano; Castelletti, Andrea

    2014-05-01

    Population growth, water scarcity and climate change are three major factors making the understanding of variations in water availability increasingly important. Therefore, reliable medium-to-long range forecasts of streamflows are essential to the development of water management policies. To this purpose, recent modelling efforts have been dedicated to seasonal and inter-annual streamflow forecasts based on the teleconnection between "at-site" hydro-meteorological processes and low frequency climate fluctuations, such as the El Niño Southern Oscillation (ENSO). This work proposes a novel procedure for first detecting the impact of ENSO on hydro-meteorological processes at the catchment scale, and then assessing the potential of ENSO indicators for building medium-to-long range statistical streamflow prediction models. At the core of this procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant forcings of streamflow variability and derive predictive models based on the selected inputs. The procedure is tested on the Columbia (USA) and Williams (Australia) Rivers, where ENSO influence has been well-documented, and then adopted on the unexplored Red River basin (Vietnam). Results show that IIS outcomes on the Columbia and Williams Rivers are consistent with the results of previous studies, and that ENSO indicators can be effectively used to enhance the capabilities of streamflow forecast models. The experiments on the Red River basin show that the ENSO influence is less pronounced, inducing little effect on the basin's hydro-meteorological processes.

  1. Using Crowd Sensed Data as Input to Congestion Model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    2016-01-01

    Emission of airborne pollutants and climate gases from the transport sector is a growing problem, both in industrialised and developing countries. Planning of urban transport systems is essential to minimise the environmental, health and economic impact of congestion in the transport system. To get accurate and timely information on traffic congestion, and by extension information on air pollution, near real time traffic models are needed. We present in this paper an implementation of the Restricted Stochastic User Equilibrium model, which is capable of modelling congestion for very large urban traffic systems in less than an hour. The model is implemented in an open source database system, for easy interface with GIS resources and crowd sensed transportation data.

  2. Catchment2Coast: making the link between coastal resource variability and river inputs

    CSIR Research Space (South Africa)

    Monteiro, P

    2003-07-01

    Full Text Available An interdisciplinary, multi-institutional modelling research project, which will help improve scientific understanding of the linkages between river catchments and their associated coastal environments, was started in October 2002. Named...

  3. Input-dependent wave attenuation in a critically-balanced model of cortex.

    Directory of Open Access Journals (Sweden)

    Xiao-Hu Yan

    Full Text Available A number of studies have suggested that many properties of brain activity can be understood in terms of critical systems. However it is still not known how the long-range susceptibilities characteristic of criticality arise in the living brain from its local connectivity structures. Here we prove that a dynamically critically-poised model of cortex acquires an infinitely-long-ranged susceptibility in the absence of input. When an input is presented, the susceptibility attenuates exponentially as a function of distance, with an increasing spatial attenuation constant (i.e., decreasing range) the larger the input. This is in direct agreement with recent results showing that waves of local field potential activity evoked by single spikes in primary visual cortex of cat and macaque attenuate with a characteristic length that also increases with decreasing contrast of the visual stimulus. A susceptibility that changes spatial range with input strength can be thought of as implementing an input-dependent spatial integration: when the input is large, no evidence beyond the local input is needed; when the input is weak, evidence needs to be integrated over a larger spatial domain to achieve a decision. Such input-strength-dependent strategies have been demonstrated in visual processing. Our results suggest that input-strength-dependent spatial integration may be a natural feature of a critically-balanced cortical network.
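
    The qualitative relationship the abstract reports can be illustrated numerically. The toy model below is not the cortical network of the paper, only its headline behaviour: an exponential decay whose spatial constant shrinks as the input grows, so stronger inputs yield shorter-ranged responses. The functional form of lambda(input) is an assumption made purely for illustration.

```python
import numpy as np

def susceptibility(distance, input_strength, lam0=5.0):
    """Toy input-dependent susceptibility: exponential attenuation with a
    spatial constant that decreases as the input gets stronger."""
    lam = lam0 / (1.0 + input_strength)  # assumed form; not from the paper
    return np.exp(-distance / lam)

# Weak input: the response extends over a larger spatial range.
weak = susceptibility(3.0, 0.5)
strong = susceptibility(3.0, 2.0)
```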

  4. Reissner-Mindlin plate model with uncertain input data

    Czech Academy of Sciences Publication Activity Database

    Hlaváček, Ivan; Chleboun, J.

    2014-01-01

    Roč. 17, Jun (2014), s. 71-88 ISSN 1468-1218 Institutional support: RVO:67985840 Keywords : Reissner-Mindlin model * orthotropic plate Subject RIV: BA - General Mathematics Impact factor: 2.519, year: 2014 http://www.sciencedirect.com/science/article/pii/S1468121813001077

  5. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  6. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    Vicksburg, MS: US Army Engineer Research and Development Center. An electronic copy of this CHETN is available from http://chl.erdc.usace.army.mil/chetn ... Nourishment Module, Chapter 8. In Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) Model of Louisiana Coastal Area (LCA) Comprehensive

  7. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  8. Crop growth modelling and crop yield forecasting using satellite derived meteorological inputs

    NARCIS (Netherlands)

    Wit, de A.J.W.; Diepen, van K.

    2006-01-01

    One of the key challenges for operational crop monitoring and yield forecasting using crop models is to find spatially representative meteorological input data. Currently, weather inputs are often interpolated from low density networks of weather stations or derived from output from coarse (0.5

  9. Little Higgs model limits from LHC - Input for Snowmass 2013

    International Nuclear Information System (INIS)

    Reuter, Juergen; Tonini, Marco; Vries, Maikel de

    2013-07-01

    The status of the most prominent model implementations of the Little Higgs paradigm, the Littlest Higgs with and without discrete T parity as well as the Simplest Little Higgs, are reviewed. For this, we are taking into account a fit to 21 electroweak precision observables from LEP, SLC, Tevatron together with the full 25 fb⁻¹ of Higgs data reported from ATLAS and CMS at Moriond 2013. We also - focusing on the Littlest Higgs with T parity - include an outlook on corresponding direct searches at the 8 TeV LHC and their competitiveness with the EW and Higgs data regarding their exclusion potential. This contribution to the Snowmass procedure serves as a guideline indicating which regions in the parameter space of Little Higgs models remain viable for the upcoming LHC runs and future experiments at the energy frontier. For this we propose two different benchmark scenarios for the Littlest Higgs with T parity, one with heavy mirror quarks, one with light ones.

  10. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced inputs based extreme learning machine with partial least square regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model overcomes two main flaws of the extreme learning machine (ELM), i.e. the intractable problem of determining the optimal number of hidden layer neurons and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are randomly assigned, and the nonlinear transformation of the independent variables is obtained from the output of the hidden layer neurons. Crucially, the original input variables are regarded as enhanced inputs: the enhanced inputs and the nonlinearly transformed variables are tied together as the whole set of independent variables. In this way, PLSR can identify the PLS components not only from the nonlinearly transformed variables but also from the original input variables, which removes the correlation among the whole set of independent variables and the expected outputs. Finally, the optimal relationship model between the whole set of independent variables and the expected outputs is obtained by PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, which indicates that PLSR-EIELM has good robustness. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller relative prediction errors in these two applications. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
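
    The "enhanced inputs" construction can be sketched in a few lines. The fragment below is a minimal stand-in, not the authors' implementation: it builds the random ELM hidden layer, concatenates the original inputs with the hidden-layer outputs, and then substitutes ridge regression where the paper uses PLSR (a deliberate simplification to keep the sketch dependency-free). All sizes and data are illustrative.

```python
import numpy as np

def eielm_fit(X, y, n_hidden=50, lam=1e-2, seed=0):
    """Enhanced-inputs ELM: random, untrained input-to-hidden weights give a
    nonlinear transformation; the original inputs are appended to the hidden
    outputs, and the output weights are solved in closed form (ridge here,
    PLSR in the paper)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    Z = np.column_stack([X, np.tanh(X @ W + b)])   # enhanced inputs
    beta = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
    return W, b, beta

def eielm_predict(model, X):
    W, b, beta = model
    return np.column_stack([X, np.tanh(X @ W + b)]) @ beta

# Illustrative fit: a smooth nonlinear target plus a linear term.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
model = eielm_fit(X, y)
rmse = np.sqrt(np.mean((eielm_predict(model, X) - y) ** 2))
```

    Since the hidden weights are never trained, the only fitted quantity is the closed-form output weight vector, which is what makes ELM-style models fast; the PLSR step of the paper additionally decorrelates the enhanced-input columns before that solve.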

  11. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object models. Our language, Featherweight VML, has several distinctive features. Its architecture and operations are inspired by the recently proposed Common Variability Language (CVL). Its semantics is considerably simpler than that of CVL, while remaining confluent (unlike CVL). We simplify complex ..., which makes it suitable to serve as a specification for implementations of trustworthy variant derivation. Featherweight VML offers insights in the execution of other variability modeling languages such as the Orthogonal Variability Model and Delta Modeling. To the best of our knowledge...

  12. Modeling the Variable Heliopause Location

    Science.gov (United States)

    Hensley, Kerry

    2018-03-01

    In 2012, Voyager 1 zipped across the heliopause. Five and a half years later, Voyager 2 still hasn't followed its twin into interstellar space. Can models of the heliopause location help determine why? How Far to the Heliopause? Artist's conception of the heliosphere with the important structures and boundaries labeled. [NASA/Goddard/Walt Feimer] As our solar system travels through the galaxy, the solar outflow pushes against the surrounding interstellar medium, forming a bubble called the heliosphere. The edge of this bubble, the heliopause, is the outermost boundary of our solar system, where the solar wind and the interstellar medium meet. Since the solar outflow is highly variable, the heliopause is constantly moving, with the motion driven by changes in the Sun. NASA's twin Voyager spacecraft were poised to cross the heliopause after completing their tour of the outer planets in the 1980s. In 2012, Voyager 1 registered a sharp increase in the density of interstellar particles, indicating that the spacecraft had passed out of the heliosphere and into the interstellar medium. The slower-moving Voyager 2 was set to pierce the heliopause along a different trajectory, but so far no measurements have shown that the spacecraft has bid farewell to our solar system. In a recent study, a team of scientists led by Haruichi Washimi (Kyushu University, Japan and CSPAR, University of Alabama-Huntsville) argues that models of the heliosphere can help explain this behavior. Because the heliopause location is controlled by factors that vary on many spatial and temporal scales, Washimi and collaborators turn to three-dimensional, time-dependent magnetohydrodynamics simulations of the heliosphere. In particular, they investigate how the position of the heliopause along the trajectories of Voyager 1 and Voyager 2 changes over time. Modeled location of the heliopause along the paths of Voyagers 1 (blue) and 2 (orange).
The red star indicates the location at which Voyager

  13. Mechanistic interpretation of glass reaction: Input to kinetic model development

    International Nuclear Information System (INIS)

    Bates, J.K.; Ebert, W.L.; Bradley, J.P.; Bourcier, W.L.

    1991-05-01

    Actinide-doped SRL 165 type glass was reacted in J-13 groundwater at 90 °C for times up to 278 days. The reaction was characterized by both solution and solid analyses. The glass was seen to react nonstoichiometrically, with preferential leaching of alkali metals and boron. High resolution electron microscopy revealed the formation of a complex layer structure which became separated from the underlying glass as the reaction progressed. The formation of the layer and its effect on continued glass reaction are discussed with respect to the current model for glass reaction used in the EQ3/6 computer simulation. It is concluded that the layer formed after 278 days is not protective and may eventually become fractured and generate particulates that may be transported by liquid water. 5 refs., 5 figs., 3 tabs.

  14. Comparison of different snow model formulations and their responses to input uncertainties in the Upper Indus Basin

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras

    2017-04-01

    Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.

  15. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  16. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    Directory of Open Access Journals (Sweden)

    Priska Arindya Purnama

    2017-11-01

    Full Text Available The aim of this research is to model and forecast the rainfall in Batu City using a multi input transfer function model based on air temperature, humidity, wind speed and cloud cover. A transfer function model is a multivariate time series model which consists of an output series (Yt) expected to be affected by an input series (Xt) and other inputs grouped in a noise series (Nt). The multi input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4) (pn,qn) = (0,0,0) (23,0,0) (1,2,0) (0,0,0) ([5,8],2), which shows that air temperature on day t affects rainfall on day t, rainfall on day t is influenced by air humidity in the previous 23 days, rainfall on day t is affected by wind speed on the previous day, and rainfall on day t is affected by cloud cover on day t. The results of rainfall forecasting in Batu City with the multi input transfer function model can be considered accurate, because the model produces relatively small RMSE values: 7.7921 for the training data and 4.2184 for the testing data. The multi input transfer function model is therefore suitable for forecasting rainfall in Batu City.
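
    The delay structure identified in the abstract (temperature b=0, humidity b=23, wind speed b=1, cloud b=0) can be mimicked with a lagged regression. The sketch below is a simplification of the full (b, s, r) transfer function model: it keeps only the pure delays b_i and fits ordinary least squares on synthetic data, so the coefficients and series are illustrative rather than the Batu City estimates.

```python
import numpy as np

def fit_delay_regression(y, inputs, delays):
    """OLS fit of y[t] on x_i[t - b_i] for the given pure delays b_i; a
    stand-in for the full (b, s, r) transfer function structure."""
    D, n = max(delays), len(y)
    cols = [x[D - d : n - d] for x, d in zip(inputs, delays)]
    A = np.column_stack(cols + [np.ones(n - D)])
    coef, *_ = np.linalg.lstsq(A, y[D:], rcond=None)
    return coef

rng = np.random.default_rng(7)
delays = (0, 23, 1, 0)            # identified delays for the four inputs
true_coef = (0.8, -0.5, 0.3, 0.2) # illustrative, not the paper's estimates
n, D = 500, max(delays)
xs = [rng.normal(size=n) for _ in delays]
y = np.zeros(n)
y[D:] = sum(c * x[D - d : n - d] for c, x, d in zip(true_coef, xs, delays)) \
        + 0.05 * rng.normal(size=n - D)
coef = fit_delay_regression(y, xs, delays)
```

    With independent regressors and low noise, the recovered coefficients land close to the generating ones, which is the sanity check one would run before trusting the identified delay orders.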

  17. Evaluating the effects of model structure and meteorological input data on runoff modelling in an alpine headwater basin

    Science.gov (United States)

    Schattan, Paul; Bellinger, Johannes; Förster, Kristian; Schöber, Johannes; Huttenlau, Matthias; Kirnbauer, Robert; Achleitner, Stefan

    2017-04-01

    Modelling water resources in snow-dominated mountainous catchments is challenging due to both short concentration times and a highly variable contribution of snow melt in space and time from complex terrain. A number of model setups exist, ranging from physically based models to conceptual models which do not attempt to represent the natural processes in a physically meaningful way. Within the flood forecasting system for the Tyrolean Inn River two serially linked hydrological models with differing process representation are used. Non-glacierized catchments are modelled by a semi-distributed water balance model (HQsim) based on the HRU approach. A fully distributed energy and mass balance model (SES), purpose-built for snow- and icemelt, is used for highly glacierized headwater catchments. Previous work revealed uncertainties and limitations within the models' structures regarding (i) the representation of snow processes in HQsim, (ii) the runoff routing of SES, and (iii) the spatial resolution of the meteorological input data in both models. To overcome these limitations, a "strengths driven" model coupling is applied. Instead of linking the models serially, a vertical one-way coupling of models has been implemented. The fully distributed snow modelling of SES is combined with the semi-distributed HQsim structure, allowing the coupled model to benefit from the soil and runoff routing schemes in HQsim. A Monte Carlo-based modelling experiment was set up to evaluate the resulting differences in the runoff prediction due to the improved model coupling and a refined spatial resolution of the meteorological forcing. The experiment design follows a gradient of spatial discretisation of hydrological processes and meteorological forcing data with a total of six different model setups for the alpine headwater basin of the Fagge River in the Tyrolean Alps. In general, all setups show a good performance for this particular basin.
It is therefore planned to include other basins with differing

  18. Cardinality-dependent Variability in Orthogonal Variability Models

    DEFF Research Database (Denmark)

    Mærsk-Møller, Hans Martin; Jørgensen, Bo Nørregaard

    2012-01-01

    During our work on developing and running a software product line for eco-sustainable greenhouse-production software tools, which currently has three product members, we have identified a need for extending the notation of the Orthogonal Variability Model (OVM) to support what we refer...

  19. Handbook of latent variable and related models

    CERN Document Server

    Lee, Sik-Yum

    2011-01-01

    This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables.
    - Covers a wide class of important models
    - Models and statistical methods described provide tools for analyzing a wide spectrum of complicated data
    - Includes illustrative examples with real data sets from business, education, medicine, public health and sociology
    - Demonstrates the use of a wide variety of statistical, computational, and mathematical techniques

  20. Variability Properties of Four Million Sources in the TESS Input Catalog Observed with the Kilodegree Extremely Little Telescope Survey

    Science.gov (United States)

    Oelkers, Ryan J.; Rodriguez, Joseph E.; Stassun, Keivan G.; Pepper, Joshua; Somers, Garrett; Kafka, Stella; Stevens, Daniel J.; Beatty, Thomas G.; Siverd, Robert J.; Lund, Michael B.; Kuhn, Rudolf B.; James, David; Gaudi, B. Scott

    2018-01-01

    The Kilodegree Extremely Little Telescope (KELT) has been surveying more than 70% of the celestial sphere for nearly a decade. While the primary science goal of the survey is the discovery of transiting, large-radii planets around bright host stars, the survey has collected more than 10^6 images, with a typical cadence of 10–30 minutes, for more than four million sources with apparent visual magnitudes in the approximate range 7 … TESS Input catalog and the AAVSO Variable Star Index to precipitate the follow-up and classification of each source. The catalog is maintained as a living database on the Filtergraph visualization portal at the URL https://filtergraph.com/kelt_vars.

  1. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  2. Garbage In Garbage Out Garbage In : Improving the Inputs and Atmospheric Feedbacks in Seasonal Snowpack Modeling

    Science.gov (United States)

    Gutmann, E. D.

    2016-12-01

    Without good input data, almost any model will produce bad output; however, alpine environments are extremely difficult places to make measurements of those inputs. Perhaps the least well known input is precipitation, but almost as important are temperature, wind, humidity, and radiation. Recent advances in atmospheric modeling have improved the fidelity of the output such that model output is sometimes better than interpolated observations, particularly for precipitation; however these models come with a tremendous computational cost. We describe the Intermediate Complexity Atmospheric Research model (ICAR) as one path to a computationally efficient method to improve snowpack model inputs over complex terrain. ICAR provides estimates of all inputs at a small fraction of the computational cost of a traditional atmospheric model such as the Weather Research and Forecasting model (WRF). Importantly, ICAR is able to simulate feedbacks from the land surface that are critical for estimating the air temperature. In addition, we will explore future improvements to the local wind fields including the use of statistics derived from limited duration Large Eddy Simulation (LES) model runs. These wind fields play a critical role in determining the redistribution of snow, and the redistribution of snow changes the surface topography and thus the wind field. We show that a proper depiction of snowpack redistribution can have a large effect on streamflow timing, and an even larger effect on the climate change signal of that streamflow.

  3. Modeling of heat transfer into a heat pipe for a localized heat input zone

    International Nuclear Information System (INIS)

    Rosenfeld, J.H.

    1987-01-01

    A general model is presented for heat transfer into a heat pipe using a localized heat input. Conduction in the wall of the heat pipe and boiling in the interior structure are treated simultaneously. The model is derived for circumferential heat transfer in a cylindrical heat pipe evaporator and for radial heat transfer in a circular disk with boiling from the interior surface. A comparison is made with data for a localized heat input zone. Agreement between the model and the data is good. This model can be used for design purposes if a boiling correlation is available. The model can be extended to provide improved predictions of heat pipe performance.

  4. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    Full Text Available IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a single representative value is defined for each input at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to the micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. Such a result allows for extending the usability of the model to the meso-scale, also in different countries, depending on the availability of aggregated building data.

  5. Input-output model for MACCS nuclear accident impacts estimation¹

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
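
    The core of any Input-Output calculation is the Leontief inverse: total sector output x solves (I − A)x = d for final demand d, and GDP follows from per-unit value added. The sketch below uses a hypothetical 3-sector coefficient matrix, not REAcct data; it only illustrates how a localized demand disruption propagates into a GDP loss.

```python
import numpy as np

# Hypothetical technical coefficients matrix A: column j gives the inputs
# required from each sector per unit of sector-j output (illustrative values).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.15, 0.05]])
final_demand = np.array([100.0, 150.0, 80.0])

# Leontief inverse: total output x needed to satisfy final demand d.
L = np.linalg.inv(np.eye(3) - A)
x_base = L @ final_demand

# A disruption (e.g. an accident-driven closure) removes 30% of sector 0's demand.
d_disrupted = final_demand * np.array([0.7, 1.0, 1.0])
x_disrupted = L @ d_disrupted

# Value added (GDP contribution) per unit output = 1 - column sums of A.
# In this closed formulation, aggregate value added equals aggregate final
# demand, so the GDP loss equals the lost final demand (30.0 here).
va = 1.0 - A.sum(axis=0)
gdp_loss = va @ (x_base - x_disrupted)
```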

  6. Allocatable Fixed Inputs and Two-Stage Aggregation Models of Multioutput Production Decisions

    OpenAIRE

    Barry T. Coyle

    1993-01-01

    Allocation decisions for a fixed input such as land are incorporated into a two-stage aggregation model of multioutput production decisions. The resulting two-stage model is more realistic and is as tractable for empirical research as the standard model.

  7. Multivariate Self-Exciting Threshold Autoregressive Models with eXogenous Input

    OpenAIRE

    Peter Martey Addo

    2014-01-01

    This study defines a multivariate Self-Exciting Threshold Autoregressive with eXogenous input (MSETARX) model and presents an estimation procedure for its parameters. Conditions for stationarity of the nonlinear MSETARX models are provided. In particular, the efficiency of an adaptive parameter estimation algorithm and of the LSE (least squares estimate) algorithm for this class of models is then assessed via simulations.

  8. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly across three different acupuncture stimulus frequencies. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
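
    The second step of the reconstruction relies on the leaky integrate-and-fire model. The fragment below is a minimal Euler-integrated LIF simulator with dimensionless units and illustrative parameters, not the authors' estimation pipeline; it only shows the forward direction, i.e. how an input current maps to an output spike train.

```python
import numpy as np

def lif_spikes(current, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Euler integration of tau * dv/dt = -v + I(t); returns the time-step
    indices of threshold crossings. The voltage is reset after each spike."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(current):
        v += (dt / tau) * (-v + i_t)
        if v >= v_th:
            spikes.append(t)
            v = v_reset
    return spikes

# A sub-threshold constant input (v -> 0.5 < v_th) never fires, while the
# firing rate of supra-threshold inputs grows with the input strength.
steps = 2000
weak = lif_spikes(np.full(steps, 0.5))
mid = lif_spikes(np.full(steps, 1.5))
strong = lif_spikes(np.full(steps, 2.0))
```

    Inverting this map, recovering the input parameters from the observed spike statistics, is exactly what the state-space estimation and conversion formulas of the paper provide.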

  9. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that the estimated input parameters differ markedly across three different acupuncture stimulus frequencies. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  10. Importance analysis for models with correlated variables and its sparse grid solution

    International Nuclear Information System (INIS)

    Li, Luyi; Lu, Zhenzhou

    2013-01-01

    For structural models involving correlated input variables, a novel interpretation of variance-based importance measures is proposed, based on the contribution of the correlated input variables to the variance of the model output. After this interpretation is compared with existing ones, two solutions for the variance-based importance measures of correlated input variables are built on sparse grid numerical integration (SGI): the double-loop nested sparse grid integration (DSGI) method and the single-loop sparse grid integration (SSGI) method. The DSGI method solves the importance measure by procedurally decreasing the dimensionality of the input variables, while the SSGI method performs importance analysis by extending the dimensionality of the inputs. Both make full use of the advantages of SGI and are tailored to different situations. Analysis of several numerical and engineering examples shows that the proposed interpretation of the importance measures of correlated input variables is reasonable, and that the proposed solution methods are efficient and accurate. -- Highlights: •The contribution of correlated variables to the variance of the output is analyzed. •A novel interpretation for variance-based indices of correlated variables is proposed. •Two solutions for variance-based importance measures of correlated variables are built
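
    For intuition, the variance-based importance measure Var(E[Y|X_i])/Var(Y) that the paper generalizes to correlated inputs can be sketched for the *independent* case with a pick-freeze Monte Carlo estimator. This is not the paper's sparse-grid solution; the function names and the linear test model are illustrative.

    ```python
    import numpy as np

    def first_order_indices(f, d, n=100_000, rng=None):
        """Pick-freeze Monte Carlo estimate of first-order variance-based indices
        S_i = Var(E[Y|X_i]) / Var(Y) for independent standard-normal inputs."""
        rng = rng or np.random.default_rng(0)
        A = rng.standard_normal((n, d))
        B = rng.standard_normal((n, d))
        yA = f(A)
        mean_y, var_y = yA.mean(), yA.var()
        S = np.empty(d)
        for i in range(d):
            C = B.copy()
            C[:, i] = A[:, i]                  # "freeze" column i from A
            S[i] = (np.mean(yA * f(C)) - mean_y**2) / var_y
        return S

    # Linear test model y = 2*x1 + x2: analytic indices are S1 = 4/5, S2 = 1/5
    S = first_order_indices(lambda X: 2 * X[:, 0] + X[:, 1], d=2)
    ```

    The sparse-grid methods of the paper target this same quantity but replace the Monte Carlo sums with deterministic SGI quadrature, and extend the interpretation to correlated inputs.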

  11. Use of regional climate model simulations as an input for hydrological models for the Hindukush-Karakorum-Himalaya region

    NARCIS (Netherlands)

    Akhtar, M.; Ahmad, N.; Booij, Martijn J.

    2009-01-01

    The most important climatological inputs required for the calibration and validation of hydrological models are temperature and precipitation that can be derived from observational records or alternatively from regional climate models (RCMs). In this paper, meteorological station observations and

  12. Backstepping control for a 3DOF model helicopter with input and output constraints

    Directory of Open Access Journals (Sweden)

    Rong Mei

    2016-02-01

    In this article, a backstepping control scheme is developed for the motion control of a three-degrees-of-freedom (3DOF) model helicopter with unknown external disturbance, modelling uncertainties, and input and output constraints. In the developed robust control scheme, augmented state observers estimate the unknown states, the unknown external disturbance, and the modelling uncertainties. Auxiliary systems are designed to deal with input saturation, and a barrier Lyapunov function is employed to handle the output constraint. The stability of the closed-loop system is proved by the Lyapunov method. Simulation results show that the designed control scheme is effective for the motion control of a 3DOF model helicopter in the presence of unknown external disturbance, modelling uncertainties, and input and output saturation.

  13. ASR in a Human Word Recognition Model: Generating Phonemic Input for Shortlist

    OpenAIRE

    Scharenborg, O.E.; Boves, L.W.J.; Veth, J.M. de

    2002-01-01

    The current version of the psycholinguistic model of human word recognition Shortlist suffers from two unrealistic constraints. First, the input of Shortlist must consist of a single string of phoneme symbols. Second, the current version of the search in Shortlist makes it difficult to deal with insertions and deletions in the input phoneme string. This research attempts to fully automatically derive a phoneme string from the acoustic signal that is as close as possible to the number of phone...

  14. Calibration of uncertain inputs to computer models using experimentally measured quantities and the BMARS emulator

    International Nuclear Information System (INIS)

    Stripling, H.F.; McClarren, R.G.; Kuranz, C.C.; Grosskopf, M.J.; Rutter, E.; Torralva, B.R.

    2011-01-01

    We present a method for calibrating the uncertain inputs to a computer model using available experimental data. The goal of the procedure is to produce posterior distributions of the uncertain inputs such that, when samples from the posteriors are used as inputs to future model runs, the model is more likely to replicate (or predict) the experimental response. The calibration is performed by sampling the space of the uncertain inputs, using the computer model (or, more likely, an emulator for the computer model) to assign weights to the samples, and applying the weights to produce the posterior distributions and generate predictions of new experiments within confidence bounds. The method is similar to Markov chain Monte Carlo (MCMC) calibration with independent sampling, except that we generate the samples beforehand and replace the candidate-acceptance routine with a weighting scheme. We apply our method to the calibration of a Hyades 2D model of laser energy deposition in beryllium, employing a Bayesian multivariate adaptive regression splines (BMARS) emulator as a surrogate for Hyades 2D. We treat a range of uncertainties in our system, including uncertainties in the experimental inputs, experimental measurement error, and systematic experimental timing errors. The results of the calibration are posterior distributions that both agree with intuition and improve the accuracy and decrease the uncertainty in experimental predictions. (author)
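
    The sample-then-weight scheme described above can be sketched for a one-parameter toy problem. The model, prior, observation, and error below are invented stand-ins for the Hyades/BMARS setup; only the weighting idea carries over.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical one-parameter "computer model" (or emulator); the true
    # input is 2.5 and the experiment observed y_obs with error sigma.
    def model(theta):
        return theta**2 + np.sin(theta)

    y_obs = model(2.5) + 0.05          # synthetic experimental measurement
    sigma = 0.2                        # assumed measurement error

    # 1) sample the space of the uncertain input (here, from a uniform prior)
    samples = rng.uniform(0.0, 5.0, size=50_000)
    # 2) use the model to assign a likelihood weight to each sample
    w = np.exp(-0.5 * ((model(samples) - y_obs) / sigma) ** 2)
    w /= w.sum()
    # 3) the weighted samples define the posterior; no acceptance step needed
    posterior_mean = float(np.sum(w * samples))
    posterior_sd = float(np.sqrt(np.sum(w * (samples - posterior_mean) ** 2)))
    ```

    The posterior concentrates near the input value that reproduces the observation, with a spread set by the measurement error propagated through the model, mirroring the behavior the paper reports.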

  15. Development of the MARS input model for Kori nuclear units 1 transient analyzer

    International Nuclear Information System (INIS)

    Hwang, M.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.

    2004-11-01

    KAERI has been developing an 'NSSS transient analyzer' based on best-estimate codes for the Kori Nuclear Unit 1 plant. The MARS and RETRAN codes are used as the best-estimate codes for the NSSS transient analyzer. Among these, the MARS code is adopted for realistic analysis of small- and large-break loss-of-coolant accidents with break diameters greater than 2 inches, so a MARS input model for Kori Nuclear Unit 1 is needed. This report includes the input model requirements (hydrodynamic component and heat structure models) and the calculation note for the MARS input data generation for the Kori Nuclear Unit 1 plant analyzer (see the Appendix). To confirm the validity of the input data, we performed calculations for a steady state at the 100% power operating condition and for a double-ended cold-leg break LOCA. The results of the steady-state calculation agree well with the design data, and the results of the LOCA calculation are reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Kori Nuclear Unit 1.

  16. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity of input parameters of the dynamic food chain model DYNACON was analyzed as a function of deposition data for the long-lived radionuclides 137Cs and 90Sr. The influence of input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was also investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were expressed as partial rank correlation coefficients (PRCCs). The sensitivity indices depended strongly on the contamination period as well as on the deposition data. For deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important for long-term as well as short-term contamination; they were also important for short-term contamination when deposition occurred during non-growing stages. For long-term contamination, the influence of input parameters associated with foliar absorption decreased, while the influence of those associated with root uptake increased. These effects were more pronounced for deposition during non-growing stages than during growing stages, and for 90Sr than for 137Cs. For deposition during the growing stages of pasture, the input parameters associated with cattle characteristics, such as the feed-milk transfer factor and the daily intake rate of cattle, were relatively important for the contamination of milk.
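
    The two ingredients of the analysis above, LHS sampling and PRCC computation, can be sketched generically. This is a plain-numpy implementation on a synthetic model, not the DYNACON parameterization; all names and numbers are illustrative.

    ```python
    import numpy as np

    def lhs(n, d, rng):
        """Latin hypercube sample of n points in [0, 1)^d."""
        u = (rng.random((n, d)) + np.arange(n)[:, None]) / n   # one point per stratum
        for j in range(d):
            rng.shuffle(u[:, j])                               # decouple the columns
        return u

    def prcc(X, y):
        """Partial rank correlation coefficient of each column of X with y."""
        def ranks(a):
            return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
        R, r = ranks(X), ranks(y)
        n, d = X.shape
        out = np.empty(d)
        for j in range(d):
            Z = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
            # correlate the residuals after removing the other parameters' influence
            res_x = R[:, j] - Z @ np.linalg.lstsq(Z, R[:, j], rcond=None)[0]
            res_y = r - Z @ np.linalg.lstsq(Z, r, rcond=None)[0]
            out[j] = np.corrcoef(res_x, res_y)[0, 1]
        return out

    rng = np.random.default_rng(0)
    X = lhs(500, 3, rng)
    # Synthetic model: strong positive, moderate negative, and inert parameter
    y = 5.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(500)
    sens = prcc(X, y)
    ```

    The PRCC signs and magnitudes recover the constructed influences: near +1 and -1 for the two active parameters and near 0 for the inert one.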

  17. The interspike interval of a cable model neuron with white noise input.

    Science.gov (United States)

    Tuckwell, H C; Wan, F Y; Wong, Y S

    1984-01-01

    The firing time of a cable model neuron in response to white noise current injection is investigated with various methods. The Fourier decomposition of the depolarization leads to partial differential equations for the moments of the firing time. These are solved by perturbation and numerical methods, and the results obtained are in excellent agreement with those obtained by Monte Carlo simulation. The convergence of the random Fourier series is found to be very slow for small times so that when the firing time is small it is more efficient to simulate the solution of the stochastic cable equation directly using the two different representations of the Green's function, one which converges rapidly for small times and the other which converges rapidly for large times. The shape of the interspike interval density is found to depend strongly on input position. The various shapes obtained for different input positions resemble those for real neurons. The coefficient of variation of the interspike interval decreases monotonically as the distance between the input and trigger zone increases. A diffusion approximation for a nerve cell receiving Poisson input is considered and input/output frequency relations obtained for different input sites. The cases of multiple trigger zones and multiple input sites are briefly discussed.
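
    A space-clamped (point-neuron) simplification of such a simulation, a leaky integrate-and-fire unit driven by white noise and integrated by Euler-Maruyama, already produces an interspike-interval distribution and its coefficient of variation. All parameter values below are illustrative assumptions, not the cable-model values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    tau, v_th, v_reset = 20.0, 1.0, 0.0   # time constant (ms), threshold, reset
    mu, sigma = 1.2, 0.3                  # suprathreshold drift, noise amplitude
    dt, n_trials = 0.05, 2000             # time step (ms), Monte Carlo trials

    v = np.full(n_trials, v_reset)
    t = np.zeros(n_trials)
    active = np.ones(n_trials, dtype=bool)
    for _ in range(200_000):              # hard cap guarantees termination
        if not active.any():
            break
        n_act = int(active.sum())
        v[active] += dt * (mu - v[active]) / tau \
            + sigma * np.sqrt(dt / tau) * rng.standard_normal(n_act)
        t[active] += dt
        active &= v < v_th

    isis = t                              # first-passage times = interspike intervals
    cv = float(isis.std() / isis.mean())  # coefficient of variation of the ISI
    ```

    In the paper, the corresponding coefficient of variation decreases as the input site moves away from the trigger zone; the point model has no spatial axis, so the sketch only shows how the ISI statistics are obtained by simulation.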

  18. Input data requirements for performance modelling and monitoring of photovoltaic plants

    DEFF Research Database (Denmark)

    Gavriluta, Anamaria Florina; Spataru, Sergiu; Sera, Dezso

    2018-01-01

    This work investigates the input data requirements in the context of performance modeling of thin-film photovoltaic (PV) systems. The analysis focuses on the PVWatts performance model, well suited for on-line performance monitoring of PV strings, due to its low number of parameters and high...... accuracy. The work aims to identify the minimum amount of input data required for parameterizing an accurate model of the PV plant. The analysis was carried out for both amorphous silicon (a-Si) and cadmium telluride (CdTe), using crystalline silicon (c-Si) as a base for comparison. In the studied cases...

  19. An integrated model for the assessment of global water resources Part 1: Model description and input meteorological forcing

    Science.gov (United States)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.

    2008-07-01

    basins and less than ±2 mo in 25 basins. The performance was similar to the best available precedent studies with closure of energy and water. The input meteorological forcing component and the integrated model provide a framework with which to assess global water resources, with the potential application to investigate the subannual variability in water resources.

  20. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying

  1. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  2. Improving the Performance of Water Demand Forecasting Models by Using Weather Input

    NARCIS (Netherlands)

    Bakker, M.; Van Duist, H.; Van Schagen, K.; Vreeburg, J.; Rietveld, L.

    2014-01-01

    Literature shows that water demand forecasting models which use water demand as single input, are capable of generating a fairly accurate forecast. However, at changing weather conditions the forecasting errors are quite large. In this paper three different forecasting models are studied: an

  3. Logistics flows and enterprise input-output models: aggregate and disaggregate analysis

    NARCIS (Netherlands)

    Albino, V.; Yazan, Devrim; Messeni Petruzzelli, A.; Okogbaa, O.G.

    2011-01-01

    In the present paper, we propose the use of enterprise input-output (EIO) models to describe and analyse the logistics flows considering spatial issues and related environmental effects associated with production and transportation processes. In particular, transportation is modelled as a specific

  4. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  5. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the MELCOR 1.8.5 code for the Swedish Oskarshamn 3 boiling water reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Some primary system data were compared and checked against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor in its current state, at an operating power of 3300 MWth. One aim of this work is that the MELCOR input could also be used for power-upgrading studies; all fuel assemblies are thus assumed to consist of the new Westinghouse-Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratories under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to the STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in the year 2000, but a new version, 1.8.6, was distributed recently; conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced to be released shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package, developed by Oak Ridge National Laboratory especially for BWRs, in addition to the ordinary MELCOR COR package. Since problems arose running with the BH input, two versions of the O3 input deck were produced: a NONBH deck and a BH deck. The BH package is no longer a separate package in the new 1

  6. Relevance units latent variable model and nonlinear dimensionality reduction.

    Science.gov (United States)

    Gao, Junbin; Zhang, Jun; Tien, David

    2010-01-01

    A new dimensionality reduction method, called the relevance units latent variable model (RULVM), is proposed in this paper. RULVM has a close link with the framework of the Gaussian process latent variable model (GPLVM) and originates from a recently developed sparse kernel model called the relevance units machine (RUM). RUM follows the idea of the relevance vector machine (RVM) under the Bayesian framework but relaxes the constraint that relevance vectors (RVs) must be selected from the input vectors; instead, RUM treats relevance units (RUs) as part of the parameters to be learned from the data. As a result, RUM retains all the advantages of RVM and offers superior sparsity. RULVM inherits the sparseness offered by RUM, and experimental results show that the RULVM algorithm possesses considerable computational advantages over the GPLVM algorithm.

  7. Design of vaccination and fumigation on Host-Vector Model by input-output linearization method

    Science.gov (United States)

    Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning

    2017-03-01

    Here, we analyze a host-vector model and propose a design of vaccination and fumigation to control the infectious population using feedback control, specifically the input-output linearization method. The host population is divided into three compartments: susceptible, infectious, and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as inputs and the infectious population as the output. The objective of the design is to stabilize the output so that it asymptotically tends to zero. We also present examples to illustrate the design.
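
    A minimal forward-Euler sketch of such a host-vector system with vaccination and fumigation inputs is shown below. The paper derives the inputs by input-output linearization; here constant controls stand in for those feedback laws, and all rate values are hypothetical illustration numbers.

    ```python
    # SIR host / SI vector compartments with constant vaccination (u1) and
    # fumigation (u2) inputs; the infectious host fraction I_h is the output.
    beta_h, beta_v = 0.5, 0.4        # host and vector transmission rates
    gamma, mu_v = 0.1, 0.1           # host recovery rate, vector turnover rate
    u1, u2 = 0.05, 0.15              # vaccination and fumigation efforts
    dt, T = 0.01, 200.0

    S_h, I_h, R_h = 0.9, 0.1, 0.0    # host fractions
    S_v, I_v = 0.95, 0.05            # vector fractions

    for _ in range(int(T / dt)):
        new_h = beta_h * S_h * I_v   # new host infections (vector -> host)
        new_v = beta_v * S_v * I_h   # new vector infections (host -> vector)
        dS_h = -new_h - u1 * S_h
        dI_h = new_h - gamma * I_h
        dR_h = gamma * I_h + u1 * S_h
        dS_v = mu_v - new_v - (mu_v + u2) * S_v
        dI_v = new_v - (mu_v + u2) * I_v
        S_h += dt * dS_h; I_h += dt * dI_h; R_h += dt * dR_h
        S_v += dt * dS_v; I_v += dt * dI_v
    ```

    With vaccination depleting the susceptible hosts and fumigation removing vectors, the infectious output is driven toward zero, the qualitative behavior the linearizing controller is designed to enforce.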

  8. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect: the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of the input. The neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have so far been addressed separately. Our results suggest that an efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
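
    The key premise, inputs whose mean is fixed while their variance grows with the spike-coincidence probability, can be sketched with a common-source spike model. Accumulating the squared, mean-subtracted summed input then ramps at a rate set by the coincidence probability, which is the quantity the network integrates. All parameters are illustrative, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, T, p = 50, 20000, 0.02              # afferents, time bins, spike prob/bin

    slopes, means = {}, {}
    for c in (0.0, 0.5):
        common = rng.random(T) < c * p             # coincident events: all N fire
        private = rng.random((N, T)) < (1 - c) * p # independent private events
        s = private.sum(axis=0) + N * common       # summed input per bin
        means[c] = float(s.mean())                 # mean is ~N*p for every c
        ramp = np.cumsum((s - s.mean()) ** 2)      # integrate the input variance
        slopes[c] = float(ramp[-1] / T)            # ramp rate ~ input variance
    ```

    The two conditions have (statistically) the same mean input, but the correlated condition ramps far faster, mirroring the paper's result that the integration rate is proportional to the coincidence probability.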

  9. Effects of uncertain topographic input data on two-dimensional modeling of flow hydraulics, habitat suitability, and bed mobility

    Science.gov (United States)

    Legleiter, C. J.; McDonald, R.; Kyriakidis, P. C.; Nelson, J. M.

    2009-12-01

    Numerical models of flow and sediment transport increasingly are used to inform studies of aquatic habitat and river morphodynamics. Accurate topographic information is required to parameterize such models, but this fundamental input is typically subject to considerable uncertainty, which can propagate through a model to produce uncertain predictions of flow hydraulics. In this study, we examined the effects of uncertain topographic input on the output from FaSTMECH, a two-dimensional, finite difference flow model implemented on a regular, channel-centered grid; the model was applied to a simple, restored gravel-bed river. We adopted a spatially explicit stochastic simulation approach because elevation differences (i.e., perturbations) at one node of the computational grid influenced model predictions at nearby nodes, due to the strong coupling between proximal locations dictated by the governing equations of fluid flow. Geostatistical techniques provided an appropriate framework for examining the impacts of topographic uncertainty by generating many, equally likely realizations, each consistent with a statistical model summarizing the variability and spatial structure of channel morphology. By applying the model to each realization in turn, a distribution of model outputs was generated for each grid node. One set of realizations, conditioned to the available survey data and progressively thinned versions thereof, was used to quantify the effects of sampling strategy on topographic uncertainty and hence the uncertainty of model predictions. This analysis indicated that as the spacing between surveyed cross-sections increased, the reach-averaged ensemble standard deviation of water surface elevation, depth, velocity, and boundary shear stress increased as well, for both baseflow conditions and for a discharge of ~75% bankfull. A second set of realizations was generated by retaining randomly selected subsets of the original survey data and used to investigate the
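
    The stochastic-simulation approach can be sketched in miniature: draw equally likely bed-elevation realizations from a Gaussian process and propagate each through a (here trivial) hydraulic response to obtain per-node output distributions. The covariance model, toy "flow model", and numbers are invented for illustration and are not the surveyed reach or FaSTMECH.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 100.0, 200)      # streamwise distance (m)
    sill, corr_range = 0.05, 15.0         # assumed variance (m^2) and range (m)
    # Exponential covariance model and its Cholesky factor for simulation
    C = sill * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_range)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))

    mean_bed = 10.0 - 0.01 * x            # gently sloping mean bed profile
    n_real = 500                          # number of equally likely realizations
    reals = mean_bed + (L @ rng.standard_normal((x.size, n_real))).T

    stage = 10.3 - 0.01 * x               # fixed water-surface elevation
    depths = stage - reals                # trivial stand-in for the flow model
    ensemble_sd = depths.std(axis=0)      # per-node uncertainty of the "output"
    ```

    In the study, each realization is instead run through the 2-D flow model, so the ensemble spread of depth, velocity, and shear stress also reflects the coupling between neighboring nodes imposed by the governing equations.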

  10. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    OpenAIRE

    Keller Alevtina; Vinogradova Tatyana

    2017-01-01

    The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop the algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Implementation of terms of the...

  11. Stream Heat Budget Modeling of Groundwater Inputs: Model Development and Validation

    Science.gov (United States)

    Glose, A.; Lautz, L. K.

    2012-12-01

    Models of physical processes in fluvial systems are useful for improving understanding of hydrologic systems and for predicting future conditions. Process-based models of fluid flow and heat transport in fluvial systems can be used to quantify unknown spatial and temporal patterns of hydrologic fluxes, such as groundwater discharge, and to predict system response to future change. In this study, a stream heat budget model was developed and calibrated to observed stream water temperature data for Meadowbrook Creek in Syracuse, NY. The one-dimensional (longitudinal), transient stream temperature model is programmed in MATLAB and solves the equations for heat and fluid transport using a Crank-Nicolson finite difference scheme. The model considers four meteorologically driven heat fluxes: shortwave solar radiation, longwave radiation, latent heat flux, and sensible heat flux. Streambed conduction is also considered. Input data for the model were collected from June 13-18, 2012 over a 500 m reach of Meadowbrook Creek, a first-order urban stream that drains a retention pond in the city of Syracuse, NY. Stream temperature data were recorded every 20 m along the stream at 5-minute intervals using iButtons (model DS1922L, accuracy of ±0.5°C, resolution of 0.0625°C). Meteorological data, including air temperature, solar radiation, relative humidity, and wind speed, were recorded at 5-minute intervals using an on-site weather station. Groundwater temperature was measured in wells adjacent to the stream. Stream dimensions, bed temperatures, and the type of bed sediments were also recorded. A constant-rate tracer injection of Rhodamine WT was used to independently quantify groundwater inputs every 10 m and validate model results. Stream temperatures fluctuated diurnally by ~3-5 °C during the observation period, peaking around 2 pm and cooling overnight to a minimum between 6 and 7 am. Spatially, the stream shows a cooling trend along the
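
    The transport core of such a model, reduced here to 1-D diffusion advanced with a Crank-Nicolson scheme (omitting advection and the four meteorological flux terms of the full heat budget), can be sketched as follows; the grid, diffusivity, and initial profile are illustrative.

    ```python
    import numpy as np

    def crank_nicolson(T0, alpha, dx, dt, steps):
        """Advance the 1-D heat equation T_t = alpha * T_xx with fixed ends."""
        n = T0.size
        r = alpha * dt / (2 * dx**2)
        # (I - r*L) T^{n+1} = (I + r*L) T^n, with L the tridiagonal Laplacian
        main, off = np.full(n, 1 + 2 * r), np.full(n - 1, -r)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        B = 2 * np.eye(n) - A                      # equals I + r*L
        # Dirichlet boundaries: hold the end temperatures fixed
        for M in (A, B):
            M[0, :] = 0; M[0, 0] = 1
            M[-1, :] = 0; M[-1, -1] = 1
        T = T0.copy()
        for _ in range(steps):
            T = np.linalg.solve(A, B @ T)
        return T

    x = np.linspace(0.0, 1.0, 51)
    T0 = np.sin(np.pi * x)       # analytic solution decays as exp(-pi^2 * alpha * t)
    T = crank_nicolson(T0, alpha=1.0, dx=x[1] - x[0], dt=1e-3, steps=100)
    ```

    Against the analytic sine-mode decay, the scheme is accurate to second order in both space and time, which is why it is the usual choice for stream temperature models of this kind.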

  12. An integrated model for the assessment of global water resources - Part 1: Input meteorological forcing and natural hydrological cycle modules

    Science.gov (United States)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shen, Y.; Tanaka, K.

    2007-10-01

    application to investigate the subannual variability in water resources. GSWP2 participants are encouraged to re-run their model using this newly developed meteorological forcing input, which is in identical format to the original GSWP2 forcing input.

  13. Input-Output model for waste management plan for Nigeria | Njoku ...

    African Journals Online (AJOL)

    An input-output model for a waste management plan has been developed for Nigeria, based on the Leontief concept and life-cycle analysis. Waste was considered a source of pollution, a loss of resources, and a source of greenhouse gas emissions from bio-chemical treatment and decomposition, with negative impact on the ...

  14. Land cover models to predict non-point nutrient inputs for selected ...

    African Journals Online (AJOL)

    WQSAM is a practical water quality model for use in guiding southern African water quality management. However, the estimation of non-point nutrient inputs within WQSAM is uncertain, as it is achieved through a combination of calibration and expert knowledge. Non-point source loads can be correlated to particular land ...

  15. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  16. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  17. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  18. Integrating models that depend on variable data

    Science.gov (United States)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often the dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log transformation can be a black box for typical users. Placing the log transformation into the statistical perspective of error-based weighting has not, to the best of our knowledge, been considered before. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR), with simulations conducted in MATLAB. The example represents stream transport of nitrogen with up to eight independent variables; the single dependent variable has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log
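
    The log-transform treatment discussed above can be sketched on synthetic data with multiplicative errors, the situation in which it is statistically appropriate. The data and coefficients below are invented for illustration, not the nitrogen data set.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.uniform(0.0, 4.0, n)
    X = np.column_stack([np.ones(n), x])      # MLR design matrix
    beta_true = np.array([0.5, 2.0])
    # Multiplicative (log-normal) errors: y spans roughly four orders of magnitude
    y = np.exp(X @ beta_true + 0.3 * rng.standard_normal(n))

    # OLS on the log-transformed dependent variable recovers the coefficients
    beta_log = np.linalg.lstsq(X, np.log(y), rcond=None)[0]

    # OLS on the raw scale, for contrast, is dominated by the largest y values
    beta_raw = np.linalg.lstsq(X, y, rcond=None)[0]
    ```

    Error-based weighting with weights 1/sigma_i^2 aims at the same heteroscedasticity on the raw scale; the paper's point is that a constant-coefficient-of-variation weighting can over-correct, overemphasizing the low values.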

  19. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope

    Directory of Open Access Journals (Sweden)

    Cheng-Yang Chang

    2017-10-01

    Full Text Available Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the “open loop sensitivity” of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  20. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
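
The first-order (linear) term of such a point-process series can be illustrated with a toy simulation: for a Poisson input train, the input-output cross-covariance recovers the linear kernel. All rates and kernel values below are invented for illustration and are not taken from the muscle-spindle data.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 400.0                 # 1 ms bins, hypothetical record length (s)
n = int(T / dt)

# Poisson input spike train (a stand-in for a fusimotor drive), 20 spikes/s.
x = rng.random(n) < 20.0 * dt

# Linear point-process model: output intensity = baseline + kernel * input.
kernel = np.zeros(50)
kernel[5:25] = 40.0                  # +40 spikes/s, 5-25 ms after an input spike
intensity = 5.0 + np.convolve(x.astype(float), kernel)[:n]
y = rng.random(n) < intensity * dt   # thinned Bernoulli approximation

# For a Poisson input, the linear (first-order) term of the series is the
# input-output cross-covariance, which recovers the kernel.
est = np.array([y[u:][x[:n - u]].mean() / dt for u in range(len(kernel))])
est -= y.mean() / dt                 # subtract the mean output rate
```

Second-order (quadratic) terms, which express how *pairs* of input spikes contribute to an output spike, would be estimated analogously from third-order cross-cumulants.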

  1. Design, Fabrication, and Modeling of a Novel Dual-Axis Control Input PZT Gyroscope.

    Science.gov (United States)

    Chang, Cheng-Yang; Chen, Tsung-Lin

    2017-10-31

    Conventional gyroscopes are equipped with a single-axis control input, limiting their performance. Although researchers have proposed control algorithms with dual-axis control inputs to improve gyroscope performance, most have verified the control algorithms through numerical simulations because they lacked practical devices with dual-axis control inputs. The aim of this study was to design a piezoelectric gyroscope equipped with a dual-axis control input so that researchers may experimentally verify those control algorithms in future. Designing a piezoelectric gyroscope with a dual-axis control input is more difficult than designing a conventional gyroscope because the control input must be effective over a broad frequency range to compensate for imperfections, and the multiple mode shapes in flexural deformations complicate the relation between flexural deformation and the proof mass position. This study solved these problems by using a lead zirconate titanate (PZT) material, introducing additional electrodes for shielding, developing an optimal electrode pattern, and performing calibrations of undesired couplings. The results indicated that the fabricated device could be operated at 5.5±1 kHz to perform dual-axis actuations and position measurements. The calibration of the fabricated device was completed by system identifications of a new dynamic model including gyroscopic motions, electromechanical coupling, mechanical coupling, electrostatic coupling, and capacitive output impedance. Finally, without the assistance of control algorithms, the "open loop sensitivity" of the fabricated gyroscope was 1.82 μV/deg/s with a nonlinearity of 9.5% full-scale output. This sensitivity is comparable with those of other PZT gyroscopes with single-axis control inputs.

  2. Modelling relationship between rainfall variability and yields

    African Journals Online (AJOL)

    yield models should be used for planning and forecasting the yield of millet and sorghum in the study area. Key words: modelling, rainfall, yields, millet, sorghum. INTRODUCTION. Meteorological variables, such as rainfall parameters, temperature, sunshine hours, relative humidity, and wind velocity and soil moisture are.

  3. Variability in shell models of GRBs

    Science.gov (United States)

    Sumner, M. C.; Fenimore, E. E.

    1997-01-01

    Many cosmological models of gamma-ray bursts (GRBs) assume that a single relativistic shell carries kinetic energy away from the source and later converts it into gamma rays, perhaps by interactions with the interstellar medium or by internal shocks within the shell. Although such models are able to reproduce general trends in GRB time histories, it is difficult to reproduce the high degree of variability often seen in GRBs. The authors investigate methods of achieving this variability using a simplified external shock model. Since the model emphasizes geometric and statistical considerations, rather than the detailed physics of the shell, it is applicable to any theory that relies on relativistic shells. They find that the variability in GRBs gives strong clues to the efficiency with which the shell converts its kinetic energy into gamma rays.

  4. Responses of two nonlinear microbial models to warming and increased carbon input

    Science.gov (United States)

    Wang, Y. P.; Jiang, J.; Chen-Charpentier, B.; Agusto, F. B.; Hastings, A.; Hoffman, F.; Rasmussen, M.; Smith, M. J.; Todd-Brown, K.; Wang, Y.; Xu, X.; Luo, Y. Q.

    2016-02-01

    A number of nonlinear microbial models of soil carbon decomposition have been developed. Some of them have been applied globally but have yet to be shown to realistically represent soil carbon dynamics in the field. A thorough analysis of their key differences is needed to inform future model developments. Here we compare two nonlinear microbial models of soil carbon decomposition: one based on reverse Michaelis-Menten kinetics (model A) and the other on regular Michaelis-Menten kinetics (model B). Using analytic approximations and numerical solutions, we find that the oscillatory responses of carbon pools to a small perturbation in their initial pool sizes dampen faster in model A than in model B. Soil warming always decreases carbon storage in model A, but in model B it predominantly decreases carbon storage in cool regions and increases carbon storage in warm regions. For both models, the CO2 efflux from soil carbon decomposition reaches a maximum value some time after increased carbon input (as in priming experiments). This maximum CO2 efflux (Fmax) decreases with an increase in soil temperature in both models. However, the sensitivity of Fmax to the increased amount of carbon input increases with soil temperature in model A but decreases monotonically with an increase in soil temperature in model B. These differences in the responses to soil warming and carbon input between the two nonlinear models can be used to discern which model is more realistic when compared to results from field or laboratory experiments. These insights will contribute to an improved understanding of the significance of soil microbial processes in soil carbon responses to future climate change.
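
A minimal sketch of the two kinetics contrasted above, with invented parameter values: the only difference between "model A" and "model B" is whether the Michaelis-Menten saturation term is in microbial biomass (reverse) or in substrate carbon (regular). Both produce a CO2 efflux maximum some time after an increased carbon input.

```python
import numpy as np

# C = soil carbon, B = microbial biomass; all parameter values are invented.
def simulate(reverse_mm, dt=0.01, steps=80000):
    C, B = 110.0, 2.0                  # pools after an added carbon pulse
    I, Vmax, K, eps, mu = 1.0, 0.1, 50.0, 0.4, 0.02
    efflux = []
    for _ in range(steps):
        if reverse_mm:                 # model A: reverse Michaelis-Menten
            decomp = Vmax * C * B / (K + B)
        else:                          # model B: regular Michaelis-Menten
            decomp = Vmax * B * C / (K + C)
        efflux.append((1 - eps) * decomp)   # respired fraction -> CO2 efflux
        C += (I - decomp) * dt              # forward-Euler integration
        B += (eps * decomp - mu * B) * dt
    return np.array(efflux)

fa, fb = simulate(True), simulate(False)    # both peak after the carbon input
```

Comparing the timing and temperature sensitivity of these peaks (Fmax in the abstract) is what lets field or laboratory priming experiments discriminate between the two formulations.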

  5. Development of the MARS input model for Ulchin 1/2 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Chung, B. D.; Hwang, M.

    2003-03-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes for Ulchin 1/2 plants. The MARS and RETRAN codes are used as the best-estimate codes for the NSSS transient analyzer. Of the two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents, for which the break size is greater than 2 inches in diameter. This report includes the input model requirements and the calculation note for the Ulchin 1/2 MARS input data generation (see the Appendix). In order to confirm the validity of the input data, we performed the calculations for a steady state at 100 % power operation condition and a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 1/2.

  6. Development of the MARS input model for Ulchin 3/4 transient analyzer

    International Nuclear Information System (INIS)

    Jeong, J. J.; Kim, K. D.; Lee, S. W.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Hwang, M. G.

    2003-12-01

    KAERI has been developing the NSSS transient analyzer based on best-estimate codes. The MARS and RETRAN codes are adopted as the best-estimate codes for the NSSS transient analyzer. Of these two codes, the MARS code is to be used for realistic analysis of small- and large-break loss-of-coolant accidents, for which the break size is greater than 2 inches in diameter. This report includes the MARS input model requirements and the calculation note for the MARS input data generation (see the Appendix) for the Ulchin 3/4 plant analyzer. In order to confirm the validity of the input data, we performed the calculations for a steady state at 100 % power operation condition and a double-ended cold leg break LOCA. The results of the steady-state calculation agree well with the design data. The results of the LOCA calculation seem to be reasonable and consistent with those of other best-estimate calculations. Therefore, the MARS input data can be used as a base input deck for the MARS transient analyzer for Ulchin 3/4.

  7. Human upright posture control models based on multisensory inputs; in fast and slow dynamics.

    Science.gov (United States)

    Chiba, Ryosuke; Takakusaki, Kaoru; Ota, Jun; Yozu, Arito; Haga, Nobuhiko

    2016-03-01

    Posture control to maintain an upright stance is one of the most important and basic requirements in the daily life of humans. The sensory inputs involved in posture control include visual and vestibular inputs, as well as proprioceptive and tactile somatosensory inputs. These multisensory inputs are integrated to represent the body state (body schema); this is then utilized in the brain to generate the motion. Changes in the multisensory inputs result in postural alterations (fast dynamics), as well as long-term alterations in multisensory integration and posture control itself (slow dynamics). In this review, we discuss the fast and slow dynamics, with a focus on multisensory integration including an introduction of our study to investigate "internal force control" with multisensory integration-evoked posture alteration. We found that the study of the slow dynamics is lagging compared to that of fast dynamics, such that our understanding of long-term alterations is insufficient to reveal the underlying mechanisms and to propose suitable models. Additional studies investigating slow dynamics are required to expand our knowledge of this area, which would support the physical training and rehabilitation of elderly and impaired persons. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  8. ANALYSIS OF THE BANDUNG CHANGES EXCELLENT POTENTIAL THROUGH INPUT-OUTPUT MODEL USING INDEX LE MASNE

    Directory of Open Access Journals (Sweden)

    Teti Sofia Yanti

    2017-03-01

    Full Text Available An Input-Output Table is arranged to present an overview of the interrelationships and interdependence between units of activity (sector production) in the whole economy; input-output models are therefore complete and comprehensive analytical tools. Input-output tables are useful for analysis of the economic structure at the national/regional level, covering the structure of production and value-added (GDP) of each sector. For comprehensive planning and evaluation of development outcomes at both the national and smaller (district/city) scales, a regional development planning approach can use input-output analysis. The analysis of Bandung's economic structure used the Le Masne index, comparing the technology coefficients in 2003 and 2008, of which nearly 50% changed. The trade sector has grown more conspicuously than other areas, followed by road transport services and air transport services; the development priorities and investment of Bandung should be directed to these areas, because they can provide thrust and attraction for the growth of other areas. The areas that experienced the highest decrease were Industrial Chemicals and Goods from Chemistry, followed by the Oil and Refinery Industry and the Textile Industry Except For Garment.

  9. Modelling groundwater discharge areas using only digital elevation models as input data

    International Nuclear Information System (INIS)

    Brydsten, Lars

    2006-10-01

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much can topography alone explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low situated slopes. Areas in-between these make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern only using topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data, geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulations for predicting future waterways, and topographical wetness indexes for dividing in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the
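
The topographical wetness index named above can be sketched on a synthetic DEM: TWI = ln(a / tan β), with the specific catchment area a obtained from a simple D8 single-flow-direction accumulation. The grid, cell size, and valley geometry below are invented; an application like the one described would use the SKB 10-metre DEM and more robust flow routing.

```python
import numpy as np

# Synthetic DEM: a plane tilted along x with a valley along the central row.
cell = 10.0                                             # cell size (m)
yy, xx = np.mgrid[0:50, 0:50]
dem = 0.05 * cell * xx + 2.0 * np.abs(yy - 25) / 25.0

# Slope (radians) from finite differences.
gy, gx = np.gradient(dem, cell)
slope = np.arctan(np.hypot(gx, gy))

# D8 flow accumulation: visit cells from high to low elevation, passing each
# cell's accumulated area to its steepest downslope neighbour.
acc = np.full(dem.shape, cell * cell)                   # each cell's own area
for idx in np.argsort(dem, axis=None)[::-1]:
    r, c = divmod(idx, dem.shape[1])
    best, drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr or dc) and 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
                d = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if d > drop:
                    best, drop = (rr, cc), d
    if best is not None:
        acc[best] += acc[r, c]

# Wetness index: specific catchment area over local slope.
twi = np.log((acc / cell) / np.maximum(np.tan(slope), 1e-6))
```

High TWI values collect in the low-gradient valley floor, the cells the model would classify as discharge areas; low TWI marks ridges, i.e. recharge areas.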

  10. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2006-10-15

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much can topography alone explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low situated slopes. Areas in-between these make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern only using topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data, geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulations for predicting future waterways, and topographical wetness indexes for dividing in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the

  11. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    OpenAIRE

    Liu, Bing; Xu, Ling; Kang, Baolin

    2013-01-01

    By using a pollution model and an impulsive delay differential equation, we formulate a pest control model with stage structure for the natural enemy in a polluted environment by introducing a constant periodic pollutant input and killing pests at different fixed moments, and investigate the dynamics of such a system. We assume only that the natural enemies are affected by pollution, and we choose a method to kill the pest without harming natural enemies. Sufficient conditions for global attractivity ...

  12. Input parameters for LEAP and analysis of the Model 22C data base

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, L.; Goldstein, M.

    1981-05-01

    The input data for the Long-Term Energy Analysis Program (LEAP) employed by EIA for projections of long-term energy supply and demand in the US were studied and additional documentation provided. Particular emphasis has been placed on the LEAP Model 22C input data base, which was used in obtaining the output projections which appear in the 1978 Annual Report to Congress. Definitions, units, associated model parameters, and translation equations are given in detail. Many parameters were set to null values in Model 22C so as to turn off certain complexities in LEAP; these parameters are listed in Appendix B along with parameters having constant values across all activities. The values of the parameters for each activity are tabulated along with the source upon which each parameter is based - and appropriate comments provided, where available. The structure of the data base is briefly outlined and an attempt made to categorize the parameters according to the methods employed for estimating the numerical values. Due to incomplete documentation and/or lack of specific parameter definitions, few of the input values could be traced and uniquely interpreted using the information provided in the primary and secondary sources. Input parameter choices were noted which led to output projections which are somewhat suspect. Other data problems encountered are summarized. Some of the input data were corrected and a revised base case was constructed. The output projections for this revised case are compared with the Model 22C output for the year 2020, for the Transportation Sector. LEAP could be a very useful tool, especially so in the study of emerging technologies over long-time frames.

  13. Influence of the spatial extent and resolution of input data on soil carbon models in Florida, USA

    Science.gov (United States)

    Vasques, Gustavo M.; Grunwald, S.; Myers, D. Brenton

    2012-12-01

    Understanding the causes of spatial variation of soil carbon (C) has important implications for regional and global C dynamics studies. Soil C predictive models can identify sources of C variation, but may be influenced by scale parameters, including the spatial extent and resolution of input data. Our objective was to investigate the influence of these scale parameters on soil C spatial predictive models in Florida, USA. We used data from three nested spatial extents (Florida, 150,000 km²; Santa Fe River watershed, 3,585 km²; and University of Florida Beef Cattle Station, 5.58 km²) to derive stepwise linear models of soil C as a function of 24 environmental properties. Models were derived within the three extents and for seven resolutions (30-1920 m) of input environmental data in Florida and in the watershed, then cross-evaluated among extents and resolutions, respectively. The quality of soil C models increased with an increase in the spatial extent (R² from 0.10 in the cattle station to 0.61 in Florida) and with a decrease in the resolution of input data (R² from 0.33 at 1920-m resolution to 0.61 at 30-m resolution in Florida). Soil and hydrologic variables were the most important across the seven resolutions both in Florida and in the watershed. The spatial extent and resolution of environmental covariates modulate soil C variation and soil-landscape correlations influencing soil C predictive models. Our results provide scale boundaries to observe environmental data and assess soil C spatial patterns, supporting C sequestration, budgeting and monitoring programs.

  14. CONSTRUCTION OF A DYNAMIC INPUT-OUTPUT MODEL WITH A HUMAN CAPITAL BLOCK

    Directory of Open Access Journals (Sweden)

    Baranov A. O.

    2017-03-01

    Full Text Available The accumulation of human capital is an important factor of economic growth. It seems to be useful to include «human capital» as a factor of a macroeconomic model, as it helps to take into account the quality differentiation of the workforce. Most of the models usually distinguish labor force by the levels of education, while some of the factors remain unaccounted. Among them are health status and culture development level, which influence productivity level as well as gross product reproduction. Inclusion of the human capital block to the interindustry model can help to make it more reliable for economic development forecasting. The article presents a mathematical description of the extended dynamic input-output model (DIOM with a human capital block. The extended DIOM is based on the Input-Output Model from The KAMIN system (the System of Integrated Analyses of Interindustrial Information developed at the Institute of Economics and Industrial Engineering of the Siberian Branch of the Academy of Sciences of the Russian Federation and at the Novosibirsk State University. The extended input-output model can be used to analyze and forecast development of Russian economy.

  15. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher, T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
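
The linear input/output assumption can be written as P_in = P_idle + P_out/η, averaged over draw and idle periods. The sketch below fits the two parameters from synthetic test points and then predicts an energy-factor-like efficiency for a new draw pattern; all numbers are illustrative, not measured values from the study.

```python
import numpy as np

# Assumed linear model: average input power = idle loss + output power / eta.
eta_true, p_idle = 0.82, 0.05                      # hypothetical heater (kW idle)
p_out = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 8.0])   # kW, averaged over draw+idle
p_in = p_idle + p_out / eta_true
p_in *= 1 + 0.01 * np.random.default_rng(3).standard_normal(p_in.size)  # noise

# Two well-chosen measurements suffice in principle; here we least-squares fit.
slope, intercept = np.polyfit(p_out, p_in, 1)
eta_est = 1.0 / slope                              # recovered conversion efficiency

# Predict an energy-factor-like ratio for a new average load.
p_new = 0.7                                        # kW, alternative draw pattern
ef = p_new / (intercept + slope * p_new)
```

This is the appeal of the approach: a small number of measurements fixes `slope` and `intercept`, after which performance under arbitrary draw patterns follows from the same line.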

  16. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    A major challenge in next-generation industrial applications is to improve numerical analysis by quantifying uncertainties in predictions. In this work we present a formulation of a fully nonlinear and dispersive potential flow water wave model with random inputs for the probabilistic description of the evolution of waves. The model is analyzed using random sampling techniques and nonintrusive methods based on generalized polynomial chaos (PC). These methods allow us to accurately and efficiently estimate the probability distribution of the solution and require only the computation of the solution at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental
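
A toy illustration of the nonintrusive idea with one uncertain standard-normal input: the model is evaluated only at quadrature nodes (pseudo-spectral projection), reusing the "solver" unchanged, and compared with plain random sampling. The response function here is invented and trivially cheap, unlike the wave model.

```python
import numpy as np

# Black-box "model" of one uncertain standard-normal input (a stand-in for
# the expensive wave solver; the function itself is illustrative).
g = lambda u: np.exp(0.3 * u)

# Random sampling (Monte Carlo): many model evaluations.
rng = np.random.default_rng(7)
mc = g(rng.standard_normal(100_000)).mean()

# Nonintrusive spectral projection: evaluate the model only at 10
# Gauss-Hermite nodes (probabilists' weight exp(-x^2/2)).
x, w = np.polynomial.hermite_e.hermegauss(10)
pc = (w * g(x)).sum() / np.sqrt(2 * np.pi)

exact = np.exp(0.3 ** 2 / 2)        # E[exp(0.3 U)] for U ~ N(0,1)
print(mc, pc, exact)
```

Ten model runs match the analytic mean far more tightly than 100,000 random samples, which is the economics that motivates PC methods for expensive solvers.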

  17. High Resolution Modeling of the Thermospheric Response to Energy Inputs During the RENU-2 Rocket Flight

    Science.gov (United States)

    Walterscheid, R. L.; Brinkman, D. G.; Clemmons, J. H.; Hecht, J. H.; Lessard, M.; Fritz, B.; Hysell, D. L.; Clausen, L. B. N.; Moen, J.; Oksavik, K.; Yeoman, T. K.

    2017-12-01

    The Earth's magnetospheric cusp provides direct access of energetic particles to the thermosphere. These particles produce ionization and kinetic (particle) heating of the atmosphere. The increased ionization coupled with enhanced electric fields in the cusp produces increased Joule heating and ion drag forcing. These energy inputs cause large wind and temperature changes in the cusp region. The Rocket Experiment for Neutral Upwelling-2 (RENU-2) launched from Andoya, Norway at 0745UT on 13 December 2015 into the ionosphere-thermosphere beneath the magnetic cusp. It made measurements of the energy inputs (e.g., precipitating particles, electric fields) and the thermospheric response to these energy inputs (e.g., neutral density and temperature, neutral winds). Complementary ground based measurements were made. In this study, we use a high resolution two-dimensional time-dependent non-hydrostatic nonlinear dynamical model driven by rocket and ground based measurements of the energy inputs to simulate the thermospheric response during the RENU-2 flight. Model simulations will be compared to the corresponding measurements of the thermosphere to see what they reveal about thermospheric structure and the nature of magnetosphere-ionosphere-thermosphere coupling in the cusp. Acknowledgements: This material is based upon work supported by the National Aeronautics and Space Administration under Grants: NNX16AH46G and NNX13AJ93G. This research was also supported by The Aerospace Corporation's Technical Investment program.

  18. Input vs. Output Taxation—A DSGE Approach to Modelling Resource Decoupling

    Directory of Open Access Journals (Sweden)

    Marek Antosiewicz

    2016-04-01

    Full Text Available Environmental taxes constitute a crucial instrument aimed at reducing resource use through lower production losses, resource-leaner products, and more resource-efficient production processes. In this paper we focus on material use and apply a multi-sector dynamic stochastic general equilibrium (DSGE) model to study two types of taxation: tax on material inputs used by industry, energy, construction, and transport sectors, and tax on output of these sectors. We allow for endogenous adoption of resource-saving technologies. We calibrate the model for the EU27 area using an IO matrix. We consider taxation introduced from 2021 and simulate its impact until 2050. We compare the taxes with respect to their ability to induce reduction in material use and raise revenue. We also consider the effect of spending this revenue on reduction of labour taxation. We find that input and output taxation create contrasting incentives and have opposite effects on resource efficiency. The material input tax induces investment in efficiency-improving technology which, in the long term, results in GDP and employment being 15%–20% higher than in the case of a comparable output tax. We also find that using revenues to reduce taxes on labour has stronger beneficial effects for the input tax.

  19. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have also been made with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
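
A minimal 1-D expectation-maximization fit of a three-component Gaussian mixture, of the kind the abstract applies to HRV, on synthetic RR-interval data (all means, spreads, and sample sizes are illustrative, not clinical):

```python
import numpy as np

# Synthetic RR intervals (seconds) drawn from three overlapping regimes.
rng = np.random.default_rng(5)
rr = np.concatenate([rng.normal(0.80, 0.02, 4000),
                     rng.normal(0.90, 0.04, 3000),
                     rng.normal(1.05, 0.03, 3000)])

# EM for a 3-component 1-D Gaussian mixture.
pi = np.full(3, 1 / 3)                    # mixing weights
mu = np.array([0.7, 0.9, 1.1])            # initial means
sd = np.full(3, 0.05)                     # initial standard deviations
for _ in range(200):
    # E-step: responsibility of each Gaussian for each RR interval.
    pdf = pi * np.exp(-0.5 * ((rr[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and spreads.
    nk = resp.sum(axis=0)
    pi = nk / len(rr)
    mu = (resp * rr[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (rr[:, None] - mu) ** 2).sum(axis=0) / nk)
```

The fitted `(pi, mu, sd)` triplets are the "three Gaussians" of the abstract; each component can then be read against a band of the HRV power spectrum.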

  20. Integrate-and-fire models with an almost periodic input function

    Science.gov (United States)

    Kasprzak, Piotr; Nawrocki, Adam; Signerska-Rynkowska, Justyna

    2018-02-01

    We investigate leaky integrate-and-fire models (LIF models for short) driven by Stepanov and μ-almost periodic functions. Special attention is paid to the properties of the firing map and its displacement, which give information about the spiking behavior of the considered system. We provide conditions under which such maps are well-defined and are uniformly continuous. We show that the LIF models with Stepanov almost periodic inputs have uniformly almost periodic displacements. We also show that in the case of μ-almost periodic drives it may happen that the displacement map is uniformly continuous, but is not μ-almost periodic (and thus cannot be Stepanov or uniformly almost periodic). By allowing discontinuous inputs, we extend some previous results, showing, for example, that the firing rate for the LIF models with Stepanov almost periodic input exists and is unique. This is a starting point for the investigation of the dynamics of almost-periodically driven integrate-and-fire systems.
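
A leaky integrate-and-fire model with an almost periodic drive can be simulated directly to inspect the firing times and the displacements of the firing map. Here f(t) combines two incommensurate frequencies; the threshold, reset, and all constants are illustrative, and this drive is uniformly (not merely Stepanov) almost periodic.

```python
import numpy as np

# Almost periodic input: two incommensurate frequencies (1 and sqrt(2)).
f = lambda t: 1.5 + 0.3 * np.sin(t) + 0.3 * np.sin(np.sqrt(2) * t)

# LIF dynamics: dv/dt = -v + f(t); fire and reset to 0 at threshold 1.
dt, T = 1e-3, 500.0
v, spikes = 0.0, []
for i in range(int(T / dt)):
    t = i * dt
    v += (-v + f(t)) * dt
    if v >= 1.0:                 # threshold crossing: record firing time
        spikes.append(t)
        v = 0.0

spikes = np.asarray(spikes)
isi = np.diff(spikes)            # displacements of the firing map
rate = len(spikes) / T           # empirical firing rate over [0, T]
```

The interspike intervals vary with the almost periodic modulation (f dips below threshold-sustaining levels and firing stalls), while the long-run rate settles toward the unique firing rate whose existence the paper establishes.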

  1. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Science.gov (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics such as input derivatives at the feature level and adopting artificial intelligence and machine learning techniques originally designed for problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner. Self-Organizing Maps (SOMs) model the spatial aspect of the problem, and Markov models its temporal counterpart. Incorporation of adjacency, both in training and classification, endows the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority over Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Input vs. Output Taxation—A DSGE Approach to Modelling Resource Decoupling

    OpenAIRE

    Marek Antosiewicz; Piotr Lewandowski; Jan Witajewski-Baltvilks

    2016-01-01

    Environmental taxes constitute a crucial instrument aimed at reducing resource use through lower production losses, resource-leaner products, and more resource-efficient production processes. In this paper we focus on material use and apply a multi-sector dynamic stochastic general equilibrium (DSGE) model to study two types of taxation: tax on material inputs used by industry, energy, construction, and transport sectors, and tax on output of these sectors. We allow for endogenous adoption of...

  3. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.

  4. Urban Landscape Characterization Using Remote Sensing Data For Input into Air Quality Modeling

    Science.gov (United States)

    Quattrochi, Dale A.; Estes, Maurice G., Jr.; Crosson, William; Khan, Maudood

    2005-01-01

    The urban landscape is inherently complex and this complexity is not adequately captured in the air quality models that are used to assess whether urban areas are in attainment of EPA air quality standards, particularly for ground-level ozone. This inadequacy of air quality models to sufficiently respond to the heterogeneous nature of the urban landscape can impact how well these models predict ozone pollutant levels over metropolitan areas and, ultimately, whether cities exceed EPA ozone air quality standards. We are exploring the utility of high-resolution remote sensing data and urban growth projections as improved inputs to meteorological and air quality models, focusing on the Atlanta, Georgia metropolitan area as a case study. The National Land Cover Dataset at 30 m resolution is being used as the land use/land cover input and aggregated to the 4 km scale for the MM5 mesoscale meteorological model and the Community Multiscale Air Quality (CMAQ) modeling schemes. Use of these data has been found to better characterize low-density/suburban development as compared with the USGS 1 km land use/land cover data that have traditionally been used in modeling. Air quality prediction for future scenarios to 2030 is being facilitated by land use projections using a spatial growth model. Land use projections were developed using the 2030 Regional Transportation Plan developed by the Atlanta Regional Commission. This allows the State Environmental Protection Agency to evaluate how these transportation plans will affect future air quality.

  5. Artificial neural network modelling of biological oxygen demand in rivers at the national level with input selection based on Monte Carlo simulations.

    Science.gov (United States)

    Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor

    2015-03-01

    Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level is only available for 28 of 35 listed European countries for the period prior to 2008, among which 46% of the data is missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economic/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% fewer inputs than the initial model, and a comparison with a multiple linear regression model, trained and tested on the same input variables and evaluated with multiple statistical performance indicators, confirmed the advantage of the GRNN model. Sensitivity analysis has shown that the inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
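    The Monte Carlo flavour of input selection can be sketched on synthetic data: score many randomly drawn input subsets with a simple held-out-error criterion and keep the best. The stand-in dataset, the plain least-squares scorer and all sizes below are assumptions for illustration (the paper uses a GRNN and real national-level data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in dataset: 200 samples, 8 candidate inputs, of which only
# the first three actually drive the target.
X = rng.normal(size=(200, 8))
y = X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=200)

X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

def rmse(subset):
    """Fit ordinary least squares on a subset of inputs, score on held-out data."""
    coef, *_ = np.linalg.lstsq(X_train[:, subset], y_train, rcond=None)
    return float(np.sqrt(np.mean((X_test[:, subset] @ coef - y_test) ** 2)))

# Monte Carlo input selection: evaluate random input subsets, keep the best.
best_subset, best_score = None, np.inf
for _ in range(500):
    k = int(rng.integers(1, 9))
    subset = sorted(int(i) for i in rng.choice(8, size=k, replace=False))
    score = rmse(subset)
    if score < best_score:
        best_subset, best_score = subset, score
```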

  6. Confounding of three binary-variables counterfactual model

    OpenAIRE

    Liu, Jingwei; Hu, Shuang

    2011-01-01

    Confounding in a three-binary-variable counterfactual model is discussed in this paper. According to the relationship between the control variable and the covariate, we investigate three counterfactual models: the control variable is independent of the covariate, the control variable affects the covariate, and the covariate affects the control variable. Using ancillary information based on conditional independence hypotheses, the sufficient conditions...

  7. DISSECTING MAGNETAR VARIABILITY WITH BAYESIAN HIERARCHICAL MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Huppenkothen, Daniela; Elenbaas, Chris; Watts, Anna L.; Horst, Alexander J. van der [Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands); Brewer, Brendon J. [Department of Statistics, The University of Auckland, Private Bag 92019, Auckland 1142 (New Zealand); Hogg, David W. [Center for Data Science, New York University, 726 Broadway, 7th Floor, New York, NY 10003 (United States); Murray, Iain [School of Informatics, University of Edinburgh, Edinburgh EH8 9AB (United Kingdom); Frean, Marcus [School of Engineering and Computer Science, Victoria University of Wellington (New Zealand); Levin, Yuri [Monash Center for Astrophysics and School of Physics, Monash University, Clayton, Victoria 3800 (Australia); Kouveliotou, Chryssa, E-mail: daniela.huppenkothen@nyu.edu [Astrophysics Office, ZP 12, NASA/Marshall Space Flight Center, Huntsville, AL 35812 (United States)

    2015-09-01

    Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behavior, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favored models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo sampling augmented with reversible jumps between models with different numbers of parameters, we characterize the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organized criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.
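    The empirical burst model described above, a superposition of spike-like components on a background, can be sketched directly; the spike shape (exponential rise to a peak, exponential decay after) and every number below are illustrative assumptions, and the paper's actual inference (reversible-jump MCMC over the number of components) is not reproduced here:

```python
import numpy as np

def spike(t, t0, amp, rise, fall):
    """One burst component: exponential rise up to the peak time t0,
    exponential decay afterwards."""
    return np.where(t < t0,
                    amp * np.exp(-(t0 - t) / rise),
                    amp * np.exp(-(t - t0) / fall))

t = np.linspace(0.0, 1.0, 4001)   # time in seconds

# Illustrative burst: constant background plus three spike components
# (peak time, amplitude, rise and fall timescales are invented numbers).
components = [(0.20, 50.0, 0.010, 0.040),
              (0.35, 80.0, 0.010, 0.060),
              (0.60, 30.0, 0.020, 0.050)]
background = 5.0
burst = background + sum(spike(t, *p) for p in components)
```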

  8. Responses of two nonlinear microbial models to warming or increased carbon input

    Science.gov (United States)

    Wang, Y. P.; Jiang, J.; Chen-Charpentier, B.; Agusto, F. B.; Hastings, A.; Hoffman, F.; Rasmussen, M.; Smith, M. J.; Todd-Brown, K.; Wang, Y.; Xu, X.; Luo, Y. Q.

    2015-09-01

    A number of nonlinear microbial models of soil carbon decomposition have been developed. Some of them have been applied globally but have yet to be shown to realistically represent soil carbon dynamics in the field. Therefore a thorough analysis of their key differences will be very useful for the future development of these models. Here we compare two nonlinear microbial models of soil carbon decomposition: one is based on reverse Michaelis-Menten kinetics (model A) and the other on regular Michaelis-Menten kinetics (model B). Using a combination of analytic solutions and numerical simulations, we find that the oscillatory response of the carbon pools in model A to a small perturbation in the initial pool sizes has a higher frequency and damps faster than in model B. In response to soil warming, soil carbon always decreases in model A, but likely decreases in cool regions and increases in warm regions in model B. Maximum CO2 efflux from soil carbon decomposition (Fmax) after an increased carbon addition decreases with an increase in soil temperature in both models; the sensitivity of Fmax to the amount of carbon input increases with soil temperature in model A, but decreases monotonically with an increase in soil temperature in model B. These differences in the responses to soil warming and carbon input between the two nonlinear models can be used to differentiate which model is more realistic with field or laboratory experiments. This will lead to a better understanding of the significance of soil microbial processes in the responses of soil carbon to future climate change at regional or global scales.
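    The two kinetic formulations can be contrasted in a minimal two-pool sketch (soil carbon C, microbial biomass B): regular Michaelis-Menten saturates in the substrate C, reverse Michaelis-Menten in the biomass B. The parameter values, pool sizes and simple Euler scheme are illustrative assumptions, not the models' published calibrations:

```python
def simulate(kind, years=1000.0, dt=0.01, carbon_input=1.0):
    """Euler integration of a minimal two-pool soil model.
    kind='regular' uses Michaelis-Menten kinetics in the substrate C,
    kind='reverse' uses it in the microbial biomass B.
    All parameters are illustrative placeholders."""
    vmax, km, eps, mortality = 2.0, 200.0, 0.4, 0.2
    C, B = 100.0, 2.0   # initial soil carbon and microbial biomass
    for _ in range(int(years / dt)):
        if kind == 'regular':
            decomp = vmax * B * C / (km + C)
        else:  # reverse Michaelis-Menten
            decomp = vmax * C * B / (km + B)
        C += dt * (carbon_input - decomp)
        B += dt * (eps * decomp - mortality * B)
    return C, B

# For identical carbon input, the long-run pools differ between formulations.
C_reg, B_reg = simulate('regular')
C_rev, B_rev = simulate('reverse')
```

Under these placeholder parameters the regular model settles near C = km/3 ≈ 66.7 while the reverse model settles near C ≈ 50.5, with B = 2 in both; the point is only that the two rate laws give different equilibria and transient behaviour for the same forcing.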

  9. Dispersion modeling of accidental releases of toxic gases - Sensitivity study and optimization of the meteorological input

    Science.gov (United States)

    Baumann-Stanzer, K.; Stenzel, S.

    2009-04-01

    Several air dispersion models are available for the prediction and simulation of hazard areas associated with accidental releases of toxic gases. Most model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. Uncertainties in the meteorological input, together with incorrect estimates of the source, play a critical role in the model results. The research project RETOMOD (reference scenario calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program at the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were 1. a sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases, and 2. a comparison of several model packages (based on reference scenarios) in order to estimate their utility for the fire brigades. This presentation gives a short introduction to the project and presents the results of task 1 (meteorological input). The results of task 2 are presented by Stenzel and Baumann-Stanzer in this session. For the aims of this project, the observation-based analysis and forecasting system INCA, developed at the Central Institute for Meteorology and Geodynamics (ZAMG), was used. INCA (Integrated Nowcasting through Comprehensive Analysis) data were calculated with 1 km horizontal resolution and

  10. Development of algorithm for depreciation costs allocation in dynamic input-output industrial enterprise model

    Directory of Open Access Journals (Sweden)

    Keller Alevtina

    2017-01-01

    Full Text Available The article considers the issue of allocation of depreciation costs in the dynamic input-output model of an industrial enterprise. Accounting for depreciation costs in such a model improves the policy of fixed assets management. It is particularly relevant to develop an algorithm for the allocation of depreciation costs in the construction of a dynamic input-output model of an industrial enterprise, since such enterprises have a significant amount of fixed assets. Provided the algorithm is adequate, the model allows one to evaluate the appropriateness of investments in fixed assets and to study the final financial results of an industrial enterprise depending on management decisions in depreciation policy. It is necessary to note that the model in question is always degenerate for the enterprise. This is caused by the presence of zero rows in the matrix of capital expenditures, corresponding to structural elements unable to generate fixed assets (part of the service units, households, corporate consumers). The paper presents the algorithm for the allocation of depreciation costs for the model. This algorithm was developed by the authors and served as the basis for further development of the flowchart for subsequent implementation in software. The construction of such an algorithm and its use in dynamic input-output models of industrial enterprises is motivated by the internationally accepted effectiveness of input-output models for national and regional economic systems. This is what allows us to consider that the solutions discussed in the article are of interest to economists of various industrial enterprises.
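    The degeneracy mentioned above (zero rows for units that generate no fixed assets) and a proportional allocation of depreciation can be sketched on a toy capital-expenditure matrix; the matrix, the asset types and the straight-line rates are invented for illustration and do not reproduce the authors' algorithm:

```python
import numpy as np

# Toy capital-expenditure matrix: rows are structural units of the enterprise,
# columns are asset types. A zero row marks a unit that generates no fixed
# assets (service unit, household, corporate consumer) and makes the model
# degenerate. All numbers are illustrative.
K = np.array([[100.0, 50.0],
              [  0.0,  0.0],   # zero row: no fixed assets
              [ 40.0, 10.0]])
rates = np.array([0.10, 0.20])  # assumed straight-line depreciation rates

# Depreciation cost per structural unit: rate applied to each asset type, summed.
depreciation = K @ rates
total = float(depreciation.sum())
```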

  11. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

    Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, in which the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide-aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus among recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate is less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, and diurnal effects are effectively averaged out.

  12. Natural climate variability in a coupled model

    International Nuclear Information System (INIS)

    Zebiak, S.E.; Cane, M.A.

    1990-01-01

    Multi-century simulations with a simplified coupled ocean-atmosphere model are described. These simulations reveal an impressive range of variability on decadal and longer time scales, in addition to the dominant interannual El Niño/Southern Oscillation signal that the model was originally designed to simulate. Based on a very large sample of century-long simulations, it is nonetheless possible to identify distinct model parameter sensitivities, which are described here in terms of selected indices. Preliminary experiments motivated by general circulation model results for increasing greenhouse gases suggest a definite sensitivity to model global warming. While these results are not definitive, they strongly suggest that coupled air-sea dynamics figure prominently in global change and must be included in models for reliable predictions.

  13. Discrete element modelling (DEM) input parameters: understanding their impact on model predictions using statistical analysis

    Science.gov (United States)

    Yan, Z.; Wilkinson, S. K.; Stitt, E. H.; Marigo, M.

    2015-09-01

    Selection or calibration of particle property input parameters is one of the key problematic aspects for the implementation of the discrete element method (DEM). In the current study, a parametric multi-level sensitivity method is employed to understand the impact of the DEM input particle properties on the bulk responses for a given simple system: discharge of particles from a flat bottom cylindrical container onto a plate. In this case study, particle properties, such as Young's modulus, friction parameters and coefficient of restitution were systematically changed in order to assess their effect on material repose angles and particle flow rate (FR). It was shown that inter-particle static friction plays a primary role in determining both final angle of repose and FR, followed by the role of inter-particle rolling friction coefficient. The particle restitution coefficient and Young's modulus were found to have insignificant impacts and were strongly cross correlated. The proposed approach provides a systematic method that can be used to show the importance of specific DEM input parameters for a given system and then potentially facilitates their selection or calibration. It is concluded that shortening the process for input parameters selection and calibration can help in the implementation of DEM.
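    The one-at-a-time flavour of such a sensitivity study can be sketched with a toy surrogate for the bulk response; the functional form and coefficients below are pure invention to illustrate the ranking procedure, not DEM physics or the paper's multi-level method:

```python
# Toy surrogate for a DEM bulk response (angle of repose); the linear form and
# its coefficients are invented solely to illustrate one-at-a-time ranking.
def angle_of_repose(static_friction, rolling_friction, restitution, youngs_modulus):
    return (30.0 * static_friction + 8.0 * rolling_friction
            + 0.5 * restitution + 1e-9 * youngs_modulus)

baseline = dict(static_friction=0.5, rolling_friction=0.1,
                restitution=0.6, youngs_modulus=5e6)

# Perturb each input by +/-20% in turn and record the spread in the response.
sensitivity = {}
for name, value in baseline.items():
    lo = dict(baseline, **{name: 0.8 * value})
    hi = dict(baseline, **{name: 1.2 * value})
    sensitivity[name] = abs(angle_of_repose(**hi) - angle_of_repose(**lo))

# Rank parameters by their effect on the bulk response.
ranking = sorted(sensitivity, key=sensitivity.get, reverse=True)
```

With these invented coefficients the ranking mirrors the paper's qualitative finding: static friction first, rolling friction second, restitution and Young's modulus far behind.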

  14. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.
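    The rank-based comparison used above can be sketched directly: Spearman's rank correlation is the Pearson correlation of the ranks, so two model outputs with different magnitudes but the same spatial pattern correlate perfectly. The two toy output vectors are invented stand-ins for InVEST and WaSSI results:

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks
    (no tie handling, which suffices for distinct values)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Invented outputs of two ES models: different magnitudes, same spatial pattern.
invest = np.array([1.0, 3.0, 2.0, 8.0, 5.0])
wassi = np.array([10.0, 31.0, 22.0, 95.0, 50.0])
rho = spearman(invest, wassi)
```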

  15. VSC Input-Admittance Modeling and Analysis Above the Nyquist Frequency for Passivity-Based Stability Assessment

    DEFF Research Database (Denmark)

    Harnefors, Lennart; Finger, Raphael; Wang, Xiongfei

    2017-01-01

    The interconnection stability of a grid-connected voltage-source converter (VSC) can be assessed via the dissipative properties of its input admittance. In this paper, the modeling of the current control loop is revisited with the aim to improve the accuracy of the input-admittance model above the Nyquist frequency.

  16. The Mixed Effects of Phonetic Input Variability on Relative Ease of L2 Learning: Evidence from English Learners’ Production of French and Spanish Stop-Rhotic Clusters

    Directory of Open Access Journals (Sweden)

    Laura Colantoni

    2018-04-01

    Full Text Available We examined the consequences of within-category phonetic variability in the input on non-native learners' production accuracy. Following previous empirical research on the L2 acquisition of phonetics and the lexicon, we tested the hypothesis that phonetic variability facilitates learning by analyzing English-speaking learners' production of French and Spanish word-medial stop-rhotic clusters, which differ from their English counterparts in terms of stop and rhotic voicing and manner. Crucially, for both the stops and rhotics, there are differences in within-language variability. Twenty native speakers per language and 39 L1 English learners of French (N = 20) and Spanish (N = 19) of intermediate and advanced proficiency performed a carrier-sentence reading task. A given parameter was deemed to have been acquired when the learners' production fell within the range of attested native speaker values. An acoustic analysis of the data partially supports the facilitative effect of phonetic variability. To account for the unsupported hypotheses, we discuss a number of issues, including the difficulty of measuring variability, the need to determine the extent to which learners' perception shapes intake, and the challenge of teasing apart the effects of input variability from those of transferred L1 articulatory patterns.

  17. The MARINA model (Model to Assess River Inputs of Nutrients to seAs): Model description and results for China.

    Science.gov (United States)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-08-15

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA Nutrient Model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for China of the Global NEWS-2 (Nutrient Export from WaterSheds) model with an improved approach for nutrient losses from animal production and population. We use the model to quantify dissolved inorganic and organic nitrogen (N) and phosphorus (P) export by six large rivers draining into the Bohai Gulf (Yellow, Hai, Liao), Yellow Sea (Yangtze, Huai) and South China Sea (Pearl) in 1970, 2000 and 2050. We addressed uncertainties in the MARINA Nutrient model. Between 1970 and 2000 river export of dissolved N and P increased by a factor of 2-8 depending on sea and nutrient form. Thus, the risk for coastal eutrophication increased. Direct losses of manure to rivers contribute to 60-78% of nutrient inputs to the Bohai Gulf and 20-74% of nutrient inputs to the other seas in 2000. Sewage is an important source of dissolved inorganic P, and synthetic fertilizers of dissolved inorganic N. Over half of the nutrients exported by the Yangtze and Pearl rivers originated from human activities in downstream and middlestream sub-basins. The Yellow River exported up to 70% of dissolved inorganic N and P from downstream sub-basins and of dissolved organic N and P from middlestream sub-basins. Rivers draining into the Bohai Gulf are drier, and thus transport fewer nutrients. For the future we calculate further increases in river export of nutrients. The MARINA Nutrient model quantifies the main sources of coastal water pollution for sub-basins. This information can contribute to formulation of
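    The source attribution at the heart of such a model can be sketched as inputs multiplied by per-source delivery fractions; every number below is an invented placeholder, not a MARINA calibration value:

```python
# Illustrative sub-basin nutrient inputs to rivers (kt N/yr) by source, and an
# assumed fraction of each input delivered to the river mouth. All numbers are
# made up for illustration; they are not MARINA calibration values.
inputs = {'manure': 120.0, 'synthetic_fertilizer': 80.0,
          'sewage': 30.0, 'background': 10.0}
export_fraction = {'manure': 0.5, 'synthetic_fertilizer': 0.3,
                   'sewage': 0.8, 'background': 0.2}

# River export by source, total export, and each source's share of the total.
export_by_source = {s: inputs[s] * export_fraction[s] for s in inputs}
total_export = sum(export_by_source.values())
shares = {s: v / total_export for s, v in export_by_source.items()}
```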

  18. Assessment of input function distortions on kinetic model parameters in simulated dynamic 82Rb PET perfusion studies

    International Nuclear Information System (INIS)

    Meyer, Carsten; Peligrad, Dragos-Nicolae; Weibrecht, Martin

    2007-01-01

    Cardiac ⁸²Rb dynamic PET studies allow quantifying absolute myocardial perfusion by using tracer kinetic modeling. Here, the accurate measurement of the input function, i.e. the tracer concentration in blood plasma, is a major challenge. This measurement is deteriorated by inappropriate temporal sampling, spillover, etc. Such effects may influence the measured input peak value and the measured blood pool clearance. The aim of our study is to evaluate the effect of input function distortions on the myocardial perfusion as estimated by the model. To this end, we simulate noise-free myocardium time activity curves (TACs) with a two-compartment kinetic model. The input function to the model is a generic analytical function. Distortions of this function have been introduced by varying its parameters. Using the distorted input function, the compartment model has been fitted to the simulated myocardium TAC. This analysis has been performed for various sets of model parameters covering a physiologically relevant range. The evaluation shows that a ±10% error in the input peak value can easily lead to a ±10–25% error in the model parameter K₁, which relates to myocardial perfusion. Variations in the input function tail are generally less relevant. We conclude that an accurate estimation, especially of the plasma input peak, is crucial for a reliable kinetic analysis and blood flow estimation.
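    The mechanism can be sketched with a one-tissue-compartment convolution model: simulate a TAC with the true input, then re-fit K₁ using an input whose peak is overestimated. The analytical input shape, rate constants and simple scale-factor fit below are illustrative assumptions, not the study's actual generic input function or fitting procedure:

```python
import numpy as np

dt = 1.0                       # sampling interval in seconds
t = np.arange(0.0, 300.0, dt)

def input_function(peak_scale=1.0):
    """Assumed plasma input: a gamma-variate-like bolus plus a slow tail.
    Only the bolus is scaled, mimicking an error in the measured peak."""
    bolus = (t / 30.0) * np.exp(1.0 - t / 30.0)
    tail = 0.05 * (1.0 - np.exp(-t / 60.0))
    return peak_scale * bolus + tail

def tissue_curve(cp, k1, k2):
    """One-tissue-compartment model: C_t = K1 * (Cp convolved with exp(-k2*t))."""
    return k1 * dt * np.convolve(cp, np.exp(-k2 * t))[:t.size]

true_k1, true_k2 = 0.8, 0.1
tac = tissue_curve(input_function(1.0), true_k1, true_k2)

def fit_k1(cp):
    """With k2 held fixed, K1 is the least-squares scale factor between the
    model curve and the simulated TAC."""
    basis = tissue_curve(cp, 1.0, true_k2)
    return float((basis @ tac) / (basis @ basis))

k1_correct = fit_k1(input_function(1.0))     # input measured correctly
k1_distorted = fit_k1(input_function(1.1))   # input peak overestimated by 10%
```

With the correct input the fit recovers K₁ = 0.8 exactly; with a 10% peak overestimate the fitted K₁ drops by roughly the same relative amount, illustrating how input-peak errors propagate into the perfusion-related parameter.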

  19. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

    Full Text Available Accurate knowledge of snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and its performance is analysed. The scaling method is only applied if it is snowing; for rainfall, the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of the spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4 to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted, and the modelling performance of spatial snow distribution is improved.
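    The core of the scaling step can be sketched on a toy grid: where it snows, multiply the interpolated precipitation by the snow-depth map normalized to its mean over the snowing cells; where it rains, keep the interpolated field. The grid values and the 1 °C phase threshold below are illustrative assumptions:

```python
import numpy as np

# Interpolated precipitation field (mm) and an ADS-derived snow-depth map (m)
# on a toy 4x4 grid; all values are invented for illustration.
precip = np.full((4, 4), 10.0)
snow_depth = np.tile(np.array([0.5, 1.0, 1.5, 2.0]), (4, 1))
air_temp = np.tile(np.array([[-5.0], [-2.0], [0.0], [5.0]]), (1, 4))  # degC per row

# Apply the snow-depth pattern only where it snows (air temperature at or
# below an assumed 1 degC threshold); rain keeps the interpolated field.
snowing = air_temp <= 1.0
scale = snow_depth / snow_depth[snowing].mean()
precip_scaled = np.where(snowing, precip * scale, precip)
```

Normalizing by the mean snow depth over the snowing cells redistributes precipitation spatially while preserving its mean over those cells.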

  20. Input frequency and lexical variability in phonological development: a survival analysis of word-initial cluster production.

    Science.gov (United States)

    Ota, Mitsuhiko; Green, Sam J

    2013-06-01

    Although it has often been hypothesized that children learn to produce new sound patterns first in frequently heard words, the available evidence in support of this claim is inconclusive. To re-examine this question, we conducted a survival analysis of word-initial consonant clusters produced by three children in the Providence Corpus (0;11-4;0). The analysis took account of several lexical factors in addition to lexical input frequency, including the age of first production, production frequency, neighborhood density and number of phonemes. The results showed that lexical input frequency was a significant predictor of the age at which the accuracy level of cluster production in each word first reached 80%. The magnitude of the frequency effect differed across cluster types. Our findings indicate that some of the between-word variance found in the development of sound production can indeed be attributed to the frequency of words in the child's ambient language.
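
Survival analysis of this kind estimates, for each age, the proportion of words that have not yet reached the production criterion. A minimal Kaplan-Meier estimator over made-up data (the paper's analysis additionally includes covariates such as input frequency) can be sketched as:

```python
def kaplan_meier(ages, events):
    """Kaplan-Meier survival curve.  `ages` holds the age (in months) at
    which each word reached the accuracy criterion (event=True) or left
    the study without reaching it (event=False, censored)."""
    pairs = sorted(zip(ages, events))
    n_at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(1 for a, e in pairs if a == t and e)    # events at age t
        ties = sum(1 for a, _ in pairs if a == t)       # all items at age t
        if d:
            surv *= (n_at_risk - d) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= ties
        i += ties
    return curve

# Ages (months) at which four hypothetical words hit 80% accuracy;
# the third word is censored (criterion never observed).
curve = kaplan_meier([12, 14, 14, 16], [True, True, False, True])
```

Each drop in the curve marks an age at which at least one word reached the criterion, scaled by how many words were still "at risk".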

  1. Modeling the Indonesian Consumer Price Index using a multi-input intervention model (Permodelan Indeks Harga Konsumen Indonesia dengan Menggunakan Model Intervensi Multi Input)

    KAUST Repository

    Novianti, Putri Wikie

    2017-01-24

    There are some events which are expected to affect the CPI's fluctuation, i.e. the 1997/1998 financial crisis, fuel price rises, base year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price rises and four base year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on the CPI. Most intervention research done so far involves only a single intervention input, either a step or a pulse function. A multi-input intervention model was used for the Indonesian CPI because several events were expected to affect it. Based on the results, those events did affect the CPI. Additionally, other events, such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI too. In general, those events had a positive effect on the CPI, except the events of April 2002 and July 2003, which had negative effects.
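
Step and pulse interventions of the kind combined in a multi-input model can be sketched as regressor columns of a design matrix; the event names and dates below are placeholders, not the paper's estimates.

```python
def step(n, t0):
    """Step intervention: 0 before t0, 1 from t0 onward (permanent shift)."""
    return [1.0 if t >= t0 else 0.0 for t in range(n)]

def pulse(n, t0):
    """Pulse intervention: 1 only at t0 (one-period shock)."""
    return [1.0 if t == t0 else 0.0 for t in range(n)]

# One regressor column per event; kinds and dates are hypothetical.
n = 12
events = {"crisis": ("step", 5), "fuel_rise": ("pulse", 8)}
X = {name: step(n, t0) if kind == "step" else pulse(n, t0)
     for name, (kind, t0) in events.items()}
```

In a full intervention model these columns (possibly filtered through transfer functions) enter a regression with ARIMA errors, so each event's magnitude and duration can be read off the fitted coefficients.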

  2. Input-Output Modeling for Urban Energy Consumption in Beijing: Dynamics and Comparison

    Science.gov (United States)

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making. PMID:24595199
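
The embodied (direct plus indirect) intensities behind such an input-output analysis come from the Leontief inverse; a minimal sketch with a hypothetical 3-sector economy (the coefficients are made up, not Beijing's):

```python
import numpy as np

# A: technical-coefficient matrix (inputs per unit output),
# e: direct energy intensity per sector, y: final demand.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.05, 0.20]])
e = np.array([2.0, 0.5, 1.0])        # direct intensities (energy/output)
y = np.array([100.0, 50.0, 30.0])    # final demand by sector

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
total_intensity = e @ L              # embodied = direct + indirect
embodied_energy = total_intensity @ y
```

Because `L = I + A + A^2 + ...`, each embodied intensity is at least the direct one, and the gap between embodied and direct energy for a given final demand is exactly the indirect consumption the abstract discusses.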

  3. Estimating net present value variability for deterministic models

    NARCIS (Netherlands)

    van Groenendaal, W.J.H.

    1995-01-01

    For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large,

  4. Input data for mathematical modeling and numerical simulation of switched reluctance machines

    Directory of Open Access Journals (Sweden)

    Ali Asghar Memon

    2017-10-01

    Full Text Available The modeling and simulation of Switched Reluctance (SR) machines and drives is challenging because of the machine's doubly salient structure and magnetic saturation. This paper presents the input data in the form of experimentally obtained magnetization characteristics. This data was used for the computer-simulation-based model of the SR machine in “Selecting Best Interpolation Technique for Simulation Modeling of Switched Reluctance Machine” [1] and “Modeling of Static Characteristics of Switched Reluctance Motor” [2]. This data is the primary source of the other required data tables, of co-energy and static torque, which are also essential for the simulation and can be derived from it. The procedure and experimental setup for collection of the data are presented in detail.

  5. Input data for mathematical modeling and numerical simulation of switched reluctance machines.

    Science.gov (United States)

    Memon, Ali Asghar; Shaikh, Muhammad Mujtaba

    2017-10-01

    The modeling and simulation of Switched Reluctance (SR) machines and drives is challenging because of the machine's doubly salient structure and magnetic saturation. This paper presents the input data in the form of experimentally obtained magnetization characteristics. This data was used for the computer-simulation-based model of the SR machine in "Selecting Best Interpolation Technique for Simulation Modeling of Switched Reluctance Machine" [1] and "Modeling of Static Characteristics of Switched Reluctance Motor" [2]. This data is the primary source of the other required data tables, of co-energy and static torque, which are also essential for the simulation and can be derived from it. The procedure and experimental setup for collection of the data are presented in detail.

  6. Learning atomic human actions using variable-length Markov models.

    Science.gov (United States)

    Liang, Yu-Ming; Shih, Sheng-Wen; Shih, Arthur Chun-Chieh; Liao, Hong-Yuan Mark; Lin, Cheng-Chung

    2009-02-01

    Visual analysis of human behavior has generated considerable interest in the field of computer vision because of its wide spectrum of potential applications. Human behavior can be segmented into atomic actions, each of which indicates a basic and complete movement. Learning and recognizing atomic human actions are essential to human behavior analysis. In this paper, we propose a framework for handling this task using variable-length Markov models (VLMMs). The framework is comprised of the following two modules: a posture labeling module and a VLMM atomic action learning and recognition module. First, a posture template selection algorithm, based on a modified shape context matching technique, is developed. The selected posture templates form a codebook that is used to convert input posture sequences into discrete symbol sequences for subsequent processing. Then, the VLMM technique is applied to learn the training symbol sequences of atomic actions. Finally, the constructed VLMMs are transformed into hidden Markov models (HMMs) for recognizing input atomic actions. This approach combines the advantages of the excellent learning function of a VLMM and the fault-tolerant recognition ability of an HMM. Experiments on realistic data demonstrate the efficacy of the proposed system.
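
The core of a VLMM is next-symbol prediction that backs off from the longest matching context to shorter ones. A minimal sketch over codebook symbols (here plain characters, with a made-up training sequence, not the paper's posture codebook):

```python
from collections import defaultdict

def train_vlmm(seq, max_order=3):
    """Count next-symbol frequencies for every context up to max_order."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq)):
        for k in range(max_order + 1):
            if i - k < 0:
                break
            ctx = tuple(seq[i - k:i])      # the k symbols before position i
            counts[ctx][seq[i]] += 1
    return counts

def predict(counts, history, max_order=3):
    """Back off from the longest matching context to shorter ones."""
    for k in range(min(max_order, len(history)), -1, -1):
        ctx = tuple(history[len(history) - k:])
        if ctx in counts:
            dist = counts[ctx]
            return max(dist, key=dist.get)  # most frequent continuation
    return None

model = train_vlmm(list("abcabcabd"))
```

A real VLMM additionally prunes contexts whose conditional distribution adds little information; the backoff structure above is what makes the context length "variable".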

  7. Modelling Effects on Grid Cells of Sensory Input During Self-motion

    Science.gov (United States)

    2016-04-20

    Olton et al. 1979, 1986; Morris et al. 1982), and hence their accurate updating on the basis of sensory features appears to be essential to memory-guided... J Physiol (2016) pp 1–14, Symposium Review: Modelling effects on grid cells of sensory input during self-motion. Florian Raudies, James R. Hinman and Michael E. Hasselmo, Center for Systems Neuroscience, Centre for Memory and Brain

  8. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP)... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
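
The ADMM iteration for an input-constrained QP alternates an unconstrained quadratic solve with a projection onto the input bounds. A dense toy version is below; the paper's contribution is exploiting the MPC problem structure so each step is far cheaper, which this sketch does not do.

```python
import numpy as np

def admm_box_qp(H, g, lo, hi, rho=1.0, iters=200):
    """ADMM for min 0.5*x'Hx + g'x  subject to  lo <= x <= hi."""
    n = len(g)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = np.linalg.inv(H + rho * np.eye(n))   # factor once, reuse each step
    for _ in range(iters):
        x = M @ (rho * (z - u) - g)   # unconstrained quadratic step
        z = np.clip(x + u, lo, hi)    # projection onto the input bounds
        u = u + x - z                 # scaled dual update
    return z

# Toy problem: unconstrained minimiser is [1, 1]; bounds clip it to 0.5.
sol = admm_box_qp(np.diag([2.0, 2.0]), np.array([-2.0, -2.0]), 0.0, 0.5)
```

For the LQCP, the x-update factors into a banded (Riccati-like) solve along the horizon, which is where the quadratic-in-state, linear-in-horizon complexity comes from.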

  9. Multimodal Similarity Gaussian Process Latent Variable Model.

    Science.gov (United States)

    Song, Guoli; Wang, Shuhui; Huang, Qingming; Tian, Qi

    2017-09-01

    Data from real applications involve multiple modalities representing content with the same semantics from complementary aspects. However, relations among heterogeneous modalities are simply treated as observation-to-fit by existing work, and the parameterized modality-specific mapping functions lack flexibility in directly adapting to the content divergence and semantic complicacy in multimodal data. In this paper, we build our work on the Gaussian process latent variable model (GPLVM) to learn non-parametric mapping functions and transform heterogeneous modalities into a shared latent space. We propose the multimodal Similarity Gaussian Process latent variable model (m-SimGP), which learns the mapping functions between the intra-modal similarities and the latent representation. We further propose the multimodal distance-preserved similarity GPLVM (m-DSimGP) to preserve the intra-modal global similarity structure, and the multimodal regularized similarity GPLVM (m-RSimGP), which encourages similar/dissimilar points to remain similar/dissimilar in the latent space. We also propose m-DRSimGP, which combines the distance preservation of m-DSimGP and the semantic preservation of m-RSimGP to learn the latent representation. The overall objective functions of the four models are solved by simple and scalable gradient descent techniques. They can be applied to various tasks to discover nonlinear correlations and to obtain comparable low-dimensional representations for heterogeneous modalities. On five widely used real-world data sets, our approaches outperform existing models on cross-modal content retrieval and multimodal classification.

  10. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffers changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth, or EMPD, model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of the house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once its inputs are known, is an accurate moisture buffering model.
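
Extracting buffering parameters amounts to a least-squares fit of an analytical step response to the measured absorption curve. A simplified single-exponential stand-in is sketched below (the real EMPD solution and its three parameters are more involved; the data here are synthetic):

```python
import numpy as np

def absorption_model(t, A, tau):
    """Single-exponential response to a step change in humidity,
    a simplified stand-in for the EMPD analytical solution."""
    return A * (1.0 - np.exp(-t / tau))

def fit_absorption(t, m, taus):
    """Least-squares fit: for each candidate tau the amplitude A has a
    closed-form solution; keep the (A, tau) with the smallest residual."""
    best = None
    for tau in taus:
        basis = 1.0 - np.exp(-t / tau)
        A = (basis @ m) / (basis @ basis)       # optimal amplitude for this tau
        res = np.sum((m - A * basis) ** 2)
        if best is None or res < best[0]:
            best = (res, A, tau)
    return best[1], best[2]

t = np.linspace(0, 48, 200)                      # hours after the RH step
m_true = absorption_model(t, A=0.9, tau=6.0)     # synthetic "measured" curve
A_fit, tau_fit = fit_absorption(t, m_true, np.linspace(1, 20, 191))
```

Reducing the nonlinear fit to a one-dimensional search over `tau` with a closed-form amplitude keeps the sketch dependency-free; a real workflow would use a general nonlinear least-squares routine.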

  11. Dual-input two-compartment pharmacokinetic model of dynamic contrast-enhanced magnetic resonance imaging in hepatocellular carcinoma.

    Science.gov (United States)

    Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun

    2016-04-07

    To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (Fp = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). 
All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P

  12. A switchable light-input, light-output system modelled and constructed in yeast

    Directory of Open Access Journals (Sweden)

    Kozma-Bognar Laszlo

    2009-09-01

    Full Text Available Background: Advances in synthetic biology will require spatio-temporal regulation of biological processes in heterologous host cells. We develop a light-switchable, two-hybrid interaction in yeast, based upon the Arabidopsis proteins PHYTOCHROME A and FAR-RED ELONGATED HYPOCOTYL 1-LIKE. Light input to this regulatory module allows dynamic control of a light-emitting LUCIFERASE reporter gene, which we detect by real-time imaging of yeast colonies on solid media. Results: The reversible activation of the phytochrome by red light, and its inactivation by far-red light, is retained. We use this quantitative readout to construct a mathematical model that matches the system's behaviour and predicts the molecular targets for future manipulation. Conclusion: Our model, methods and materials together constitute a novel system for a eukaryotic host with the potential to convert a dynamic pattern of light input into a predictable gene expression response. This system could be applied for the regulation of genetic networks - both known and synthetic.

  13. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water-withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in the agriculture sectors, with the contribution of direct green water being about 1145 BCM, excluding forestry. Apart from the 727 BCM of direct blue water withdrawn for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM and other industrial sectors with nearly 14 BCM. "Construction"; "Miscellaneous food products"; "Hotels and restaurants"; and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to the direct water used in agriculture nationally, while the scarce groundwater associated with crops is largely contributed by the northern states.

  14. An empirical model of decadal ENSO variability

    Energy Technology Data Exchange (ETDEWEB)

    Kravtsov, S. [University of Wisconsin-Milwaukee, Department of Mathematical Sciences, Atmospheric Sciences Group, P. O. Box 413, Milwaukee, WI (United States)

    2012-11-15

    This paper assesses potential predictability of decadal variations in the El Nino/Southern Oscillation (ENSO) characteristics by constructing and performing simulations using an empirical nonlinear stochastic model of an ENSO index. The model employs decomposition of global sea-surface temperature (SST) anomalies into the modes that maximize the ratio of interdecadal-to-subdecadal SST variance to define low-frequency predictors called the canonical variates (CVs). When the whole available SST time series is so processed, the leading canonical variate (CV-1) is found to be well correlated with the area-averaged SST time series which exhibits a non-uniform warming trend, while the next two (CV-2 and CV-3) describe secular variability arguably associated with a combination of Atlantic Multidecadal Oscillation (AMO) and Pacific Decadal Oscillation (PDO) signals. The corresponding ENSO model that uses either all three (CVs 1-3) or only AMO/PDO-related (CVs 2 and 3) predictors captures well the observed autocorrelation function, probability density function, seasonal dependence of ENSO, and, most importantly, the observed interdecadal modulation of ENSO variance. The latter modulation, and its dependence on CVs, is shown to be inconsistent with the null hypothesis of random decadal ENSO variations simulated by multivariate linear inverse models. Cross-validated hindcasts of ENSO variance suggest a potential useful skill at decadal lead times. These findings thus argue that decadal modulations of ENSO variability may be predictable subject to our ability to forecast AMO/PDO-type climate modes; the latter forecasts may need to be based on simulations of dynamical models, rather than on a purely statistical scheme as in the present paper. (orig.)
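
Finding patterns that maximize the ratio of interdecadal to subdecadal variance, as the canonical variates do, reduces to a generalized eigenproblem. A toy sketch with made-up 2x2 covariance matrices (not the paper's SST fields):

```python
import numpy as np

def variance_ratio_modes(C_low, C_high):
    """Solve C_low w = lambda * C_high w; eigenvectors with the largest
    lambda maximize the low- to high-frequency variance ratio."""
    vals, vecs = np.linalg.eig(np.linalg.solve(C_high, C_low))
    order = np.argsort(vals.real)[::-1]          # sort by ratio, descending
    return vals.real[order], vecs.real[:, order]

C_low = np.diag([4.0, 1.0])    # "interdecadal" covariance (made up)
C_high = np.diag([1.0, 2.0])   # "subdecadal" covariance (made up)
ratios, modes = variance_ratio_modes(C_low, C_high)
```

The leading eigenvector is the direction whose low-frequency variance most dominates its high-frequency variance, which is what makes it a useful slow predictor.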

  15. Evapotranspiration and Precipitation inputs for SWAT model using remotely sensed observations

    Science.gov (United States)

    The ability of numerical models, such as the Soil and Water Assessment Tool (or SWAT), to accurately represent the partition of the water budget and describe sediment loads and other pollutant conditions related to water quality strongly depends on how well spatiotemporal variability in precipitatio...

  16. A Variable Resolution Global Spectral Model.

    Science.gov (United States)

    Hardiker, Vivek Manohar

    A conformal transformation suggested by F. Schmidt is followed to implement a global spectral model with variable horizontal resolution. A conformal mapping is defined between the real physical sphere (Earth) and a transformed (computational) sphere. The model equations are discretized on the computational sphere and the conventional spectral technique is applied to solve them. Two types of transformation are used in the present study, namely a stretching transformation and a rotation of the horizontal grid points. Application of the stretching transformation results in finer resolution along the meridional direction; the stretching is controlled by a parameter C. The rotation transformation can be used to relocate the North Pole of the model to any point on the geographic sphere. The idea is to rotate the pole to the area of interest and refine the resolution around the new pole by applying the stretching transformation; the stretching transformation can also be applied alone, without the rotation. A T-42 spectral shallow-water model is transformed by applying the stretching transformation alone as well as the two transformations together. A conventional T-42 spectral shallow-water model is run as the control experiment and a conventional T-85 spectral shallow-water model run is treated as the benchmark (truth) solution. RMS error analysis for the geopotential field as well as the wind field is performed to evaluate the forecasts made by the transformed model. It is observed that the RMS error of the transformed model is lower than that of the control run in a latitude band for the case of the stretching transformation alone, while for the total transformation (rotation followed by stretching), similar results are obtained for a rectangular domain. A multi-level global spectral model is designed from the current FSU global spectral model in order to implement the conformal transformation. The transformed T-85 model is used to study Hurricane

  17. INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Petr Hejtmánek

    2017-12-01

    Full Text Available The paper presents an approach for acquiring simplified input data for the modelling of burning wood in CFD programmes. The approach combines data from small- and molecular-scale experiments in order to describe the material with a single-reaction material property. Such a virtual material would spread fire, develop the fire according to the surrounding environment, and could be extinguished without using a complex molecular reaction description. A series of experiments including elemental analysis, thermogravimetric analysis, differential thermal analysis and combustion analysis was performed. Then an FDS model of burning pine wood in a cone calorimeter was built using those values. The model was validated against the HRR (Heat Release Rate) from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling the effective heat of combustion, which is one of the basic material properties for fire modelling affecting the total intensity of burning, should be used. Using the net heat of combustion in the model leads to higher values of HRR in comparison with the real experimental data. Considering all the results shown in this paper, it is possible to simulate the burning of wood using the extrapolated data obtained in small-scale experiments.

  18. On Input Vector Representation for the SVR model of Reactor Core Loading Pattern Critical Parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2008-01-01

    Determination and optimization of the reactor core loading pattern is an important factor in nuclear power plant operation. The goal is to minimize the amount of enriched uranium (fresh fuel) and burnable absorbers placed in the core, while maintaining nuclear power plant operational and safety characteristics. The usual approach to loading pattern optimization involves a high degree of engineering judgment, a set of heuristic rules, an optimization algorithm and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. Recently, we proposed a new method for fast loading pattern evaluation based on a general robust regression model relying on state-of-the-art research in the field of machine learning. We employed the Support Vector Regression (SVR) technique. SVR is a supervised learning method in which model parameters are automatically determined by solving a quadratic optimization problem. Preliminary tests revealed good potential of the SVR method for fast and accurate reactor core loading pattern evaluation. However, some aspects of model development remained unresolved. The main objective of the work reported in this paper was to conduct the additional tests and analyses required for full clarification of the applicability of SVR to loading pattern evaluation. We focused our attention on the parameters defining the input vector, primarily its structure and complexity, and the parameters defining the kernel functions. All tests were conducted on the NPP Krsko reactor core, using the MCRAC code for the calculation of reactor core loading pattern critical parameters. The tested input vector structures did not influence the accuracy of the models, suggesting that the initially tested input vector, consisting of the number of IFBAs and the k-inf at the beginning of the cycle, is adequate. The influence of kernel function specific parameters (σ for RBF kernel
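
A data-driven evaluator over the two-component input vector (number of IFBAs, k-inf at the beginning of cycle) can be sketched as follows. Kernel ridge regression with an RBF kernel is used here as a simple stand-in for SVR, and the data are synthetic, not NPP Krsko values:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """RBF (Gaussian) kernel matrix between row-vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Input vector per loading pattern: (number of IFBAs, k-inf at BOC).
X = np.array([[0, 1.10], [32, 1.08], [64, 1.06], [96, 1.04]], float)
y = np.array([280.0, 300.0, 320.0, 340.0])   # e.g. cycle length (days), made up

mu, sd = X.mean(0), X.std(0)
X_scaled = (X - mu) / sd                      # scale features first
K = rbf_kernel(X_scaled, X_scaled)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(y)), y)   # small ridge term

def predict(Xq):
    return rbf_kernel((Xq - mu) / sd, X_scaled) @ alpha
```

As in the paper's setup, the kernel width (`sigma` here, σ for the RBF kernel) is the hyperparameter that most shapes how the evaluator interpolates between trained loading patterns.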

  19. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    Science.gov (United States)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
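
Posterior sampling of input multipliers can be illustrated with a plain Metropolis sampler on a single recharge multiplier; DREAM itself runs multiple adaptive chains, and the forward model, prior bounds and noise level below are all made up rather than taken from the Bangladesh case study.

```python
import math
import random

def log_posterior(mult, obs_heads, forward, sigma=0.1):
    """Gaussian likelihood of observed heads given a recharge multiplier,
    with a flat prior on [0.5, 2.0] (hypothetical bounds)."""
    if not 0.5 <= mult <= 2.0:
        return -math.inf
    return -sum((h - forward(mult)) ** 2 for h in obs_heads) / (2 * sigma ** 2)

def metropolis(logp, x0, steps=5000, width=0.1, seed=1):
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    chain = []
    for _ in range(steps):
        cand = x + rng.gauss(0.0, width)
        lpc = logp(cand)
        if lpc - lp > math.log(rng.random()):   # Metropolis accept/reject
            x, lp = cand, lpc
        chain.append(x)
    return chain

forward = lambda m: 10.0 * m   # toy forward model: head rises with recharge
obs = [13.0]                   # "observed" head generated with multiplier 1.3
chain = metropolis(lambda m: log_posterior(m, obs, forward), x0=1.0)
```

In the actual workflow the cheap lambda is replaced by a MODFLOW run, and the chain's spread directly gives the uncertainty bounds on the recharge input.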

  20. Targeting the right input data to improve crop modeling at global level

    Science.gov (United States)

    Adam, M.; Robertson, R.; Gbegbelegbe, S.; Jones, J. W.; Boote, K. J.; Asseng, S.

    2012-12-01

    Designed for location-specific simulations, the use of crop models at a global level raises important questions. Crop models are originally premised on small unit areas where environmental conditions and management practices are considered homogeneous. Specific information describing soils, climate, management, and crop characteristics is used in the calibration process. However, when scaling up for global application, we rely on information derived from geographical information systems and weather generators. To run crop models at a global scale, we use a modeling platform that treats a uniformly generated grid cell as the unit area. Specific weather, soil and management practices for each crop are represented for each of the grid cells. Studies on the impacts of the uncertainties of weather information and climate change on crop yield at a global level have been carried out (Osborne et al., 2007; Nelson et al., 2010; van Bussel et al., 2011). Detailed information on soils and management practices at the global level is very scarce but recognized to be of critical importance (Reidsma et al., 2009). Few attempts to assess the impact of their uncertainties on cropping system performance can be found. The objectives of this study are (i) to determine the sensitivities of a crop model to soil and management practices, the inputs most relevant to low-input rainfed cropping systems, and (ii) to define hotspots of sensitivity according to the input data. We ran DSSAT v4.5 globally (CERES-CROPSIM) to simulate wheat yields at 45 arc-minute resolution. Cultivar parameters were calibrated and validated for different mega-environments (results not shown). The model was run for nitrogen-limited production systems. This setting was chosen as the most representative for simulating actual yield (especially for low-input rainfed agricultural systems) and assumes crop growth to be free of any pest and disease damage. We conducted a sensitivity analysis on contrasting management

  1. New insights into mammalian signaling pathways using microfluidic pulsatile inputs and mathematical modeling

    Science.gov (United States)

    Sumit, M.; Takayama, S.; Linderman, J. J.

    2016-01-01

    Temporally modulated input mimics physiology. This chemical communication strategy filters the biochemical noise through entrainment and phase-locking. Under laboratory conditions, it also expands the observability space for downstream responses. A combined approach involving microfluidic pulsatile stimulation and mathematical modeling has led to deciphering of hidden/unknown temporal motifs in several mammalian signaling pathways and has provided mechanistic insights, including how these motifs combine to form distinct band-pass filters and govern fate regulation under dynamic microenvironment. This approach can be utilized to understand signaling circuit architectures and to gain mechanistic insights for several other signaling systems. Potential applications include synthetic biology and biotechnology, in developing pharmaceutical interventions, and in developing lab-on-chip models. PMID:27868126

  2. New insights into mammalian signaling pathways using microfluidic pulsatile inputs and mathematical modeling.

    Science.gov (United States)

    Sumit, M; Takayama, S; Linderman, J J

    2017-01-23

    Temporally modulated input mimics physiology. This chemical communication strategy filters biochemical noise through entrainment and phase-locking and, under laboratory conditions, expands the observability space for downstream responses. A combined approach involving microfluidic pulsatile stimulation and mathematical modeling has led to the deciphering of hidden or unknown temporal motifs in several mammalian signaling pathways and has provided mechanistic insights, including how these motifs combine to form distinct band-pass filters and govern fate regulation in a dynamic microenvironment. This approach can be utilized to understand signaling circuit architectures and to gain mechanistic insights into several other signaling systems. Potential applications include synthetic biology and biotechnology, the development of pharmaceutical interventions, and lab-on-chip models.

  3. Extended Fitts' model of pointing time in eye-gaze input system - Incorporating effects of target shape and movement direction into modeling.

    Science.gov (United States)

    Murata, Atsuo; Fukunaga, Daichi

    2018-04-01

    This study investigated the effects of target shape and movement direction on pointing time in an eye-gaze input system, and extended Fitts' model so that these factors are incorporated and its predictive power is enhanced. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of target shape and movement direction on pointing time, an attempt was made to develop a generalized and extended Fitts' model that takes both factors into account. The generalized and extended model was found to fit the experimental data better and to be more effective for predicting pointing time in a variety of human-computer interaction (HCI) tasks using an eye-gaze input system.
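    As a rough illustration of the kind of model being extended, the sketch below implements the classic Shannon formulation of Fitts' law and a hypothetical additive extension with shape and direction terms. All coefficient values here are invented for illustration and are not the fitted values from the study.

```python
import math

def fitts_time(a, b, distance, width):
    """Classic Fitts' law (Shannon formulation): movement time grows
    linearly with the index of difficulty ID = log2(D/W + 1)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

def extended_fitts_time(a, b, distance, width, shape_coeff, direction_coeff):
    """Hypothetical extended form: additive terms for target shape and
    movement direction, in the spirit of the study (illustrative only)."""
    return fitts_time(a, b, distance, width) + shape_coeff + direction_coeff

# Pointing at a 40-px-wide target 320 px away, with illustrative coefficients.
base = fitts_time(a=0.2, b=0.15, distance=320, width=40)
ext = extended_fitts_time(a=0.2, b=0.15, distance=320, width=40,
                          shape_coeff=0.05, direction_coeff=0.03)
```

    In a real fit, `a`, `b`, and the shape/direction coefficients would be estimated by regression against the measured pointing times for each condition.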

  4. Effect of variable annual precipitation and nutrient input on nitrogen and phosphorus transport from two Midwestern agricultural watersheds

    Science.gov (United States)

    Kalkhoff, Stephen J.; Hubbard, Laura E.; Tomer, Mark D.; James, D.E.

    2016-01-01

    Precipitation patterns and nutrient inputs affect transport of nitrate (NO3-N) and total phosphorus (TP) from Midwest watersheds. Nutrient concentrations and yields from two subsurface-drained watersheds, the Little Cobb River (LCR) in southern Minnesota and the South Fork Iowa River (SFIR) in northern Iowa, were evaluated during 1996–2007 to document relative differences in the timing and amounts of nutrients transported. Both watersheds are located in the prairie pothole region, but the SFIR exhibits a longer growing season and more livestock production. The SFIR yielded significantly more NO3-N than the LCR watershed (31.2 versus 21.3 kg NO3-N ha⁻¹ yr⁻¹). The SFIR watershed also yielded more TP than the LCR watershed (1.13 versus 0.51 kg TP ha⁻¹ yr⁻¹), despite greater TP concentrations in the LCR. About 65% of NO3-N and 50% of TP loads were transported during April–June, and < 20% of the annual loads were transported later in the growing season from July–September. Monthly NO3-N and TP loads peaked in April from the LCR but in June from the SFIR; this difference was attributed to greater snowmelt runoff in the LCR. The annual NO3-N yield increased with increasing annual runoff at a similar rate in both watersheds, but the LCR watershed yielded less annual NO3-N than the SFIR for a similar annual runoff. These two watersheds are within 150 km of one another and have similar dominant agricultural systems, but differences in climate and cropping inputs affected the amounts and timing of nutrient transport.

  5. Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Tutu Oyelami

    2014-04-01

    Full Text Available Alzheimer's disease (AD) is characterized by symptoms that include seizures, sleep disruption, and loss of memory, as well as anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL, Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission using multi-electrode array (MEA) technology, pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology we confirm impaired LTP induction by high-frequency stimulation in the APP/PS1 hippocampal CA1 region, associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in input-output curves. Electrophysiological recordings from brain slices of the hippocampal CA1 area in the presence of cocktails of GABAergic receptor blockers further demonstrated a significant reduction in GABAergic inputs in APP/PS1 mice. Moreover, immunostaining of GAD65, a specific marker for GABAergic neurons, revealed a reduction of GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to the increased seizure sensitivity, premature death, and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.

  6. An Approach for Generating Precipitation Input for Worst-Case Flood Modelling

    Science.gov (United States)

    Felder, Guido; Weingartner, Rolf

    2015-04-01

    There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards methodically correct estimations of infrequent floods using a worst-case approach. A Monte-Carlo rainfall-runoff model allows a wide range of different spatio-temporal distributions of an extreme precipitation event to be tested, and therefore a hydrograph to be generated for each of these distributions. From these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reactions to the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
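    The worst-case search described above can be sketched as follows, with a toy linear-reservoir model standing in for the full rainfall-runoff model. The storage coefficient, event volume, and number of time steps are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def runoff_hydrograph(precip, k=0.3):
    """Toy linear-reservoir rainfall-runoff model: at each step the
    storage receives the precipitation and drains a fraction k."""
    storage, flow = 0.0, []
    for p in precip:
        storage += p
        q = k * storage
        storage -= q
        flow.append(q)
    return np.array(flow)

total_precip = 100.0     # mm, fixed event volume (assumed)
n_steps, n_trials = 24, 2000

peaks = []
for _ in range(n_trials):
    # Random temporal distribution of the same event volume.
    weights = rng.dirichlet(np.ones(n_steps))
    peaks.append(runoff_hydrograph(total_precip * weights).max())

worst_case_peak = max(peaks)
```

    Concentrated temporal distributions produce much higher peaks than a uniform hyetograph of the same volume, which is the worst-case reaction the method is after.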

  7. Optimization model of peach production relevant to input energies – Yield function in Chaharmahal va Bakhtiari province, Iran

    International Nuclear Information System (INIS)

    Ghatrehsamani, Shirin; Ebrahimi, Rahim; Kazi, Salim Newaz; Badarudin Badry, Ahmad; Sadeghinezhad, Emad

    2016-01-01

    The aim of this study was to determine the amount of input-output energy used in peach production and to develop an optimal model of production in Chaharmahal va Bakhtiari province, Iran. Data were collected from 100 producers by administering a questionnaire in face-to-face interviews. Farms were selected based on a random sampling method. Results revealed that the total production energy is 47,951.52 MJ/ha and that the highest share of energy consumption belongs to chemical fertilizers (35.37%). Consumption of direct energy was 47.4%, while indirect energy was 52.6%. Total energy consumption was also divided into renewable and non-renewable groups (19.2% and 80.8%, respectively). Energy use efficiency, energy productivity, specific energy, and net energy were calculated as 0.433, 0.228 kg/MJ, 4.38 MJ/kg, and −27,161.72 MJ/ha, respectively. The negative sign of the net energy indicates that more energy is consumed in peach production than is recovered in the product; with a suitable strategy, energy losses could be reduced and the negative effect of some parameters mitigated. In addition, energy efficiency was not high enough. Some of the input energies were applied to machinery, chemical fertilizer, irrigation water, and electricity, which had a significant effect on increasing production, and the MPP (marginal physical productivity) was determined for the input variables. This parameter was positive for machinery, diesel fuel, chemical fertilizer, irrigation water, and electricity, while it was negative for other inputs such as chemical pesticides and human labor. Finally, there is a need for new policies that push producers to adopt energy-efficient practices and establish sustainable production systems without disrupting natural resources. In addition, extension activities are needed to improve the efficiency of energy consumption and to sustain natural resources. - Highlights: • Replacing non-renewable energy with renewable
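    The four energy indicators follow from simple ratios of output energy, input energy, and yield. In this sketch the output energy and yield are back-calculated from the reported efficiency (0.433) and energy productivity (0.228 kg/MJ), which is an assumption on our part, so the net energy differs slightly from the published −27,161.72 MJ/ha due to rounding.

```python
# Reported figures from the abstract.
input_energy = 47951.52            # MJ/ha, total input energy
energy_use_efficiency = 0.433      # output energy / input energy
energy_productivity = 0.228        # kg of product per MJ of input

# Back-calculated quantities (assumptions, not quoted in the abstract).
output_energy = energy_use_efficiency * input_energy   # MJ/ha
yield_kg = energy_productivity * input_energy          # kg/ha

# The remaining indicators are direct ratios/differences.
specific_energy = input_energy / yield_kg              # MJ/kg, = 1/productivity
net_energy = output_energy - input_energy              # MJ/ha, negative here
```

    The negative net energy simply restates that `output_energy < input_energy` for this production system.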

  8. Solar Load Inputs for USARIEM Thermal Strain Models and the Solar Radiation-Sensitive Components of the WBGT Index

    National Research Council Canada - National Science Library

    Matthew, William

    2001-01-01

    This report describes processes we have implemented to use global pyranometer-based estimates of mean radiant temperature as the common solar load input for the Scenario model, the USARIEM heat strain...

  9. Reconstruction of rocks petrophysical properties as input data for reservoir modeling

    Science.gov (United States)

    Cantucci, B.; Montegrossi, G.; Lucci, F.; Quattrocchi, F.

    2016-11-01

    The worldwide increasing energy demand has triggered studies focused on defining the underground energy potential even in areas previously discarded or neglected. Nowadays, geological gas storage (CO2 and/or CH4) and geothermal energy are considered strategic for low-carbon energy development. A widespread and safe application of these technologies needs an accurate characterization of the underground in terms of geology, hydrogeology, geochemistry, and geomechanics. However, at the prefeasibility study stage, the limited number of available direct measurements of reservoirs and the high costs of reopening closed deep wells must be taken into account. The aim of this work is to overcome these limits by proposing a new methodology to reconstruct vertical profiles, from the surface to the reservoir base, of: (i) thermal capacity, (ii) thermal conductivity, (iii) porosity, and (iv) permeability, through the integration of well-log information, petrographic observations on inland outcropping samples, and flow and heat transport modeling. As a case study to test our procedure we selected a deep structure located in the central Tyrrhenian Sea (Italy). The obtained results are consistent with measured data, confirming the validity of the proposed model. Notwithstanding intrinsic limitations due to manual calibration of the model with measured data, this methodology represents a useful tool for reservoir and geochemical modelers who need to define petrophysical input data for underground modeling before well reopening.

  10. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    International Nuclear Information System (INIS)

    Lamboni, Matieyendou; Monod, Herve; Makowski, David

    2011-01-01

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent predictions, with time either discretised or continuous. Global sensitivity analysis is usually applied separately to each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case where principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time-series output. Index definitions are given for cases where the uncertainty on the input factors is either discrete or continuous and where the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and a fractional factorial design of resolution 6.

  11. Comparison of several climate indices as inputs in modelling of the Baltic Sea runoff

    Energy Technology Data Exchange (ETDEWEB)

    Hanninen, J.; Vuorinen, I. [Turku Univ. (Finland). Archipelaco Research Inst.], e-mail: jari.hanninen@utu.fi

    2012-11-01

    Using transfer function (TF) models, we have earlier presented a chain of events between changes in the North Atlantic Oscillation (NAO) and their oceanographical and ecological consequences in the Baltic Sea. Here we tested whether other climate indices as inputs would improve the TF models and our understanding of the Baltic Sea ecosystem. Besides the NAO, the predictors were the Arctic Oscillation (AO), sea-level air pressures at Iceland (SLP), and wind speeds at Hoburg (Gotland). All indices produced good TF models when the total riverine runoff to the Baltic Sea was used as the modelling basis. The AO was not applicable in all study areas, showing a delay of about half a year between climate and runoff events, connected with the freezing and melting times of ice and snow in the northern catchment area of the Baltic Sea. The NAO appeared to be the most useful modelling tool, as its area of applicability was the widest of the tested indices and the time lag between climate and runoff events was the shortest. SLP and Hoburg wind speeds showed largely the same results as the NAO, but with smaller areal applicability. Thus the AO and NAO both contributed most to the general understanding of the climate control of runoff events in the Baltic Sea ecosystem. (orig.)

  12. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    Directory of Open Access Journals (Sweden)

    Simone Fiori

    2007-07-01

    Full Text Available Bivariate statistical modeling from incomplete data is a useful statistical tool that makes it possible to discover the model underlying two data sets when the data in the two sets do not correspond in size or ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Statistical modeling is also useful when the amount of available data is enough to reveal relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure.

  13. Assessment of NASA's Physiographic and Meteorological Datasets as Input to HSPF and SWAT Hydrological Models

    Science.gov (United States)

    Alacron, Vladimir J.; Nigro, Joseph D.; McAnally, William H.; OHara, Charles G.; Engman, Edwin Ted; Toll, David

    2011-01-01

    This paper documents the use of simulated Moderate Resolution Imaging Spectroradiometer land use/land cover (MODIS-LULC), NASA-LIS generated precipitation and evapo-transpiration (ET), and Shuttle Radar Topography Mission (SRTM) datasets (in conjunction with standard land use, topographical and meteorological datasets) as input to hydrological models routinely used by the watershed hydrology modeling community. The study focuses on coastal watersheds in the Mississippi Gulf Coast, although one of the test cases concerns an inland watershed located in the northeastern part of the State of Mississippi, USA. The decision support tools (DSTs) into which the NASA datasets were assimilated were the Soil and Water Assessment Tool (SWAT) and the Hydrological Simulation Program FORTRAN (HSPF). These DSTs are endorsed by several US government agencies (EPA, FEMA, USGS) for water resources management strategies, and they use physiographic and meteorological data extensively. Precipitation gages and USGS gage stations in the region were used to calibrate several HSPF and SWAT model applications. Land use and topographical datasets were swapped to assess model output sensitivities. NASA-LIS meteorological data were introduced into the calibrated model applications to simulate watershed hydrology for a period in which no weather data were available (1997-2006). The performance of the NASA datasets in the context of hydrological modeling was assessed through comparison of measured and model-simulated hydrographs. Overall, the NASA datasets were as useful as the standard land use, topographical, and meteorological datasets. Moreover, the NASA datasets made possible analyses that the standard datasets could not support, e.g., the introduction of land use dynamics into hydrological simulations.

  14. Modeling and Controller Design of PV Micro Inverter without Using Electrolytic Capacitors and Input Current Sensors

    Directory of Open Access Journals (Sweden)

    Faa Jeng Lin

    2016-11-01

    Full Text Available This paper outlines the modeling and controller design of a novel two-stage photovoltaic (PV) micro inverter (MI) that eliminates the need for an electrolytic capacitor (E-cap) and input current sensor. The proposed MI uses an active-clamped current-fed push-pull DC-DC converter cascaded with a full-bridge inverter. Three strategies are proposed to cope with the inherent limitations of a two-stage PV MI: (i) high-speed DC bus voltage regulation using an integrator to deal with the 2nd harmonic voltage ripples found in single-phase systems; (ii) inclusion of a small film capacitor in the DC bus to achieve ripple-free PV voltage; (iii) improved incremental conductance (INC) maximum power point tracking (MPPT) without the need for current sensing by the PV module. Simulation and experimental results demonstrate the efficacy of the proposed system.

  15. Modeling variability in porescale multiphase flow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  16. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  17. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been widely used in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach to tackling this problem with viable and efficient models, and numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, coupling procedures, model evaluation, and the performance of hybrid models relative to conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  18. Modeling Short-Range Soil Variability and its Potential Use in Variable-Rate Treatment of Experimental Plots

    Directory of Open Access Journals (Sweden)

    A Moameni

    2011-02-01

    Full Text Available In Iran, the experimental plots under fertilizer trials are managed in such a way that the whole plot area uniformly receives agricultural inputs. This could lead to biased research results and hence undermine the efforts made by the researchers. This research was conducted in a selected site belonging to the Gonbad Agricultural Research Station, located in the semiarid region of northeastern Iran. The aim was to characterize the short-range spatial variability of the inherent and management-dependent soil properties and to determine whether this variation is large and can be managed at practical scales. The soils were sampled on a grid with points 55 m apart. In total, 100 composite soil samples were collected from the topsoil (0-30 cm) and analyzed for calcium carbonate equivalent, organic carbon, clay, available phosphorus, available potassium, iron, copper, zinc, and manganese. Descriptive statistics were applied to check data trends, and geostatistical analysis was applied for variography, model fitting, and contour mapping. Sampling at 55 m made it possible to split the area of the selected experimental plot into relatively uniform areas that allow the application of agricultural inputs at variable rates. Keywords: Short-range soil variability, Within-field soil variability, Interpolation, Precision agriculture, Geostatistics
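    The variography step can be illustrated with an empirical semivariogram computed on a hypothetical 55-m sampling grid. The simulated soil property (a smooth spatial trend plus noise) is an assumption for illustration, not the station's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 10 x 10 grid sampled 55 m apart, with a synthetic soil
# property that varies smoothly in space plus measurement noise.
nx = ny = 10
xs, ys = np.meshgrid(np.arange(nx) * 55.0, np.arange(ny) * 55.0)
coords = np.column_stack([xs.ravel(), ys.ravel()])
values = (0.004 * coords[:, 0] + 0.002 * coords[:, 1]
          + rng.normal(0, 0.2, nx * ny))

def empirical_semivariogram(coords, values, bins):
    """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs separated by ~h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    i, j = np.triu_indices(len(values), k=1)
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d[i, j] >= lo) & (d[i, j] < hi)
        gamma.append(sq[i, j][mask].mean())
    return np.array(gamma)

bins = np.array([0, 80, 160, 240, 320, 400])
gamma = empirical_semivariogram(coords, values, bins)
```

    A rising semivariogram with lag, as produced here, is what indicates spatial structure that can be modeled and mapped for variable-rate treatment.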

  19. Influence of variable selection on partial least squares discriminant analysis models for explosive residue classification

    Energy Technology Data Exchange (ETDEWEB)

    De Lucia, Frank C., E-mail: frank.delucia@us.army.mil; Gottfried, Jennifer L.

    2011-02-15

    Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser-induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating that this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.

  20. Now it is, now it is not: variable input and 3rd person plural verbal inflection acquisition in Brazilian Portuguese

    Directory of Open Access Journals (Sweden)

    Daniele Molina

    2017-08-01

    Full Text Available This article investigates the perception of the 3rd person plural inflectional morpheme on verbs and the comprehension of the plurality information conveyed by that morpheme in children acquiring Brazilian Portuguese (BP), given that the realization of number inflection is variable in this language, in both the nominal and the verbal domains. Previous studies in different languages suggest that, although this inflection is produced by around three years of age, children up to six years old show difficulty interpreting it in comprehension tasks (JOHNSON; DE VILLIERS; SEYMOR, 2005; PÉREZ-LEROUX, 2005; LEGENDRE et al., 2010; BLÁHOVÁ; SMOLIK, 2014). In the present research, an experimental study using a picture-selection task was conducted to verify whether (i) despite the variable character of number marking in BP, children aged six and five identify the 3rd person plural verb form in sentences with null subjects (Comeram doce 'They ate candy'); (ii) they associate this form with an action performed by more than one entity; and (iii) the singular verb form is associated with the concept of singularity (Comeu doce 'He/she ate candy'). Utterances containing redundant number information on both the subject and the verb were also tested (As crianças comeram doce 'The children ate candy' vs. A criança comeu doce 'The child ate candy'). The results point to systematicity in the choice of the picture congruent with the plural linguistic stimuli, despite a preference for the plural picture also in the singular sentences. The variability identified in the input does not seem to interfere with the perception and comprehension of the plural verbal morpheme in the age range assessed. ---DOI: http://dx.doi.org/10.12957/matraga.2017.28498

  1. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    Full Text Available The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. The present study focuses on large-scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10¹² m⁻³, which is unusual for a winter period of moderate solar activity (F10.7=130). An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionospheric convection velocities in the early afternoon (with the northward electric field reaching 150 mV m⁻¹) and corresponding frictional heating of the ions up to 2500 K. The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement obtained between the model and the observations demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  2. An extended TRANSCAR model including ionospheric convection: simulation of EISCAT observations using inputs from AMIE

    Directory of Open Access Journals (Sweden)

    P.-L. Blelly

    2005-02-01

    Full Text Available The TRANSCAR ionospheric model was extended to account for the convection of the magnetic field lines in the auroral and polar ionosphere. A mixed Eulerian-Lagrangian 13-moment approach was used to describe the dynamics of an ionospheric plasma tube. The present study focuses on large-scale transports in the polar ionosphere. The model was used to simulate a 35-h period of EISCAT-UHF observations on 16-17 February 1993. The first day was magnetically quiet and characterized by elevated electron concentrations: the diurnal F2 layer reached as much as 10¹² m⁻³, which is unusual for a winter period of moderate solar activity (F10.7=130). An intense geomagnetic event occurred on the second day, seen in the data as a strong intensification of the ionospheric convection velocities in the early afternoon (with the northward electric field reaching 150 mV m⁻¹) and corresponding frictional heating of the ions up to 2500 K. The simulation used time-dependent AMIE outputs to infer flux-tube transports in the polar region and to provide magnetospheric particle and energy inputs to the ionosphere. The overall very good agreement obtained between the model and the observations demonstrates the high ability of the extended TRANSCAR model for quantitative modelling of the high-latitude ionosphere; however, some differences are found which are attributed to the precipitation of electrons with very low energy. All these results are finally discussed in the frame of modelling the auroral ionosphere with space weather applications in mind.

  3. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

    Full Text Available Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to the parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios, and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. Thirty-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of the predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes in pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
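    The change-factor scaling of a reference weather series can be sketched as follows. The reference precipitation series and the monthly factors here are invented placeholders, not the RCA3-derived values used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical reference daily precipitation for one (non-leap) year, mm/day.
days_in_month = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
reference = rng.gamma(shape=0.7, scale=3.0, size=365)

# Hypothetical monthly change factors (e.g., 1.10 means +10% in January).
change_factors = np.array([1.10, 1.08, 1.05, 1.00, 0.95, 0.90,
                           0.85, 0.88, 0.95, 1.02, 1.07, 1.12])

# Expand the monthly factors to a daily vector and scale the reference.
daily_factors = np.repeat(change_factors, days_in_month)
future = reference * daily_factors
```

    Each month of the future series then carries exactly the prescribed relative change in total precipitation while preserving the day-to-day pattern of the reference period.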

  4. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  5. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  6. Realistic modeling of seismic input for megacities and large urban areas

    International Nuclear Information System (INIS)

    Panza, Giuliano F.; Alvarez, Leonardo; Aoudia, Abdelkrim

    2002-06-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, mainly when dealing with complex geological structures, the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate site effects, using observations convolved with theoretically computed signals corresponding to simplified models, supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological and geophysical parameters, the topography of the medium, tectonic, historical and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios, which represent a valid and economic tool for seismic microzonation. This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  7. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

    Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.
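The linear half of the analysis, regressing ratings on extracted physiological features, can be sketched with ordinary least squares. The data below are synthetic (12 "excerpts" by 5 "channels"), not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature matrix: 12 excerpts x 5 physiological features
# (heart rate, respiration, GSR, corrugator EMG, zygomaticus EMG).
X = rng.normal(size=(12, 5))

# Synthetic arousal ratings driven mostly by heart rate and GSR.
arousal = 0.8 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=12)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(12), X])
coef, *_ = np.linalg.lstsq(A, arousal, rcond=None)
pred = A @ coef

# Variance explained by the linear model.
ss_res = np.sum((arousal - pred) ** 2)
ss_tot = np.sum((arousal - arousal.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A nonlinear model (the study's neural network) would replace the single matrix product with a trained network, which is what allowed valence, poorly captured linearly, to be predicted.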

  8. Multi-Layer Perceptron (MLP)-Based Nonlinear Auto-Regressive with Exogenous Inputs (NARX) Stock Forecasting Model

    OpenAIRE

    I. M. Yassin; M. F. Abdul Khalid; S. H. Herman; I. Pasya; N. Ab Wahab; Z. Awang

    2017-01-01

    The prediction of stocks in the stock market is important in investment as it would help the investor to time buy and sell transactions to maximize profits. In this paper, a Multi-Layer Perceptron (MLP)-based Nonlinear Auto-Regressive with Exogenous Inputs (NARX) model was used to predict the prices of the Apple Inc. weekly stock prices over a time horizon of 1995 to 2013. The NARX model is a system identification model that constructs a mathematical model from the dynamic input/outpu...
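The NARX structure predicts y(t) from lagged outputs and lagged exogenous inputs. The sketch below builds that regressor matrix and fits a linear readout in place of the paper's MLP; the signal, lag orders, and coefficients are invented for illustration.

```python
import numpy as np

def narx_design(y, u, ny=2, nu=2):
    """Build the NARX regressor matrix: each row is
    [y(t-1)..y(t-ny), u(t-1)..u(t-nu)], the target is y(t)."""
    n = max(ny, nu)
    rows, targets = [], []
    for t in range(n, len(y)):
        rows.append(np.concatenate([y[t - ny:t][::-1], u[t - nu:t][::-1]]))
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Simulate a known ARX system so the fit can be checked.
rng = np.random.default_rng(1)
u = rng.normal(size=200)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1]

X, target = narx_design(y, u)
theta, *_ = np.linalg.lstsq(X, target, rcond=None)
# theta recovers [0.6, -0.2, 0.5, 0.0] on this noise-free system.
```

An MLP-based NARX, as in the paper, feeds the same regressor rows into a multi-layer perceptron instead of the linear least-squares readout.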

  9. The impacts of interannual climate variability and agricultural inputs on water footprint of crop production in an irrigation district of China.

    Science.gov (United States)

    Sun, Shikun; Wu, Pute; Wang, Yubao; Zhao, Xining; Liu, Jing; Zhang, Xiaohong

    2013-02-01

    Irrigation plays an increasingly important role in the agriculture of China. The assessment of water resources utilization during the agricultural production process will contribute to improving agricultural water management practices for irrigation districts. The water footprint provides a new approach to assessing agricultural water utilization. The present paper puts forward a modified calculation method to quantify the water footprint of crops. On this basis, this paper calculated the water footprint of major crops in Hetao irrigation district, China. Then, it evaluated the influencing factors that caused the variability of crop water footprint during the study period. Results showed that: 1) the annual average water footprint of integrated-crop production in Hetao irrigation district was 3.91 m³ kg⁻¹ (90.91% blue water and 9.09% green water); crop production in the Hetao irrigation district mainly relies on blue water; 2) under the integrated influences of interannual climate variability and variation of agricultural inputs, the water footprint of integrated-crop production displayed a decreasing trend; 3) the contribution rate of the climatic factors to the variation of water footprint was only -6.90%, while the total contribution rate of the agricultural inputs factors was -84.31%. The results suggest that the water footprint of crops mainly depends on agricultural management rather than the regional climate and its variation, and that the water footprint of a crop could be controlled at a reasonable level by better management of all agricultural inputs and the improvement of water use efficiency in agriculture. Copyright © 2012 Elsevier B.V. All rights reserved.
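The headline statistic of this record, a water footprint in m³ of water per kg of crop with a blue/green split, reduces to simple arithmetic: crop water use divided by yield. The numbers below are hypothetical, not the Hetao values.

```python
# Water footprint = crop water use / yield, split into blue (irrigation)
# and green (effective precipitation) components. All inputs hypothetical.

def water_footprint(cwu_blue_m3_per_ha, cwu_green_m3_per_ha, yield_kg_per_ha):
    total_cwu = cwu_blue_m3_per_ha + cwu_green_m3_per_ha
    wf_total = total_cwu / yield_kg_per_ha      # m3 of water per kg of crop
    blue_share = cwu_blue_m3_per_ha / total_cwu # fraction supplied by irrigation
    return wf_total, blue_share

wf, blue = water_footprint(cwu_blue_m3_per_ha=5400.0,
                           cwu_green_m3_per_ha=600.0,
                           yield_kg_per_ha=1500.0)
# wf = 4.0 m3/kg, blue share = 0.9
```

A blue share near 0.9, as in this toy example, is what the abstract means by production "mainly relying on blue water".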

  10. Microsatellite variability reveals the necessity for genetic input from wild giant pandas (Ailuropoda melanoleuca) into the captive population.

    Science.gov (United States)

    Shen, Fujun; Zhang, Zhihe; He, Wei; Yue, Bisong; Zhang, Anju; Zhang, Liang; Hou, Rong; Wang, Chengdong; Watanabe, Toshi

    2009-03-01

    Recent success in breeding giant pandas in captivity has encouraged panda conservationists to believe that the ex situ population is ready to serve as a source for supporting the wild population. In this study, we used 11 microsatellite DNA markers to assess the amount and distribution of genetic variability present in the two largest captive populations (Chengdu Research Base of Giant Panda Breeding, Sichuan Province and the China Research and Conservation Center for the Giant Panda at Wolong, Sichuan Province). The data were compared with those from wild pandas living in two key giant panda nature reserves (Baoxing Nature Reserve and Wanglang Nature Reserve). The results show that the captive populations have retained lower levels of allelic diversity and heterozygosity compared to isolated wild populations. However, low inbreeding coefficients indicate that captive populations are under careful genetic management. Excessive heterozygosity suggests that the two captive populations have experienced a genetic bottleneck, presumably caused by founder effects. Moreover, evidence of increased genetic divergence demonstrates restricted breeding options within facilities. Based on these results, we conclude that the genetic diversity in the captive populations is not optimal. Introduction of genetic materials from wild pandas and improved exchange of genetic materials among institutions will be necessary for the captive pandas to be representative of the wild populations.
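The diversity statistics compared in this record can be illustrated with the standard expected-heterozygosity formula for a single locus, He = 1 − Σ pᵢ², computed from allele frequencies. The frequencies below are made up; they only show how losing rare alleles depresses He.

```python
def expected_heterozygosity(allele_freqs):
    """He = 1 - sum(p_i^2) for one locus; higher values indicate
    more allelic diversity retained in the population."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    return 1.0 - sum(p * p for p in allele_freqs)

# Hypothetical microsatellite locus with four alleles in the wild population.
he_wild = expected_heterozygosity([0.4, 0.3, 0.2, 0.1])      # -> 0.70

# A captive population that lost the two rarer alleles at this locus.
he_captive = expected_heterozygosity([0.6, 0.4])             # -> 0.48
```

Averaging He over the 11 loci, and comparing it with observed heterozygosity, is what underlies the bottleneck ("excess heterozygosity") test mentioned in the abstract.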

  11. Seasonal variability of salinity and circulation in a silled estuarine fjord: A numerical model study

    Science.gov (United States)

    Kawase, Mitsuhiro; Bang, Bohyun

    2013-12-01

    A three-dimensional hydrodynamic model is used to study seasonal variability of circulation and hydrography in Hood Canal, Washington, United States, an estuarine fjord that develops seasonally hypoxic conditions. The model is validated with data from year 2006, and is shown to be capable of quantitatively realistic simulation of hydrographic variability. Sensitivity experiments show the largest cause of seasonal variability to be that of salinity at the mouth of the fjord, which drives an annual deep water renewal in late summer-early autumn. Variability of fresh water input from the watershed also causes significant but secondary changes, especially in winter. Local wind stress has little effect over the seasonal timescale. Further experiments, in which one forcing parameter is abruptly altered while others are kept constant, show that outside salinity change induces an immediate response in the exchange circulation that, however, decays as a transient as the system equilibrates. In contrast, a change in the river input initiates gradual adjustment towards a new equilibrium value for the exchange transport. It is hypothesized that the spectral character of the system response to river variability will be redder than to salinity variability. This is demonstrated with a stochastically forced, semi-analytical model of fjord exchange circulation. While the exchange circulation in Hood Canal appears less sensitive to the river variability than to the outside hydrography at seasonal timescales, at decadal and longer timescales both could become significant factors in affecting the exchange circulation.

  12. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  13. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  14. Quantifying Amount and Variability of Cloud Water Inputs Using Active-Strand Collector, Ceilometer, Dewpoint, and Photographic Measurements

    Science.gov (United States)

    Scholl, M. A.; Bassiouni, M.; Murphy, S. F.; Gonzalez, G.; Van Beusekom, A. E.; Torres-Sanchez, A.; Estrada-Ruiz, C.

    2015-12-01

    Cloud water associated with orographic processes contributes to soil moisture and streamflow, suppresses transpiration, and moderates drought in tropical mountain forests. It is difficult to quantify, yet may be vulnerable to changes in amount and frequency due to warming climate. Cloud immersion is characterized and monitored as part of the ecohydrology research of the USGS Water, Energy and Biogeochemical Budgets (WEBB) program and the Luquillo Critical Zone Observatory (CZO). Stable-isotope studies indicated cloud water may contribute significantly to headwater streamflow, and measurements with an active-strand collector yielded estimates of overnight cloud water deposition rates on Pico del Este (1050 m); but cloud liquid water content and spatial and temporal variability are not well understood. At five sites spanning the lifting condensation level to ridge-top (600-1000 m) in the Luquillo Mountains, cloud immersion conditions are monitored using time-lapse photography and temperature/relative humidity (T/RH) sensors. A ceilometer, installed at 99 m on the windward slope on 4/29/2013, provides longer-term data to understand variation in cloud base altitude and to detect changes that may occur with warming climate. The cloud-zone sites range from tropical wet forest (mixed species) to rain forest (sierra palm) to elfin cloud forest. T/RH sensors indicated foggy conditions when the air temperature approached the dewpoint, and these records were matched to the time-lapse images. These complementary data sets provide quantification of spatial and temporal patterns of cloud immersion, and areal estimates of cloud water deposition will be made to determine its importance in the water budget.

  15. Effects of model input data uncertainty in simulating water resources of a transnational catchment

    Science.gov (United States)

    Camargos, Carla; Breuer, Lutz

    2016-04-01

    Landscape consists of different ecosystem components, and how these components affect water quantity and quality needs to be understood. We start from the assumption that water resources are generated in landscapes and that rural land use (particularly agriculture) has a strong impact on water resources that are used downstream for domestic and industrial supply. Partly located in the north of Luxembourg and partly in the southeast of Belgium, the Haute-Sûre catchment is about 943 km². As part of the catchment, the Haute-Sûre Lake is an important source of drinking water for the Luxembourg population, satisfying 30% of the city's demand. The objective of this study is to investigate the impact of spatial input data uncertainty on water resources simulations for the Haute-Sûre catchment. We apply the SWAT model for the period 2006 to 2012 and use a variety of digital information on soils, elevation and land uses with various spatial resolutions. Several objective functions are being evaluated, and we consider the resulting parameter uncertainty to quantify an important part of the global uncertainty in model simulations.

  16. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    Science.gov (United States)

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    Influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
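The dynamic input-output propagation and the inoperability metric described in this record can be sketched with the standard dynamic inoperability input-output model (DIIM) recursion q(t+1) = q(t) + K[A* q(t) + c*(t) − q(t)], whose equilibrium is q = (I − A*)⁻¹ c*. The interdependency matrix, resilience coefficients, and perturbation below are invented for illustration, not NCR data.

```python
import numpy as np

# Hypothetical 3-sector interdependency matrix A* and a demand-side
# perturbation c* (e.g., workforce loss concentrated in sector 0).
A_star = np.array([[0.0, 0.2, 0.1],
                   [0.3, 0.0, 0.2],
                   [0.1, 0.1, 0.0]])
c_star = np.array([0.10, 0.02, 0.0])
K = np.diag([0.5, 0.5, 0.5])   # sector resilience (recovery-rate) coefficients

# Iterate the DIIM recursion from zero initial inoperability.
q = np.zeros(3)
for _ in range(200):
    q = q + K @ (A_star @ q + c_star - q)

# Equilibrium inoperability for a sustained perturbation.
q_eq = np.linalg.solve(np.eye(3) - A_star, c_star)
```

Here q is the inoperability vector (fractional gap between as-planned and actual output per sector); multiplying q by sector production levels gives the economic-loss metric, which is why the two rankings in the abstract can differ.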

  17. Modeling imbalanced economic recovery following a natural disaster using input-output analysis.

    Science.gov (United States)

    Li, Jun; Crawford-Brown, Douglas; Syddall, Mark; Guan, Dabo

    2013-10-01

    Input-output analysis is frequently used in studies of large-scale weather-related (e.g., Hurricanes and flooding) disruption of a regional economy. The economy after a sudden catastrophe shows a multitude of imbalances with respect to demand and production and may take months or years to recover. However, there is no consensus about how the economy recovers. This article presents a theoretical route map for imbalanced economic recovery called dynamic inequalities. Subsequently, it is applied to a hypothetical postdisaster economic scenario of flooding in London around the year 2020 to assess the influence of future shocks to a regional economy and suggest adaptation measures. Economic projections are produced by a macro econometric model and used as baseline conditions. The results suggest that London's economy would recover over approximately 70 months by applying a proportional rationing scheme under the assumption of initial 50% labor loss (with full recovery in six months), 40% initial loss to service sectors, and 10-30% initial loss to other sectors. The results also suggest that imbalance will be the norm during the postdisaster period of economic recovery even though balance may occur temporarily. Model sensitivity analysis suggests that a proportional rationing scheme may be an effective strategy to apply during postdisaster economic reconstruction, and that policies in transportation recovery and in health care are essential for effective postdisaster economic recovery. © 2013 Society for Risk Analysis.

  18. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Three-scale input-output modeling for urban economy: Carbon emission by Beijing 2007

    Science.gov (United States)

    Chen, G. Q.; Guo, Shan; Shao, Ling; Li, J. S.; Chen, Zhan-Ming

    2013-09-01

    For urban economies, an ecological endowment embodiment analysis has to be supported by endowment intensities at both the international and domestic scales to reflect the international and domestic imports of increasing importance. A three-scale input-output modeling for an urban economy, giving nine categories of embodiment fluxes, is presented in this paper by a case study on the carbon dioxide emissions by the Beijing economy in 2007, based on the carbon intensities for the average world and national economies. The total direct emissions are estimated at 1.03E+08 t, of which 91.61% is energy-related emissions. By the modeling, emissions embodied in fixed capital formation amount to 7.20E+07 t, emissions embodied in household consumption are 1.58 times those in government consumption, and emissions in gross capital formation are 14.93% more than those in gross consumption. As a net exporter of carbon emissions, Beijing exports 5.21E+08 t carbon embodied in foreign exported commodities and 1.06E+08 t in domestic exported commodities, while emissions embodied in foreign and domestic imported commodities are 3.34E+07 and 1.75E+08 t respectively. The algorithm presented in this study is applicable to the embodiment analysis of other environmental resources for regional economies characterized by multiple scales.
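Embodiment accounting of this kind rests on the Leontief inverse: embodied intensities ε satisfy ε = d + ε A, i.e. ε = d (I − A)⁻¹, where d holds direct emission intensities and A the technical coefficients. The two-sector numbers below are illustrative only, not Beijing data.

```python
import numpy as np

# Hypothetical 2-sector technical coefficient matrix A (column j gives
# inputs from each sector per unit output of sector j) and direct
# carbon intensities d (t CO2 per unit of output).
A = np.array([[0.1, 0.3],
              [0.2, 0.1]])
d = np.array([2.0, 0.5])

# Embodied (direct + indirect) intensities: eps = d (I - A)^-1
eps = d @ np.linalg.inv(np.eye(2) - A)

# Emissions embodied in a final-demand bundle y (e.g., household
# consumption, capital formation, or exports).
y = np.array([10.0, 5.0])
embodied = eps @ y
```

The three-scale extension of the paper applies this same operation with world-average, national-average, and local intensities to value the foreign, domestic, and local flux categories separately.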

  20. Three-Verb Clusters in Interference Frisian: A Stochastic Model over Sequential Syntactic Input.

    Science.gov (United States)

    Hoekstra, Eric; Versloot, Arjen

    2016-03-01

    Interference Frisian (IF) is a variety of Frisian, spoken by mostly younger speakers, which is heavily influenced by Dutch. IF exhibits all six logically possible word orders in a cluster of three verbs. This phenomenon has been researched by Koeneman and Postma (2006), who argue for a parameter theory, which leaves frequency differences between various orders unexplained. Rejecting Koeneman and Postma's parameter theory, but accepting their conclusion that Dutch (and Frisian) data are input for the grammar of IF, we will argue that the word order preferences of speakers of IF are determined by frequency and similarity. More specifically, three-verb clusters in IF are sensitive to: their linear left-to-right similarity to two-verb clusters and three-verb clusters in Frisian and in Dutch; the (estimated) frequency of two- and three-verb clusters in Frisian and Dutch. The model will be shown to work best if Dutch and Frisian, and two- and three-verb clusters, have equal impact factors. If different impact factors are taken, the model's predictions do not change substantially, testifying to its robustness. This analysis is in line with recent ideas that the sequential nature of human speech is more important to syntactic processes than commonly assumed, and that less burden need be put on the hierarchical dimension of syntactic structure.

  1. Realistic modelling of the seismic input: Site effects and parametric studies

    International Nuclear Information System (INIS)

    Romanelli, F.; Vaccari, F.; Panza, G.F.

    2002-11-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and structural models, allows the construction of damage scenarios that are out of the reach of stochastic models, at a very low cost/benefit ratio. (author)

  2. Including operational data in QMRA model: development and impact of model inputs.

    Science.gov (United States)

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis approach (QMRA), has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations above the detection limit (DL), and a uniform distribution for concentrations below the DL; the choice of these input distributions affected the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
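A minimal Monte Carlo sketch of the QMRA chain in this record runs: sample a raw-water concentration, apply a treatment removal credit, convert to a dose, apply a dose-response model, and annualize. All parameter values below are invented placeholders, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Raw-water (oo)cyst concentration per litre: log-normal, as used for
# concentrations above the detection limit. Parameters are hypothetical.
conc = rng.lognormal(mean=-2.0, sigma=1.0, size=n)

log_removal = 3.0   # hypothetical treatment removal credit (log10 units)
volume_l = 1.0      # hypothetical daily unboiled tap-water intake (L)
r = 0.004           # hypothetical exponential dose-response parameter

dose = conc * 10.0 ** (-log_removal) * volume_l
p_daily = 1.0 - np.exp(-r * dose)          # per-day probability of infection
p_annual = 1.0 - (1.0 - p_daily) ** 365    # annualized risk per sample
mean_annual_risk = float(p_annual.mean())
```

The study's comparisons (spore-based removal credits, turbidity-based regulatory credits, SCADA-informed CT) amount to swapping in different distributions for `log_removal`, which is why the mean annual risks in the abstract span more than two orders of magnitude.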

  3. Satellite, climatological, and theoretical inputs for modeling of the diurnal cycle of fire emissions

    Science.gov (United States)

    Hyer, E. J.; Reid, J. S.; Schmidt, C. C.; Giglio, L.; Prins, E.

    2009-12-01

    The diurnal cycle of fire activity is crucial for accurate simulation of atmospheric effects of fire emissions, especially at finer spatial and temporal scales. Estimating diurnal variability in emissions is also a critical problem for construction of emissions estimates from multiple sensors with variable coverage patterns. An optimal diurnal emissions estimate will use as much information as possible from satellite fire observations, compensate known biases in those observations, and use detailed theoretical models of the diurnal cycle to fill in missing information. As part of ongoing improvements to the Fire Location and Monitoring of Burning Emissions (FLAMBE) fire monitoring system, we evaluated several different methods of integrating observations with different temporal sampling. We used geostationary fire detections from WF_ABBA, fire detection data from MODIS, empirical diurnal cycles from TRMM, and simple theoretical diurnal curves based on surface heating. Our experiments integrated these data in different combinations to estimate the diurnal cycles of emissions for each location and time. Hourly emissions estimates derived using these methods were tested using an aerosol transport model. We present results of this comparison, and discuss the implications of our results for the broader problem of multi-sensor data fusion in fire emissions modeling.

  4. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Full Text Available Studies have demonstrated that precipitation on Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM) simulation. In this study a Regional Atmospheric Climate Model (RACMO2) run is used, forced by ECHAM5/MPIOM under the condition of the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias on which two bias correction methods are applied. The first method corrects the wet-day fraction and wet-day average (WD bias correction), and the second corrects the mean and coefficient of variation (MV bias correction). The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed with this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using RACMO2 output as forcing to the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias corrected run and the MV bias corrected run. These results show the importance of an appropriate bias correction.
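A wet-day (WD) style correction can be sketched on a daily precipitation series: threshold the modelled series so its wet-day fraction matches observations, then rescale the remaining wet days to match the observed wet-day mean. The target statistics and synthetic "model" data below are invented, not the RACMO2 values.

```python
import numpy as np

def wd_correction(model, obs_wet_fraction, obs_wet_mean):
    """Wet-day bias correction: set the drizzle below a fitted threshold
    to zero so the wet-day fraction matches observations, then scale the
    surviving wet days to match the observed wet-day mean."""
    thresh = np.quantile(model, 1.0 - obs_wet_fraction)
    corrected = np.where(model > thresh, model, 0.0)
    wet = corrected > 0
    corrected[wet] *= obs_wet_mean / corrected[wet].mean()
    return corrected

# Synthetic drizzle-heavy model output (mm/day).
rng = np.random.default_rng(7)
model = rng.gamma(shape=0.8, scale=3.0, size=1000)

corrected = wd_correction(model, obs_wet_fraction=0.4, obs_wet_mean=6.0)
wet_frac = float((corrected > 0).mean())
wet_mean = float(corrected[corrected > 0].mean())
```

Zeroing days in this way is exactly the mechanism behind the abstract's finding that the WD method removed too many successive precipitation days: the threshold operates per day with no regard for the sequence structure.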

  5. Modeling first impressions from highly variable facial images.

    Science.gov (United States)

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.

  6. Large uncertainty in soil carbon modelling related to method of calculation of plant carbon input to agricultural systems

    DEFF Research Database (Denmark)

    Keel, S G; Leifeld, Jens; Mayer, Julius

    2017-01-01

    …referred to as soil carbon inputs (C). The soil C inputs from plants are derived from measured agricultural yields using allometric equations. Here we compared the results of five previously published equations. Our goal was to test whether the choice of method is critical for modelling soil C and if so … with the model C-TOOL showed that calculated SOC stocks were affected strongly by the choice of the allometric equation. With four equations, a decrease in SOC stocks was simulated, whereas with one equation there was no change. This considerable uncertainty in modelled soil C is attributable solely … to the allometric equation used to estimate the soil C input. We identify the evaluation and selection of allometric equations and associated coefficients as critical steps when setting up a model-based soil C inventory for agricultural systems…

  7. Hydrological and sedimentological modeling of the Okavango Delta, Botswana, using remotely sensed input and calibration data

    Science.gov (United States)

    Milzow, C.; Kgotlhang, L.; Kinzelbach, W.; Bauer-Gottwein, P.

    2006-12-01

    …medium-term. The Delta's size and limited accessibility make direct data acquisition on the ground difficult. Remote sensing methods are the most promising source of spatially distributed data for both model input and calibration. Besides ground data, METEOSAT and NOAA data are used for precipitation and evapotranspiration inputs, respectively. The topography is taken from Gumbricht et al. (2004), where the SRTM shuttle mission data are refined using remotely sensed vegetation indices. The aquifer thickness was determined with an aeromagnetic survey. For calibration, the simulated flooding patterns are compared to patterns derived from satellite imagery: recent ENVISAT ASAR and older NOAA AVHRR scenes. The final objective is to better understand the hydrological and hydraulic aspects of this complex ecosystem and eventually predict the consequences of human interventions. The model will provide a tool for decision makers to assess the impact of possible upstream dams and water abstraction scenarios.

  8. Smoke inputs to climate models: optical properties and height distribution for nuclear winter studies

    International Nuclear Information System (INIS)

    Penner, J.E.; Haselman, L.C. Jr.

    1985-04-01

    Smoke from fires produced in the aftermath of a major nuclear exchange has been predicted to cause large decreases in land surface temperatures. The extent of the decrease and even the sign of the temperature change depend on the optical characteristics of the smoke and how it is distributed with altitude. The height distribution of smoke over a fire is determined by the amount of buoyant energy produced by the fire and the amount of energy released by the latent heat of condensation of water vapor. The optical properties of the smoke depend on the size distribution of smoke particles, which changes due to coagulation within the lofted plume. We present calculations demonstrating these processes and estimate their importance for the smoke source term input for climate models. For high initial smoke densities and for absorbing smoke (m = 1.75 - 0.3i), coagulation of smoke particles within the smoke plume is predicted to first increase, then decrease, the size-integrated extinction cross section. However, at the smoke densities predicted in our model (assuming a 3% emission rate for smoke) and for our assumed initial size distribution, the attachment rates for Brownian and turbulent collision processes are not fast enough to alter the smoke size distribution sufficiently to change the integrated extinction cross section significantly. Early-time coagulation is, however, fast enough to allow further coagulation, on longer time scales, to decrease the extinction cross section. On the longer time scales appropriate to climate models, coagulation can decrease the extinction cross section by almost a factor of two before the smoke becomes well mixed around the globe. This process has been neglected in past climate effect evaluations, but could have a significant effect, since the extinction cross section enters as an exponential factor in calculating the light attenuation due to smoke. 10 refs., 20 figs.

  9. Structural Modeling of Institutional Variables and Undergraduates ...

    African Journals Online (AJOL)

    Peer influence and facilities for research were the major exogenous variables while students' perception of their supervisors' commitment to research supervision was a critical variable that influences their attitude towards research projects. We suggest that research supervisors be firm and discreet in the supervision and ...

  10. Effect of Flux Adjustments on Temperature Variability in Climate Models

    International Nuclear Information System (INIS)

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-01-01

    It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of the observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess the variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux-adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of the observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  11. Parametric modeling of DSC-MRI data with stochastic filtration and optimal input design versus non-parametric modeling.

    Science.gov (United States)

    Kalicka, Renata; Pietrenko-Dabrowska, Anna

    2007-03-01

    In this paper MRI measurements are used for the assessment of brain tissue perfusion and other features and functions of the brain (cerebral blood flow - CBF, cerebral blood volume - CBV, mean transit time - MTT). Perfusion is an important indicator of tissue viability and functioning, as in pathological tissue the blood flow and the vascular and tissue structure are altered with respect to normal tissue. MRI enables diagnosing diseases at an early stage of their course. The parametric and non-parametric approaches to the identification of MRI models are presented and compared. The non-parametric modeling adopts gamma variate functions. The parametric three-compartmental catenary model, based on the general kinetic model, is also proposed. The parameters of the models are estimated on the basis of experimental data. The goodness of fit of the gamma variate and the three-compartmental models to the data and the accuracy of the parameter estimates are compared. Kalman filtering, which smooths the measurements, was adopted to improve the estimation accuracy of the parametric model. Parametric modeling gives a better fit and better parameter estimates than non-parametric modeling, and allows an insight into the functioning of the system. To improve the accuracy further, optimal experiment design of the input signal was performed.
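    The gamma variate model mentioned above, C(t) = K (t - t0)^alpha exp(-(t - t0)/beta), can be fitted by the classic log-linearization trick, since for t > t0 the logarithm is linear in (1, log(t - t0), t - t0). The parameter values below are illustrative, and the bolus arrival time t0 is assumed known:

```python
import numpy as np

def gamma_variate(t, K, t0, alpha, beta):
    """Gamma variate model of a first-pass contrast bolus:
    C(t) = K * (t - t0)^alpha * exp(-(t - t0)/beta) for t > t0, else 0."""
    dt = np.clip(t - t0, 0.0, None)
    return K * dt ** alpha * np.exp(-dt / beta)

def fit_gamma_variate(t, c, t0):
    """Log-linear least-squares fit of (K, alpha, beta), t0 known:
    log C = log K + alpha*log(dt) - dt/beta, linear in (1, log dt, dt)."""
    mask = (t > t0) & (c > 0.0)
    dt = t[mask] - t0
    X = np.column_stack([np.ones(dt.size), np.log(dt), dt])
    coef, *_ = np.linalg.lstsq(X, np.log(c[mask]), rcond=None)
    return np.exp(coef[0]), coef[1], -1.0 / coef[2]

# Synthetic concentration-time curve (arbitrary units, illustrative values).
t = np.linspace(0.0, 60.0, 121)
c = gamma_variate(t, K=1.5, t0=5.0, alpha=3.0, beta=1.8)
K_hat, alpha_hat, beta_hat = fit_gamma_variate(t, c, t0=5.0)
```

    With noisy data the log transform distorts the error weighting, which is one motivation for the parametric compartmental approach with Kalman smoothing described in the abstract.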

  12. Ecological input-output modeling for embodied resources and emissions in Chinese economy 2005

    Science.gov (United States)

    Chen, Z. M.; Chen, G. Q.; Zhou, J. B.; Jiang, M. M.; Chen, B.

    2010-07-01

    To quantify the embodiment of natural resources and environmental emissions in the 2005 Chinese economy, a biophysical balance modeling is carried out based on an extension of the economic input-output table into an ecological one that integrates the economy with its various environmental driving forces. The resource flows into the primary resource sectors and the environmental emission flows from the primary emission sectors belong to seven categories: energy resources in terms of fossil fuels, hydropower and nuclear energy, biomass, and other sources; freshwater resources; greenhouse gas emissions in terms of CO2, CH4, and N2O; industrial wastes in terms of waste water, waste gas, and waste solid; exergy in terms of fossil fuel resources, biological resources, mineral resources, and environmental resources; and solar emergy and cosmic emergy in terms of climate resources, soil, fossil fuels, and minerals. The resulting database of embodiment intensities and sectoral embodiments of natural resources and environmental emissions has essential implications in the context of systems ecology and ecological economics in general, and of global climate change in particular.
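    The embodiment calculation at the heart of this kind of ecological input-output model can be sketched with the Leontief inverse: for direct resource inputs r per unit sectoral output and a technical coefficient matrix A, the embodied intensity vector eps satisfies eps = r + eps A, i.e. eps = r (I - A)^(-1). The 3-sector numbers below are illustrative, not the paper's data:

```python
import numpy as np

# Technical coefficient matrix A (intermediate input per unit output)
# and direct resource inputs r per unit output; illustrative values.
A = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
r = np.array([1.0, 0.5, 2.0])

# Embodied intensity per unit output: direct use plus the resource
# embodied in all intermediate inputs, via the Leontief inverse.
eps = r @ np.linalg.inv(np.eye(3) - A)
```

    The balance eps = r + eps A is exactly the biophysical input-output balance the abstract refers to, applied per resource or emission category.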

  13. The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input

    Science.gov (United States)

    Senner, Jill E.; Baud, Matthew R.

    2017-01-01

    An eight-step instruction model was used to train a self-contained classroom teacher, a speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction was primarily conducted during…

  14. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  15. A Variable Input-Output Model for Inflation, Growth, and Energy for the Korean Economy.

    Science.gov (United States)

    1983-12-01

    and the sales price of output as determinants of the technical coefficients were suggested by Walras [Ref. 4] and many other economists [Ref. 5]. Arrow … 1967, 1975 and 1979. Seoul, Korea: Research Department. … 4. Walras, L. Elements of Pure Economics (English Edition). George Allen and Unwin

  16. Urban pluvial flood prediction: a case study evaluating radar rainfall nowcasts and numerical weather prediction models as model inputs.

    Science.gov (United States)

    Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer

    2016-12-01

    Flooding produced by high-intensity local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate them numerically, both historically and in real time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar rainfall observations with different spatial and temporal resolutions, radar nowcasts with 0-2 h lead time, and numerical weather prediction models with lead times up to 24 h are used as inputs to an integrated flood and drainage system model in order to investigate the relative differences between inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real time with high-resolution radar rainfall data, but that forecast performance in predicting floods with lead times of more than half an hour is rather limited.

  17. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was used. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in these normal-hearing adults. A three-segment linear regression model applied to the DPOAE I/O functions supplied estimates of compression thresholds, defined as the breakpoints between linear and compressed segments, and of the slopes of the compressed segments. The relationship between DPOAE compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
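    A three-segment linear regression of the kind described can be sketched as a brute-force breakpoint search: fit an ordinary least-squares line per segment and keep the split with the lowest total error. This is an illustrative sketch of the approach, not the authors' exact procedure (which may, for instance, enforce continuity at the breakpoints):

```python
import numpy as np

def segment_sse(x, y):
    """SSE and (slope, intercept) of an OLS line through (x, y)."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r), coef

def three_segment_fit(x, y, min_pts=3):
    """Search all breakpoint pairs; return the two breakpoints and the
    three (slope, intercept) pairs with minimum total SSE. The first
    breakpoint plays the role of a compression threshold, and the middle
    slope that of the compressed-segment slope."""
    n = len(x)
    best = None
    for i in range(min_pts, n - 2 * min_pts + 1):
        for j in range(i + min_pts, n - min_pts + 1):
            sse, coefs = 0.0, []
            for lo, hi in ((0, i), (i, j), (j, n)):
                s, c = segment_sse(x[lo:hi], y[lo:hi])
                sse += s
                coefs.append(c)
            if best is None or sse < best[0]:
                best = (sse, x[i], x[j], coefs)
    _, bp1, bp2, coefs = best
    return bp1, bp2, coefs
```

    The O(n^2) search is cheap for the short level series (roughly 45-70 dB SPL in steps) used in I/O measurements.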

  18. Modeling sea-surface temperature and its variability

    Science.gov (United States)

    Sarachik, E. S.

    1985-01-01

    A brief review is presented of the temporal scales of sea surface temperature variability. Progress in modeling sea surface temperature and remaining obstacles to the understanding of its variability are discussed.

  19. In-process tool rotational speed variation with constant heat input in friction stir welding of AZ31 sheets with variable thickness

    Science.gov (United States)

    Buffa, Gianluca; Campanella, Davide; Forcellese, Archimede; Fratini, Livan; Simoncini, Michela

    2017-10-01

    In the present work, friction stir welding experiments were carried out on AZ31 magnesium alloy sheets characterized by a variable thickness along the welding line. The approach adopted during welding consisted in keeping the heat input to the joint constant. For this purpose, the rotational speed of the pin tool was increased with decreasing thickness and decreased with increasing thickness, in order to obtain the same temperatures during welding. The amount by which the rotational speed was changed as a function of the sheet thickness was defined on the basis of the results of FEM simulations of the FSW process. Finally, the effect of the in-process variation of the tool rotational speed on the mechanical and microstructural properties of the joints was analysed by comparing the nominal stress vs. nominal strain curves and the microstructures of joints obtained under different process conditions. It was observed that FSW performed by keeping the heat input to the joint constant leads to almost coincident results in terms of curve shape, ultimate tensile strength, ultimate elongation and microstructure.

  20. Hypothesis: Low frequency heart rate variability (LF-HRV) is an input for undisclosed yet biological adaptive control, governing the cardiovascular regulations to assure optimal functioning.

    Science.gov (United States)

    Gabbay, Uri; Bobrovsky, Ben Zion

    2012-02-01

    Cardiovascular regulation is considered today as having three levels: autoregulation, neural regulation and hormonal regulation. We hypothesize that cardiovascular regulation has an additional (fourth) control level: an outer, hierarchical (adaptive) loop in which the LF-HRV amplitude serves as a reference input that the neural cardiovascular center detects and responds to in order to maintain LF-HRV around some prescribed level. Supporting evidence: the absence of LF-HRV during artificial cardiac pacing may be associated with "pacemaker syndrome", which has not been sufficiently understood despite apparently unimpaired cardiovascular performance. The hypothesis may provide an essential basis for understanding several cardiovascular morbidities and insight toward diagnostic measures and treatments (including, but not limited to, adding variability to the pulse generator of artificial pacemakers to eliminate pacemaker syndrome). Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Generalized Network Psychometrics : Combining Network and Latent Variable Models

    NARCIS (Netherlands)

    Epskamp, S.; Rhemtulla, M.; Borsboom, D.

    2017-01-01

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between

  2. Decreased Hering-Breuer input-output entrainment in a mouse model of Rett syndrome

    Directory of Open Access Journals (Sweden)

    Rishi R Dhingra

    2013-04-01

    Full Text Available Rett syndrome, a severe X-linked neurodevelopmental disorder caused by mutations in the gene encoding methyl-CpG-binding protein 2 (Mecp2), is associated with a highly irregular respiratory pattern including severe upper-airway dysfunction. Recent work suggests that hyperexcitability of the Hering-Breuer reflex (HBR) pathway contributes to respiratory dysrhythmia in Mecp2 mutant mice. To assess how enhanced HBR input impacts respiratory entrainment by sensory afferents in closed-loop in vivo-like conditions, we investigated the input (vagal stimulus trains) - output (phrenic bursting) entrainment via the HBR in wild-type and Mecp2-deficient mice. Using the in situ perfused brainstem preparation, which maintains an intact pontomedullary axis capable of generating an in vivo-like respiratory rhythm in the absence of the HBR, we mimicked the HBR feedback input by stimulating the vagus nerve (at threshold current, 0.5 ms pulse duration, 75 Hz pulse frequency, 100 ms train duration) at an inter-burst frequency matching that of the intrinsic oscillation of the inspiratory motor output of each preparation. Using this approach, we observed significant input-output entrainment in wild-type mice as measured by the maximum of the cross-correlation function, the peak of the instantaneous relative phase distribution, and the mutual information of the instantaneous phases. This entrainment was associated with a reduction in inspiratory duration during feedback stimulation. In contrast, the strength of input-output entrainment was significantly weaker in Mecp2-/+ mice. However, Mecp2-/+ mice also had a reduced inspiratory duration during stimulation, indicating that reflex behavior in the HBR pathway was intact. Together, these observations suggest that the respiratory network compensates for enhanced sensitivity of HBR inputs by reducing HBR input-output entrainment.

  3. Evaluation of precipitation input for SWAT modeling in Alpine catchment: A case study in the Adige river basin (Italy).

    Science.gov (United States)

    Tuo, Ye; Duan, Zheng; Disse, Markus; Chiogna, Gabriele

    2016-12-15

    Precipitation is often the most important input data in hydrological models when simulating streamflow. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauge station that is nearest to the centroid of each subbasin, which is eventually corrected using the elevation band method. This leads in general to an inaccurate representation of subbasin precipitation input data, particularly in catchments with complex topography. To investigate the impact of different precipitation inputs on SWAT model simulations in Alpine catchments, 13 years (1998-2010) of daily precipitation data from four datasets, including OP (observed precipitation), IDW (Inverse Distance Weighting data), CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) and TRMM (Tropical Rainfall Measuring Mission), have been considered. Both model performance (comparing simulated and measured streamflow data at the catchment outlet) and parameter and prediction uncertainties have been quantified. For all three subbasins, the use of elevation bands is fundamental to match the water budget. Streamflow predictions obtained using IDW inputs are better than those obtained using the other datasets in terms of both model performance and prediction uncertainty. Models using the CHIRPS product as input provide satisfactory streamflow estimation, suggesting that this satellite product can be applied to this data-scarce Alpine region. Comparing the performance of SWAT models using different precipitation datasets is therefore important in data-scarce regions. This study has shown that precipitation is the main source of uncertainty, and different precipitation datasets in SWAT models lead to different best estimate ranges for the calibrated parameters. This has important implications for the interpretation of the simulated hydrological processes. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
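    Of the four precipitation inputs, IDW is the one constructed by interpolating gauge observations; a generic sketch is below (the dataset used in the study may use a different power exponent or neighbour selection):

```python
import numpy as np

def idw(xy_gauges, values, xy_target, power=2.0):
    """Inverse Distance Weighting: estimate precipitation at a target
    point as a weighted mean of gauge values with weights 1/d^power."""
    d = np.linalg.norm(xy_gauges - xy_target, axis=1)
    if np.any(d == 0.0):                 # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))
```

    Applied at each subbasin centroid, this replaces SWAT's single-nearest-gauge assignment with a smooth areal estimate.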

  4. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, M-Y; Liu, H-L [Graduate Institute of Medical Physics and Imaging Science, Chang Gung University, Taoyuan, Taiwan (China); Lee, T-H; Yang, S-T; Kuo, H-H [Stroke Section, Department of Neurology, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan (China); Chyi, T-K [Molecular Imaging Center Chang Gung Memorial Hospital, Taoyuan, Taiwan (China)], E-mail: hlaliu@mail.cgu.edu.tw

    2009-05-15

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging with the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar-imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting from 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For the quantitative analysis, CBF was calculated by deconvolution with an AIF obtained from the 10 voxels showing the greatest contrast enhancement. For the semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that when the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses followed a similar trend across the operative time points; when the AIFs differed, the CBF ratios differed as well. We concluded that by using local maxima one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.

  5. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and by high climatic variability and a wide altitude range (from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. at Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the whole territory, as mass movements affect about 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied to assess the relationships among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa) and the median ks value (1e-6 m/s) are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating the input parameter maps for HIRESS (static data). Further static inputs are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
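    A physically based simulator of this kind evaluates slope stability from exactly the inputs measured here (effective cohesion, friction angle, root cohesion, unit weight). A textbook infinite-slope factor-of-safety sketch, not HIRESS's actual formulation:

```python
import numpy as np

def infinite_slope_fs(c_eff, c_root, phi_eff_deg, gamma, z, beta_deg, u=0.0):
    """Generic infinite-slope factor of safety:
    FS = (c' + c_root + (gamma*z*cos^2(beta) - u) * tan(phi'))
         / (gamma*z*sin(beta)*cos(beta)).
    Units assumed: kPa for cohesions and pore pressure u, kN/m^3 for the
    soil unit weight gamma, m for slip depth z, degrees for angles."""
    beta = np.radians(beta_deg)
    phi = np.radians(phi_eff_deg)
    sigma_n = gamma * z * np.cos(beta) ** 2        # normal stress on slip plane
    tau = gamma * z * np.sin(beta) * np.cos(beta)  # driving shear stress
    return (c_eff + c_root + (sigma_n - u) * np.tan(phi)) / tau
```

    FS < 1 flags triggering conditions; the pore pressure term u is what rainfall infiltration raises in a real-time run, and c_root is the vegetation contribution mentioned above.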

  6. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: a shared input DEA-model.

    Science.gov (United States)

    Rogge, Nicky; De Jaeger, Simon

    2012-10-01

    This paper proposes an adjusted "shared-input" version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipal waste collection and processing performance in settings in which one input (waste costs) is shared among the treatment efforts for multiple municipal solid waste (MSW) fractions. The main advantage of this version of DEA is that it provides not only an estimate of each municipality's overall cost efficiency but also estimates of its cost efficiency in the treatment of the different MSW fractions. To illustrate the practical usefulness of the shared-input DEA model, we apply it to data on 293 municipalities in Flanders, Belgium, for the year 2008. Copyright © 2012 Elsevier Ltd. All rights reserved.
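    The shared-input variant builds on the standard input-oriented CCR DEA program, which can be sketched as a linear program: minimize the input contraction factor theta subject to a composite peer dominating the evaluated unit. This is plain CCR, without the cost-sharing extension of the paper:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k (envelopment form):
    min theta s.t. sum_j lam_j * x_j <= theta * x_k,
                   sum_j lam_j * y_j >= y_k, lam >= 0.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)           # decision vector z = [theta, lam_1..lam_n]
    c[0] = 1.0
    # Inputs:  sum_j lam_j x_ij - theta x_ik <= 0
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])
    # Outputs: -sum_j lam_j y_rj <= -y_rk
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[k]]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return float(res.fun)
```

    In the shared-input setting, the single cost input would additionally be split across MSW fractions with weight variables, which this sketch omits.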

  7. Enhancement of information transmission with stochastic resonance in hippocampal CA1 neuron models: effects of noise input location.

    Science.gov (United States)

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2007-01-01

    Stochastic resonance (SR) has been shown to enhance the signal-to-noise ratio or detection of signals in neurons. It is not yet clear how this effect of SR on the signal-to-noise ratio affects signal processing in neural networks. In this paper, we investigate the effects of the location of background noise input on information transmission in a hippocampal CA1 neuron model. In the computer simulation, random sub-threshold spike trains (signal) generated by a filtered homogeneous Poisson process were presented repeatedly to the middle point of the main apical branch, while homogeneous Poisson shot noise (background noise) was applied at a location on the dendrite of the hippocampal CA1 model, which consists of a soma with one sodium, one calcium, and five potassium channels. The location of the background noise input was varied along the dendrites to investigate its effect on information transmission. The computer simulation results show that the information rate reached a maximum at an optimal background noise amplitude, and that this optimal amplitude is independent of the distance between the soma and the noise input location. The results also show that the location of the background noise input does not significantly affect the maximum values of the information rates generated by stochastic resonance.
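    The stochastic resonance effect itself can be illustrated with a toy threshold unit: a sub-threshold periodic input plus noise produces spiking output whose correlation with the signal peaks at an intermediate noise amplitude. This is a generic demonstration with illustrative parameters, not the compartmental CA1 model of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def detection_corr(noise_sd, n=20000, amp=0.5, threshold=1.0):
    """Sub-threshold sinusoidal 'signal' plus Gaussian background noise
    passed through a hard threshold; returns the correlation between the
    signal and the binary spiking output (0.0 if no spikes occur)."""
    t = np.arange(n)
    signal = amp * np.sin(2 * np.pi * t / 100.0)  # never crosses threshold alone
    out = (signal + rng.normal(0.0, noise_sd, n) > threshold).astype(float)
    if out.std() == 0.0:
        return 0.0
    return float(np.corrcoef(signal, out)[0, 1])
```

    Too little noise gives no spikes at all, too much noise swamps the signal, and an intermediate level transmits it best, which is the non-monotonic information-rate curve described above.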

  8. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

    Directory of Open Access Journals (Sweden)

    Sabine eDemotes-Mainard

    2013-10-01

    Full Text Available Bush rose architecture, among other factors such as plant health, determines plant visual quality. The commercial product is the individual plant, and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs, to develop a model reproducing the organ expansion of real crops from non-destructive input variables. We took into account interplant variability in expansion kinetics and the model's ability to simulate this variability. Changes in leaflet and internode dimensions over thermal time were recorded during primary shoot expansion on 83 plants from three crops grown under different climatic conditions and densities. An empirical model was developed to reproduce the organ expansion kinetics of individual plants in a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions, and between plants at a given phytomer position, were due mostly to large differences in the time of organ expansion and the expansion rate, rather than to differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in rose bush that captures interplant variability.
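    The core of such a model is organ length as a logistic function of thermal time. A minimal sketch with illustrative parameter names (the study additionally predicts the duration-related parameters from crop-level values and takes final size and timing from input data):

```python
import numpy as np

def logistic_length(tt, L_final, t_mid, k):
    """Organ length as a logistic function of thermal time tt (degree-days):
    L(tt) = L_final / (1 + exp(-k * (tt - t_mid))).
    L_final: final organ length; t_mid: thermal time at half expansion;
    k: rate parameter. Names and values are illustrative."""
    return L_final / (1.0 + np.exp(-k * (tt - t_mid)))
```

    Interplant variability enters through per-plant L_final and t_mid, while k (linked to expansion duration) can be shared across plants, as described above.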

  9. Predictor variable resolution governs modeled soil types

    Science.gov (United States)

    Soil mapping identifies different soil types by compressing a unique suite of spatial patterns and processes across multiple spatial scales. It can be quite difficult to quantify spatial patterns of soil properties with remotely sensed predictor variables. More specifically, matching the right scale...

  10. Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models

    Science.gov (United States)

    Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria

    2017-08-01

In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin-spool turboshaft engine driving a variable pitch propeller includes various operating points. Variations in the fuel flow and propeller pitch inputs produce different operating conditions, which force the controller to adapt rapidly. The important operating points (idle, cruise, and full thrust) span the entire flight envelope. A multi-input multi-output (MIMO) version of second-level adaptation using multiple models is developed, and a stability analysis using the Lyapunov method is presented. The proposed method is compared with two conventional techniques: first-level adaptation and model reference adaptive control. Simulation results for the JetCat SPT5 turboshaft engine demonstrate the performance and fidelity of the proposed method.

  11. A novel methodology improves reservoir characterization models using geologic fuzzy variables

    Energy Technology Data Exchange (ETDEWEB)

    Soto B, Rodolfo [DIGITOIL, Maracaibo (Venezuela); Soto O, David A. [Texas A and M University, College Station, TX (United States)

    2004-07-01

One of the research projects carried out in the Cusiana field to explain its rapid decline in recent years aimed to obtain better permeability models. The reservoir of this field has a complex layered system that is not easy to model using conventional methods. The new technique included the development of porosity and permeability maps from cored wells, following the trend of sand deposition for each facies or layer according to the sedimentary facies and depositional system models. Then, we used fuzzy logic to reproduce those maps in three dimensions as geologic fuzzy variables. After multivariate statistical and factor analyses, we found independence and a good correlation coefficient between the geologic fuzzy variables and core permeability and porosity. This means the geologic fuzzy variables could explain the fabric, grain size and pore geometry of the reservoir rock throughout the field. Finally, we developed a neural network permeability model using porosity, gamma ray and the geologic fuzzy variable as input variables. This model has a cross-correlation coefficient of 0.873 and an average absolute error of 33%, compared with the previous model's correlation coefficient of 0.511 and absolute error greater than 250%. We tested different methodologies, and this new one proved to be a promising way to obtain better permeability models. The models have had a high impact on the explanation of well performance and workovers, and on reservoir simulation models. (author)

  12. Variable Fidelity Aeroelastic Toolkit - Structural Model, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  13. The Neurobiological Basis of Cognition: Identification by Multi-Input, Multioutput Nonlinear Dynamic Modeling

    Science.gov (United States)

    Berger, Theodore W.; Song, Dong; Chan, Rosa H. M.; Marmarelis, Vasilis Z.

    2010-01-01

The successful development of neural prostheses requires an understanding of the neurobiological bases of cognitive processes, i.e., how the collective activity of populations of neurons results in a higher level process not predictable based on knowledge of the individual neurons and/or synapses alone. We have been studying and applying novel methods for representing nonlinear transformations of multiple spike train inputs (multiple time series of pulse train inputs) produced by synaptic and field interactions among multiple subclasses of neurons arrayed in multiple layers of incompletely connected units. We have been applying our methods to the study of the hippocampus, a cortical brain structure that has been demonstrated, in humans and in animals, to perform the cognitive function of encoding new long-term (declarative) memories. Without their hippocampi, animals and humans retain short-term memory (memory lasting approximately 1 min) and long-term memory for information learned prior to loss of hippocampal function. Results of more than 20 years of studies have demonstrated that both individual hippocampal neurons and populations of hippocampal cells, e.g., the neurons comprising one of the three principal subsystems of the hippocampus, induce strong, higher order, nonlinear transformations of hippocampal inputs into hippocampal outputs. For one synaptic input or for a population of synchronously active synaptic inputs, such a transformation is represented by a sequence of action potential inputs being changed into a different sequence of action potential outputs. In other words, an incoming temporal pattern is transformed into a different, outgoing temporal pattern. For multiple, asynchronous synaptic inputs, such a transformation is represented by a spatiotemporal pattern of action potential inputs being changed into a different spatiotemporal pattern of action potential outputs. 
Our primary thesis is that the encoding of short-term memories into new, long

  14. Multi-wheat-model ensemble responses to interannual climatic variability

    DEFF Research Database (Denmark)

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  15. Application of Context Input Process and Product Model in Curriculum Evaluation: Case Study of a Call Centre

    Science.gov (United States)

    Kavgaoglu, Derya; Alci, Bülent

    2016-01-01

The goal of this research, carried out in reputable dedicated call centres within the Turkish telecommunication sector, is to evaluate competence-based curriculums designed by means of internal funding through Stufflebeam's context, input, process, product (CIPP) model. In the research, a general scanning pattern in the scope of…

  16. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast-enhanced MR (DCE-MR) imaging, and to compare it with individual manually selected and population-averaged AIFs using the calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated-measures tests. The different AIFs were used as inputs to the pharmacokinetic model, and correlation coefficients, Bland-Altman plots and analysis of variance tests were used to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated with obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
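A minimal sketch of the selection idea described above: run PCA on voxel time curves, then pick the component best correlated with a reference AIF. The data shapes, reference curve, and noise level are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic DCE-MR data: 500 voxel time curves, 40 time points (illustrative stand-in
# for a motion-corrected dynamic series reshaped to voxels x time)
n_voxels, n_times = 500, 40
t = np.arange(n_times)
reference_aif = t * np.exp(-t / 5.0)         # population-shaped reference curve (assumed)
signals = rng.normal(0.0, 0.1, (n_voxels, n_times))
signals[:50] += reference_aif                 # a minority of arterial-like voxels

# PCA on the time curves; keep a handful of components
pca = PCA(n_components=5)
pca.fit(signals)

# Select the principal component best correlated with the reference AIF
corrs = [abs(np.corrcoef(pc, reference_aif)[0, 1]) for pc in pca.components_]
best = int(np.argmax(corrs))
auto_aif = pca.components_[best]
if np.corrcoef(auto_aif, reference_aif)[0, 1] < 0:
    auto_aif = -auto_aif                      # resolve PCA's sign ambiguity
print(best, round(max(corrs), 2))
```

In practice the selected component (or the voxels loading on it) would then be scaled to concentration units before entering the pharmacokinetic model; that step is omitted here.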

  17. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

    Directory of Open Access Journals (Sweden)

    Pablo Rogers

    2015-01-01

The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. A sample of 555 individuals completed a self-completion questionnaire composed of psychological variables and scales. Using logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; (e) problems of self-control identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.

  18. Enhancement of regional wet deposition estimates based on modeled precipitation inputs

    Science.gov (United States)

    James A. Lynch; Jeffery W. Grimm; Edward S. Corbett

    1996-01-01

Application of a variety of two-dimensional interpolation algorithms to precipitation chemistry data gathered at scattered monitoring sites, for the purpose of estimating precipitation-borne ionic inputs at specific points or over regions, has failed to produce accurate estimates. The accuracy of these estimates is particularly poor in areas of high topographic relief....

  19. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Science.gov (United States)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981–2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R² = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  20. Variable Width Riparian Model Enhances Landscape and Watershed Condition

    Science.gov (United States)

    Abood, S. A.; Spencer, L.

    2017-12-01

Riparian areas are ecotones that represent about 1% of the USFS-administered landscape and contribute to numerous valuable ecosystem functions, such as wildlife habitat, stream water quality and flows, bank stability and protection against erosion, and values related to diversity, aesthetics and recreation. Riparian zones capture the transitional area between terrestrial and aquatic ecosystems, with specific vegetation and soil characteristics which provide critical values/functions and are very responsive to changes in land management activities and uses. Two staff areas at the US Forest Service have coordinated on a two-phase project to support the National Forests in their plan revision efforts and to address rangeland riparian business needs at the Forest Plan and Allotment Management Plan levels. The first part of the project will include a national fine-scale (USGS HUC-12 watersheds) inventory of riparian areas on National Forest Service lands in the western United States with riparian land cover, utilizing GIS capabilities and open-source geospatial data. The second part of the project will include the application of riparian land cover change and assessment, based on selected indicators, to assess and monitor riparian areas on an annual or five-year cycle. This approach recognizes the dynamic and transitional nature of riparian areas by accounting for hydrologic, geomorphic and vegetation data as inputs into the delineation process. The results suggest that incorporating functional variable-width riparian mapping within watershed management planning can improve riparian protection and restoration. The application of the Riparian Buffer Delineation Model (RBDM) approach can provide the agency Watershed Condition Framework (WCF) with observed riparian area condition on an annual basis and at multiple scales. 
The use of this model to map moderate to low gradient systems of sufficient width in conjunction with an understanding of the influence of distinctive landscape

  1. High resolution variability in the Quaternary Indian monsoon inferred from records of clastic input and paleo-production recovered during IODP Expedition 355

    Science.gov (United States)

    Hahn, Annette; Lyle, Mitchell; Kulhanek, Denise; Ando, Sergio; Clift, Peter

    2016-04-01

The sediment cores obtained from the Indus fan at Site U1457 during Expedition 355 of the International Ocean Discovery Program (IODP) contain a ca. 100 m spliced section covering the past ca. 1 Ma. We aim to make use of this unique long, mostly continuous climate archive to unravel the millennial-scale atmospheric and oceanic processes linked to changes in the Indian monsoon climate over the Quaternary glacial-interglacial cycles, using fast, cost-efficient methods (Fourier Transform Infrared Spectroscopy [FTIRS] and X-ray Fluorescence [XRF] scanning) which allow us to study this sequence at millennial-scale resolution (2–3 cm sampling interval). An important methodological aspect of this study is developing FTIRS as a method for the simultaneous estimation of the sediment's total inorganic carbon and organic carbon content, using the specific fingerprint absorption spectra of minerals (e.g. calcite) and organic sediment components. The resulting paleo-production proxies give indications of oceanic circulation patterns and serve as a direct comparison to the XRF scanning data. Initial results show that variability in paleo-production is accompanied by changes in the quantity and composition of clastic input to the site. Phases of increased deposition of terrigenous material are enriched in K, Al, Fe and Si. Changes in both the weathering and erosion focus areas affect the mineralogy and elemental composition of the clastic input, as grain size and mineralogical changes are reflected in the ratios of lighter to heavier elements. Furthermore, trace element compositions (Zn, Cu, Mn) give indications of diagenetic processes and contribute to the understanding of the depositional environment. The resulting datasets will lead to a more comprehensive understanding of the interplay of the local atmospheric and oceanic circulation processes over glacial-interglacial cycles; an essential prerequisite for regional predictions of global climate

  2. Stochastic modeling of interannual variation of hydrologic variables

    Science.gov (United States)

    Dralle, David; Karst, Nathaniel; Müller, Marc; Vico, Giulia; Thompson, Sally E.

    2017-07-01

Quantifying the interannual variability of hydrologic variables (such as annual flow volumes and solute or sediment loads) is a central challenge in hydrologic modeling. Annual or seasonal hydrologic variables are themselves the integral of instantaneous variations and can be well approximated as an aggregate sum of the daily variable. Process-based, probabilistic techniques are available to describe the stochastic structure of daily flow, yet estimating interannual variations in the corresponding aggregated variable requires consideration of the autocorrelation structure of the flow time series. Here we present a method based on a probabilistic streamflow description to obtain the interannual variability of flow-derived variables. The results provide insight into the mechanistic genesis of interannual variability of hydrologic processes. Such clarification can assist in the characterization of ecosystem risk and uncertainty in water resources management. We demonstrate two applications, one quantifying seasonal flow variability and the other quantifying net suspended sediment export.
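The point that aggregation requires the autocorrelation structure can be illustrated with a toy AR(1) daily series (an illustrative assumption, not the paper's streamflow model): ignoring autocorrelation badly underestimates the variance of the annual total.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_annual_totals(n_years=2000, n_days=365, rho=0.7, sigma=1.0):
    """Annual totals of an AR(1) daily series: x_t = rho * x_{t-1} + eps_t."""
    totals = np.empty(n_years)
    for y in range(n_years):
        eps = rng.normal(0.0, sigma, n_days)
        x = np.empty(n_days)
        x[0] = eps[0]
        for t in range(1, n_days):
            x[t] = rho * x[t - 1] + eps[t]
        totals[y] = x.sum()
    return totals

rho = 0.7
totals = simulate_annual_totals(rho=rho)
sigma_x2 = 1.0 / (1 - rho**2)                 # stationary daily variance
var_iid = 365 * sigma_x2                      # wrongly treating days as independent
var_ar1 = 365 * sigma_x2 * (1 + rho) / (1 - rho)  # large-n variance of the AR(1) sum
print(round(totals.var(), 1), round(var_ar1, 1), round(var_iid, 1))
```

For rho = 0.7 the correct interannual variance is larger than the independence-based estimate by a factor of (1 + rho)/(1 - rho), i.e. more than fivefold, which is why the paper's aggregation step must carry the flow autocorrelation through.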

  3. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision-making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. Variable selection is the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, as is often the case in present-day clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
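The tutorial provides R code; a compact Python sketch of the same idea might look like this: binary feature masks evolved by truncation selection, one-point crossover, and bit-flip mutation, with cross-validated logistic regression accuracy as the fitness. The GA settings and the synthetic data are illustrative assumptions, not the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic data: 20 candidate predictors, only a few informative (assumed setup)
X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           n_redundant=2, random_state=0)

def fitness(mask):
    """Mean cross-validated accuracy of logistic regression on the selected subset."""
    if not mask.any():
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, mask], y, cv=3).mean()

def ga_select(pop_size=20, generations=15, p_mut=0.05):
    pop = rng.random((pop_size, X.shape[1])) < 0.5       # random bit-masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]            # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, X.shape[1])            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(X.shape[1]) < p_mut      # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_mask, best_score = ga_select()
print(best_mask.sum(), round(best_score, 2))
```

Fitness here is plain CV accuracy; the tutorial's approach could equally use AIC or another criterion that penalizes subset size.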

  4. Development of a MODIS-Derived Surface Albedo Data Set: An Improved Model Input for Processing the NSRDB

    Energy Technology Data Exchange (ETDEWEB)

    Maclaurin, Galen [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Xie, Yu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gilroy, Nicholas [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimations of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimations of surface albedo from a reanalysis climatology produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3–5.0 μm) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolutions and the temporal coverage of the MODIS sensor will allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus the observations require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. 
Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the

  5. Uncertainty into statistical landslide susceptibility models resulting from terrain mapping units and landslide input data

    Science.gov (United States)

    Zêzere, José Luis; Pereira, Susana; Melo, Raquel; Oliveira, Sérgio; Garcia, Ricardo

    2017-04-01

There are multiple sources of uncertainty within statistically based landslide susceptibility assessment that need to be accounted for and monitored. In this work we evaluate and discuss differences observed in landslide susceptibility maps resulting from the selection of the terrain mapping unit and of the feature type used to represent landslides (polygon vs. point). The work is performed in the Silveira Basin (18.2 square kilometres), north of Lisbon, Portugal, using a unique database of geo-environmental landslide predisposing factors and an inventory of 81 shallow translational slides. Logistic regression is the statistical method selected to combine the predictive factors with the dependent variable. Four landslide susceptibility models were computed using the complete landslide inventory and the total landslide area over four different terrain mapping units: Slope Terrain Units (STU), Geo-Hydrological Terrain Units (GHTU), Census Terrain Units (CTU) and Grid Cell Terrain Units (GCTU). Four additional landslide susceptibility models were built over the same four terrain mapping units using a landslide training group (50% of the inventory, randomly selected); these models were independently validated with the remaining 50% of the landslide inventory (the landslide test group). Two further landslide susceptibility models were computed over GCTU, one using the landslide training group represented as point features at the landslide centroid, and the other using the centroid of the landslide rupture zone. In total, 10 landslide susceptibility maps were constructed and classified into 10 classes with equal numbers of terrain units to allow comparison. The prediction skill of the susceptibility models was evaluated using ROC metrics and success and prediction rate curves. Finally, the landslide susceptibility maps computed over GCTU were compared using the Kappa statistic. 
With this work we conclude that large differences

  6. Modeling Domain Variability in Requirements Engineering with Contexts

    Science.gov (United States)

    Lapouchnian, Alexei; Mylopoulos, John

    Various characteristics of the problem domain define the context in which the system is to operate and thus impact heavily on its requirements. However, most requirements specifications do not consider contextual properties and few modeling notations explicitly specify how domain variability affects the requirements. In this paper, we propose an approach for using contexts to model domain variability in goal models. We discuss the modeling of contexts, the specification of their effects on system goals, and the analysis of goal models with contextual variability. The approach is illustrated with a case study.

  7. Usability Evaluation of Variability Modeling by means of Common Variability Language

    Directory of Open Access Journals (Sweden)

    Jorge Echeverria

    2015-12-01

Common Variability Language (CVL) is a recent proposal for OMG's upcoming Variability Modeling standard. CVL models variability in terms of model fragments. Usability is a widely recognized quality criterion, essential to warrant the successful use of tools that put these ideas into practice. Facing the need to evaluate the usability of CVL modeling tools, this paper presents a usability evaluation of CVL applied to a modeling tool for firmware code of Induction Hobs. This evaluation addresses the configuration, scoping and visualization facets, and involved the end users of the tool, who are engineers of our Induction Hob industrial partner. Effectiveness and efficiency results indicate that model configuration in terms of model fragment substitutions is intuitive enough, but both scoping and visualization require improved tool support. The results also enabled us to identify a list of usability problems which may help alleviate scoping and visualization issues in CVL.

  8. A grey neural network and input-output combined forecasting model. Primary energy consumption forecasts in Spanish economic sectors

    International Nuclear Information System (INIS)

    Liu, Xiuli; Moreno, Blanca; García, Ana Salomé

    2016-01-01

A combined forecast of the Grey forecasting method and a neural network back-propagation model, called the Grey Neural Network and Input-Output Combined Forecasting Model (GNF-IO model), is proposed. A real case of energy consumption forecasting is used to validate the effectiveness of the proposed model. The GNF-IO model predicts coal, crude oil, natural gas, renewable and nuclear primary energy consumption volumes for Spain's 36 sub-sectors from 2010 to 2015 under three different GDP growth scenarios (optimistic, baseline and pessimistic). Model tests show that the proposed model has higher simulation and forecasting accuracy for energy consumption than the Grey models alone and other combination methods. The forecasts indicate that the primary energies coal, crude oil and natural gas will represent on average 83.6% of total primary energy consumption, raising concerns about security of supply and energy cost and adding risk to some industrial production processes. Thus, Spanish industry must speed up its transition to an energy-efficient economy, achieving cost reductions and an increased level of self-supply. - Highlights: • A forecasting system using Grey models combined with Input-Output models is proposed. • Primary energy consumption in Spain is used to validate the model. • The grey-based combined model has good forecasting performance. • Natural gas will represent the majority of total primary energy consumption. • Concerns about security of supply, energy cost and industry competitiveness are raised.
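The Grey component of such a combination is typically the classic GM(1,1) model: accumulate the series, fit a first-order grey differential equation by least squares, and difference the fitted accumulated series back. A minimal sketch follows; the demand series and horizon are assumed values for illustration, not the paper's Spanish energy data:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """GM(1,1) grey model: fit on series x0, return fitted values plus `horizon` forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # mean sequence of x1
    B = np.column_stack([-z1, np.ones(len(z1))])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # least-squares estimate of a, b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.diff(x1_hat, prepend=0.0)               # back to the original series
    x0_hat[0] = x0[0]
    return x0_hat

# Illustrative consumption series (assumed values)
demand = [100.0, 108.0, 118.0, 130.0, 143.0]
fc = gm11_forecast(demand, horizon=3)
print(fc.round(1))
```

In the GNF-IO scheme this univariate forecast would be one input among others (neural network corrections, input-output structure); the sketch covers only the grey building block.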

  9. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  10. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multidimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
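The MDS comparison step can be sketched on toy slip models: compute a pairwise misfit matrix, then embed it in two dimensions so that nearby points correspond to similar rupture models. The grid size, the simple normalized squared metric, and the two-cluster structure are assumptions for illustration, not the SIV or Tohoku data:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)

# Toy stand-ins for inverted slip models: 2-D slip distributions on a 10x10 fault grid,
# forming two clusters of similar models (real input would be published inversions).
base_a = rng.random((10, 10))
base_b = rng.random((10, 10))
models = [base_a + 0.05 * rng.random((10, 10)) for _ in range(5)] + \
         [base_b + 0.05 * rng.random((10, 10)) for _ in range(5)]

def normalized_squared_distance(m1, m2):
    """Normalized squared misfit between two slip models (one simple choice of metric)."""
    return np.sum((m1 - m2) ** 2) / np.sum(m1 ** 2)

n = len(models)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = normalized_squared_distance(models[i], models[j])
D = 0.5 * (D + D.T)   # symmetrize: the raw metric depends on which model normalizes

# Embed the pairwise dissimilarities in 2-D; clusters of points = clusters of models
coords = MDS(n_components=2, dissimilarity='precomputed',
             random_state=0).fit_transform(D)
print(coords.shape)
```

With a real model set, the spread and clustering of `coords` plays the role of the paper's similarity scale; a 3-D embedding (`n_components=3`) can be substituted when variability is large.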

  11. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

    models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

  12. Realistic modeling of seismic input for megacities and large urban areas

    Science.gov (United States)

    Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team

    2003-04-01

The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, particularly when dealing with complex geological structures, the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate site effects, using observations convolved with theoretically computed signals corresponding to simplified models, supply reliable information about the site response to non-interfering seismic phases; they are not adequate in most real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological and geophysical parameters, topography of the medium, tectonic, historical and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios, which represent a valid and economic tool for seismic microzonation. This knowledge can be fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  13. Variability of concrete properties: experimental characterisation and probabilistic modelling for calcium leaching

    International Nuclear Information System (INIS)

    De Larrard, Th.

    2010-09-01

Evaluating the durability of structures requires taking into account the variability of material properties. The thesis has two main aspects: on the one hand, an experimental campaign aimed at quantifying the variability of many indicators of concrete behaviour; on the other hand, a simple numerical model for calcium leaching developed in order to implement probabilistic methods for estimating the lifetime of structures such as those related to radioactive waste disposal. The experimental campaign consisted in following up two real building sites, quantifying the variability of these indicators, studying their correlation, and characterising the random-field variability of the considered variables (especially the correlation length). To draw conclusions from the accelerated leaching tests with ammonium nitrate while overcoming the effects of temperature, an inverse analysis tool based on the theory of artificial neural networks was developed. Simple numerical tools are presented to investigate the propagation of variability in durability issues, quantify the influence of this variability on the lifespan of structures, and relate the variability of the input parameters of the numerical model to that of the measurable physical quantities of the material. (author)

  14. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    Science.gov (United States)

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. 
We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for

  15. A study on the multi-dimensional spectral analysis for response of a piping model with two-seismic inputs

    International Nuclear Information System (INIS)

    Suzuki, K.; Sato, H.

    1975-01-01

Power and cross-power spectrum analysis, by which vibration characteristics of structures such as natural frequency, mode of vibration and damping ratio can be identified, is effective for confirming these characteristics after construction is completed, using the response to small earthquakes or micro-tremors under operating conditions. This method of analysis, previously utilized only for systems with a single input, is here extended to the analysis of a medium-scale model of a piping system subjected to two seismic inputs. The piping system, attached to a three-storied concrete structure model constructed on a shaking table, was excited by earthquake motions. The inputs to the piping system were recorded at the second floor and the ceiling of the third floor, where the system was attached. The output, the response of the piping system, was measured at a midpoint of the system. As a result, the multi-dimensional power spectrum analysis proves effective for a more reliable identification of the vibration characteristics of a multi-input structural system

  16. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

    Science.gov (United States)

    Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

    2017-05-01

    The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock are oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments are conducted to contribute to the knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has been proven as a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of similar flow mechanisms for unsaturated fracture flow, as previously observed in literature. Despite the imminent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can be used to give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.

  17. Variable amplitude fatigue, modelling and testing

    International Nuclear Information System (INIS)

    Svensson, Thomas.

    1993-01-01

Problems related to metal fatigue modelling and testing are treated here in four different papers. In the first paper, different views of the subject are summarised in a literature survey. In the second paper, a new model for fatigue life is investigated; experimental results are established which are promising for further development of the model. In the third paper, a method is presented that generates a stochastic process suitable for fatigue testing; the process is designed to resemble certain fatigue-related features of service-life processes. In the fourth paper, fatigue problems in transport vibrations are treated

  18. Modeling of the impact of Rhone River nutrient inputs on the dynamics of planktonic diversity

    Science.gov (United States)

    Alekseenko, Elena; Baklouti, Melika; Garreau, Pierre; Guyennon, Arnaud; Carlotti, François

    2014-05-01

conditions (for which the sea surface layer is well mixed). As a first step, these scenarios will allow us to investigate the impact of changes in the N:P ratios of the Rhone River on the structure of the planktonic community at short time scale (two years). Acknowledgements The present research is a contribution to the Labex OT-Med (n° ANR-11-LABX-0061) funded by the French Government «Investissements d'Avenir» program of the French National Research Agency (ANR) through the A*MIDEX project (n° ANR-11-IDEX-0001-02). We thank our colleague P. Raimbault for access to the MOOSE project dataset on the nutrient composition of the Rhone River. References Alekseenko E., Raybaud V., Espinasse B., Carlotti F., Queguiner B., Thouvenin B., Garreau P., Baklouti M. (2014) Seasonal dynamics and stoichiometry of the planktonic community in the NW Mediterranean Sea: a 3D modeling approach. Ocean Dynamics IN PRESS. http://dx.doi.org/10.1007/s10236-013-0669-2 Baklouti M, Diaz F, Pinazo C, Faure V, Quequiner B (2006a) Investigation of mechanistic formulations depicting phytoplankton dynamics for models of marine pelagic ecosystems and description of a new model. Prog Oceanogr 71:1-33 Baklouti M, Faure V, Pawlowski L, Sciandra A (2006b) Investigation and sensitivity analysis of a mechanistic phytoplankton model implemented in a new modular tool (Eco3M) dedicated to biogeochemical modelling. Prog Oceanogr 71:34-58 Lazure P, Dumas F (2008) An external-internal mode coupling for a 3D hydrodynamical model for applications at regional scale (MARS). Adv Water Resour 31(2):233-250 Ludwig, W., Dumont, E., Meybeck, M., Heussner, S. (2009). River discharges of water and nutrients to the Mediterranean and Black Sea: Major drivers for ecosystem changes during past and future decades? Progress in Oceanography 80, pp. 199-217 Malanotte-Rizoli, P. and Pan-Med Group. (2012) Physical forcing and physical/biochemical variability of the Mediterranean Sea: A review of unresolved issues and directions of

  19. An introduction to latent variable growth curve modeling concepts, issues, and application

    CERN Document Server

    Duncan, Terry E; Strycker, Lisa A

    2013-01-01

This book provides a comprehensive introduction to latent variable growth curve modeling (LGM) for analyzing repeated measures. It presents the statistical basis for LGM and its various methodological extensions, including a number of practical examples of its use. It is designed to take advantage of the reader's familiarity with analysis of variance and structural equation modeling (SEM) in introducing LGM techniques. Sample data, syntax, input, and output are provided for EQS, Amos, LISREL, and Mplus on the book's CD. Throughout the book, the authors present a variety of LGM techniques that are useful for many different research designs, and numerous figures provide helpful diagrams of the examples. Updated throughout, the second edition features three new chapters: growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group is...

  20. Terrestrial ecosystem recovery - Modelling the effects of reduced acidic inputs and increased inputs of sea-salts induced by global change

    DEFF Research Database (Denmark)

    Beier, C.; Moldan, F.; Wright, R.F.

    2003-01-01

    to 3 large-scale "clean rain" experiments, the so-called roof experiments at Risdalsheia, Norway; Gardsjon, Sweden, and Klosterhede, Denmark. Implementation of the Gothenburg protocol will initiate recovery of the soils at all 3 sites by rebuilding base saturation. The rate of recovery is small...... and base saturation increases less than 5% over the next 30 years. A climate-induced increase in storm severity will increase the sea-salt input to the ecosystems. This will provide additional base cations to the soils and more than double the rate of the recovery, but also lead to strong acid pulses...... following high sea-salt inputs as the deposited base cations exchange with the acidity stored in the soil. Future recovery of soils and runoff at acidified catchments will thus depend on the amount and rate of reduction of acid deposition, and in the case of systems near the coast, the frequency...

  1. SISTEM KONTROL OTOMATIK DENGAN MODEL SINGLE-INPUT-DUAL-OUTPUT DALAM KENDALI EFISIENSI UMUR-PEMAKAIAN INSTRUMEN

    Directory of Open Access Journals (Sweden)

    S.N.M.P. Simamora

    2014-10-01

Full Text Available An efficiency condition occurs when the ratio of the useful outputs to the total resources used approaches the value 1 (the absolute limit). An instrument achieves efficiency if its power output level decreases significantly less over the instrument's service life than in the previous condition, when the instrument was not equipped with the additional system (the proposed model improvement). It is even more effective if the inputs used act in unison to achieve a homogeneous output. In this research, an automatic control system with a single-input-dual-output model has been designed and implemented, in which the sampled instruments are a lamp and a fan. The source voltage used is AC (alternating current), and the system was tested using quantitative research methods and instrumentation (with observed measuring instruments). The results demonstrate a significant efficiency gain for the instruments under the single-input-dual-output model when the lamp and the fan were each tested separately and compared to the previous condition, and show that the design as built runs well.

  2. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    Science.gov (United States)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

To obtain more accurate meteorological inputs than those used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone using the assimilated meteorological fields, with improved wind fields, shows better agreement with observations than the forecast results. Under post-frontal conditions, the important wind-pattern factors for ozone modeling are the weak easterlies in the morning, which bring industrial emissions into the city, and the subsequent clockwise turning of the wind direction induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but cannot compensate for general flow-pattern biases inherited from the large-scale inputs. By using alternative analysis data to initialize the meteorological simulation, the model can reproduce the flow pattern and place the ozone peak closer to reality. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since the meteorological model has limitations in simulating precipitation and cloudiness on a fine-scale domain (grid spacing below 4 km), satellite-based cloud data are an alternative way to provide the necessary inputs for the retrospective air quality study.
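The analysis nudging used in such retrospective simulations adds a Newtonian relaxation term toward the analysis value to the model tendency. A minimal scalar sketch, with toy dynamics and illustrative coefficients (not the actual mesoscale-model configuration):

```python
# Newtonian relaxation ("nudging") toward an analysis value: the model
# tendency gains a term G * (x_analysis - x). All values are illustrative.
def integrate_nudged(x0, x_analysis, G, f, dt=0.1, n=400):
    x = x0
    for _ in range(n):
        x += (f(x) + G * (x_analysis - x)) * dt
    return x

f = lambda x: -0.1 * x          # toy model dynamics, biased toward zero
x_free  = integrate_nudged(5.0, 10.0, G=0.0, f=f)   # free run drifts to 0
x_nudge = integrate_nudged(5.0, 10.0, G=1.0, f=f)   # nudged run tracks analysis
print(round(x_free, 3), round(x_nudge, 3))
```

With G = 0 the run decays toward the model's own attractor; with G = 1 it settles near the analysis value, which is the effect the abstract describes for the wind bias.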

  3. Tool for obtaining projected future climate inputs for the WEPP and SWAT models

    Science.gov (United States)

    Climate change is an increasingly important issue affecting natural resources. Rising temperatures, reductions in snow cover, and variability in precipitation depths and intensities are altering the accepted normal approaches for predicting runoff, soil erosion, and chemical losses from upland areas...

  4. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features...... are implemented including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  5. Loss-efficiency model of single and variable-speed compressors using neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Liang [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); China R and D Center, Carrier Corporation, No.3239 Shen Jiang Road, Shanghai 201206 (China); Zhao, Ling-Xiao; Gu, Bo [Institute of Refrigeration and Cryogenics, Shanghai Jiaotong University, Shanghai 200240 (China); Zhang, Chun-Lu [China R and D Center, Carrier Corporation, No.3239 Shen Jiang Road, Shanghai 201206 (China)

    2009-09-15

The compressor is the component most critical to the performance of a vapor-compression refrigeration system. The loss-efficiency model, comprising the volumetric efficiency and the isentropic efficiency, is widely used to represent compressor performance. A neural network loss-efficiency model is developed to simulate the performance of positive-displacement compressors such as reciprocating, screw and scroll compressors. With one additional input, frequency, it is easily extended to variable-speed compressors. A three-layer polynomial perceptron network is developed because the polynomial transfer function is found to be very effective in training and free of over-learning. The selection of the input parameters of the neural networks is also found to be critical to the prediction accuracy. The proposed neural networks give less than 0.4% standard deviations and ±1.3% maximum deviations against the manufacturer data. (author)
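As a rough illustration of fitting a compressor efficiency map from manufacturer-style data, the sketch below fits a second-order polynomial with cross terms by least squares. The inputs (evaporating temperature, condensing temperature, frequency) and the synthetic data are assumptions for illustration only, not the paper's polynomial perceptron or its data.

```python
import numpy as np

# Hypothetical training data: evaporating temperature Te (°C), condensing
# temperature Tc (°C), frequency f (Hz) -> volumetric efficiency eta_v.
rng = np.random.default_rng(1)
Te = rng.uniform(-20, 10, 200)
Tc = rng.uniform(30, 60, 200)
f  = rng.uniform(30, 90, 200)
# Synthetic "manufacturer map" used only to generate data for this sketch.
eta_v = 0.95 - 0.004 * (Tc - Te) + 0.0005 * (f - 50) \
        + rng.normal(0, 0.002, 200)

def features(Te, Tc, f):
    # Second-order polynomial basis with cross terms.
    return np.column_stack([np.ones_like(Te), Te, Tc, f,
                            Te**2, Tc**2, f**2, Te*Tc, Te*f, Tc*f])

X = features(Te, Tc, f)
beta, *_ = np.linalg.lstsq(X, eta_v, rcond=None)
pred = X @ beta
rel_dev = np.std(pred - eta_v) / np.mean(eta_v)
print(f"relative standard deviation: {rel_dev:.4f}")
```

A least-squares polynomial stands in here for the trained network; the shared point is that frequency enters simply as one more input column.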

  6. Dynamics of a Birth-Pulse Single-Species Model with Restricted Toxin Input and Pulse Harvesting

    Directory of Open Access Journals (Sweden)

    Yi Ma

    2010-01-01

Full Text Available We consider a birth-pulse single-species model with restricted toxin input and pulse harvesting in a polluted environment. Pollution accumulates as a slowly decaying stock and is assumed to affect the growth of the renewable resource population. Firstly, by using the discrete dynamical system determined by the stroboscopic map, we obtain an exact period-1 solution of the system, whose birth function is the Ricker function or the Beverton-Holt function, and obtain the threshold conditions for its stability. Furthermore, we show that the timing of harvesting has a strong impact on the maximum annual sustainable yield; the best timing of harvesting is immediately after the birth pulses. Finally, we investigate the effect of the amount of toxin input on the stable resource population size. We find that when the birth rate is comparatively low, the population size decreases as toxin input increases, and that when the birth rate is high, the population size may first rise and then drop as toxin input increases.
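A stroboscopic map samples the population just after each pulse; the sketch below iterates a Ricker-type map with a pulse-harvesting fraction and converges to a stable period-1 solution. The map form and all parameter values are illustrative assumptions, not the paper's exact formulation (which also tracks the toxin stock).

```python
import numpy as np

def ricker_stroboscopic(x0, r=2.0, K=100.0, harvest=0.1, n_pulses=300):
    """Iterate a Ricker-type stroboscopic map with pulse harvesting:
    x_{k+1} = (1 - harvest) * x_k * exp(r * (1 - x_k / K)).
    Parameter values are illustrative, not taken from the paper."""
    x = x0
    orbit = [x]
    for _ in range(n_pulses):
        x = (1.0 - harvest) * x * np.exp(r * (1.0 - x / K))
        orbit.append(x)
    return np.array(orbit)

orbit = ricker_stroboscopic(10.0)
tail = orbit[-50:]
# After transients the orbit settles on a fixed point of the map,
# i.e. a period-1 solution of the pulsed system.
print(round(tail.mean(), 2))
```

Changing `harvest` or `r` and watching whether the tail settles, oscillates, or wanders is a quick way to explore the stability thresholds the abstract refers to.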

  7. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA) >200 μm²/d centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  8. A Polynomial Term Structure Model with Macroeconomic Variables

    Directory of Open Access Journals (Sweden)

    José Valentim Vicente

    2007-06-01

Full Text Available Recently, a myriad of factor models including macroeconomic variables have been proposed to analyze the yield curve. We present an alternative factor model in which term structure movements are captured by Legendre polynomials mimicking the statistical factor movements identified by Litterman and Scheinkman (1991). We estimate the model with Brazilian Foreign Exchange Coupon data, adopting a Kalman filter, under two versions: the first uses only latent factors and the second includes macroeconomic variables. We study its ability to predict out-of-sample term structure movements, compared to a random walk. We also discuss results on the impulse response function of macroeconomic variables.
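The idea of Legendre-polynomial factor loadings can be sketched directly: the first three polynomials in scaled maturity play the roles of level, slope and curvature, and a cross-sectional regression recovers the factors. The maturities and factor values below are hypothetical, and the Kalman filter used in the paper is not implemented here.

```python
import numpy as np

# Maturities scaled to [-1, 1], the interval on which Legendre
# polynomials are orthogonal. Values are illustrative.
maturities = np.linspace(0.25, 10.0, 40)    # years
x = 2.0 * (maturities - maturities.min()) \
    / (maturities.max() - maturities.min()) - 1.0

# First three Legendre polynomials: level, slope, curvature loadings.
P0 = np.ones_like(x)
P1 = x
P2 = 0.5 * (3.0 * x**2 - 1.0)
loadings = np.column_stack([P0, P1, P2])

# A hypothetical yield curve generated from three latent factors.
true_factors = np.array([0.10, -0.02, 0.005])
yields = loadings @ true_factors

# Cross-sectional OLS recovers the factors exactly in this
# noise-free sketch; in the paper this step is done by filtering.
estimated, *_ = np.linalg.lstsq(loadings, yields, rcond=None)
print(np.round(estimated, 4))
```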

  9. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

Selecting candidate predictor variables for the modelling of post-discharge mortality from sepsis: a protocol development project. Afri. Health Sci. .... Initial list of candidate predictor variables (N=17): Clinical: vital signs (HR, RR, BP, T), oxygen saturation; Laboratory: hemoglobin, blood culture; Social/Demographic: age, sex.

  10. Variable-Structure Control of a Model Glider Airplane

    Science.gov (United States)

    Waszak, Martin R.; Anderson, Mark R.

    2008-01-01

    A variable-structure control system designed to enable a fuselage-heavy airplane to recover from spin has been demonstrated in a hand-launched, instrumented model glider airplane. Variable-structure control is a high-speed switching feedback control technique that has been developed for control of nonlinear dynamic systems.

  11. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

In this paper, three modeling approaches, using a Neural Network (NN), the Response Surface Method (RSM), and a response surface method based on Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are manually selected based on the maximum correlations of input variables in the modeling approaches based on NN and RSM. The RSM is improved to select the input variables by using the error terms of the training data based on the GHS, termed the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, in terms of both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
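A minimal sketch of the RSM basis (second-order polynomial with cross terms) applied to antecedent values of a series, using a synthetic series in place of the sediment and discharge records; the GHS input-selection step is not reproduced.

```python
import numpy as np

def rsm_features(X):
    """Second-order polynomial features with cross terms (the RSM basis):
    1, x_i, x_i^2, and x_i * x_j for i < j."""
    n, p = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(p)]
    cols += [X[:, i] ** 2 for i in range(p)]
    cols += [X[:, i] * X[:, j] for i in range(p) for j in range(i + 1, p)]
    return np.column_stack(cols)

# Synthetic daily series standing in for the sediment-load record.
rng = np.random.default_rng(2)
t = np.arange(400)
s = 50 + 20 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 1.0, t.size)

# Three antecedent values s_{t-1}, s_{t-2}, s_{t-3} predict s_t.
X = np.column_stack([s[2:-1], s[1:-2], s[:-3]])
y = s[3:]
Phi = rsm_features(X)
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ beta
r = np.corrcoef(pred, y)[0, 1]
print(f"R = {r:.3f}")
```

With three inputs the basis has 10 terms; adding discharge lags, as in the paper, only widens `X`.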

  12. The role of additive neurogenesis and synaptic plasticity in a hippocampal memory model with grid-cell like input.

    Directory of Open Access Journals (Sweden)

    Peter A Appleby

    Full Text Available Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy compared to neuronal turnover and conventional synaptic plasticity as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. 
Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus

  13. Interdecadal variability in a global coupled model

    International Nuclear Information System (INIS)

    Storch, J.S. von.

    1994-01-01

Interdecadal variations are studied in a 325-year simulation performed by a coupled atmosphere-ocean general circulation model. The patterns obtained in this study may be considered characteristic patterns of interdecadal variations. 1. The atmosphere: interdecadal variations have no preferred time scales, but reveal well-organized spatial structures. They appear as two modes, one related to variations of the tropical easterlies and the other to the Southern Hemisphere westerlies. Both have red spectra. The amplitude of the associated wind anomalies is largest in the upper troposphere. The associated temperature anomalies are in thermal-wind balance with the zonal winds and are out of phase between the troposphere and the lower stratosphere. 2. The Pacific Ocean: the dominant mode in the Pacific appears to be wind-driven in the midlatitudes and is related to air-sea interaction processes during one stage of the oscillation in the tropics. Anomalies of this mode propagate westward in the tropics and northward (southwestward) in the North (South) Pacific on a time scale of about 10 to 20 years. (orig.)

  14. Using Enthalpy as a Prognostic Variable in Atmospheric Modelling with Variable Composition

    Science.gov (United States)

    2016-04-14

Wiley InterScience (www.interscience.wiley.com), DOI: 10.1002/qj.345. ABSTRACT: Specific enthalpy emerges from a general form of the … trajectories depending on sources, sinks, and fluxes of individual tracers. Specific enthalpy, h = cpT, (1) where cp is the specific heat capacity at

  15. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs.

    Science.gov (United States)

    Rosenbaum, Robert

    2016-01-01

Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public.
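For context, the Monte Carlo baseline that such diffusion approximations aim to replace can be sketched as an Euler-Maruyama simulation of an adaptive leaky integrate-and-fire neuron with white-noise synaptic input. All parameter values are illustrative, dimensionless assumptions, not those of the article.

```python
import numpy as np

def simulate_adaptive_lif(T=200.0, dt=0.01, mu=1.2, sigma=0.5,
                          tau_m=1.0, tau_a=10.0, b=0.3,
                          v_th=1.0, v_reset=0.0, seed=3):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron
    with an adaptation current a and white-noise input (dimensionless
    units; all parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v, a = v_reset, 0.0
    spikes = 0
    sq = np.sqrt(dt)
    for _ in range(n):
        v += (-v + mu - a) / tau_m * dt + sigma * sq * rng.standard_normal()
        a += -a / tau_a * dt
        if v >= v_th:
            v = v_reset
            a += b            # spike-triggered adaptation increment
            spikes += 1
    return spikes / T         # mean firing rate

rate = simulate_adaptive_lif()
print(f"firing rate: {rate:.2f}")
```

Estimating the rate this way requires long simulations per parameter set, which is exactly the cost that solving the associated Fokker-Planck equations avoids.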

  16. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANNs, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise, and it does not depend on the sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
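
    The NARX structure, regressing the current output on lagged outputs and lagged exogenous inputs, can be sketched with a linear readout standing in for the paper's neural network; the data-generating system and all coefficients below are invented for illustration:

```python
import random

def make_series(n=400, seed=0):
    """Toy linear system: y[t] = 0.6 y[t-1] - 0.2 y[t-2] + 0.5 u[t-1]."""
    rng = random.Random(seed)
    u = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    y = [0.0, 0.0]
    for t in range(2, n):
        y.append(0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.5 * u[t - 1])
    return u, y

def fit_narx(u, y, lags=2, lr=0.05, epochs=200):
    """Fit a linear NARX readout by stochastic gradient descent."""
    w = [0.0] * (2 * lags)                       # weights for [y lags, u lags]
    for _ in range(epochs):
        for t in range(lags, len(y)):
            x = y[t - lags:t][::-1] + u[t - lags:t][::-1]
            err = sum(wi * xi for wi, xi in zip(w, x)) - y[t]
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

u, y = make_series()
w = fit_narx(u, y)
# One-step-ahead mean squared prediction error of the fitted readout.
mse = sum((sum(wi * xi for wi, xi in zip(w, y[t - 2:t][::-1] + u[t - 2:t][::-1]))
           - y[t]) ** 2 for t in range(2, len(y))) / (len(y) - 2)
```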

  17. Realistic modelling of the seismic input Site effects and parametric studies

    CERN Document Server

    Romanelli, F; Vaccari, F

    2002-01-01

    We illustrate the work done within a large international cooperation, presenting the recent numerical experiments carried out in the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and stru...

  18. The sensitivity of ecosystem service models to choices of input data and spatial resolution

    Science.gov (United States)

    Kenneth J. Bagstad; Erika Cohen; Zachary H. Ancona; Steven G. McNulty; Ge Sun

    2018-01-01

    Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address...

  19. Modeling moisture content of fine dead wildland fuels: Input to the BEHAVE fire prediction system

    Science.gov (United States)

    Richard C. Rothermel; Ralph A. Wilson; Glen A. Morris; Stephen S. Sackett

    1986-01-01

    Describes a model for predicting moisture content of fine fuels for use with the BEHAVE fire behavior and fuel modeling system. The model is intended to meet the need for more accurate predictions of fine fuel moisture, particularly in northern conifer stands and on days following rain. The model is based on the Canadian Fine Fuel Moisture Code (FFMC), modified to...

  20. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways, resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks', reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)
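
    The two sampling treatments compared above can be illustrated with a toy calculation: the total transport resistance along a streamline of ten segments, sampled either once per Monte Carlo realization (an ensemble-averaged property) or independently per segment (explicit spatial variability). The lognormal spread and segment count are hypothetical:

```python
import random
import statistics

def resistances(n_real=2000, n_seg=10, seed=42):
    rng = random.Random(seed)
    ensemble, per_segment = [], []
    for _ in range(n_real):
        r = rng.lognormvariate(0.0, 1.0)
        ensemble.append(n_seg * r)                       # one draw per realization
        per_segment.append(sum(rng.lognormvariate(0.0, 1.0)
                               for _ in range(n_seg)))   # varies along the streamline
    return ensemble, per_segment

ens, seg = resistances()
# Along-streamline variability partly averages out, so per-segment totals spread
# less: fewer extreme "fast pathways" and fewer extreme "bottlenecks".
spread_ratio = statistics.pstdev(ens) / statistics.pstdev(seg)
```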

  1. Embodied water analysis for Hebei Province, China by input-output modelling

    Science.gov (United States)

    Liu, Siyuan; Han, Mengyao; Wu, Xudong; Wu, Xiaofang; Li, Zhi; Xia, Xiaohua; Ji, Xi

    2018-03-01

    With the accelerating coordinated development of the Beijing-Tianjin-Hebei region, regional economic integration is recognized as a national strategy. As water scarcity places Hebei Province in a dilemma, it is of critical importance for Hebei Province to balance water resources as well as make full use of its unique advantages in the transition to sustainable development. To our knowledge, related embodied water accounting analyses have been conducted for Beijing and Tianjin, while similar work focused on Hebei is not found. In this paper, using the most complete and recent statistics available for Hebei Province, the embodied water use in Hebei Province is analyzed in detail. Based on input-output analysis, a complete systems accounting framework for water resources is presented. In addition, a database of embodied water intensities is proposed, applicable to both intermediate inputs and final demand. The results suggest that the total amount of embodied water in final demand is 10.62 billion m3, of which the water embodied in urban household consumption accounts for more than half. Hebei Province is a net importer of embodied water, with 17.20 billion m3 embodied in its commodity trade. The outcome of this work implies that it is particularly urgent to adjust industrial structure and trade policies for water conservation, to upgrade technology and to improve water utilization. To relieve water shortages in Hebei Province, it is therefore of crucial importance to regulate the balance of water use within the province, thus balancing water distribution among the various industrial sectors.
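
    The embodied-water accounting described above rests on standard input-output algebra: with technical-coefficient matrix A and direct water use d per unit output, the embodied intensity vector is w = d (I - A)^-1. A self-contained sketch with made-up three-sector numbers, not Hebei data:

```python
# Hypothetical 3-sector technical coefficients and direct water use.
A = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3],
     [0.2, 0.0, 0.1]]
d = [5.0, 1.0, 0.5]            # direct water use per unit output

def leontief_inverse(A):
    """Invert (I - A) by Gauss-Jordan elimination (stdlib only)."""
    n = len(A)
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)]
         + [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))   # partial pivot
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * c for v, c in zip(M[r], M[col])]
    return [row[n:] for row in M]

L = leontief_inverse(A)
# Embodied intensity = direct use plus all upstream indirect use.
w = [sum(d[k] * L[k][j] for k in range(3)) for j in range(3)]
y = [10.0, 20.0, 5.0]          # a final-demand vector
embodied = sum(w[j] * y[j] for j in range(3))
```

Because L = I + A + A² + ... is nonnegative, each embodied intensity is at least the direct intensity.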

  2. Multiple Imputation of Predictor Variables Using Generalized Additive Models

    NARCIS (Netherlands)

    de Jong, Roel; van Buuren, Stef; Spiess, Martin

    2016-01-01

    The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The

  3. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    MS received 9 February 2004; revised 19 June 2004; accepted 12 August 2004. Abstract. We have studied five-dimensional homogeneous cosmological models with variable G and bulk viscosity in Lyra geometry. Exact solutions for the field ...

  4. Higher-dimensional cosmological model with variable gravitational ...

    Indian Academy of Sciences (India)

    We have studied five-dimensional homogeneous cosmological models with variable G and bulk viscosity in Lyra geometry. Exact solutions for the field equations have been obtained and physical properties of the models are discussed. It has been observed that the results of the new models are well within the observational ...

  5. Input-dependent life-cycle inventory model of industrial wastewater-treatment processes in the chemical sector.

    Science.gov (United States)

    Köhler, Annette; Hellweg, Stefanie; Recan, Ercan; Hungerbühler, Konrad

    2007-08-01

    Industrial wastewater-treatment systems need to ensure a high level of protection for the environment as a whole. Life-cycle assessment (LCA) comprehensively evaluates the environmental impacts of complex treatment systems, taking into account impacts from auxiliaries and energy consumption as well as emissions. However, the application of LCA is limited by a scarcity of wastewater-specific life-cycle inventory (LCI) data. This study presents a modular gate-to-gate inventory model for industrial wastewater purification in the chemical and related sectors. It enables the calculation of inventory parameters as a function of the wastewater composition and the technologies applied. For this purpose, data on energy and auxiliaries' consumption, wastewater composition, and process parameters were collected from the chemical industry. On this basis, causal relationships between wastewater input, emissions, and technical inputs were identified. These causal relationships were translated into a generic inventory model. Generic and site-specific data ranges for LCI parameters are provided for the following processes: mechanical-biological treatment, high-pressure wet-air oxidation, nanofiltration, and extraction. The input- and technology-dependent process inventories help to bridge data gaps where primary data are not available. Thus, they substantially help to perform an environmental assessment of industrial wastewater purification in the chemical and associated industries, which may be used, for instance, for technology choices.

  6. Methodology for deriving hydrogeological input parameters for safety-analysis models - application to fractured crystalline rocks of Northern Switzerland

    International Nuclear Information System (INIS)

    Vomvoris, S.; Andrews, R.W.; Lanyon, G.W.; Voborny, O.; Wilson, W.

    1996-04-01

    Switzerland is one of many nations with nuclear power that is seeking to identify rock types and locations that would be suitable for the underground disposal of nuclear waste. A common challenge among these programs is to provide engineering designers and safety analysts with a reasonably representative hydrogeological input dataset that synthesizes the relevant information from direct field observations as well as inferences and model results derived from those observations. Needed are estimates of the volumetric flux through a volume of rock and the distribution of that flux into discrete pathways between the repository zones and the biosphere. These fluxes are not directly measurable but must be derived based on understandings of the range of plausible hydrogeologic conditions expected at the location investigated. The methodology described in this report utilizes conceptual and numerical models at various scales to derive the input dataset. The methodology incorporates an innovative approach, called the geometric approach, in which field observations and their associated uncertainty, together with a conceptual representation of those features that most significantly affect the groundwater flow regime, were rigorously applied to generate alternative possible realizations of hydrogeologic features in the geosphere. In this approach, the ranges in the output values directly reflect uncertainties in the input values. As a demonstration, the methodology is applied to the derivation of the hydrogeological dataset for the crystalline basement of Northern Switzerland. (author) figs., tabs., refs

  7. INTEGRATION OF INPUT - OUTPUT APPROACH INTO AGENT-BASED MODELING. PART 2. INTERREGIONAL ANALYSIS IN AN ARTIFICIAL ECONOMY

    Directory of Open Access Journals (Sweden)

    Domozhirov D. A.

    2017-06-01

    The article demonstrates the possibilities of spatial analysis provided by the Agent-Based Multiregional Input-Output Model (ABMIOM) of the Russian economy. The basic hypothesis of the ABMIOM is that agents' decisions at the microeconomic level lead to spatial changes at the macro level. Confirmation of this hypothesis requires experimental calculations with changes in various parameters that influence agents' decisions (such as prices, taxes and tariffs). Analyzing the results of these calculations requires moving from microeconomic data to the macro level. The paper proposes a method for the structural analysis of the model simulation results using input-output tables. The method involves statistical aggregation of calculation results; construction of regional, national and interregional input-output tables; and structural analysis of the obtained tables, including calculation of regional Leontief multipliers. The proposed method is used to study the influence of the level of transport costs on the geographical structure of trade flows. The results of the experiments confirmed that with the increase of transportation costs economic agents prefer to interact with the nearest agents, which leads to decreased interregional commodity exchange and to economic «insulation» of the regions.
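
    The regional Leontief multipliers mentioned above are column sums of (I - A)^-1, which can also be built up as the power series I + A + A^2 + ..., each term representing one further round of (interregional) input demand. A sketch with a hypothetical two-region, two-sector coefficient matrix:

```python
# Rows/columns index (region, sector) pairs; all coefficients are made up.
A = [[0.15, 0.10, 0.05, 0.00],
     [0.05, 0.20, 0.00, 0.10],
     [0.10, 0.00, 0.15, 0.10],
     [0.00, 0.05, 0.10, 0.20]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def leontief_series(A, rounds=60):
    """Approximate (I - A)^-1 by truncating the series I + A + A^2 + ..."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in L]
    for _ in range(rounds):
        term = matmul(term, A)
        L = [[L[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return L

L = leontief_series(A)
# Output multiplier of sector j: total output induced per unit of its final demand.
multipliers = [sum(L[i][j] for i in range(4)) for j in range(4)]
```

Every multiplier exceeds 1 because a unit of final demand triggers additional upstream production, part of it in the other region.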

  8. Subjective bias in PRA - the role of judgement in the selection of plant modeling input data for establishing safety goals

    International Nuclear Information System (INIS)

    Haenni, H.P.; Smith, A.L.

    1986-01-01

    The sources of the uncertainties are generally accepted as modeling deficiencies, lack of completeness in the analysis and the input data deficiencies. The role of judgement in selecting input data for establishing safety goals will be discussed. As an example, a safety goal for unacceptable radioactivity release will be considered. Two analysts are discussing the introduction of an emergency service water system, applying a different way of engineering judgement. Using PRA combined with safety goals as a decision-making tool it could have an important influence on the design and the costs of the plant. The suitability of the methodology has to be generally accepted before it will be established as a regulatory requirement. (orig.)

  9. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. The main advantages of the new model in description and physical explanation, compared with other models, are explored by numerical simulation. Differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior and heavy-tail phenomena, are also discussed.

  10. Simulation of a Classically Conditioned Response: Components of the Input Trace and a Cerebellar Neural Network Implementation of the Sutton-Barto-Desmond Model.

    Science.gov (United States)

    1987-09-14

    inputs. Tesauro (1986) has criticized the SB model on the grounds that it is only applicable in situations where inputs are represented locally... Barto, A.G. A temporal-difference model of classical conditioning. Technical Report TR87-509.2, GTE Labs, Waltham, Mass. (1987). Tesauro, G. Simple

  11. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  12. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    The thesis has two parts: a clinical part, studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part, investigating how to predict scores of latent variables so these can be used in subsequent regression... Different types of scores are shown to be superior depending on whether the latent variable is a dependent or an independent variable. Both types of scores are extended to the situation of differential item functioning. Analytically, I have shown that the scores result in consistent estimates when used properly in subsequent... of the syndrome. Thus, the results suggested that peripheral lipoatrophy and central lipohypertrophy are interrelated phenotypes rather than two independent phenotypes. Part 2: Latent class regression relates explanatory variables to latent classes. In this model no measure of the latent class variable is obtained...

  13. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from the instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m⁻² day⁻¹ and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
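
    The overestimation from using daily-average inputs follows from Jensen's inequality: the light response of photosynthesis is concave (saturating), so the response at the daily-mean irradiance exceeds the mean of the instantaneous responses. A toy illustration with a hypothetical saturating light curve and half-sinusoidal diurnal irradiance, not the paper's coupled model:

```python
import math

def photosynthesis(irradiance, p_max=20.0, alpha=0.05):
    """A generic saturating light-response curve (parameter values illustrative)."""
    return p_max * math.tanh(alpha * irradiance / p_max)

# Half-sinusoidal irradiance over daylight hours (06:00-18:00), hourly steps.
irr = [1000.0 * math.sin(math.pi * (h - 6) / 12) for h in range(6, 19)]

integrated = sum(photosynthesis(i) for i in irr) / len(irr)   # IDM-like: average the responses
average_in = photosynthesis(sum(irr) / len(irr))              # SADM-like: respond to the average
overestimate = (average_in - integrated) / integrated         # positive, by concavity
```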

  14. Adaptive Fault-Tolerant Control for Flight Systems with Input Saturation and Model Mismatch

    Directory of Open Access Journals (Sweden)

    Man Wang

    2013-01-01

    the original reference model may not be appropriate. Under this circumstance, an adaptive reference model which can also provide satisfactory performance is designed. Simulations of a flight control example are given to illustrate the effectiveness of the proposed scheme.

  15. Effect of Manure vs. Fertilizer Inputs on Productivity of Forage Crop Models

    Directory of Open Access Journals (Sweden)

    Pasquale Martiniello

    2011-06-01

    Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Appropriate agronomic management practices may transform manure from a waste into a resource. Experiments comparing manure with standard chemical fertilizers (CF) were conducted under a double-cropping-per-year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha−1, respectively. The manure, applied to the soil by broadcast and injection procedures, provided an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics depended on the models. Weather conditions, manure and CF showed little interaction among treatments. The number of MFU ha−1 of biomass crop gross product produced in the autumn- and spring-sowing models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha−1 under CF ranged from 10.7% to 13.2% relative to the manure models. Compared to model I, manure affected topsoil organic carbon and total nitrogen similarly to CF, with amounts higher in models II and III than in model IV. In percentage terms, relative to model I under manure, organic carbon and total nitrogen were reduced by about 18.5 and 21.9% in models II and III, and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute for CF without reducing gross production and sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.

  16. 'Fingerprints' of four crop models as affected by soil input data aggregation

    DEFF Research Database (Denmark)

    Angulo, Carlos; Gaiser, Thomas; Rötter, Reimund P

    2014-01-01

    In this study we used four crop models (SIMPLACE, DSSAT-CSM, EPIC and DAISY), differing in the detail of modeling above-ground biomass and yield as well as of modeling soil water dynamics, water uptake and drought effects on plants, to simulate winter wheat in two (agro-climatologically and geo-morphologically...

  17. Modelling the tongue-of-ionisation using CTIP with SuperDARN electric potential input: verification by radiotomography

    Directory of Open Access Journals (Sweden)

    S. E. Pryse

    2009-03-01

    Electric potential patterns obtained by the SuperDARN radar network are used as input to the Coupled Thermosphere-Ionosphere-Plasmasphere model, in an attempt to improve the modelling of the spatial distribution of the ionospheric plasma at high latitudes. Two case studies are considered, one under conditions of stable IMF Bz negative and the other under stable IMF Bz positive. The modelled plasma distributions are compared with sets of well-established tomographic reconstructions, which have been interpreted previously in multi-instrument studies. For IMF Bz negative both the model and observations show a tongue-of-ionisation on the nightside, with good agreement between the electron density and location of the tongue. Under Bz positive, the SuperDARN input allows the model to reproduce a spatial plasma distribution akin to that observed. In this case plasma, unable to penetrate the polar cap boundary into the polar cap, is drawn by the convective flow in a tongue-of-ionisation around the periphery of the polar cap.

  19. A biologically inspired model of bat echolocation in a cluttered environment with inputs designed from field Recordings

    Science.gov (United States)

    Loncich, Kristen Teczar

    Bat echolocation strategies and neural processing of acoustic information, with a focus on cluttered environments, are investigated in this study. How a bat processes the dense field of echoes received while navigating and foraging in the dark is not well understood. While several models have been developed to describe the mechanisms behind bat echolocation, most are based in mathematics rather than biology, and focus on either peripheral or neural processing, not exploring how these two levels of processing are vitally connected. Current echolocation models also do not use habitat-specific acoustic input, or account for field observations of echolocation strategies. Here, a new approach to echolocation modeling is described, capturing the full picture of echolocation from signal generation to a neural picture of the acoustic scene. A biologically inspired echolocation model is developed using field research measurements of the interpulse interval timing used by a frequency-modulating (FM) bat in the wild, with a whole-method approach to modeling echolocation including habitat-specific acoustic inputs, a biologically accurate peripheral model of sound processing by the outer, middle, and inner ear, and finally a neural model incorporating established auditory pathways and neuron types with echolocation adaptations. The field recordings analyzed underscore bat sonar design differences observed in the laboratory and in the wild, and suggest a correlation between interpulse interval groupings and increased clutter. The scenario model provides habitat- and behavior-specific echoes and is a useful tool for both modeling and behavioral studies, and the peripheral and neural models show that spike-time information and echolocation-specific neuron types can produce target localization in the midbrain.

  20. Short-term to seasonal variability in factors driving primary productivity in a shallow estuary: Implications for modeling production

    Science.gov (United States)

    Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott

    2013-10-01

    The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in primary productivity model parameters: chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and photosynthesis-irradiance response parameters (PmChl, αChl) were characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments were analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of models used in productivity modeling.

  1. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high-speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We used the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is validated by comparison with SPICE simulations.
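
    The delay and slew metrics above follow from modeling the normalized step response as a cumulative distribution function. Using the Burr XII form F(t) = 1 - (1 + (t/s)^c)^(-k), the 50% delay and 10-90% slew come from the closed-form quantile; the shape values below are arbitrary placeholders, not fitted to any RC line:

```python
def burr_cdf(t, c=2.0, k=1.5, scale=1.0):
    """Burr XII CDF, standing in for the normalized step response."""
    return 1.0 - (1.0 + (t / scale) ** c) ** (-k)

def burr_quantile(p, c=2.0, k=1.5, scale=1.0):
    """Closed-form inverse of the Burr XII CDF."""
    return scale * ((1.0 - p) ** (-1.0 / k) - 1.0) ** (1.0 / c)

delay_50 = burr_quantile(0.5)                    # 50% threshold-crossing delay
slew = burr_quantile(0.9) - burr_quantile(0.1)   # 10-90% transition time
```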

  2. Bayesian approach to errors-in-variables in regression models

    Science.gov (United States)

    Rozliman, Nur Aainaa; Ibrahim, Adriana Irawati Nur; Yunus, Rossita Mohammad

    2017-05-01

    In many applications and experiments, data sets are often contaminated with error or mismeasured covariates. When at least one of the covariates in a model is measured with error, an Errors-in-Variables (EIV) model can be used. Measurement error, when not corrected, causes misleading statistical inference and analysis. Therefore, our goal is to examine the relationship between the outcome variable and the unobserved exposure variable given the observed mismeasured surrogate, by applying the Bayesian formulation to the EIV model. We extend the flexible parametric method proposed by Hossain and Gustafson (2009) to another nonlinear regression model, the Poisson regression model. We then illustrate the application of this approach via a simulation study using Markov chain Monte Carlo sampling methods.
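
    Why uncorrected measurement error misleads inference can be seen in a small simulation: ordinary least squares on a mismeasured covariate attenuates the slope toward zero by the reliability ratio var(x) / (var(x) + var(error)). All numbers below are made up; this illustrates the problem that an EIV treatment corrects, not the authors' Bayesian method:

```python
import random

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(7)
true_beta = 2.0
x = [rng.gauss(0.0, 1.0) for _ in range(5000)]            # true exposure
y = [true_beta * xi + rng.gauss(0.0, 0.5) for xi in x]    # outcome
w = [xi + rng.gauss(0.0, 1.0) for xi in x]                # mismeasured surrogate

honest = ols_slope(x, y)   # near the true slope
naive = ols_slope(w, y)    # attenuated: roughly true_beta * 1 / (1 + 1)
```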

  3. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators that simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.

  4. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Directory of Open Access Journals (Sweden)

    Muayad Al-Qaisy

    2015-02-01

Full Text Available In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state-space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to obtain a control system that gives optimal performance, rejects high load disturbances, and tracks set-point changes. In order to study the performance of the two model predictive controllers, a MIMO Proportional-Integral-Derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). The NNMPC shows superior performance over the LMPC and PID controllers, with a smaller overshoot and shorter settling time.

  5. Better temperature predictions in geothermal modelling by improved quality of input parameters

    DEFF Research Database (Denmark)

    Fuchs, Sven; Bording, Thue Sylvester; Balling, N.

    2015-01-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties...

  6. Modeling spray drift and runoff-related inputs of pesticides to receiving water.

    Science.gov (United States)

    Zhang, Xuyang; Luo, Yuzhou; Goh, Kean S

    2018-03-01

Pesticides move to surface water via various pathways, including surface runoff, spray drift and subsurface flow. Little is known about the relative contributions of surface runoff and spray drift in agricultural watersheds. This study develops a modeling framework to address the contribution of spray drift to the total loadings of pesticides in receiving water bodies. The modeling framework consists of a GIS module for identifying drift potential, the AgDRIFT model for simulating spray drift, and the Soil and Water Assessment Tool (SWAT) for simulating various hydrological and landscape processes, including surface runoff and transport of pesticides. The modeling framework was applied to the Orestimba Creek Watershed, California. Monitoring data collected from daily samples were used for model evaluation. Pesticide mass deposition on Orestimba Creek ranged from 0.08 to 6.09% of applied mass. Monitoring data suggest that surface runoff was the major pathway for pesticides entering water bodies, accounting for 76% of the annual loading, with the remaining 24% from spray drift; the corresponding results from the modeling framework were 81 and 19%. Spray drift contributed over half of the mass loading during summer months. The slightly lower spray-drift contribution predicted by the modeling framework was mainly due to SWAT's under-prediction of pesticide mass loading during summer and over-prediction during winter. Although the model simulations were associated with various sources of uncertainty, the overall performance of the modeling framework was satisfactory as evaluated by multiple statistics: for the simulation of daily flow, the Nash-Sutcliffe Efficiency coefficient (NSE) ranged from 0.61 to 0.74 and the percent bias (PBIAS) […]. The modeling framework will be useful for assessing the relative exposure from pesticides related to spray drift and runoff in receiving waters and for the design of management practices for mitigating pesticide …
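The two evaluation statistics quoted above are straightforward to compute from paired observed and simulated series; a minimal sketch with made-up daily flow values:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, <= 0 is no better than the mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

def pbias(observed, simulated):
    """Percent bias: positive values indicate under-prediction on average."""
    return 100.0 * sum(o - s for o, s in zip(observed, simulated)) / sum(observed)

# Hypothetical daily flows, not the study's data.
obs = [1.2, 3.4, 2.8, 4.1, 2.0]
sim = [1.0, 3.6, 2.5, 4.0, 2.2]
nse_val = nse(obs, sim)
pbias_val = pbias(obs, sim)
```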

  7. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure; and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
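The search strategy can be sketched as a generic single-flip tabu search over variable-inclusion masks. The toy objective below (agreement with a hidden "true" subset, minus a size penalty) is purely a stand-in for a regression-based fit criterion such as R2; tenure and iteration counts are illustrative:

```python
import random

def tabu_search(score, n_vars, n_iter=200, tenure=5, seed=0):
    """Generic tabu search over variable-inclusion masks.

    score(mask) returns a goodness value to maximize; moves flip one bit,
    and recently flipped indices are tabu unless they beat the best score
    found so far (aspiration criterion).
    """
    rng = random.Random(seed)
    current = [rng.random() < 0.5 for _ in range(n_vars)]
    best, best_score = current[:], score(current)
    tabu = {}  # index -> last iteration at which the move is still tabu
    for it in range(n_iter):
        candidates = []
        for i in range(n_vars):
            neighbor = current[:]
            neighbor[i] = not neighbor[i]
            s = score(neighbor)
            if tabu.get(i, -1) >= it and s <= best_score:
                continue  # tabu move that fails the aspiration criterion
            candidates.append((s, i, neighbor))
        if not candidates:
            continue
        s, i, current = max(candidates, key=lambda t: t[0])
        tabu[i] = it + tenure
        if s > best_score:
            best, best_score = current[:], s
    return best, best_score

# Toy objective: reward selecting exactly the hidden "true" predictors.
true_subset = [True, False, True, False, False, True, False, False]
def toy_score(mask):
    hits = sum(m == t for m, t in zip(mask, true_subset))
    return hits - 0.1 * sum(mask)  # mild penalty on subset size

best_mask, best = tabu_search(toy_score, n_vars=8)
```

In a bankruptcy-prediction setting, `score` would instead fit a model on the masked-in variables and return its (penalized) fit.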

  8. Random Regression Forest Model using Technical Analysis Variables: An application on Turkish Banking Sector in Borsa Istanbul (BIST)

    OpenAIRE

    Senol Emir; Hasan Dincer; Umit Hacioglu; Serhat Yuksel

    2016-01-01

The purpose of this study is to explore the importance and ranking of technical analysis variables in the Turkish banking sector. The Random Forest method is used to determine importance scores of inputs for eight banks in Borsa Istanbul. Two predictive models utilizing Random Forest (RF) and Artificial Neural Networks (ANN) are then built for predicting the BIST-100 index and bank closing prices. Results of the models are compared using three metrics, namely Mean Absolute Error (MAE), Mean Square Error (...
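Random Forest importance scores are library-specific, but the underlying idea can be sketched with model-agnostic permutation importance: shuffle one input column and measure how much the prediction error grows. The toy model and data below are hypothetical, not the study's banks or indicators:

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Rank input columns by the increase in squared error after shuffling.

    predict maps a list of feature rows to predictions; columns whose
    shuffling hurts accuracy most are deemed most important.
    """
    rng = random.Random(seed)
    def mse(rows):
        preds = predict(rows)
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
    base = mse(X)
    scores = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            shuffled = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            total += mse(shuffled) - base
        scores.append(total / n_repeats)
    return scores

# Hypothetical model: the target depends only on the first feature.
X = [[float(i), float(i % 3)] for i in range(30)]
y = [2.0 * row[0] for row in X]
predict = lambda rows: [2.0 * r[0] for r in rows]
scores = permutation_importance(predict, X, y)
```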

  9. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

Failure-time data with cured patients are common in clinical studies, and data from these studies are typically analyzed with cure rate models. Variable selection methods, however, have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods, and illustrate their use with data from a study of childhood wheezing.

  10. Interacting ghost dark energy models with variable G and Λ

    Science.gov (United States)

    Sadeghi, J.; Khurshudyan, M.; Movsisyan, A.; Farahani, H.

    2013-12-01

In this paper we consider several phenomenological models of variable Λ, adopting a model of a flat Universe with variable Λ and G. It is well known that varying G and Λ gives rise to modified field equations and modified conservation laws, which has led to many different treatments and assumptions in the literature. We consider a two-component fluid whose parameters enter Λ. The interaction between the fluids with energy densities ρ1 and ρ2 is assumed to be Q = 3Hb(ρ1 + ρ2). We numerically analyze important cosmological parameters such as the EoS parameter of the composed fluid and the deceleration parameter q of the model.

  11. Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks

    Czech Academy of Sciences Publication Activity Database

    Kainen, P.C.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

Roč. 58, č. 2 (2012), s. 1203-1214 ISSN 0018-9448 R&D Projects: GA MŠk(CZ) ME10023; GA ČR GA201/08/1744; GA ČR GAP202/11/1368 Grant - others:CNR-AV ČR(CZ-IT) Project 2010–2012 Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords: dictionary-based computational models * high-dimensional approximation and optimization * model complexity * polynomial upper bounds Subject RIV: IN - Informatics, Computer Science Impact factor: 2.621, year: 2012

  12. Model for expressing leaf photosynthesis in terms of weather variables

    African Journals Online (AJOL)

    A theoretical mathematical model for describing photosynthesis in individual leaves in terms of weather variables is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of potential photosynthetic rate permitted by the different environmental elements. These parameters are useful ...

  13. Simple model for crop photosynthesis in terms of weather variables ...

    African Journals Online (AJOL)

    A theoretical mathematical model for describing crop photosynthetic rate in terms of the weather variables and crop characteristics is proposed. The model utilizes a series of efficiency parameters, each of which reflect the fraction of possible photosynthetic rate permitted by the different weather elements or crop architecture.

  14. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  15. Modeling, analysis and control of a variable geometry actuator

    NARCIS (Netherlands)

    Evers, W.J.; Knaap, A. van der; Besselink, I.J.M.; Nijmeijer, H.

    2008-01-01

    A new design of variable geometry force actuator is presented in this paper. Based upon this design, a model is derived which is used for steady-state analysis, as well as controller design in the presence of friction. The controlled actuator model is finally used to evaluate the power consumption

  16. Evaluation of the impact of explanatory variables on the accuracy of prediction of daily inflow to the sewage treatment plant by selected models nonlinear

    Directory of Open Access Journals (Sweden)

    Szeląg Bartosz

    2017-09-01

Full Text Available The aim of the study was to evaluate the possibility of applying different data-mining methods to model the inflow of sewage into a municipal sewage treatment plant. Prediction models were elaborated using support vector machines (SVM), random forests (RF), k-nearest neighbours (k-NN) and kernel regression (K). The data consisted of time series of daily rainfall, water-level measurements in the clarified-sewage recipient and the wastewater inflow into the Rzeszow city plant. Results indicate that the best models with one input delayed by 1 day were obtained using the k-NN method, and the worst with the K method. For models with two inputs and one explanatory variable, the smallest errors were obtained when the model inputs were sewage inflow and rainfall data delayed by 1 day; the best fit was provided by the RF method and the worst by the K method. In the case of models with three inputs and two explanatory variables, the best results were reported for the SVM method and the worst for the K method. In most of the modelling runs the smallest prediction errors were obtained using the SVM method and the biggest with the K method; for the simplest model, with one input delayed by 1 day, the best results were provided by the k-NN method, and for the models with two inputs the RF method appeared best in two modelling runs.
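Of the four methods compared, k-NN is the simplest to sketch: the next day's inflow is predicted as the mean inflow that followed the k most similar past (inflow, rainfall) patterns. This is an illustrative reconstruction with hypothetical data, not the study's exact configuration:

```python
def knn_forecast(history, exog, k=3):
    """Predict the next inflow from the k most similar past (inflow, rain) days.

    history: list of daily inflows; exog: list of same-day rainfall values.
    The pattern for day t is (inflow[t-1], rain[t-1]); the prediction is the
    mean inflow observed after the k nearest past patterns.
    """
    patterns = []
    for t in range(1, len(history)):
        patterns.append(((history[t - 1], exog[t - 1]), history[t]))
    query = (history[-1], exog[-1])
    patterns.sort(key=lambda pt: (pt[0][0] - query[0]) ** 2
                                 + (pt[0][1] - query[1]) ** 2)
    neighbors = [target for _, target in patterns[:k]]
    return sum(neighbors) / len(neighbors)

# Hypothetical inflow [m3/d, scaled] and rainfall [mm] series.
inflow = [12.0, 14.5, 13.1, 20.3, 18.7, 15.2]
rain = [0.0, 5.1, 0.2, 12.4, 3.3, 0.0]
next_day = knn_forecast(inflow, rain, k=2)
```

In practice the inputs would be standardized before computing distances, since inflow and rainfall are on different scales.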

  17. Sensitivity of modeled estuarine circulation to spatial and temporal resolution of input meteorological forcing of a cold frontal passage

    Science.gov (United States)

    Weaver, Robert J.; Taeb, Peyman; Lazarus, Steven; Splitt, Michael; Holman, Bryan P.; Colvin, Jeffrey

    2016-12-01

In this study, a four-member ensemble of meteorological forcing is generated using the Weather Research and Forecasting (WRF) model in order to simulate a frontal passage event that impacted the Indian River Lagoon (IRL) during March 2015. The WRF model is run to provide high- and low-resolution spatial (0.005° and 0.1°) and temporal (30 min and 6 h) input wind and pressure fields. The four-member ensemble is used to force the Advanced Circulation model (ADCIRC) coupled with Simulating Waves Nearshore (SWAN) and compute the hydrodynamic and wave response. Results indicate that increasing the spatial resolution of the meteorological forcing has a greater impact on the results than increasing the temporal resolution in coastal systems like the IRL, where the length scales are smaller than the resolution of the operational meteorological model being used to generate the forecast. Changes in predicted water elevations are due in part to the upwind and downwind behavior of the input wind forcing. The significant wave height is more sensitive to the meteorological forcing, exhibited by greater ensemble spread throughout the simulation. It is important that the land mask seen by the meteorological model is representative of the geography of the coastal estuary as resolved by the hydrodynamic model. As long as the temporal resolution of the wind field captures the bulk characteristics of the frontal passage, computational resources should be focused so as to ensure that the meteorological model resolves the spatial complexities, such as the land-water interface, that drive the dynamic downscaling of the winds.

  18. Phase-field modeling of coring during solidification of Au–Ni alloy using quaternions and CALPHAD input

    International Nuclear Information System (INIS)

    Fattebert, J.-L.; Wickett, M.E.; Turchi, P.E.A.

    2014-01-01

A numerical method for the simulation of microstructure evolution during the solidification of an alloy is presented. The approach is based on a phase-field model including a phase variable, an orientation variable given by a quaternion, the alloy composition and a uniform temperature field. Energies and diffusion coefficients used in the model rely on thermodynamic and kinetic databases in the framework of the CALPHAD methodology. The numerical approach is based on a finite volume discretization and an implicit time-stepping algorithm. Numerical results for solidification and the accompanying coring effect in a Au–Ni alloy are used to illustrate the methodology.

  19. Industrial and ecological cumulative exergy consumption of the United States via the 1997 input-output benchmark model

    International Nuclear Information System (INIS)

    Ukidwe, Nandan U.; Bakshi, Bhavik R.

    2007-01-01

This paper develops a thermodynamic input-output (TIO) model of the 1997 United States economy that accounts for the flow of cumulative exergy in the 488-sector benchmark economic input-output model in two different ways. Industrial cumulative exergy consumption (ICEC) captures the exergy of all natural resources consumed directly and indirectly by each economic sector, while ecological cumulative exergy consumption (ECEC) also accounts for the exergy consumed in ecological systems for producing each natural resource. Information about exergy consumed in nature is obtained from the thermodynamics of biogeochemical cycles. As used in this work, ECEC is analogous to the concept of emergy, but does not rely on any of its controversial claims. The TIO model can also account for emissions from each sector, their impact, and the role of labor. The use of consistent exergetic units permits the combination of various streams to define aggregate metrics that may provide insight into aspects related to the impact of economic sectors on the environment. Accounting for the contribution of natural capital by ECEC has been claimed to permit better representation of the quality of ecosystem goods and services than ICEC. The results of this work are expected to permit evaluation of these claims. If validated, this work is expected to lay the foundation for thermodynamic life cycle assessment, particularly of emerging technologies and with limited information.

  20. Uncertainty and variability of infiltration at Yucca Mountain: Part 1. Numerical model development

    Science.gov (United States)

    Stothoff, Stuart A.

    2013-06-01

    The U.S. Nuclear Regulatory Commission investigated climate and infiltration at Yucca Mountain to (i) understand important controls and uncertainties influencing percolation through the unsaturated zone on multimillennial time scales and (ii) provide flux boundary conditions for up to 1 million years in performance assessment models of the proposed Yucca Mountain repository. This first part of a two-part series describes a procedure for abstracting the results from detailed numerical simulations of local-scale infiltration into a site-scale model considering uncertainty and variability in distributed net infiltration. Part 2 describes site-scale model results and corroboration. A detailed one-dimensional numerical model was used to estimate bare-soil net infiltration at the scales of hours and meters for 442 soil, bedrock, and climate combinations. The set of results are abstracted into three parametric response functions for decadal-average bare-soil infiltration given hydraulic and climatic parameters. The three abstractions describe deep soil, shallow soil over a coarser layer, and shallow soil over a finer layer. The site-scale model considers spatial variability and uncertainty of the input parameters on a 30 m grid, using the abstractions independently in each cell. Two additional abstractions account for overland flow and vegetation. The model uses Monte Carlo simulation, with all input parameters uncertain and spatially variable, to calculate the mean and standard deviation of net infiltration in each grid cell for selected climate states. Using abstractions rather than detailed simulations speeds calculation of infiltration realizations by many orders of magnitude relative to a detailed simulation.
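The per-cell Monte Carlo step can be sketched as follows; the abstracted response function and uniform parameter ranges here are illustrative stand-ins for the study's calibrated abstractions and actual parameter distributions:

```python
import random

def cell_infiltration_stats(response, param_dists, n_samples=2000, seed=0):
    """Monte Carlo mean and standard deviation of net infiltration in one cell.

    response is an abstracted response function mapping sampled parameters to
    infiltration; param_dists maps parameter names to (low, high) ranges
    sampled uniformly (a stand-in for the study's actual distributions).
    """
    rng = random.Random(seed)
    values = []
    for _ in range(n_samples):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in param_dists.items()}
        values.append(response(**params))
    mean = sum(values) / n_samples
    var = sum((v - mean) ** 2 for v in values) / n_samples
    return mean, var ** 0.5

# Illustrative response: infiltration grows with precipitation, drops with soil depth.
toy_response = lambda precip, soil_depth: max(0.0, 0.1 * precip - 0.02 * soil_depth)
mean, std = cell_infiltration_stats(
    toy_response, {"precip": (100.0, 300.0), "soil_depth": (0.5, 5.0)})
```

Repeating this over every 30 m grid cell, with spatially correlated parameter fields, yields the distributed mean and standard deviation maps described above.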

  1. Groundwater travel time uncertainty analysis. Sensitivity of results to model geometry, and correlations and cross correlations among input parameters

    International Nuclear Information System (INIS)

    Clifton, P.M.

    1985-03-01

This study examines the sensitivity of the travel time distribution predicted by a reference case model to (1) scale of representation of the model parameters, (2) size of the model domain, (3) correlation range of log-transmissivity, and (4) cross correlations between transmissivity and effective thickness. The basis for the reference model is the preliminary stochastic travel time model previously documented by the Basalt Waste Isolation Project. Results of this study show the following. The variability of the predicted travel times can be adequately represented when the ratio between the size of the zones used to represent the model parameters and the log-transmissivity correlation range is less than about one-fifth. The size of the model domain and the types of boundary conditions can have a strong impact on the distribution of travel times. Longer log-transmissivity correlation ranges cause larger variability in the predicted travel times. Positive cross correlation between transmissivity and effective thickness causes a decrease in the travel time variability. These results demonstrate the need for a sound conceptual model prior to conducting a stochastic travel time analysis.

  2. Analyzing the Effects of the Iranian Energy Subsidy Reform Plan on Short- Run Marginal Generation Cost of Electricity Using Extended Input-Output Price Model

    Directory of Open Access Journals (Sweden)

    Zohreh Salimian

    2012-01-01

Full Text Available Subsidizing energy in Iran has imposed high costs on the country's economy; revising energy prices on the basis of a subsidy reform plan is therefore a vital remedy to boost the economy. While the direct consequence of cutting subsidies on electricity generation costs can be determined in a simple way, identifying indirect effects, which reflect higher costs for input factors such as labor, is a challenging problem. In this paper, variables such as compensation of employees and private consumption are endogenized using an extended Input-Output (I-O) price model to evaluate the direct and indirect effects of electricity and fuel price increases on economic subsectors. The main goal of this paper is to determine the short-run marginal generation cost of electricity using the I-O technique, taking into account the influence of the Iranian targeted subsidy plan. The marginal cost of electricity, in various scenarios of energy price adjustment, is estimated for three conventional categories of thermal power plants. Our results show that raising the price of energy leads to an increase in electricity production costs; accordingly, production costs will be higher than 1000 Rials per kWh until 2014, as predicted at the beginning of the reform plan by electricity suppliers.
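The indirect cost propagation described above follows from the Leontief price model p = Aᵀp + v: raising the primary-input (value-added) cost v of the energy sector raises all sector prices through (I − Aᵀ)⁻¹. A two-sector sketch with purely illustrative coefficients:

```python
def leontief_prices(A, v):
    """Solve the 2-sector Leontief price model p = A^T p + v.

    A[i][j] is the input of sector i per unit output of sector j; v is the
    primary-input (value-added) cost per unit output of each sector.
    """
    # For two sectors, solve (I - A^T) p = v directly by Cramer's rule.
    a11, a12 = 1.0 - A[0][0], -A[1][0]
    a21, a22 = -A[0][1], 1.0 - A[1][1]
    det = a11 * a22 - a12 * a21
    p1 = (v[0] * a22 - a12 * v[1]) / det
    p2 = (a11 * v[1] - v[0] * a21) / det
    return [p1, p2]

# Sector 0 = energy, sector 1 = rest of economy (hypothetical coefficients).
A = [[0.1, 0.3],   # energy inputs per unit of each sector's output
     [0.2, 0.2]]   # other inputs per unit of each sector's output
base = leontief_prices(A, [0.7, 0.5])
shock = leontief_prices(A, [1.4, 0.5])  # subsidy cut: energy value added doubles
```

With these numbers the baseline prices are normalized to 1; the shock raises both sector prices, the non-energy sector's only through indirect input costs.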

  3. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    Science.gov (United States)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based on a bi-directional recurrent neural network, which considers both the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then considered via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on the customer reviews dataset.

  4. Differential effects of isoflurane and halothane on aortic input impedance quantified using a three-element Windkessel model.

    Science.gov (United States)

    Hettrick, D A; Pagel, P S; Warltier, D C

    1995-08-01

Systemic vascular resistance (the ratio of mean aortic pressure [AP] and mean aortic blood flow [AQ]) does not completely describe left ventricular (LV) afterload because of the phasic nature of pressure and blood flow. Aortic input impedance (Zin) is an established experimental description of LV afterload that incorporates the frequency-dependent characteristics and viscoelastic properties of the arterial system. Zin is most often interpreted through an analytical model known as the three-element Windkessel. This investigation examined the effects of isoflurane, halothane, and sodium nitroprusside (SNP) on Zin. Changes in Zin were quantified using three variables derived from the Windkessel: characteristic aortic impedance (Zc), total arterial compliance (C), and total arterial resistance (R). Sixteen experiments were conducted in eight dogs chronically instrumented for measurement of AP, LV pressure, maximum rate of change in left ventricular pressure, subendocardial segment length, and AQ. AP and AQ waveforms were recorded in the conscious state and after 30 min equilibration at 1.25, 1.5, and 1.75 minimum alveolar concentration (MAC) isoflurane and halothane. Zin spectra were obtained by power spectral analysis of AP and AQ waveforms and corrected for the phase responses of the transducers. Zc and R were calculated as the mean of Zin between 2 and 15 Hz and the difference between Zin at zero frequency and Zc, respectively. C was determined using the formula C = (Ad·MAP)·[MAQ·(Pes − Ped)]^(−1), where Ad = diastolic AP area; MAP and MAQ = mean AP and mean AQ, respectively; and Pes and Ped = end-systolic and end-diastolic AP, respectively. Parameters describing the net site and magnitude of arterial wave reflection were also calculated from Zin. Eight additional dogs were studied in the conscious state before and after 15 min equilibration at three equihypotensive infusions of SNP.
Isoflurane decreased R (3,205 +/- 315 during control to 2,340 +/- 2.19 dyn.s.cm-5 during

  5. Comparison of squashing and self-consistent input-output models of quantum feedback

    Science.gov (United States)

    Peřinová, V.; Lukš, A.; Křepelka, J.

    2018-03-01

The paper (Yanagisawa and Hope, 2010) opens with two ways of analyzing a measurement-based quantum feedback. The scheme of the feedback includes, along with the homodyne detector, a modulator and a beamsplitter, which does not enable one to extract the nonclassical field. In the present scheme, the beamsplitter is replaced by the quantum noise evader, which makes it possible to extract the nonclassical field. We re-approach the comparison of two models related to the same scheme. The first one admits that, in the feedback loop, unusual commutation relations hold between the photon annihilation and creation operators; as a consequence, squashing of the light occurs in the feedback loop. The second one arrives at a description of the feedback loop via unitary transformations. It is evident, however, that the unitary transformation describing the modulator changes even the annihilation operator of the mode that passes by the modulator, which is not natural. The first model could be called the "squashing model" and the second the "self-consistent model". Although the predictions of the two models differ only slightly and both ways of analysis have their advantages, they also have their drawbacks, and further investigation is possible.

  6. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-07-01

Full Text Available The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely the multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, the QBO, the solar cycle and volcanoes, and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. The period from 2005 to 2011 can be hindcasted to a certain extent, with MLP performing significantly better than the remaining models. However, variability remains that cannot be statistically hindcasted within the current framework, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a winter 2011/12 with warm and weak vortex conditions. A vortex breakdown is predicted for late January, early February 2012.

  7. Modeling microstructure of incudostapedial joint and the effect on cochlear input

    Science.gov (United States)

    Gan, Rong Z.; Wang, Xuelin

    2015-12-01

The incudostapedial joint (ISJ) connects the incus to the stapes in the human ear and plays an important role in sound transmission from the tympanic membrane (TM) to the cochlea. The ISJ is a synovial joint composed of articular cartilage on the lenticular process and stapes head, with synovial fluid between them. However, there has been no study of how the synovial ISJ affects middle ear and cochlear functions. Recently, we developed a 3-dimensional finite element (FE) model of the synovial ISJ and connected it to our comprehensive FE model of the human ear. The motions of the TM, stapes footplate, and basilar membrane and the pressures in the scala vestibuli and scala tympani were derived over frequencies and compared with experimental measurements. Results show that the synovial ISJ affects sound transmission into the cochlea, and that the frequency-dependent viscoelastic behavior of the ISJ protects the cochlea from high-intensity sound.

  8. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
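The decorator-based design can be sketched in miniature: building blocks and decorators share one `random_position` interface, so decorators can be chained freely. The components below are simplified analogues for illustration, not SKIRT's actual API:

```python
import random

class UniformSphere:
    """Basic building block: uniform density inside a sphere of given radius."""
    def __init__(self, radius):
        self.radius = radius
    def random_position(self, rng):
        while True:  # rejection sampling inside the bounding cube
            p = tuple(rng.uniform(-self.radius, self.radius) for _ in range(3))
            if sum(c * c for c in p) <= self.radius ** 2:
                return p

class ShiftDecorator:
    """Decorator: translate any component without touching its internals."""
    def __init__(self, component, offset):
        self.component, self.offset = component, offset
    def random_position(self, rng):
        p = self.component.random_position(rng)
        return tuple(c + d for c, d in zip(p, self.offset))

class SumDecorator:
    """Decorator: mix two components with a given weight on the first."""
    def __init__(self, a, b, weight):
        self.a, self.b, self.weight = a, b, weight
    def random_position(self, rng):
        src = self.a if rng.random() < self.weight else self.b
        return src.random_position(rng)

# Chain decorators: a central sphere plus a smaller, shifted satellite clump.
rng = random.Random(42)
model = SumDecorator(UniformSphere(1.0),
                     ShiftDecorator(UniformSphere(0.5), (5.0, 0.0, 0.0)),
                     weight=0.7)
positions = [model.random_position(rng) for _ in range(1000)]
```

Because every component exposes the same interface, a spiral-structure or clumpiness decorator would slot in exactly the same way.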

  9. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.

    2010-01-01

Understanding space weather is important not only for satellite operations and human exploration of the solar system but also for phenomena here on Earth that may potentially disturb and disrupt electrical signals. Some of the most violent space weather effects are caused by coronal mass ejections (CMEs). […] We investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time-dependent 3-D MHD model that can simulate the propagation of cone-shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position

  10. A comprehensive probabilistic solution of random SIS-type epidemiological models using the random variable transformation technique

    Science.gov (United States)

    Casabán, M.-C.; Cortés, J.-C.; Navarro-Quiles, A.; Romero, J.-V.; Roselló, M.-D.; Villanueva, R.-J.

    2016-03-01

    This paper provides a complete probabilistic description of SIS-type epidemiological models where all the input parameters (contagion rate, recovery rate and initial conditions) are assumed to be random variables. By applying the Random Variable Transformation technique, the first probability density function, the mean and the variance functions, as well as confidence intervals associated with the solution of SIS-type epidemiological models, are determined. This is done under the general hypothesis that the model's random inputs may have an arbitrary joint probability density function. The distributions to describe the time until a given proportion of the population remains susceptible and infected are also determined. Finally, a probabilistic description of the so-called basic reproductive number is included. The theoretical results are applied to an illustrative example showing a good fit.
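The probabilistic description above can be cross-checked with a plain Monte Carlo sketch (illustrative distributions and parameter values, not those of the paper): sample the random inputs, evaluate the closed-form SIS solution, and estimate the mean and variance functions at a fixed time:

```python
import math
import random

def sis_solution(t, beta, gamma, i0):
    """Closed-form solution of dI/dt = (beta - gamma)*I - beta*I**2 (SIS model)."""
    a = beta - gamma
    k = a / beta                      # endemic equilibrium (assumes beta > gamma)
    return k / (1.0 + (k / i0 - 1.0) * math.exp(-a * t))

rng = random.Random(0)
samples = []
for _ in range(5000):
    beta = rng.uniform(0.8, 1.2)      # contagion rate (illustrative distribution)
    gamma = rng.uniform(0.2, 0.4)     # recovery rate
    i0 = rng.uniform(0.01, 0.05)      # initial infected proportion
    samples.append(sis_solution(10.0, beta, gamma, i0))

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The Random Variable Transformation technique yields these moments analytically; the sampled `mean` and `var` converge to them as the number of draws grows.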

  11. Cross-country transferability of multi-variable damage models

    Science.gov (United States)

    Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens

    2017-04-01

    Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development was collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian networks and random forests. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on the basis of flood damage datasets from Germany as well as from The Netherlands. Data from several German floods was collected using computer-aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands is available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on the basis of the individual datasets. A major challenge was the harmonization of the variables between both datasets due to factors like differences in variable definitions, and regional and temporal differences in flood hazard and exposure characteristics. Results of model validations and comparisons in both countries are discussed, particularly with respect to encountered challenges and possible solutions for an improvement of model transferability.
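A minimal sketch of the transfer experiment described above, with synthetic data and a plain least-squares model standing in for the Bayesian network and random forest models: fit a depth-only and a multi-variable damage model on one region's data and validate both on another region's dataset:

```python
import random

def fit_linear(X, y):
    """Ordinary least squares via normal equations (tiny Gaussian elimination)."""
    n, p = len(X), len(X[0])
    Xa = [row + [1.0] for row in X]          # add intercept column
    p += 1
    A = [[sum(Xa[k][i] * Xa[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(Xa[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):                        # elimination with partial pivoting
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    w = [0.0] * p
    for i in reversed(range(p)):
        w[i] = (b[i] - sum(A[i][c] * w[c] for c in range(i + 1, p))) / A[i][i]
    return w

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + w[-1]

def synth(n, dur_weight, seed):
    """Synthetic damage data: depth (m) and inundation duration (h) drive damage."""
    rng = random.Random(seed)
    rows, dmg = [], []
    for _ in range(n):
        depth, dur = rng.uniform(0.0, 3.0), rng.uniform(1.0, 72.0)
        rows.append([depth, dur])
        dmg.append(10.0 * depth + dur_weight * dur + rng.gauss(0.0, 1.0))
    return rows, dmg

X_de, y_de = synth(300, 0.2, 1)   # "source region" training data
X_nl, y_nl = synth(200, 0.2, 2)   # "target region" validation data

w_multi = fit_linear(X_de, y_de)
w_depth = fit_linear([[r[0]] for r in X_de], y_de)

mae_multi = sum(abs(predict(w_multi, x) - t) for x, t in zip(X_nl, y_nl)) / len(y_nl)
mae_depth = sum(abs(predict(w_depth, [x[0]]) - t)
                for x, t in zip(X_nl, y_nl)) / len(y_nl)
```

When the extra variable genuinely drives damage, the multi-variable model transfers with a clearly lower validation error than the depth-only curve.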

  12. GALEV evolutionary synthesis models – I. Code, input physics and web

    NARCIS (Netherlands)

    Kotulla, R.; Fritze, U.; Weilbacher, P.; Anders, P.

    2009-01-01

    GALEV (GALaxy EVolution) evolutionary synthesis models describe the evolution of stellar populations in general, of star clusters as well as of galaxies, both in terms of resolved stellar populations and of integrated light properties over cosmological time-scales of ≥13 Gyr from the onset of star

  13. Modeling chronic diseases: the diabetes module. Justification of (new) input data

    NARCIS (Netherlands)

    Baan CA; Bos G; Jacobs-van der Bruggen MAM; Baan CA; Bos G; Jacobs-van der Bruggen MAM; PZO

    2005-01-01

    The RIVM chronic disease model (CDM) is an instrument designed to estimate the effects of changes in the prevalence of risk factors for chronic diseases on disease burden and mortality. To enable the computation of the effects of various diabetes prevention scenarios, the CDM has been updated and

  14. Multiscale Deterministic Wave Modeling with Wind Input and Wave Breaking Dissipation

    Science.gov (United States)

    2009-01-01

    Kudryavtsev, V. N., Makin, V. K. & Meirink, J. F. 2001 “Simplified model of the air flow above the waves,” Boundary-Layer Meteorol. 100, 63-90. 5 Li... Figure 6. Comparison of pressure profiles with exponential decays: solid line, the Kudryavtsev et al. (2001) profile estimated by Donelan et al

  15. Packaging tomorrow : modelling the material input for European packaging in the 21st century

    NARCIS (Netherlands)

    Hekkert, M.P.; Joosten, L.A.J.; Worrell, E.

    2006-01-01

    This report is a result of the MATTER project (MATerials Technology for CO2 Emission Reduction). The project focuses on CO2 emission reductions that are related to the Western European materials system. The total impact of the reduction options for different scenario's will be modeled in MARKAL

  16. Input Harmonic Analysis on the Slim DC-Link Drive Using Harmonic State Space Model

    DEFF Research Database (Denmark)

    Yang, Feng; Kwon, Jun Bum; Wang, Xiongfei

    2017-01-01

    variation according to the switching instant, the harmonics at the steady-state condition, as well as the coupling between the multiple harmonic impedances. By using this model, the impact of the film capacitor and the grid inductance on the harmonic performance is derived. Simulation and experimental...

  17. Model-based extraction of input and organ functions in dynamic scintigraphic imaging

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav; Šámal, M.

    2016-01-01

    Roč. 4, 3-4 (2016), s. 135-145 ISSN 2168-1171 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : blind source separation * convolution * dynamic medical imaging * compartment modelling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2014/AS/tichy-0428540.pdf

  18. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled
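The MCMC treatment sketched above can be illustrated with a toy example (a hypothetical linear-reservoir model and a plain random-walk Metropolis sampler, not the sampler of the paper) that jointly infers a model parameter and a rainfall multiplier representing forcing error:

```python
import math
import random

def simulate(k, m, rain, q0=1.0):
    """Toy linear reservoir: outflow decays at rate k, forcing scaled by m."""
    q, out = q0, []
    for r in rain:
        q = (1.0 - k) * q + m * r
        out.append(q)
    return out

def log_likelihood(params, rain, obs, sigma=0.5):
    k, m = params
    if not (0.0 < k < 1.0 and 0.0 < m < 2.0):   # flat priors with bounds
        return -math.inf
    sim = simulate(k, m, rain)
    return -sum((s - o) ** 2 for s, o in zip(sim, obs)) / (2.0 * sigma ** 2)

rng = random.Random(0)
rain = [rng.uniform(0.0, 1.0) for _ in range(50)]
obs = simulate(0.3, 0.8, rain)          # synthetic "observations" (truth known)

current = (0.5, 1.0)                    # initial guess for (k, m)
ll = log_likelihood(current, rain, obs)
samples = []
for i in range(5000):
    prop = (current[0] + rng.gauss(0.0, 0.05),
            current[1] + rng.gauss(0.0, 0.05))
    ll_prop = log_likelihood(prop, rain, obs)
    # Metropolis acceptance rule (symmetric proposal).
    if math.log(rng.random() + 1e-300) < ll_prop - ll:
        current, ll = prop, ll_prop
    if i >= 2000:                       # discard burn-in
        samples.append(current)

k_mean = sum(s[0] for s in samples) / len(samples)
m_mean = sum(s[1] for s in samples) / len(samples)
```

The posterior means recover the "true" parameter and forcing multiplier; in the paper's setting, many such multipliers (one per storm) are inferred alongside the hydrologic parameters.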

  19. A stock-flow consistent input-output model with applications to energy price shocks, interest rates, and heat emissions

    Science.gov (United States)

    Berg, Matthew; Hartley, Brian; Richters, Oliver

    2015-01-01

    By synthesizing stock-flow consistent models, input-output models, and aspects of ecological macroeconomics, a method is developed to simultaneously model monetary flows through the financial system, flows of produced goods and services through the real economy, and flows of physical materials through the natural environment. This paper highlights the linkages between the physical environment and the economic system by emphasizing the role of the energy industry. A conceptual model is developed in general form with an arbitrary number of sectors, while emphasizing connections with the agent-based, econophysics, and complexity economics literature. First, we use the model to challenge claims that 0% interest rates are a necessary condition for a stationary economy and conduct a stability analysis within the parameter space of interest rates and consumption parameters of an economy in stock-flow equilibrium. Second, we analyze the role of energy price shocks in contributing to recessions, incorporating several propagation and amplification mechanisms. Third, implied heat emissions from energy conversion and the effect of anthropogenic heat flux on climate change are considered in light of a minimal single-layer atmosphere climate model, although the model is only implicitly, not explicitly, linked to the economic model.
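The input-output side of such a model rests on the standard Leontief quantity relation x = (I - A)^(-1) d; a minimal sketch with illustrative coefficients shows how a final-demand shock propagates through gross outputs:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# Technical coefficients a_ij: input from sector i per unit output of sector j.
# Sectors: 0 = energy, 1 = manufacturing, 2 = services (illustrative values).
A = [[0.10, 0.20, 0.05],
     [0.15, 0.25, 0.10],
     [0.05, 0.15, 0.10]]
d = [50.0, 120.0, 200.0]            # final demand by sector

n = len(A)
I_minus_A = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)]
             for i in range(n)]
x = solve(I_minus_A, d)             # gross outputs

# An energy-price-style shock: final demand for manufacturing falls 10%.
d_shock = [d[0], 0.9 * d[1], d[2]]
x_shock = solve(I_minus_A, d_shock)
```

Gross output satisfies x = A x + d, so every sector's output exceeds its final demand, and the demand shock in one sector lowers output in all sectors through the intermediate-input linkages.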

  20. Future-year ozone prediction for the United States using updated models and inputs.

    Science.gov (United States)

    Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun

    2017-08-01

    The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) modeling emission tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show future-year, 2030, design values for 8-hr ozone concentrations were lower than base-year values, 2011. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations, and were important for many of the eastern U.S. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies. The relationship between emission reductions and changes in ozone can be studied using photochemical grid models, which are updated as new information becomes available. This study updated the previous Collet et al. studies by using the most current models and inventory available at the time to conduct ozone source attribution and sensitivity studies. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.

  1. Mediterranean climate modelling: variability and climate change scenarios

    International Nuclear Information System (INIS)

    Somot, S.

    2005-12-01

    Air-sea fluxes, open-sea deep convection and cyclo-genesis are studied in the Mediterranean with the development of a regional coupled model (AORCM). It accurately simulates these processes and their climate variabilities are quantified and studied. The regional coupling shows a significant impact on the number of winter intense cyclo-genesis as well as on associated air-sea fluxes and precipitation. A lower inter-annual variability than in non-coupled models is simulated for fluxes and deep convection. The feedbacks driving this variability are understood. The climate change response is then analysed for the 21st century with the non-coupled models: cyclo-genesis decreases, associated precipitation increases in spring and autumn and decreases in summer. Moreover, a warming and salting of the Mediterranean as well as a strong weakening of its thermohaline circulation occur. This study also concludes that AORCMs are necessary to assess climate change impacts on the Mediterranean. (author)

  2. Classification criteria of syndromes by latent variable models

    DEFF Research Database (Denmark)

    Petersen, Janne

    2010-01-01

    analyses. Part 1: HALS engages different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy.  There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include......, although this is often desired. I have proposed a new method for predicting class membership that, in contrast to methods based on posterior probabilities of class membership, yields consistent estimates when regressed on explanatory variables in a subsequent analysis. There are four different basic models...... within latent variable models: factor analysis, latent class analysis, latent profile analysis and latent trait analysis. I have given a general overview of how to predict scores of latent variables so these can be used in subsequent regression models. Two different principles of predicting scores...

  3. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  4. Designing the input vector to ANN-based models for short-term load forecast in electricity distribution systems

    International Nuclear Information System (INIS)

    Santos, P.J.; Martins, A.G.; Pires, A.J.

    2007-01-01

    The present trend toward electricity market restructuring increases the need for reliable short-term load forecast (STLF) algorithms, in order to assist electric utilities in activities such as planning, operating and controlling electric energy systems. Methodologies such as artificial neural networks (ANN) have been widely used in the next hour load forecast horizon with satisfactory results. However, this type of approach has had some shortcomings. Usually, the input vector (IV) is defined in an arbitrary way, mainly based on experience, on engineering judgment criteria and on concern about the ANN dimension, always taking into consideration the apparent correlations within the available endogenous and exogenous data. In this paper, a proposal is made of an approach to define the IV composition, with the main focus on reducing the influence of trial-and-error and common sense judgments, which usually are not based on sufficient evidence of comparative advantages over previous alternatives. The proposal includes the assessment of the strictly necessary instances of the endogenous variable, both from the point of view of the contiguous values prior to the forecast to be made, and of the past values representing the trend of consumption at homologous time intervals of the past. It also assesses the influence of exogenous variables, again limiting their presence at the IV to the indispensable minimum. A comparison is made with two alternative IV structures previously proposed in the literature, also applied to the distribution sector. The paper is supported by a real case study at the distribution sector. (author)
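One simple, evidence-based way to shortlist endogenous lags for the IV, in the spirit of the approach above, is to rank candidate lags by autocorrelation (the synthetic load series and threshold below are illustrative):

```python
import math

def autocorr(series, lag):
    """Sample autocorrelation at a given lag (biased estimator)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((s - mean) ** 2 for s in series)
    cov = sum((series[t] - mean) * (series[t - lag] - mean)
              for t in range(lag, n))
    return cov / var

# Synthetic hourly load: daily cycle plus a slow upward trend.
load = [100.0 + 20.0 * math.sin(2.0 * math.pi * t / 24.0) + 0.01 * t
        for t in range(24 * 60)]

# Candidate instances of the endogenous variable: contiguous recent hours
# plus homologous hours one day and one week back.
candidate_lags = [1, 2, 3, 24, 25, 48, 168]
selected = [lag for lag in candidate_lags if abs(autocorr(load, lag)) > 0.8]
```

Only the lags that carry strong correlation with the forecast target survive, keeping the ANN input dimension at the indispensable minimum.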

  5. Designing the input vector to ANN-based models for short-term load forecast in electricity distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Santos, P.J. [LabSEI-ESTSetubal-Department of Electrical Engineering at Escola Superior de Tecnologia, Polytechnic Institute of Setubal Rua Vale de Chaves Estefanilha, 2910-761 Setubal (Portugal); Martins, A.G. [Department of Electrical Engineering, FCTUC/INESC, Polo 2 University of Coimbra, Pinhal de Marrocos, 3030 Coimbra (Portugal); Pires, A.J. [LabSEI-ESTSetubal-Department of Electrical Engineering at Escola Superior de Tecnologia, Polytechnic Institute of Setubal Rua Vale de, Chaves Estefanilha, 2910-761 Setubal (Portugal)

    2007-05-15

    The present trend toward electricity market restructuring increases the need for reliable short-term load forecast (STLF) algorithms, in order to assist electric utilities in activities such as planning, operating and controlling electric energy systems. Methodologies such as artificial neural networks (ANN) have been widely used in the next hour load forecast horizon with satisfactory results. However, this type of approach has had some shortcomings. Usually, the input vector (IV) is defined in an arbitrary way, mainly based on experience, on engineering judgment criteria and on concern about the ANN dimension, always taking into consideration the apparent correlations within the available endogenous and exogenous data. In this paper, a proposal is made of an approach to define the IV composition, with the main focus on reducing the influence of trial-and-error and common sense judgments, which usually are not based on sufficient evidence of comparative advantages over previous alternatives. The proposal includes the assessment of the strictly necessary instances of the endogenous variable, both from the point of view of the contiguous values prior to the forecast to be made, and of the past values representing the trend of consumption at homologous time intervals of the past. It also assesses the influence of exogenous variables, again limiting their presence at the IV to the indispensable minimum. A comparison is made with two alternative IV structures previously proposed in the literature, also applied to the distribution sector. The paper is supported by a real case study at the distribution sector. (author)

  6. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  7. Output from Statistical Predictive Models as Input to eLearning Dashboards

    Directory of Open Access Journals (Sweden)

    Marlene A. Smith

    2015-06-01

    Full Text Available We describe how statistical predictive models might play an expanded role in educational analytics by giving students automated, real-time information about what their current performance means for eventual success in eLearning environments. We discuss how an online messaging system might tailor information to individual students using predictive analytics. The proposed system would be data-driven and quantitative; e.g., a message might furnish the probability that a student will successfully complete the certificate requirements of a massive open online course. Repeated messages would prod underperforming students and alert instructors to those in need of intervention. Administrators responsible for accreditation or outcomes assessment would have ready documentation of learning outcomes and actions taken to address unsatisfactory student performance. The article’s brief introduction to statistical predictive models sets the stage for a description of the messaging system. Resources and methods needed to develop and implement the system are discussed.
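A minimal sketch of the proposed messaging logic (the logistic-model coefficients and feature names are hypothetical): convert a predicted completion probability into a tailored student message:

```python
import math

def completion_probability(weights, features, bias):
    """Logistic model: probability of completing the certificate requirements."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def message(prob, threshold=0.5):
    """Turn a model output into an automated, real-time student message."""
    pct = round(100 * prob)
    if prob < threshold:
        return ("Your estimated chance of completing the certificate is "
                f"{pct}%. Reviewing this week's materials could improve it.")
    return f"You are on track: estimated completion probability {pct}%."

# Hypothetical fitted coefficients for: quiz average, logins per week,
# fraction of assignments submitted.
weights, bias = [2.0, 0.3, 1.5], -3.0
at_risk = completion_probability(weights, [0.4, 1.0, 0.3], bias)
on_track = completion_probability(weights, [0.9, 5.0, 1.0], bias)
```

The same probabilities could also be logged to give administrators the documentation of outcomes and interventions that the article describes.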

  8. Characteristic 'fingerprints' of crop model responses data at different spatial resolutions to weather input

    Czech Academy of Sciences Publication Activity Database

    Angulo, C.; Rotter, R.; Trnka, Miroslav; Pirttioja, N. K.; Gaiser, T.; Hlavinka, Petr; Ewert, F.

    2013-01-01

    Roč. 49, AUG 2013 (2013), s. 104-114 ISSN 1161-0301 R&D Projects: GA MŠk(CZ) EE2.3.20.0248; GA MŠk(CZ) EE2.4.31.0056 Institutional support: RVO:67179843 Keywords : Crop model * Weather data resolution * Aggregation * Yield distribution Subject RIV: EH - Ecology, Behaviour Impact factor: 2.918, year: 2013

  9. Fingerprints of four crop models as affected by soil input data aggregation

    Czech Academy of Sciences Publication Activity Database

    Angulo, C.; Gaiser, T.; Rötter, R. P.; Børgesen, C. D.; Hlavinka, Petr; Trnka, Miroslav; Ewert, F.

    2014-01-01

    Roč. 61, NOV 2014 (2014), s. 35-48 ISSN 1161-0301 R&D Projects: GA MŠk(CZ) EE2.3.20.0248; GA MŠk(CZ) EE2.4.31.0056; GA MZe QJ1310123 Institutional support: RVO:67179843 Keywords : crop model * soil data * spatial resolution * yield distribution * aggregation Subject RIV: EH - Ecology, Behaviour Impact factor: 2.704, year: 2014

  10. Errors in estimation of the input signal for integrate-and-fire neuronal models

    Czech Academy of Sciences Publication Activity Database

    Bibbona, E.; Lánský, Petr; Sacerdote, L.; Sirovich, R.

    2008-01-01

    Roč. 78, č. 1 (2008), s. 1-10 ISSN 1539-3755 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401 Grant - others:EC(XE) MIUR PRIN 2005 Institutional research plan: CEZ:AV0Z50110509 Keywords : parameter estimation * stochastic neuronal model Subject RIV: BO - Biophysics Impact factor: 2.508, year: 2008 http://link.aps.org/abstract/PRE/v78/e011918

  11. Computation of geographic variables for air pollution prediction models in South Korea.

    Science.gov (United States)

    Eum, Youngseob; Song, Insang; Kim, Hwan-Cheol; Leem, Jong-Han; Kim, Sun-Young

    2015-01-01

    Recent cohort studies have relied on exposure prediction models to estimate individual-level air pollution concentrations because individual air pollution measurements are not available for cohort locations. For such prediction models, geographic variables related to pollution sources are important inputs. We demonstrated the computation process of geographic variables mostly recorded in 2010 at regulatory air pollution monitoring sites in South Korea. On the basis of previous studies, we finalized a list of 313 geographic variables related to air pollution sources in eight categories including traffic, demographic characteristics, land use, transportation facilities, physical geography, emissions, vegetation, and altitude. We then obtained data from different sources such as the Statistics Geographic Information Service and Korean Transport Database. After integrating all available data to a single database by matching coordinate systems and converting non-spatial data to spatial data, we computed geographic variables at 294 regulatory monitoring sites in South Korea. The data integration and variable computation were performed by using ArcGIS version 10.2 (ESRI Inc., Redlands, CA, USA). For traffic, we computed the distances to the nearest roads and the sums of road lengths within different sizes of circular buffers. In addition, we calculated the numbers of residents, households, housing buildings, companies, and employees within the buffers. The percentages of areas for different types of land use compared to total areas were calculated within the buffers. For transportation facilities and physical geography, we computed the distances to the closest public transportation depots and the boundary lines. The vegetation index and altitude were estimated at a given location by using satellite data. The summary statistics of geographic variables in Seoul across monitoring sites showed different patterns between urban background and urban roadside sites. This study
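Two of the traffic variables described above, distance to the nearest road and total road length within a circular buffer, can be computed with elementary geometry; a small sketch with illustrative coordinates (a stand-in for the ArcGIS workflow, roads modelled as line segments in km):

```python
import math

def seg_point_distance(p, a, b):
    """Distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def seg_length_in_circle(a, b, centre, radius):
    """Length of the part of segment ab inside a circular buffer."""
    ax, ay = a; bx, by = b; cx, cy = centre
    dx, dy = bx - ax, by - ay
    fx, fy = ax - cx, ay - cy
    A = dx * dx + dy * dy
    B = 2.0 * (fx * dx + fy * dy)
    C = fx * fx + fy * fy - radius * radius
    disc = B * B - 4.0 * A * C
    if A == 0 or disc <= 0:
        return 0.0
    s = math.sqrt(disc)
    t0 = max(0.0, (-B - s) / (2.0 * A))   # clamp intersection interval to [0, 1]
    t1 = min(1.0, (-B + s) / (2.0 * A))
    return max(0.0, t1 - t0) * math.sqrt(A)

site = (0.0, 0.0)                          # monitoring site
roads = [((-5.0, 1.0), (5.0, 1.0)),        # east-west road 1 km north
         ((2.0, -4.0), (2.0, 4.0))]        # north-south road 2 km east

dist_nearest = min(seg_point_distance(site, a, b) for a, b in roads)
road_km_500m = sum(seg_length_in_circle(a, b, site, 0.5) for a, b in roads)
road_km_3km = sum(seg_length_in_circle(a, b, site, 3.0) for a, b in roads)
```

Repeating the buffer sum for several radii reproduces the "different sizes of circular buffers" used for the traffic category.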

  12. Computation of geographic variables for air pollution prediction models in South Korea

    Directory of Open Access Journals (Sweden)

    Youngseob Eum

    2015-10-01

    Full Text Available Recent cohort studies have relied on exposure prediction models to estimate individual-level air pollution concentrations because individual air pollution measurements are not available for cohort locations. For such prediction models, geographic variables related to pollution sources are important inputs. We demonstrated the computation process of geographic variables mostly recorded in 2010 at regulatory air pollution monitoring sites in South Korea. On the basis of previous studies, we finalized a list of 313 geographic variables related to air pollution sources in eight categories including traffic, demographic characteristics, land use, transportation facilities, physical geography, emissions, vegetation, and altitude. We then obtained data from different sources such as the Statistics Geographic Information Service and Korean Transport Database. After integrating all available data to a single database by matching coordinate systems and converting non-spatial data to spatial data, we computed geographic variables at 294 regulatory monitoring sites in South Korea. The data integration and variable computation were performed by using ArcGIS version 10.2 (ESRI Inc., Redlands, CA, USA). For traffic, we computed the distances to the nearest roads and the sums of road lengths within different sizes of circular buffers. In addition, we calculated the numbers of residents, households, housing buildings, companies, and employees within the buffers. The percentages of areas for different types of land use compared to total areas were calculated within the buffers. For transportation facilities and physical geography, we computed the distances to the closest public transportation depots and the boundary lines. The vegetation index and altitude were estimated at a given location by using satellite data. The summary statistics of geographic variables in Seoul across monitoring sites showed different patterns between urban background and urban roadside

  13. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant....

  14. Modeling of Fluctuating Mass Flux in Variable Density Flows

    Science.gov (United States)

    So, R. M. C.; Mongia, H. C.; Nikjooy, M.

    1983-01-01

    The approach solves for both Reynolds and Favre averaged quantities and calculates the scalar pdf. Turbulent models used to close the governing equations are formulated to account for complex mixing and variable density effects. In addition, turbulent mass diffusivities are not assumed to be in constant proportion to turbulent momentum diffusivities. The governing equations are solved by a combination of finite-difference technique and Monte-Carlo simulation. Some preliminary results on simple variable density shear flows are presented. The differences between these results and those obtained using conventional models are discussed.

  15. SST Diurnal Variability: Regional Extent & Implications in Atmospheric Modelling

    DEFF Research Database (Denmark)

    Karagali, Ioanna; Høyer, Jacob L.

    2013-01-01

    The project Sea Surface Temperature Diurnal Variability: Regional Extent and Implications in Atmospheric Modeling (SSTDV: R.EX.- IM.A.M.) was initiated within the framework of the European Space Agency's Support to Science Element (ESA STSE). The main focus is twofold: i) to characterize...... and quantify regional diurnal warming from the experimental MSG/SEVIRI hourly SST fields, for the period 2006-2012. ii) To investigate the impact of the increased SST temporal resolution in the atmospheric model WRF, in terms of modeled 10-m winds and surface heat fluxes. Withing this context, 3 main tasks...... SST variability on atmospheric modeling is the prime goal of the third and final task. This will be examined by increasing the temporal resolution of the SST initial conditions in WRF and by evaluating the WRF included diurnal scheme. Validation of the modeled winds will be performed against 10m ASAR...

  16. A Novel Approach to Develop the Lower Order Model of Multi-Input Multi-Output System

    Science.gov (United States)

    Rajalakshmy, P.; Dharmalingam, S.; Jayakumar, J.

    2017-10-01

    A mathematical model is a virtual entity that uses mathematical language to describe the behavior of a system. Mathematical models are used particularly in the natural sciences and engineering disciplines like physics, biology, and electrical engineering as well as in the social sciences like economics, sociology and political science. Physicists, engineers, computer scientists, and economists use mathematical models most extensively. With the advent of high performance processors and advanced mathematical computations, it is possible to develop high performing simulators for complicated Multi Input Multi Output (MIMO) systems like quadruple-tank systems, aircraft, boilers, etc. This paper presents the development of the mathematical model of a 500 MW utility boiler which is a highly complex system. A synergistic combination of operational experience, system identification and lower order modeling philosophy has been effectively used to develop a simplified but accurate model of a circulation system of a utility boiler which is a MIMO system. The results obtained are found to be in good agreement with the physics of the process and with the results obtained through design procedure. The model obtained can be directly used for control system studies and to realize hardware simulators for boiler testing and operator training.

  17. Consumer input into health care: Time for a new active and comprehensive model of consumer involvement.

    Science.gov (United States)

    Hall, Alix E; Bryant, Jamie; Sanson-Fisher, Rob W; Fradgley, Elizabeth A; Proietto, Anthony M; Roos, Ian

    2018-03-07

    To ensure the provision of patient-centred health care, it is essential that consumers are actively involved in the process of determining and implementing health-care quality improvements. However, common strategies used to involve consumers in quality improvements, such as consumer membership on committees and collection of patient feedback via surveys, are ineffective and have a number of limitations, including: limited representativeness; tokenism; a lack of reliable and valid patient feedback data; infrequent assessment of patient feedback; delays in acquiring feedback; and how collected feedback is used to drive health-care improvements. We propose a new active model of consumer engagement that aims to overcome these limitations. This model involves the following: (i) the development of a new measure of consumer perceptions; (ii) low cost and frequent electronic data collection of patient views of quality improvements; (iii) efficient feedback to the health-care decision makers; and (iv) active involvement of consumers that fosters power to influence health system changes. © 2018 The Authors Health Expectations published by John Wiley & Sons Ltd.

  18. Analytical model of reactive transport processes with spatially variable coefficients.

    Science.gov (United States)

    Simpson, Matthew J; Morrow, Liam C

    2015-05-01

    Analytical solutions of partial differential equation (PDE) models describing reactive transport phenomena in saturated porous media are often used as screening tools to provide insight into contaminant fate and transport processes. While many practical modelling scenarios involve spatially variable coefficients, such as spatially variable flow velocity, v(x), or spatially variable decay rate, k(x), most analytical models deal with constant coefficients. Here we present a framework for constructing exact solutions of PDE models of reactive transport. Our approach is relevant for advection-dominant problems, and is based on a regular perturbation technique. We present a description of the solution technique for a range of one-dimensional scenarios involving constant and variable coefficients, and we show that the solutions compare well with numerical approximations. Our general approach applies to a range of initial conditions and various forms of v(x) and k(x). Instead of simply documenting specific solutions for particular cases, we present a symbolic worksheet, as supplementary material, which enables the solution to be evaluated for different choices of the initial condition, v(x) and k(x). We also discuss how the technique generalizes to apply to models of coupled multispecies reactive transport as well as higher dimensional problems.
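As an illustration of the variable-coefficient setting, the steady-state limit of such a model, v(x) dc/dx = -k(x) c, has the closed-form solution c(x) = c0 exp(-∫ k(s)/v(s) ds), against which a simple upwind scheme can be checked. The particular v(x), k(x), and boundary value below are hypothetical choices, not from the paper.

```python
import numpy as np

# 1D steady-state advection-decay with spatially variable coefficients:
#   v(x) dc/dx = -k(x) c,  c(0) = c0
# Exact solution: c(x) = c0 * exp(-integral of k(s)/v(s) from 0 to x).
v = lambda x: 1.0 + 0.5 * x      # spatially variable velocity
k = lambda x: 0.2 + 0.1 * x      # spatially variable decay rate
c0 = 1.0
x = np.linspace(0.0, 1.0, 2001)

# Exact solution via cumulative trapezoidal integration of k/v.
integrand = k(x) / v(x)
I = np.concatenate([[0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))])
c_exact = c0 * np.exp(-I)

# Simple upwind (forward Euler in x) numerical approximation.
c_num = np.empty_like(x)
c_num[0] = c0
dx = x[1] - x[0]
for i in range(len(x) - 1):
    c_num[i + 1] = c_num[i] - dx * k(x[i]) / v(x[i]) * c_num[i]

err = np.max(np.abs(c_num - c_exact))
```

The first-order scheme agrees with the exact profile to well under 0.1% on this grid, mirroring the paper's comparison of exact solutions with numerical approximations.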

  19. Global Nitrous Oxide Emissions from Agricultural Soils: Magnitude and Uncertainties Associated with Input Data and Model Parameters

    Science.gov (United States)

    Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.

    2016-12-01

    Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in an increase of about 21% in atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities, including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made over the last several decades to quantify global and regional N2O emissions from agricultural soils using a wide variety of approaches, such as ground-based observations, atmospheric inversions, and process-based models. However, large uncertainties exist in those estimates as well as in the methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate the magnitude and the spatial and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we have implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect the calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate), different sets of input data including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils and identifies key gaps and limitations in the existing model and data that need to be investigated in the future.
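The parameter-perturbation experiments described above can be sketched as a Monte Carlo loop over plausible parameter ranges. The flux function below is a toy stand-in response surface, not the DLEM formulation, and all numbers are purely illustrative.

```python
import numpy as np

# Toy parameter-perturbation ensemble: sample nitrification and
# denitrification rate multipliers over assumed ranges and bracket
# the resulting flux uncertainty. NOT the DLEM model.
rng = np.random.default_rng(42)

def n2o_flux(nitr_rate, denitr_rate, n_input=100.0):
    # Stand-in linear response surface for illustration only.
    return n_input * (0.002 * nitr_rate + 0.004 * denitr_rate)

samples = [
    n2o_flux(rng.uniform(0.5, 1.5), rng.uniform(0.5, 1.5))
    for _ in range(1000)
]
lo, hi = np.percentile(samples, [5, 95])   # 90% uncertainty bounds
```

The same pattern, with the process model in place of the stand-in function, yields the parameter-driven uncertainty bounds reported in studies of this kind.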

  20. The Model Intercomparison Project on the Climatic Response to Volcanic Forcing (VolMIP): Experimental Design and Forcing Input Data for CMIP6

    Science.gov (United States)

    Zanchettin, Davide; Khodri, Myriam; Timmreck, Claudia; Toohey, Matthew; Schmidt, Anja; Gerber, Edwin P.; Hegerl, Gabriele; Robock, Alan; Pausata, Francesco; Ball, William T.; hide

    2016-01-01

    The enhancement of the stratospheric aerosol layer by volcanic eruptions induces a complex set of responses causing global and regional climate effects on a broad range of timescales. Uncertainties exist regarding the climatic response to strong volcanic forcing identified in coupled climate simulations that contributed to the fifth phase of the Coupled Model Intercomparison Project (CMIP5). In order to better understand the sources of this inter-model diversity, the Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP) has defined a coordinated set of idealized volcanic perturbation experiments to be carried out in alignment with the CMIP6 protocol. VolMIP provides a common stratospheric aerosol data set for each experiment to minimize differences in the applied volcanic forcing. It defines a set of initial conditions to assess how internal climate variability contributes to determining the response. VolMIP will assess to what extent volcanically forced responses of the coupled ocean-atmosphere system are robustly simulated by state-of-the-art coupled climate models and identify the causes that limit robust simulated behavior, especially differences in the treatment of physical processes. This paper illustrates the design of the idealized volcanic perturbation experiments in the VolMIP protocol and describes the common aerosol forcing input data sets to be used.

  1. Remote sensing inputs to National Model Implementation Program for water resources quality improvement

    Science.gov (United States)

    Eidenshink, J. C.; Schmer, F. A.

    1979-01-01

    The Lake Herman watershed in southeastern South Dakota has been selected as one of seven water resources systems in the United States for involvement in the National Model Implementation Program (MIP). MIP is a pilot program initiated to illustrate the effectiveness of existing water resources quality improvement programs. The Remote Sensing Institute (RSI) at South Dakota State University has produced a computerized geographic information system for the Lake Herman watershed. All components necessary for the monitoring and evaluation process were included in the data base. The computerized data were used to produce thematic maps and tabular data for the land cover and soil classes within the watershed. These data are being utilized operationally by SCS resource personnel for planning and management purposes.

  2. MODEL AND METHOD FOR SYNTHESIS OF PROJECT MANAGEMENT METHODOLOGY WITH FUZZY INPUT DATA

    Directory of Open Access Journals (Sweden)

    Igor V. KONONENKO

    2016-02-01

    Full Text Available Literature analysis concerning the selection or creation of a project management methodology is performed. Creating a "complete" methodology is proposed which can be applied to managing projects of any complexity, with various degrees of responsibility for results and different predictability of the requirements. For the formation of a "complete" methodology, it is proposed to take the PMBOK standard as the basis, supplemented by processes of the most demanding plan-driven and flexible Agile methodologies. For each knowledge area of the PMBOK standard, the following groups of processes should be provided: initiation, planning, execution, reporting and forecasting, controlling, analysis, decision making, and closing. The method for generating a methodology for a specific project is presented. A multiple-criteria mathematical model and method are developed for the synthesis of the methodology when initial data about the project and its environment are fuzzy.

  3. Effect of delayed response in growth on the dynamics of a chemostat model with impulsive input

    International Nuclear Information System (INIS)

    Jiao Jianjun; Yang Xiaosong; Chen Lansun; Cai Shaohong

    2009-01-01

    In this paper, a chemostat model with delayed response in growth and impulsive perturbations of the substrate is considered. Using the discrete dynamical system determined by the stroboscopic map, we obtain a microorganism-extinction periodic solution; furthermore, the globally attractive condition for this periodic solution is obtained. Using the theory of delay functional and impulsive differential equations, we also obtain the condition for permanence of the investigated system. Our results indicate that the discrete time delay influences the dynamic behavior of the investigated system, and they provide a tactical basis for experimenters to control the outcome of the chemostat. Furthermore, numerical analysis is included to illuminate the dynamics of the system as affected by the discrete time delay.
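A much-simplified version of such a system can be integrated numerically: Monod growth with dilution, plus a periodic impulsive substrate input. The delay term of the paper's model is omitted here, and all parameter values are illustrative rather than taken from the paper.

```python
# Simplified chemostat sketch (no delay term): Monod growth with
# dilution and a periodic impulsive substrate pulse, Euler-integrated.
# All parameter values are illustrative.
mu_max, Ks, Y, D = 0.5, 1.0, 0.8, 0.3   # growth, half-saturation, yield, dilution
sigma, T = 2.0, 5.0                      # impulse size and period
dt, t_end = 0.001, 100.0

S, x = 1.0, 0.5                          # substrate and microorganism levels
n_steps = int(t_end / dt)
period_steps = int(round(T / dt))
for step in range(1, n_steps + 1):
    mu = mu_max * S / (Ks + S)           # Monod growth rate
    dS = -D * S - mu * x / Y             # washout + consumption
    dx = (mu - D) * x                    # growth - washout
    S += dt * dS
    x += dt * dx
    if step % period_steps == 0:         # impulsive substrate input
        S += sigma
```

Between pulses the substrate decays and is consumed; the impulses keep the system on a forced periodic orbit, the setting in which the stroboscopic-map analysis of the paper applies.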

  4. Modeling heart rate variability including the effect of sleep stages

    Science.gov (United States)

    Soliński, Mateusz; Gierałtowski, Jan; Żebrowski, Jan

    2016-02-01

    We propose a model for heart rate variability (HRV) of a healthy individual during sleep with the assumption that the heart rate variability is predominantly a random process. Autonomic nervous system activity has different properties during different sleep stages, and this affects many physiological systems including the cardiovascular system. Different properties of HRV can be observed during each particular sleep stage. We believe that taking into account the sleep architecture is crucial for modeling human nighttime HRV. The stochastic model of HRV introduced by Kantelhardt et al. was used as the starting point. We studied the statistical properties of sleep in healthy adults, analyzing 30 polysomnographic recordings, which provided realistic information about sleep architecture. Next, we generated synthetic hypnograms and included them in the modeling of nighttime RR interval series. The results of standard linear HRV analysis and of nonlinear analysis (Shannon entropy, Poincaré plots, and multiscale multifractal analysis) show that, in comparison with real data, the HRV signals obtained from our model have very similar properties, in particular the multifractal characteristics at different time scales. The model described in this paper is discussed in the context of normal sleep. However, its construction is such that it should allow modeling of heart rate variability in sleep disorders. This possibility is briefly discussed.
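The stage-dependent character of such a model can be illustrated minimally: draw RR intervals around a stage-dependent mean, with the stage sequence taken from a toy hypnogram. This is a deliberately crude stand-in for the actual stochastic model (which is far richer), and the stage names, durations, and parameter values are illustrative.

```python
import numpy as np

# Toy stage-dependent RR-interval generator: Gaussian intervals whose
# mean and spread change with the sleep stage of a toy hypnogram.
# All numbers are illustrative, not fitted to polysomnographic data.
rng = np.random.default_rng(1)

# Toy hypnogram as (stage, duration in beats).
hypnogram = [("wake", 200), ("light", 600), ("deep", 400),
             ("rem", 300), ("light", 500)]

# Stage-dependent mean RR interval (s) and standard deviation (s).
stage_params = {"wake": (0.80, 0.04), "light": (0.95, 0.05),
                "deep": (1.05, 0.03), "rem": (0.90, 0.06)}

rr = np.concatenate([
    rng.normal(*stage_params[stage], size=n)
    for stage, n in hypnogram
])
```

Replacing the toy hypnogram with synthetic hypnograms sampled from observed sleep-architecture statistics is the step the paper takes to make the nighttime series realistic.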

  5. Deriving input parameters for cost-effectiveness modeling: taxonomy of data types and approaches to their statistical synthesis.

    Science.gov (United States)

    Saramago, Pedro; Manca, Andrea; Sutton, Alex J

    2012-01-01

    The evidence base informing economic evaluation models is rarely derived from a single source. Researchers are typically expected to identify and combine available data to inform the estimation of model parameters for a particular decision problem. The absence of clear guidelines on what data can be used and how to effectively synthesize this evidence base under different scenarios inevitably leads to different approaches being used by different modelers. The aim of this article is to produce a taxonomy that can help modelers identify the most appropriate methods to use when synthesizing the available data for a given model parameter. This article developed a taxonomy based on possible scenarios faced by the analyst when dealing with the available evidence. While mainly focusing on clinical effectiveness parameters, this article also discusses strategies relevant to other key input parameters in any economic model (i.e., disease natural history, resource use/costs, and preferences). The taxonomy categorizes the evidence base for health economic modeling according to whether 1) single or multiple data sources are available, 2) individual or aggregate data are available (or both), or 3) individual or multiple decision model parameters are to be estimated from the data. References to examples of the key methodological developments for each entry in the taxonomy together with citations to where such methods have been used in practice are provided throughout. The use of the taxonomy developed in this article hopes to improve the quality of the synthesis of evidence informing decision models by bringing to the attention of health economics modelers recent methodological developments in this field. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Modeling mud flocculation using variable collision and breakup efficiencies

    Science.gov (United States)

    Strom, K.; Keyvani, A.

    2013-12-01

    Solution of the Winterwerp (1998) floc growth and breakup equation yields time-dependent median floc size as an outcome of collision-driven floc growth and shear-induced floc breakage. The formulation is attractive in that it is an ODE that yields fast solutions for median floc size and can be incorporated into sediment transport models. The Winterwerp (1998) floc size equation was used to model floc growth and breakup data from laboratory experiments conducted under both constant and variable turbulent shear rates (Keyvani 2013). The data showed that floc growth rate starts out very high and then reduces with size to asymptotically approach an equilibrium size. In modeling the data, the Winterwerp (1998) model and the Son and Hsu (2008) variant were found to be able to capture the initial fast growth phase and the equilibrium state, but were not able to capture the slow growing phase well. This resulted in flocs reaching the equilibrium state in the models much faster than in the experimental data. The objective of this work was to improve the ability of the general Winterwerp (1998) formulation to better capture the slow growth phase and more accurately predict the time to equilibrium. To do this, a full parameter sensitivity analysis was conducted using the Winterwerp (1998) model. Several modifications were tested, including the variable fractal dimension and yield strength extensions of Son and Hsu (2008, 2009). The best match with the in-house data, and data from the literature, was achieved using floc collision and breakup efficiency coefficients that decrease with floc size. The net result of the decrease in both of these coefficients is that floc growth slows without modification to the equilibrium size. Inclusion of these new functions allows for substantial improvement in modeling the growth phase of flocs in both steady and variable turbulence conditions. The improvement is particularly noticeable when modeling continual growth in a decaying turbulence field.
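A generic growth-breakup balance in the spirit of this modification can be sketched as follows. The functional forms, exponents, and parameter values are illustrative assumptions, not the exact Winterwerp (1998) equations; the key feature is that the collision efficiency alpha(D) and breakup efficiency beta(D) decay with floc size D at the same rate, so growth slows while the equilibrium size is unchanged.

```python
# Illustrative growth-breakup ODE with size-dependent collision
# efficiency alpha(D) and breakup efficiency beta(D). Both decay as
# D**-0.5, so their ratio (and hence the equilibrium size) is fixed
# while the net rate of change shrinks with size. Toy values only.
G, c, Dp = 10.0, 0.1, 20e-6          # shear rate, concentration, primary size

def alpha(D):                         # size-dependent collision efficiency
    return 0.3 * (Dp / D) ** 0.5

def beta(D):                          # size-dependent breakup efficiency
    return 1e-4 * (Dp / D) ** 0.5

def dDdt(D):
    growth = alpha(D) * c * G * D
    breakup = beta(D) * G ** 1.5 * D * (D - Dp) / Dp
    return growth - breakup

# Euler integration upward from twice the primary particle size.
D, dt = 2 * Dp, 0.1
sizes = []
for _ in range(20000):
    D += dt * dDdt(D)
    sizes.append(D)
```

Because alpha and beta scale identically with D, the equilibrium condition alpha*c*G = beta*G**1.5*(D-Dp)/Dp is independent of the shared decay, which is exactly the mechanism described above: a slower approach to the same equilibrium size.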

  7. Evaluation of a Regional Australian Nurse-Led Parkinson's Service Using the Context, Input, Process, and Product Evaluation Model.

    Science.gov (United States)

    Jones, Belinda; Hopkins, Genevieve; Wherry, Sally-Anne; Lueck, Christian J; Das, Chandi P; Dugdale, Paul

    2016-01-01

    A nurse-led Parkinson's service was introduced at Canberra Hospital and Health Services in 2012 with the primary objective of improving the care and self-management of people with a diagnosis of Parkinson's disease (PD) and related movement disorders. Other objectives of the Service included improving the quality of life of patients with PD and reducing their caregiver burden, improving the knowledge and understanding of PD among healthcare professionals, and reducing unnecessary hospital admissions. This article evaluates the first 2 years of this Service. The Context, Input, Process, and Product Evaluation Model was used to evaluate the Parkinson's and Movement Disorder Service. The context evaluation was conducted through discussions with stakeholders, review of PD guidelines and care pathways, and assessment of service gaps. Input: The input evaluation was carried out by reviewing the resources and strategies used in the development of the Service. The process evaluation was undertaken by reviewing the areas of the implementation that went well and identifying issues and ongoing gaps in service provision. Product: Finally, product evaluation was undertaken by conducting stakeholder interviews and surveying patients in order to assess their knowledge and perception of value, and the patient experience of the Service. Admission data before and after implementation of the Parkinson's and Movement Disorder Service were also compared for any notable trends. Several gaps in service provision for patients with PD in the Australian Capital Territory were identified, prompting the development of a PD Service to address some of them. Input: Funding for a Parkinson's disease nurse specialist was made available, and existing resources were used to develop clinics, education sessions, and outreach services. Clinics and education sessions were implemented successfully, with positive feedback from patients and healthcare professionals. However, outreach services were limited.

  8. Sensitivity analysis of a forest gap model concerning current and future climate variability

    Energy Technology Data Exchange (ETDEWEB)

    Lasch, P.; Suckow, F.; Buerger, G.; Lindner, M.

    1998-07-01

    The ability of a forest gap model to simulate the effects of climate variability and extreme events depends on the temporal resolution of the weather data that are used and the internal processing of these data for growth, regeneration and mortality. The climatological driving forces of most current gap models are based on monthly means of weather data and their standard deviations, and long-term monthly means are used for calculating yearly aggregated response functions for ecological processes. In this study, the results of sensitivity analyses using the forest gap model FORSKA-P and involving climate data of different resolutions, from long-term monthly means to daily time series, including extreme events, are presented for the current climate and for a climate change scenario. The model was applied at two sites with differing soil conditions in the federal state of Brandenburg, Germany. The sensitivity of the model concerning climate variations and different climate input resolutions is analysed and evaluated. The climate variability used for the model investigations affected the behaviour of the model substantially. (orig.)

  9. Energy balance in the solar transition region. II - Effects of pressure and energy input on hydrostatic models

    Science.gov (United States)

    Fontenla, J. M.; Avrett, E. H.; Loeser, R.

    1991-01-01

    The radiation of energy by hydrogen lines and continua in hydrostatic energy-balance models of the transition region between the solar chromosphere and corona is studied using models which assume that mechanical or magnetic energy is dissipated in the hot corona and is then transported toward the chromosphere down the steep temperature gradient of the transition region. These models explain the average quiet sun and also the entire range of variability of the Ly-alpha lines. The relations between the downward energy flux, the pressure of the transition region, and the different hydrogen emissions are described.

  10. Efficient family-based model checking via variability abstractions

    DEFF Research Database (Denmark)

    Dimovski, Aleksandar; Al-Sibahi, Ahmad Salim; Brabrand, Claus

    2016-01-01

    Many software systems are variational: they can be configured to meet diverse sets of requirements. They can produce a (potentially huge) number of related systems, known as products or variants, by systematically reusing common parts. For variational models (variational systems or families...... with the abstract model checking of the concrete high-level variational model. This allows the use of Spin with all its accumulated optimizations for efficient verification of variational models without any knowledge about variability. We have implemented the transformations in a prototype tool, and we illustrate...

  11. Internal variability in a regional climate model over West Africa

    Energy Technology Data Exchange (ETDEWEB)

    Vanvyve, Emilie; Ypersele, Jean-Pascal van [Universite catholique de Louvain, Institut d' astronomie et de geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hall, Nicholas [Laboratoire d' Etudes en Geophysique et Oceanographie Spatiales/Centre National d' Etudes Spatiales, Toulouse Cedex 9 (France); Messager, Christophe [University of Leeds, Institute for Atmospheric Science, Environment, School of Earth and Environment, Leeds (United Kingdom); Leroux, Stephanie [Universite Joseph Fourier, Laboratoire d' etude des Transferts en Hydrologie et Environnement, BP53, Grenoble Cedex 9 (France)

    2008-02-15

    Sensitivity studies with regional climate models are often performed on the basis of a few simulations for which the difference is analysed and the statistical significance is often taken for granted. In this study we present some simple measures of the confidence limits for these types of experiments by analysing the internal variability of a regional climate model run over West Africa. Two 1-year long simulations, differing only in their initial conditions, are compared. The difference between the two runs gives a measure of the internal variability of the model and an indication of which timescales are reliable for analysis. The results are analysed for a range of timescales and spatial scales, and quantitative measures of the confidence limits for regional model simulations are diagnosed for a selection of study areas for rainfall, low level temperature and wind. As the averaging period or spatial scale is increased, the signal due to internal variability gets smaller and confidence in the simulations increases. This occurs more rapidly for variations in precipitation, which appear essentially random, than for dynamical variables, which show some organisation on larger scales. (orig.)

  12. Viscous cosmological models with a variable cosmological term ...

    African Journals Online (AJOL)

    Einstein's field equations for a Friedmann-Lemaître-Robertson-Walker universe filled with a dissipative fluid with a variable cosmological term Λ described by the full Israel-Stewart theory are considered. General solutions to the field equations for the flat case have been obtained. The solution corresponds to the dust free model ...

  13. Appraisal and Reliability of Variable Engagement Model Prediction ...

    African Journals Online (AJOL)

    The variable engagement model based on the stress-crack opening displacement relationship, which describes the behaviour of randomly oriented steel fibre composites subjected to uniaxial tension, has been evaluated so as to determine the safety indices associated when the fibres are subjected to pullout and with ...

  14. Channel responses to varying sediment input: A flume experiment modeled after Redwood Creek, California

    Science.gov (United States)

    Madej, M.A.; Sutherland, D.G.; Lisle, T.E.; Pryor, B.

    2009-01-01

    At the reach scale, a channel adjusts to sediment supply and flow through mutual interactions among channel form, bed particle size, and flow dynamics that govern river bed mobility. Sediment can impair the beneficial uses of a river, but the timescales for studying recovery following high sediment loading in the field setting make flume experiments appealing. We use a flume experiment, coupled with field measurements in a gravel-bed river, to explore sediment transport, storage, and mobility relations under various sediment supply conditions. Our flume experiment modeled adjustments of channel morphology, slope, and armoring in a gravel-bed channel. Under moderate sediment increases, channel bed elevation increased and sediment output increased, but channel planform remained similar to pre-feed conditions. During the following degradational cycle, most of the excess sediment was evacuated from the flume and the bed became armored. Under high sediment feed, channel bed elevation increased, the bed became smoother, mid-channel bars and bedload sheets formed, and water surface slope increased. Concurrently, output increased and became more poorly sorted. During the last degradational cycle, the channel became armored and channel incision ceased before all excess sediment was removed. Selective transport of finer material was evident throughout the aggradational cycles and became more pronounced during degradational cycles as the bed became armored. Our flume results of changes in bed elevation, sediment storage, channel morphology, and bed texture parallel those from field surveys of Redwood Creek, northern California, which has exhibited channel bed degradation for 30 years following a large aggradation event in the 1970s. The flume experiment suggested that channel recovery in terms of reestablishing a specific morphology may not occur, but the channel may return to a state of balancing sediment supply and transport capacity.

  15. Modelling of uranium inputs and its fate in soil; Modellierung von Uraneintraegen aus Duengern und ihr Verbleib im Boden

    Energy Technology Data Exchange (ETDEWEB)

    Achatz, M. [Bundesamt fuer Strahlenschutz, Berlin (Germany); Urso, L. [Bundesamt fuer Strahlenschutz, Oberschleissheim (Germany)

    2016-07-01

    87% of mineral phosphate fertilizers are produced from sedimentary rock phosphate, which generally contains heavy metals such as uranium. The solution and migration behavior of uranium is determined by its redox state, the pH conditions, and the quality and quantity of available ligands. A further important role in sorption is played by soil components such as clay minerals, pedogenic oxides and soil organic matter. To provide a suitably detailed speciation model of U in soil, several physical and chemical components have to be included so that distribution coefficients (k_D) and sorption processes can be stated. The model of Hormann and Fischer served as the basis for modelling uranium mobility in soil with the program PhreeqC. Using real soil and soil water measurements may help to identify the factors and processes influencing the mobility of uranium under conditions that are as realistic as possible. Additionally, further predictions of uranium migration in soil can be made based on modelling with PhreeqC. The modelling of uranium inputs and their fate in soil can help to clarify whether uranium in soil is of anthropogenic or geogenic origin.

  16. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data support the so-called "effortful hypothesis", whereby distorted input has a detrimental impact on prior information stored in short-term memory. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. A metric for attributing variability in modelled streamflows

    Science.gov (United States)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2016-10-01

    Significant gaps in our present understanding of hydrological systems lead to enhanced uncertainty in key modelling decisions. This study proposes a method, namely "Quantile Flow Deviation (QFD)", for the attribution of forecast variability to different sources across different streamflow regimes. By using a quantile-based metric, we can assess the change in uncertainty across individual percentiles, thereby allowing uncertainty to be expressed as a function of magnitude and time. As a result, one can address selective sources of uncertainty depending on whether low or high flows (say) are of interest. By way of a case study, we demonstrate the usefulness of the approach for estimating the relative importance of model parameter identification, objective functions and model structures as sources of streamflow forecast uncertainty. We use FUSE (Framework for Understanding Structural Errors) to implement our methods, allowing selection of multiple different model structures. Cross-catchment comparison is done for two different catchments: Leaf River in Mississippi, USA and Bass River of Victoria, Australia. Two different approaches to parameter estimation are presented that demonstrate the statistic: one based on GLUE, the other based on optimization. The results presented in this study suggest that the determination of the model structure with the design catchment should be given priority, but that objective function selection with parameter identifiability can lead to significant variability in results. By examining the QFD across multiple flow quantiles, the ability of certain models and optimization routines to constrain variability for different flow conditions is demonstrated.
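The idea of expressing deviation as a function of flow quantile can be sketched as below, with a synthetic ensemble standing in for the different model structures and parameter sets. This is an illustrative per-quantile spread statistic, not the paper's exact QFD definition.

```python
import numpy as np

# Per-quantile deviation across an ensemble of simulated streamflow
# series. The ensemble here is synthetic (lognormal draws with
# member-specific spread), standing in for model/parameter variants.
rng = np.random.default_rng(7)

# Hypothetical ensemble: 5 model variants x 1000 daily flows.
ensemble = np.exp(rng.normal(loc=1.0,
                             scale=[[0.8], [0.9], [1.0], [1.1], [1.2]],
                             size=(5, 1000)))

quantiles = np.arange(0.1, 1.0, 0.1)
# Flow value at each quantile for every ensemble member.
qflows = np.quantile(ensemble, quantiles, axis=1)   # shape (9, 5)
# Deviation across members, evaluated separately per quantile.
qfd = qflows.max(axis=1) - qflows.min(axis=1)
```

Expressing the spread quantile by quantile is what lets uncertainty be attributed separately to low-flow and high-flow regimes rather than as a single lumped score.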

  18. Global modeling of land water and energy balances. Part III: Interannual variability

    Science.gov (United States)

    Shmakin, A.B.; Milly, P.C.D.; Dunne, K.A.

    2002-01-01

    The Land Dynamics (LaD) model is tested by comparison with observations of interannual variations in discharge from 44 large river basins for which relatively accurate time series of monthly precipitation (a primary model input) have recently been computed. When results are pooled across all basins, the model explains 67% of the interannual variance of annual runoff ratio anomalies (i.e., anomalies of annual discharge volume, normalized by long-term mean precipitation volume). The new estimates of basin precipitation appear to offer an improvement over those from a state-of-the-art analysis of global precipitation (the Climate Prediction Center Merged Analysis of Precipitation, CMAP), judging from comparisons of parallel model runs and of analyses of precipitation-discharge correlations. When the new precipitation estimates are used, the performance of the LaD model is comparable to, but not significantly better than, that of a simple, semiempirical water-balance relation that uses only annual totals of surface net radiation and precipitation. This implies that the LaD simulations of interannual runoff variability do not benefit substantially from information on geographical variability of land parameters or seasonal structure of interannual variability of precipitation. The aforementioned analyses necessitated the development of a method for downscaling of long-term monthly precipitation data to the relatively short timescales necessary for running the model. The method merges the long-term data with a reference dataset of 1-yr duration, having high temporal resolution. The success of the method, for the model and data considered here, was demonstrated in a series of model-model comparisons and in the comparisons of modeled and observed interannual variations of basin discharge.

  19. Sparse modeling of spatial environmental variables associated with asthma.

    Science.gov (United States)

    Chang, Timothy S; Gangnon, Ronald E; David Page, C; Buckingham, William R; Tandias, Aman; Cowan, Kelly J; Tomasallo, Carrie D; Arndt, Brian G; Hanrahan, Lawrence P; Guilbert, Theresa W

    2015-02-01

Geographically distributed environmental factors influence the burden of diseases such as asthma. Our objective was to identify sparse environmental variables associated with asthma diagnosis gathered from a large electronic health record (EHR) dataset while controlling for spatial variation. An EHR dataset from the University of Wisconsin's Family Medicine, Internal Medicine and Pediatrics Departments was obtained for 199,220 patients aged 5-50 years over a three-year period. Each patient's home address was geocoded to one of 3456 geographic census block groups. Over one thousand block group variables were obtained from a commercial database. We developed a Sparse Spatial Environmental Analysis (SASEA). Using this method, the environmental variables were first dimensionally reduced with sparse principal component analysis. Logistic thin plate regression spline modeling was then used to identify block group variables associated with asthma from the sparse principal components. The addresses of patients from the EHR dataset were distributed throughout the majority of Wisconsin's geography. Logistic thin plate regression spline modeling captured the spatial variation of asthma. Four sparse principal components identified via model selection consisted of food at home, dog ownership, household size, and disposable income variables. In rural areas, dog ownership and renter-occupied housing units from significant sparse principal components were associated with asthma. Our main contribution is the incorporation of sparsity in spatial modeling. SASEA sequentially added sparse principal components to logistic thin plate regression spline modeling. This method allowed association of geographically distributed environmental factors with asthma using EHR and environmental datasets. SASEA can be applied to other diseases with environmental risk factors.
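
    The abstract does not specify how the sparse principal components are computed; one simple heuristic that produces sparse loadings is truncated power iteration, sketched here on toy data. This is an illustrative stand-in under stated assumptions, not the SASEA implementation:

```python
import random

def first_sparse_pc(X, k, iters=100, seed=1):
    """First sparse principal component by truncated power iteration:
    power-iterate on X^T X, but at each step keep only the k
    largest-magnitude loadings and zero out the rest."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(p)]
    for _ in range(iters):
        Xv = [sum(r[j] * v[j] for j in range(p)) for r in Xc]            # X v
        w = [sum(Xc[i][j] * Xv[i] for i in range(n)) for j in range(p)]  # X^T X v
        keep = set(sorted(range(p), key=lambda j: -abs(w[j]))[:k])
        w = [w[j] if j in keep else 0.0 for j in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy data: two correlated "environmental" features plus three noise features
gen = random.Random(0)
rows = []
for _ in range(40):
    base = gen.gauss(0.0, 1.0)
    rows.append([base, base + gen.gauss(0.0, 0.1)]
                + [gen.gauss(0.0, 1.0) for _ in range(3)])

pc = first_sparse_pc(rows, k=2)
print(pc)
```

    The resulting loading vector has at most k nonzero entries, which is what makes the retained components interpretable as a short list of named variables.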

  20. A comparison of numerical and machine-learning modeling of soil water content with limited input data

    Science.gov (United States)

    Karandish, Fatemeh; Šimůnek, Jiří

    2016-12-01

Soil water content (SWC) is a key factor in optimizing the usage of water resources in agriculture, since it provides the information needed to make an accurate estimation of crop water demand. Methods for predicting SWC that have simple data requirements are needed to achieve an optimal irrigation schedule, especially for various water-saving irrigation strategies that are required to resolve both food and water security issues under conditions of water shortages. Thus, a two-year field investigation was carried out to provide a dataset to compare the effectiveness of HYDRUS-2D, a physically-based numerical model, with various machine-learning models, including Multiple Linear Regression (MLR), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), and Support Vector Machines (SVM), for simulating time series of SWC data under water stress conditions. SWC was monitored using TDRs during the maize growing seasons of 2010 and 2011. Eight combinations of six simple, independent parameters, including pan evaporation and average air temperature as atmospheric parameters, cumulative growth degree days (cGDD) and crop coefficient (Kc) as crop factors, and water deficit (WD) and irrigation depth (In) as crop stress factors, were adopted for the estimation of SWCs in the machine-learning models. Having Root Mean Square Errors (RMSE) in the range of 0.54-2.07 mm, HYDRUS-2D ranked first for the SWC estimation, while the ANFIS and SVM models with input datasets of cGDD, Kc, WD and In ranked next, with RMSEs ranging from 1.27 to 1.9 mm and mean bias errors of -0.07 to 0.27 mm, respectively. However, the MLR models did not perform well for SWC forecasting, mainly due to non-linear changes of SWCs under the irrigation process. The results demonstrated that despite requiring only simple input data, the ANFIS and SVM models could be favorably used for SWC predictions under water stress conditions, especially when there is a lack of data. However, process-based numerical models are undoubtedly a
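
    The RMSE and mean bias error metrics used above to rank the models are straightforward to compute. A minimal sketch with illustrative SWC values (not the paper's data):

```python
def rmse(pred, obs):
    """Root Mean Square Error between predicted and observed SWC series."""
    n = len(pred)
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / n) ** 0.5

def mbe(pred, obs):
    """Mean Bias Error: positive values mean the model over-predicts."""
    n = len(pred)
    return sum(p - o for p, o in zip(pred, obs)) / n

# Illustrative SWC series (mm)
observed  = [22.0, 24.5, 23.1, 20.8, 19.9]
predicted = [21.4, 25.0, 24.0, 21.5, 19.2]
print(rmse(predicted, observed), mbe(predicted, observed))
```

    RMSE penalizes large errors quadratically, while MBE reveals a systematic over- or under-prediction that RMSE alone would hide.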

  1. A variable projection approach for efficient estimation of RBF-ARX model.

    Science.gov (United States)

    Gan, Min; Li, Han-Xiong; Peng, Hui

    2015-03-01

The radial basis function network-based autoregressive with exogenous inputs (RBF-ARX) models have many more linear parameters than nonlinear parameters. Taking advantage of this special structure, a variable projection algorithm is proposed to estimate the model parameters more efficiently by eliminating the linear parameters through orthogonal projection. The proposed method not only substantially reduces the dimension of the parameter space of the RBF-ARX model but also results in a better-conditioned problem. In this paper, both the full Jacobian matrix of Golub and Pereyra and Kaufman's simplification are used to test the performance of the algorithm. An example of chaotic time series modeling is presented for the numerical comparison. It clearly demonstrates that the proposed approach is computationally more efficient than the previous structured nonlinear parameter optimization method and the conventional Levenberg-Marquardt algorithm without the parameters separated. Finally, the proposed method is also applied to a simulated nonlinear single-input single-output process, a time-varying nonlinear process and a real multi-input multi-output nonlinear industrial process to illustrate its usefulness.
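
    The core idea of variable projection can be sketched on a toy separable model y ≈ a·exp(−b·t): for any fixed value of the nonlinear parameter b, the optimal linear parameter a has a closed form (a one-dimensional least-squares projection), so the outer search runs over b alone. The grid search below is a stand-in for the Levenberg-Marquardt iterations used in the paper:

```python
import math

def fit_separable(t, y, b_grid):
    """Variable-projection idea on y ~ a * exp(-b * t): for each candidate b,
    the optimal a is the projection of y onto the basis vector exp(-b*t),
    so the nonlinear search is over b only."""
    best = None
    for b in b_grid:
        phi = [math.exp(-b * ti) for ti in t]
        # closed-form optimal linear parameter for this b
        a = sum(p * yi for p, yi in zip(phi, y)) / sum(p * p for p in phi)
        sse = sum((yi - a * p) ** 2 for p, yi in zip(phi, y))
        if best is None or sse < best[2]:
            best = (a, b, sse)
    return best

t = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]   # noiseless data: a=2, b=0.5
a, b, sse = fit_separable(t, y, [0.1 + 0.01 * k for k in range(91)])
print(a, b, sse)
```

    With the linear parameters eliminated this way, the dimension of the search space seen by the nonlinear optimizer shrinks to the number of nonlinear parameters only, which is exactly the advantage the paper exploits for RBF-ARX models.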

  2. Analysis models for variables associated with breastfeeding duration

    Directory of Open Access Journals (Sweden)

    Edson Theodoro dos S. Neto

    2013-09-01

OBJECTIVE To analyze the factors associated with breastfeeding duration by two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. During 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on food and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, with duration of breastfeeding as the dependent variable, and by logistic regression models, with the presence of a breastfeeding child at different postnatal ages as the dependent variable. RESULTS In the logistic regression model, pacifier sucking (adjusted Odds Ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted Odds Ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were pacifier sucking (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted Hazard Ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the two models. CONCLUSIONS Risk and protective factors associated with cessation of breastfeeding may be analyzed by different statistical regression models. Cox regression models are adequate to analyze such factors in longitudinal studies.
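
    The adjusted odds ratios with 95% confidence intervals quoted above come from exponentiating logistic-regression coefficients. A minimal sketch of that relationship; the coefficient and standard error below are hypothetical values chosen to roughly reproduce the pacifier-sucking estimate:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient:
    OR = exp(beta), CI = exp(beta -/+ z * SE)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient/SE approximating the pacifier-sucking estimate
or_, lo, hi = odds_ratio_ci(1.22, 0.53)
print(f"OR {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

    The same exponentiation applies to Cox coefficients, which is why hazard ratios are reported on the same multiplicative scale.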

  3. Multiple Discrete Endogenous Variables in Weakly-Separable Triangular Models

    Directory of Open Access Journals (Sweden)

    Sung Jae Jun

    2016-02-01

We consider a model in which an outcome depends on two discrete treatment variables, where one treatment is given before the other. We formulate a three-equation triangular system with weak separability conditions. Without assuming assignment is random, we establish the identification of an average structural function using two-step matching. We also consider decomposing the effect of the first treatment into direct and indirect effects, which are shown to be identified by the proposed methodology. We allow for both of the treatment variables to be non-binary and do not appeal to an identification-at-infinity argument.

  4. Quantum ring models and action-angle variables

    OpenAIRE

    Bellucci, Stefano; Nersessian, Armen; Saghatelian, Armen; Yeghikyan, Vahagn

    2010-01-01

    We suggest to use the action-angle variables for the study of properties of (quasi)particles in quantum rings. For this purpose we present the action-angle variables for three two-dimensional singular oscillator systems. The first one is the usual (Euclidean) singular oscillator, which plays the role of the confinement potential for the quantum ring. We also propose two singular spherical oscillator models for the role of the confinement system for the spherical ring. The first one is based o...

  5. Parameters and variables appearing in repository design models

    International Nuclear Information System (INIS)

    Curtis, R.H.; Wart, R.J.

    1983-12-01

    This report defines the parameters and variables appearing in repository design models and presents typical values and ranges of values of each. Areas covered by this report include thermal, geomechanical, and coupled stress and flow analyses in rock. Particular emphasis is given to conductivity, radiation, and convection parameters for thermal analysis and elastic constants, failure criteria, creep laws, and joint properties for geomechanical analysis. The data in this report were compiled to help guide the selection of values of parameters and variables to be used in code benchmarking. 102 references, 33 figures, 51 tables

  6. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
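
    The subsample-and-aggregate core of stability selection is independent of the base learner. The sketch below substitutes a simple correlation-based selector for lasso, on toy data; the 0.7 frequency threshold is an illustrative choice, not the paper's setting:

```python
import random

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def stability_selection(X, y, base_select, n_subsamples=50, threshold=0.7, seed=0):
    """Run the base selector on many half-subsamples and keep the variables
    whose selection frequency meets the threshold."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    counts = [0] * p
    for _ in range(n_subsamples):
        idx = rng.sample(range(n), n // 2)
        Xs = [X[i] for i in idx]
        ys = [y[i] for i in idx]
        for j in base_select(Xs, ys):
            counts[j] += 1
    freqs = [c / n_subsamples for c in counts]
    return [j for j, f in enumerate(freqs) if f >= threshold], freqs

def top2_by_correlation(X, y):
    """Stand-in base learner: pick the 2 features most correlated with y."""
    p = len(X[0])
    cols = [[row[j] for row in X] for j in range(p)]
    return sorted(range(p), key=lambda j: -abs(correlation(cols[j], y)))[:2]

# Toy data: y depends only on feature 0 among 6 features
gen = random.Random(42)
X = [[gen.gauss(0, 1) for _ in range(6)] for _ in range(100)]
y = [row[0] * 2.0 + gen.gauss(0, 0.1) for row in X]
stable, freqs = stability_selection(X, y, top2_by_correlation)
print(stable, freqs)
```

    A truly associated variable is selected on almost every subsample, while noise variables rotate in and out and so accumulate low frequencies; thresholding the frequencies is what controls false discoveries.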

  7. A Review of Variable Slicing in Fused Deposition Modeling

    Science.gov (United States)

    Nadiyapara, Hitesh Hirjibhai; Pande, Sarang

    2017-06-01

The paper presents a literature survey in the field of fused deposition of plastic wires, especially of slicing and deposition by extrusion of thermoplastic wires. Various researchers working on the computation of deposition paths have applied their algorithms to variable slicing. In the study, a flowchart has also been proposed for the slicing and deposition process. An algorithm already developed by a previous researcher is implemented on the fused deposition modeling machine. To demonstrate the capabilities of the fused deposition modeling machine, a case study has been taken, in which a manipulated G-code is fed to the machine. Two types of slicing strategies, namely uniform slicing and variable slicing, have been evaluated. In uniform slicing, the slice thickness used for deposition varies from 0.1 to 0.4 mm. In variable slicing, the thickness has been varied from 0.1 mm in the polar region to 0.4 mm in the equatorial region. The time and the number of slices required to deposit a hemisphere of 20 mm diameter have been compared between the two strategies.
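
    The trade-off between the two strategies can be illustrated by counting slices for the 20 mm hemisphere. Linear interpolation of thickness with height is assumed here for simplicity; the surveyed algorithms derive the thickness from local surface geometry:

```python
import math

R = 10.0                 # hemisphere radius (mm), i.e. 20 mm diameter
T_MIN, T_MAX = 0.1, 0.4  # slice thickness bounds (mm)

def uniform_slices(thickness):
    """Number of slices when every layer has the same thickness."""
    return math.ceil(R / thickness)

def variable_slices():
    """Assumed linear interpolation: thick layers at the equator (z=0),
    thin layers toward the pole (z=R) where the surface curves fastest."""
    z, n = 0.0, 0
    while z < R:
        t = T_MAX - (T_MAX - T_MIN) * (z / R)
        z += t
        n += 1
    return n

print(uniform_slices(T_MIN), uniform_slices(T_MAX), variable_slices())
```

    Variable slicing lands between the two uniform extremes: far fewer layers than uniform 0.1 mm slicing, while still keeping thin layers where the stair-stepping error would otherwise be worst.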

  8. Hidden Markov latent variable models with multivariate longitudinal data.

    Science.gov (United States)

    Song, Xinyuan; Xia, Yemao; Zhu, Hongtu

    2017-03-01

Cocaine addiction is chronic and persistent, and has become a major social and health problem in many countries. Existing studies have shown that cocaine addicts often undergo episodic periods of addiction to, moderate dependence on, or swearing off cocaine. Given its reversible feature, cocaine use can be formulated as a stochastic process that transits from one state to another, while the impacts of various factors, such as treatment received and individuals' psychological problems on cocaine use, may vary across states. This article develops a hidden Markov latent variable model to study multivariate longitudinal data concerning cocaine use from a California Civil Addict Program. The proposed model generalizes conventional latent variable models to allow bidirectional transition between cocaine-addiction states and conventional hidden Markov models to allow latent variables and their dynamic interrelationship. We develop a maximum-likelihood approach, along with a Monte Carlo expectation conditional maximization (MCECM) algorithm, to conduct parameter estimation. The asymptotic properties of the parameter estimates and statistics for testing the heterogeneity of model parameters are investigated. The finite sample performance of the proposed methodology is demonstrated by simulation studies. The application to cocaine use study provides insights into the prevention of cocaine use.
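
    The "transits from one state to another" formulation above is standard hidden Markov machinery, in which the likelihood of an observed sequence is computed with the forward algorithm. A minimal sketch with three hypothetical use states and binary drug-test observations; all probabilities are invented for illustration:

```python
STATES = ["addicted", "moderate", "abstinent"]

# Hypothetical transition matrix: each row sums to 1
TRANS = {
    "addicted":  {"addicted": 0.7, "moderate": 0.2, "abstinent": 0.1},
    "moderate":  {"addicted": 0.3, "moderate": 0.5, "abstinent": 0.2},
    "abstinent": {"addicted": 0.1, "moderate": 0.2, "abstinent": 0.7},
}
# Hypothetical emission probabilities: P(positive drug test | state)
EMIT_POS = {"addicted": 0.9, "moderate": 0.5, "abstinent": 0.05}
INIT = {"addicted": 0.5, "moderate": 0.3, "abstinent": 0.2}

def forward_likelihood(observations):
    """P(observations), summed over all hidden state paths (forward algorithm).
    observations: list of booleans (True = positive drug test)."""
    def emit(s, obs):
        return EMIT_POS[s] if obs else 1.0 - EMIT_POS[s]
    alpha = {s: INIT[s] * emit(s, observations[0]) for s in STATES}
    for obs in observations[1:]:
        alpha = {s: emit(s, obs) * sum(alpha[r] * TRANS[r][s] for r in STATES)
                 for s in STATES}
    return sum(alpha.values())

print(forward_likelihood([True, True, False, False]))
```

    The paper's model enriches each hidden state with latent variables and covariate effects, but this forward recursion remains the backbone of likelihood evaluation inside the MCECM iterations.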

  9. Neonatal intensive care nursing curriculum challenges based on context, input, process, and product evaluation model: A qualitative study

    Directory of Open Access Journals (Sweden)

    Mansoureh Ashghali-Farahani

    2018-01-01

Background: Weakness of curriculum development in nursing education results in a lack of professional skills in graduates. This study was done on master's students in nursing to evaluate the challenges of the neonatal intensive care nursing curriculum based on the context, input, process, and product (CIPP) evaluation model. Materials and Methods: This study was conducted with a qualitative approach, completed according to the CIPP evaluation model, from May 2014 to April 2015. The research community included neonatal intensive care nursing master's students, graduates, faculty members, neonatologists, nurses working in the neonatal intensive care unit (NICU), and mothers of infants who were hospitalized in such wards. Purposeful sampling was applied. Results: The data analysis showed that there were two main categories: “inappropriate infrastructure” and “unknown duties,” which influenced the context formation of the NICU master's curriculum. The input was formed by five categories, including “biomedical approach,” “incomprehensive curriculum,” “lack of professional NICU nursing mentors,” “inappropriate admission process of NICU students,” and “lack of NICU skill labs.” Three categories were extracted in the process, including “more emphasis on theoretical education,” “the overlap of credits with each other and the inconsistency among the mentors,” and “ineffective assessment.” Finally, five categories were extracted in the product, including “preferring routine work instead of professional job,” “tendency to leave the job,” “clinical incompetency of graduates,” “the conflict between graduates' and nursing staff's expectations,” and “dissatisfaction of graduates.” Conclusions: Some changes are needed in the NICU master's curriculum by considering the nursing experts' comments and evaluating the consequences of such a program by them.

  10. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

Stock evaluation has always been an interesting problem for investors. In this paper, a comparison of the efficiency of stocks of companies listed on Bursa Malaysia was made through the application of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameters. In this study, DEA was used to measure the efficiency of stocks of companies listed on Bursa Malaysia in terms of financial ratios, to evaluate the performance of stocks. Based on previous studies and the Fuzzy Delphi Method (FDM), the most important financial ratios were selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameters were classified as inputs and outputs. The main objectives were to identify the most critical financial ratios, classify them based on expert information, and compute the relative efficiency scores of stocks as well as rank them completely within the construction and materials industry. The analysis employed Alirezaee and Afsharian's model, in which the originality of the Charnes, Cooper and Rhodes (CCR) model with the assumption of Constant Returns to Scale (CRS) still holds. This method of ranking the relative efficiency of decision making units (DMUs) was value-added by the Balance Index. The data were for the year 2015, and the population of the research includes the companies accepted in the stock market in the construction and materials industry (63 companies). According to the ranking, the proposed model can rank all 63 companies completely using the selected financial ratios.

  11. Review of Literature for Inputs to the National Water Savings Model and Spreadsheet Tool-Commercial/Institutional

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, Camilla Dunham; Melody, Moya; Lutz, James

    2009-05-29

    Lawrence Berkeley National Laboratory (LBNL) is developing a computer model and spreadsheet tool for the United States Environmental Protection Agency (EPA) to help estimate the water savings attributable to their WaterSense program. WaterSense has developed a labeling program for three types of plumbing fixtures commonly used in commercial and institutional settings: flushometer valve toilets, urinals, and pre-rinse spray valves. This National Water Savings-Commercial/Institutional (NWS-CI