WorldWideScience

Sample records for model input variables

  1. Sensitivity Analysis of the ALMANAC Model's Input Variables

    Institute of Scientific and Technical Information of China (English)

    XIE Yun; James R.Kiniry; Jimmy R.Williams; CHEN You-min; LIN Er-da

    2002-01-01

Crop models often require extensive input data sets to realistically simulate crop growth. Development of such input data sets can be difficult for some model users. The objective of this study was to evaluate the importance of variables in input data sets for crop modeling. Based on published hybrid performance trials in eight Texas counties, we developed standard data sets of 10-year simulations of maize and sorghum for these eight counties with the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) model. The simulated yields were close to the measured county yields, with relative errors of only 2.6% for maize and -0.6% for sorghum. We then analyzed the sensitivity of grain yield to solar radiation, rainfall, soil depth, soil plant available water, and runoff curve number, comparing simulated yields to those with the original, standard data sets. Runoff curve number changes had the greatest impact on simulated maize and sorghum yields for all the counties. The next most critical input was rainfall, followed by solar radiation, for both maize and sorghum, especially under dryland conditions. For irrigated sorghum, solar radiation was the second most critical input instead of rainfall. Yield was more sensitive to all variables for maize than for sorghum, except for solar radiation. Many models use a USDA curve number approach to represent soil water redistribution, so accurate curve numbers, rainfall, and soil depth are important for realistically simulating yields.
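
    The curve number sensitivity reported above follows directly from the form of the USDA SCS runoff equation; a minimal sketch (not ALMANAC code; the function and variable names are illustrative):

```python
def scs_runoff(rainfall_mm: float, curve_number: float) -> float:
    """Daily surface runoff (mm) from daily rainfall (mm) and a curve number.

    Standard SCS curve number form: S is the potential maximum retention,
    and runoff starts once rainfall exceeds the initial abstraction 0.2*S.
    """
    s = 25400.0 / curve_number - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s                         # initial abstraction (mm)
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm + 0.8 * s)

# A modest curve number change shifts runoff (and hence soil water) noticeably:
print(scs_runoff(50.0, 75.0))  # moderate CN
print(scs_runoff(50.0, 85.0))  # higher CN -> substantially more runoff
```

    Because runoff feeds back into plant-available water, this nonlinearity is one reason simulated yields react so strongly to curve number errors.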

  2. Researches on the Model of Telecommunication Service with Variable Input Tariff Rates

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

The paper sets up and studies a model of a telecommunication queueing service system with variable input tariff rates, which can relieve congested traffic flows during busy hours and thereby improve the utilization of telecom resources.

  3. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    Science.gov (United States)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
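
    A minimal illustration of information-theoretic input variable selection, using plain mutual information as a simplified stand-in for the PMI and IIS methods described above (toy data; scikit-learn assumed available):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Toy example: two relevant inputs, one irrelevant input.
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                      # relevant (nonlinear effect)
x2 = rng.normal(size=n)                      # relevant (quadratic effect)
x3 = rng.normal(size=n)                      # irrelevant
y = np.sin(x1) + 0.5 * x2**2 + 0.1 * rng.normal(size=n)

X = np.column_stack([x1, x2, x3])
mi = mutual_info_regression(X, y, random_state=0)
ranking = np.argsort(mi)[::-1]               # most informative input first
print("MI scores:", mi.round(3), "ranking:", ranking)
```

    Unlike a linear correlation screen, the mutual information scores pick up the quadratic dependence on x2, which is why nonlinear criteria are favoured for selecting flowmeter model inputs.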

  4. Simulation model structure numerically robust to changes in magnitude and combination of input and output variables

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1999-01-01

Mathematical models of refrigeration systems are often based on a coupling of component models forming a “closed loop” type of system model. In these models the coupling structure of the component models represents the actual flow path of refrigerant in the system. Very often numerical instabilities prevent the practical use of such a system model for more than one input/output combination and for other magnitudes of refrigerating capacities. A higher numerical robustness of system models can be achieved by making a model for the refrigeration cycle the core of the system model and by using variables with narrow definition intervals for the exchange of information between the cycle model and the component models. The advantages of the cycle-oriented method are illustrated by an example showing the refrigeration cycle similarities between two very different refrigeration systems.

  5. Bootstrap rank-ordered conditional mutual information (broCMI): A nonlinear input variable selection method for water resources modeling

    Science.gov (United States)

    Quilty, John; Adamowski, Jan; Khalil, Bahaa; Rathinasamy, Maheswaran

    2016-03-01

The input variable selection problem has recently garnered much interest in the time series modeling community, especially within water resources applications, where it has been demonstrated that information-theoretic (nonlinear) input variable selection algorithms such as partial mutual information (PMI) selection (PMIS) provide an improved representation of the modeled process compared to linear alternatives such as partial correlation input selection (PCIS). PMIS is a popular algorithm for water resources modeling problems considering nonlinear input variable selection; however, this method requires the specification of two nonlinear regression models, each with parametric settings that greatly influence the selected input variables. Other attempts to develop input variable selection methods using conditional mutual information (CMI) (an analog to PMI) have been formulated under different parametric assumptions such as k-nearest-neighbor (KNN) statistics or kernel density estimates (KDE). In this paper, we introduce a new input variable selection method based on CMI that uses a nonparametric multivariate continuous probability estimator based on Edgeworth approximations (EA). We improve the EA method by accounting for uncertainty in the input variable selection procedure, introducing a bootstrap resampling procedure that uses rank statistics to order the selected input sets; we name the proposed method bootstrap rank-ordered CMI (broCMI). We demonstrate the superior performance of broCMI compared to CMI-based alternatives (EA, KDE, and KNN), PMIS, and PCIS input variable selection algorithms on a set of seven synthetic test problems and a real-world urban water demand (UWD) forecasting experiment in Ottawa, Canada.
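
    The bootstrap rank-ordering idea can be sketched in a few lines; here plain mutual information stands in for the paper's Edgeworth-based conditional MI estimator, so this is only a loose analogue of broCMI:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def bootstrap_rank_inputs(X, y, n_boot=50, seed=0):
    """Median bootstrap rank of each candidate input (rank 0 = most
    informative). Plain MI is a simplified stand-in for conditional MI."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    ranks = np.empty((n_boot, d))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)              # resample rows with replacement
        mi = mutual_info_regression(X[idx], y[idx], random_state=0)
        ranks[b] = (-mi).argsort().argsort()     # convert scores to ranks
    return np.median(ranks, axis=0)

# Toy data: only the first input carries information about y.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=300)
med_ranks = bootstrap_rank_inputs(X, y)
print(med_ranks)  # the first input should have the lowest (best) median rank
```

    Aggregating ranks over bootstrap resamples, rather than trusting a single selection run, is what gives the method its robustness to sampling noise.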

  6. Modeling the BOD of Danube River in Serbia using spatial, temporal, and input variables optimized artificial neural network models.

    Science.gov (United States)

    Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V

    2016-05-01

    This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
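
    A general regression neural network reduces to a kernel-weighted average of training targets; a minimal sketch (toy one-dimensional data, not the Danube model or its 18 input parameters):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets
    (Nadaraya-Watson form); sigma is the smoothing parameter."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

X_train = np.array([[0.0], [1.0], [2.0]])
y_train = np.array([0.0, 1.0, 4.0])
pred = grnn_predict(X_train, y_train, np.array([[1.0]]), sigma=0.1)
print(pred)  # close to 1.0: the nearest training point dominates
```

    Because a GRNN has no iterative training, only the smoothing width to tune, it is a convenient choice for station-by-station water quality data of the kind used here.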

  7. Optimization modeling of U.S. renewable electricity deployment using local input variables

    Science.gov (United States)

    Bernstein, Adam

For the past five years, state Renewable Portfolio Standard (RPS) laws have been a primary driver of renewable electricity (RE) deployments in the United States. However, four key trends now developing, namely (i) lower natural gas prices, (ii) slower growth in electricity demand, (iii) the challenge of balancing intermittent RE within U.S. transmission regions, and (iv) fewer economical sites for RE development, may limit the efficacy of RPS laws over the remainder of the current statutes' lifetimes. An outsized proportion of U.S. RE build occurs in a small number of favorable locations, increasing the effects of these variables on marginal RE capacity additions. A state-by-state analysis is necessary to study the U.S. electric sector and to generate technology-specific generation forecasts. We used LP optimization modeling similar to the National Renewable Energy Laboratory (NREL) Renewable Energy Development System (ReEDS) to forecast RE deployment across the 8 U.S. states with the largest electricity load, and found state-level RE projections to Year 2031 significantly lower than those implied in the Energy Information Administration (EIA) 2013 Annual Energy Outlook forecast. Additionally, the majority of states do not achieve their RPS targets in our forecast. Combined with the tendency of prior research and RE forecasts to focus on larger national and global scale models, we posit that further bottom-up state and local analysis is needed for more accurate policy assessment, forecasting, and ongoing revision of variables as parameter values evolve through time. Current optimization software eliminates much of the need for algorithm coding and programming, allowing for rapid model construction and updating across many customized state and local RE parameters.
Further, our results can be tested against the empirical outcomes that will be observed over the coming years, and the forecast deviation from the actuals can be attributed to discrete parameter

  8. Statistical identification of effective input variables. [SCREEN]

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1982-09-01

A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code, SCREEN, has been developed to implement the screening techniques. Its efficiency has been demonstrated by several examples, and it has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
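
    A much-simplified sketch of the stagewise correlation screening idea (illustrative only; SCREEN itself also handles nonlinear regression terms and threshold tests):

```python
import numpy as np

def screen_inputs(X, y, n_keep=2):
    """Stagewise screening: repeatedly pick the input most correlated with
    the current residual, then regress that input out of the residual."""
    resid = y - y.mean()
    remaining = list(range(X.shape[1]))
    order = []
    for _ in range(n_keep):
        corrs = [abs(np.corrcoef(X[:, j], resid)[0, 1]) for j in remaining]
        j = remaining.pop(int(np.argmax(corrs)))
        order.append(j)
        xj = X[:, j] - X[:, j].mean()
        resid = resid - (resid @ xj) / (xj @ xj) * xj   # remove its effect
    return order

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 3))
y = 3.0 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=400)
print(screen_inputs(X, y))  # the dominant input is selected first: [0, 1]
```

    The appeal for large codes is that each stage needs only correlations against already-computed outputs, not new model runs.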

  9. Measurement method for urine puddle depth in dairy cow houses as input variable for ammonia emission modelling

    NARCIS (Netherlands)

    Snoek, J.W.; Stigter, J.D.; Ogink, Nico; Groot Koerkamp, P.W.G.

    2015-01-01

Dairy cow houses are a major contributor to ammonia (NH3) emission in many European countries. To understand and predict NH3 emissions from cubicle dairy cow houses, a mechanistic model was developed and a sensitivity analysis was performed to assess the contribution of each input variable to NH3 emission.

  11. Application of FRF estimator based on errors-in-variables model in multi-input multi-output vibration control system

    Institute of Scientific and Technical Information of China (English)

    GUAN Guangfeng; CONG Dacheng; HAN Junwei; LI Hongren

    2007-01-01

An FRF estimator based on the errors-in-variables (EV) model of a multi-input multi-output (MIMO) system is presented to reduce the bias error of the FRF H1 estimator. The H1 estimator is influenced by noise in the inputs of the system and generates an under-estimation of the true FRF. The estimator based on the EV model takes into account the errors in both the inputs and outputs of the system and leads to more accurate FRF estimation. The EV-based estimator is applied to waveform replication on a 6-DOF (degree-of-freedom) hydraulic vibration table. The results show that it improves the control precision of the MIMO vibration control system.
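
    The under-estimation of H1 under input noise can be seen in a one-line simulation, where a scalar gain stands in for the full spectral FRF estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x_true = rng.normal(size=n)
y = 2.0 * x_true                             # true response (here a gain) H = 2
x_meas = x_true + 0.5 * rng.normal(size=n)   # noise on the measured input

# H1-type estimate: cross-power over input auto-power -> biased low,
# because the input noise inflates the denominator.
h1 = np.dot(x_meas, y) / np.dot(x_meas, x_meas)
print(h1)  # noticeably below the true gain of 2
```

    An EV-style estimator corrects for this by modelling the noise on both channels instead of attributing all error to the output.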

  12. Effect of the spatiotemporal variability of rainfall inputs in water quality integrated catchment modelling for dissolved oxygen concentrations

    Science.gov (United States)

    Moreno Ródenas, Antonio Manuel; Cecinati, Francesca; ten Veldhuis, Marie-Claire; Langeveld, Jeroen; Clemens, Francois

    2016-04-01

Maintaining water quality standards in highly urbanised hydrological catchments is a worldwide challenge. Water management authorities struggle to cope with a changing climate and an increase in pollution pressures. Water quality modelling has been used as a decision support tool for investment and regulatory developments. This approach led to the development of integrated catchment models (ICM), which account for the link between the urban/rural hydrology and the in-river pollutant dynamics. In the modelled system, rainfall triggers the drainage systems of urban areas scattered along a river. When flow exceeds the sewer infrastructure capacity, untreated wastewater enters the natural system through combined sewer overflows. This results in a degradation of the river water quality, depending on the magnitude of the emission and river conditions. Thus, being capable of representing these dynamics in the modelling process is key for a correct assessment of the water quality. In many urbanised hydrological systems the distances between draining sewer infrastructures go beyond the de-correlation length of rainfall processes, especially for convective summer storms. Hence, the spatial and temporal scales of selected rainfall inputs are expected to affect water quality dynamics. The objective of this work is to evaluate how the use of rainfall data from different sources and with different space-time characteristics affects modelled output concentrations of dissolved oxygen in a simplified ICM. The study area is located at the Dommel, a relatively small and sensitive river flowing through the city of Eindhoven (The Netherlands). This river stretch receives the discharge of the 750,000 p.e. WWTP of Eindhoven and of over 200 combined sewer overflows scattered along its length. A pseudo-distributed water quality model has been developed in WEST (mikedhi.com), a lumped, physically based model that accounts for urban drainage processes, WWTP and river dynamics for several

  13. Kriging atomic properties with a variable number of inputs

    Science.gov (United States)

    Davie, Stuart J.; Di Pasquale, Nicodemo; Popelier, Paul L. A.

    2016-09-01

    A new force field called FFLUX uses the machine learning technique kriging to capture the link between the properties (energies and multipole moments) of topological atoms (i.e., output) and the coordinates of the surrounding atoms (i.e., input). Here we present a novel, general method of applying kriging to chemical systems that do not possess a fixed number of (geometrical) inputs. Unlike traditional kriging methods, which require an input system to be of fixed dimensionality, the method presented here can be readily applied to molecular simulation, where an interaction cutoff radius is commonly used and the number of atoms or molecules within the cutoff radius is not constant. The method described here is general and can be applied to any machine learning technique that normally operates under a fixed number of inputs. In particular, the method described here is also useful for interpolating methods other than kriging, which may suffer from difficulties stemming from identical sets of inputs corresponding to different outputs or input biasing. As a demonstration, the new method is used to predict 54 energetic and electrostatic properties of the central water molecule of a set of 5000, 4 Å radius water clusters, with a variable number of water molecules. The results are validated against equivalent models from a set of clusters composed of a fixed number of water molecules (set to ten, i.e., decamers) and against models created by using a naïve method of treating the variable number of inputs problem presented. Results show that the 4 Å water cluster models, utilising the method presented here, return similar or better kriging models than the decamer clusters for all properties considered and perform much better than the truncated models.

  14. Are Financial Variables Inputs in Delivered Production Functions?

    Directory of Open Access Journals (Sweden)

    Miguel Kiguel

    1995-03-01

Fischer's classic (1974) paper develops conditions under which it is appropriate to use money as an input in a 'delivered' production function. In this paper, we extend Fischer's model I (the Baumol-Tobin inventory approach) by incorporating credit into the analysis. Our investigation of the extended model brings out a very restrictive but necessary implicit assumption employed by Fischer to treat money as an input: namely, that there exists a binding constraint on the use of money. A similar result holds for our more general model.

  15. Wind Power Curve Modeling Using Statistical Models: An Investigation of Atmospheric Input Variables at a Flat and Complex Terrain Wind Farm

    Energy Technology Data Exchange (ETDEWEB)

    Wharton, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bulaevskaya, V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Irons, Z. [Enel Green Power North America, Andover, MA (United States); Qualley, G. [Infigen Energy, Dallas, TX (United States); Newman, J. F. [Univ. of Oklahoma, Norman, OK (United States); Miller, W. O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-28

    The goal of our FY15 project was to explore the use of statistical models and high-resolution atmospheric input data to develop more accurate prediction models for turbine power generation. We modeled power for two operational wind farms in two regions of the country. The first site is a 235 MW wind farm in Northern Oklahoma with 140 GE 1.68 turbines. Our second site is a 38 MW wind farm in the Altamont Pass Region of Northern California with 38 Mitsubishi 1 MW turbines. The farms are very different in topography, climatology, and turbine technology; however, both occupy high wind resource areas in the U.S. and are representative of typical wind farms found in their respective areas.

  16. Variable Input and the Acquisition of Plural Morphology

    Science.gov (United States)

    Miller, Karen L.; Schmitt, Cristina

    2012-01-01

    The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…

  17. LCA of emerging technologies: addressing high uncertainty on inputs' variability when performing global sensitivity analysis.

    Science.gov (United States)

    Lacirignola, Martino; Blanc, Philippe; Girard, Robin; Pérez-López, Paula; Blanc, Isabelle

    2017-02-01

In the life cycle assessment (LCA) context, global sensitivity analysis (GSA) has been identified by several authors as a relevant practice to enhance the understanding of a model's structure and to ensure the reliability and credibility of LCA results. GSA establishes a ranking among the input parameters according to their influence on the variability of the output. Such a feature is of particular interest when defining parameterized LCA models. When performing a GSA, the description of the variability of each input parameter may affect the results. This aspect is critical when studying new products or emerging technologies, where data regarding the model inputs are very uncertain and may cause misleading GSA outcomes, such as inappropriate input rankings. A systematic assessment of this sensitivity issue is proposed here. We develop a methodology to analyze the sensitivity of the GSA results (i.e. the stability of the ranking of the inputs) with respect to the description of the model inputs (i.e. the definition of their inherent variability). With this research, we aim to enrich the debate on the application of GSA to LCAs affected by high uncertainties. We illustrate its application with a case study aiming at the elaboration of a simple model expressing the life cycle greenhouse gas emissions of enhanced geothermal systems (EGS) as a function of a few key parameters. Our methodology allows identifying the key inputs of the LCA model while taking into account the uncertainty related to their description.
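
    Variance-based GSA of the kind described above can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices (toy model, not the EGS emissions model):

```python
import numpy as np

def first_order_sobol(model, n=50_000, d=3, seed=0):
    """Pick-freeze estimator of first-order Sobol indices for a model
    with d independent uniform inputs on [-pi, pi]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    yA = model(A)
    var_y = yA.var()
    indices = []
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # "freeze" input i, resample the rest
        yABi = model(ABi)
        indices.append((np.mean(yA * yABi) - yA.mean() * yABi.mean()) / var_y)
    return np.array(indices)

# Toy output dominated by the first input.
def toy(X):
    return 4.0 * np.sin(X[:, 0]) + np.sin(X[:, 1]) + 0.1 * X[:, 2] ** 2

S = first_order_sobol(toy)
print(S.round(2))  # input 0 dominates the ranking
```

    The paper's point is that the assumed input distributions (here, the uniform ranges) are themselves uncertain for emerging technologies, and changing them can reorder these indices.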

  18. Variable structure control for MRAC systems with perturbations in input and output channels

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

A design scheme for variable structure model reference control systems using only input and output measurements is presented for systems with unmodeled dynamics and disturbances in the input and output channels. The modeled part of the system has relative degree greater than one, with an unknown upper bound on the degree. By introducing auxiliary signals and normalized signals with memory functions, and through an appropriate choice of controller parameters, the developed variable structure controller guarantees the global stability of the closed-loop system and an arbitrarily small tracking error.

  19. Treatments of Precipitation Inputs to Hydrologic Models

    Science.gov (United States)

Hydrological models are used to assess many water resources problems, from agricultural use and water quality to engineering issues. The success of these models is dependent on correct parameterization; the most sensitive input is the rainfall time series. These records can come from land-based ...

  20. Effects of uncertainty in major input variables on simulated functional soil behaviour

    NARCIS (Netherlands)

    Finke, P.A.; Wösten, J.H.M.; Jansen, M.J.W.

    1997-01-01

    Uncertainties in major input variables in water and solute models were quantified and their effects on simulated functional aspects of soil behaviour were studied using basic soil properties from an important soil map unit in the Netherlands. Two sources of uncertainty were studied: spatial

  2. Estimates of volume and magma input in crustal magmatic systems from zircon geochronology: the effect of modelling assumptions and system variables

    Science.gov (United States)

    Caricchi, Luca; Simpson, Guy; Schaltegger, Urs

    2016-04-01

Magma fluxes in the Earth's crust play an important role in regulating the relationship between the frequency and magnitude of volcanic eruptions, the chemical evolution of magmatic systems and the distribution of geothermal energy and mineral resources on our planet. Therefore, quantifying magma productivity and the rate of magma transfer within the crust can provide valuable insights to characterise the long-term behaviour of volcanic systems and to unveil the link between the physical and chemical evolution of magmatic systems and their potential to generate resources. We performed thermal modelling to compute the temperature evolution of crustal magmatic intrusions with different final volumes assembled over a variety of timescales (i.e., at different magma fluxes). Using these results, we calculated synthetic populations of zircon ages assuming the number of zircons crystallising in a given time period is directly proportional to the volume of magma at temperature within the zircon crystallisation range. The statistical analysis of the calculated populations of zircon ages shows that the mode, median and standard deviation of the populations vary coherently as a function of the rate of magma injection and the final volume of the crustal intrusions. Therefore, the statistical properties of the population of zircon ages can add useful constraints to quantify the rate of magma injection and the final volume of magmatic intrusions. Here, we explore the effect of different ranges of zircon saturation temperature, intrusion geometry, and wall rock temperature on the calculated distributions of zircon ages. Additionally, we determine the effect of undersampling on the variability of mode, median and standard deviation of calculated populations of zircon ages to estimate the minimum number of zircon analyses necessary to obtain meaningful estimates of magma flux and final intrusion volume.

  4. Determination of the Optimal Training Principle and Input Variables in Artificial Neural Network Model for the Biweekly Chlorophyll-a Prediction: A Case Study of the Yuqiao Reservoir, China

    Science.gov (United States)

    Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang

    2015-01-01

Predicting the levels of chlorophyll-a (Chl-a) is a vital component of water quality management, which ensures that urban drinking water is safe from harmful algal blooms. This study developed a model to predict Chl-a levels in the Yuqiao Reservoir (Tianjin, China) biweekly using water quality and meteorological data from 1999-2012. First, six artificial neural networks (ANNs) and two non-ANN methods (principal component analysis and the support vector regression model) were compared to determine the appropriate training principle. Subsequently, three predictors with different input variables were developed to examine the feasibility of incorporating meteorological factors into Chl-a prediction, which usually only uses water quality data. Finally, a sensitivity analysis was performed to examine how the Chl-a predictor reacts to changes in input variables. The results were as follows: first, ANN is a powerful predictive alternative to the traditional modeling techniques used for Chl-a prediction. The back-propagation (BP) model yields slightly better results than all other ANNs, with the normalized mean square error (NMSE), the correlation coefficient (Corr), and the Nash-Sutcliffe coefficient of efficiency (NSE) at 0.003 mg/l, 0.880 and 0.754, respectively, in the testing period. Second, the incorporation of meteorological data greatly improved Chl-a prediction compared to models solely using water quality factors or meteorological data; the correlation coefficient increased from 0.574-0.686 to 0.880 when meteorological data were included. Finally, the Chl-a predictor is more sensitive to air pressure and pH than to other water quality and meteorological variables. PMID:25768650

  5. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy, as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations, and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  6. Rank correlation plots for use with correlated input variables in simulation studies

    Energy Technology Data Exchange (ETDEWEB)

    Iman, R.L.; Davenport, J.M.

    1980-11-01

    A method for inducing a desired rank correlation matrix on multivariate input vectors for simulation studies has recently been developed by Iman and Conover (SAND 80-0157). The primary intention of this procedure is to produce correlated input variables for use with computer models. Since this procedure is distribution free and allows the exact marginal distributions to remain intact, it can be used with any marginal distributions for which it is reasonable to think in terms of correlation. A series of rank correlation plots based on this procedure when the marginal distributions are normal, lognormal, uniform, and loguniform is presented. These plots provide a convenient tool for both aiding the modeler in determining the degree of dependence among variables (rather than guessing) and communicating with the modeler the effect of different correlation assumptions. 12 figures, 10 tables.
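The reordering idea behind the Iman-Conover procedure can be sketched compactly: rank-match each independently sampled column to a reference matrix of normal scores carrying the target correlation, which induces the rank correlation while leaving every marginal exactly intact. A simplified sketch (the published method additionally uses van der Waerden scores and corrects for the sample correlation of the scores):

```python
import numpy as np

def iman_conover(samples, target_corr, seed=0):
    """Induce an approximate target rank correlation on independent columns.
    samples: (n, k) array, one column per input variable (any marginals).
    target_corr: (k, k) positive-definite correlation matrix."""
    rng = np.random.default_rng(seed)
    n, k = samples.shape
    # Reference scores with the desired correlation via a Cholesky factor.
    scores = rng.standard_normal((n, k)) @ np.linalg.cholesky(target_corr).T
    out = np.empty_like(samples, dtype=float)
    for j in range(k):
        ranks = np.argsort(np.argsort(scores[:, j]))  # rank of each score
        out[:, j] = np.sort(samples[:, j])[ranks]     # rank-match the column
    return out
```

Because each column is only reordered, a uniform or lognormal marginal passes through unchanged, which is exactly the distribution-free property the report exploits.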

  7. Model based optimization of EMC input filters

    Energy Technology Data Exchange (ETDEWEB)

    Raggl, K; Kolar, J. W. [Swiss Federal Institute of Technology, Power Electronic Systems Laboratory, Zuerich (Switzerland); Nussbaumer, T. [Levitronix GmbH, Zuerich (Switzerland)

    2008-07-01

    Input filters of power converters for compliance with regulatory electromagnetic compatibility (EMC) standards are often over-dimensioned in practice due to a non-optimal selection of number of filter stages and/or the lack of solid volumetric models of the inductor cores. This paper presents a systematic filter design approach based on a specific filter attenuation requirement and volumetric component parameters. It is shown that a minimal volume can be found for a certain optimal number of filter stages for both the differential mode (DM) and common mode (CM) filter. The considerations are carried out exemplarily for an EMC input filter of a single phase power converter for the power levels of 100 W, 300 W, and 500 W. (author)

  8. Optimization of precipitation inputs for SWAT modeling in mountainous catchment

    Science.gov (United States)

    Tuo, Ye; Chiogna, Gabriele; Disse, Markus

    2016-04-01

    Precipitation is often the most important input to hydrological models when simulating streamflow in mountainous catchments. The Soil and Water Assessment Tool (SWAT), a widely used hydrological model, only makes use of data from the one precipitation gauging station that is nearest to the centroid of each subcatchment, eventually corrected using the band elevation method. This leads in general to inaccurate representation of subcatchment precipitation, which results in unreliable simulation results in mountainous catchments. To investigate the impact of the precipitation inputs and consider the high spatial and temporal variability of precipitation, we first interpolated 21 years (1990-2010) of daily measured data using the Inverse Distance Weighting (IDW) method. Averaged IDW daily values have been calculated at the subcatchment scale to be further supplied as optimized precipitation inputs for SWAT. Both datasets (measured data and IDW data) are applied to three Alpine subcatchments of the Adige catchment (North-eastern Italy, 12,100 km2) as precipitation inputs. Based on the calibration and validation results, model performance is evaluated according to the Nash-Sutcliffe Efficiency (NSE) and the Coefficient of Determination (R2). For all three subcatchments, the simulation results with IDW inputs are better than those of the original method, which uses measured inputs from the nearest station. This suggests that the IDW method could improve the model performance in Alpine catchments to some extent. By taking into account and weighting the distances between precipitation records, IDW supplies more accurate precipitation inputs for each individual Alpine subcatchment, which as a whole leads to an improved description of the hydrological behavior of the entire Adige catchment.
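The IDW step described above is easy to reproduce. A minimal sketch with hypothetical gauge coordinates and daily rainfall values (the power exponent 2 is the usual default, not necessarily the study's choice):

```python
import numpy as np

def idw(xy_obs, values, xy_target, power=2.0):
    """Inverse Distance Weighting: weight each gauge by 1/d**power."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):                     # target coincides with a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical daily precipitation (mm) at three gauges (km coordinates).
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
precip = np.array([12.0, 4.0, 8.0])
print(idw(gauges, precip, np.array([2.0, 2.0])))
```

The interpolated value always lies between the minimum and maximum gauge values and reduces to the nearest-gauge value as the target approaches a gauge.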

  9. Approximate input physics for stellar modelling

    CERN Document Server

    Pols, O R; Eggleton, P P; Han, Z; Pols, O R; Tout, C A; Eggleton, P P; Han, Z

    1995-01-01

    We present a simple and efficient, yet reasonably accurate, equation of state, which at the moderately low temperatures and high densities found in the interiors of stars less massive than the Sun is substantially more accurate than its predecessor by Eggleton, Faulkner & Flannery. Along with the most recently available values in tabular form of opacities, neutrino loss rates, and nuclear reaction rates for a selection of the most important reactions, this provides a convenient package of input physics for stellar modelling. We briefly discuss a few results obtained with the updated stellar evolution code.

  10. Sensitivity analysis of a sound absorption model with correlated inputs

    Science.gov (United States)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.

  11. Input variable selection for water resources systems using a modified minimum redundancy maximum relevance (mMRMR) algorithm

    Science.gov (United States)

    Hejazi, Mohamad I.; Cai, Ximing

    2009-04-01

    Input variable selection (IVS) is a necessary step in modeling water resources systems. Neglecting this step may lead to unnecessary model complexity and reduced model accuracy. In this paper, we apply the minimum redundancy maximum relevance (MRMR) algorithm to identify the most relevant set of inputs in modeling a water resources system. We further introduce two modified versions of the MRMR algorithm (α-MRMR and β-MRMR), where α and β are correction factors that are found to increase and decrease as power-law functions, respectively, with the progress of the input selection algorithm and the increase in the number of selected input variables. We apply the proposed algorithms to 22 reservoirs in California to predict daily releases from a set of 121 potential input variables. Results indicate that the two proposed algorithms are good measures of model inputs, as reflected in enhanced model performance. The α-MRMR and β-MRMR values exhibit strong negative correlation to model performance, as depicted in lower root-mean-square error (RMSE) values.
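The greedy selection at the heart of MRMR can be sketched in a few lines. The version below uses the standard relevance-minus-mean-redundancy criterion; the paper's α and β correction factors are not reproduced here, and the toy scores are invented:

```python
import numpy as np

def mrmr(relevance, redundancy, n_select):
    """Greedy minimum-redundancy maximum-relevance selection.
    relevance[i]    : score of candidate i against the target (e.g. MI or |corr|)
    redundancy[i,j] : score between candidates i and j.
    At each step pick the argmax of relevance minus mean redundancy to the
    already-chosen set."""
    chosen = [int(np.argmax(relevance))]
    while len(chosen) < n_select:
        best, best_score = None, -np.inf
        for i in range(len(relevance)):
            if i in chosen:
                continue
            score = relevance[i] - np.mean([redundancy[i, j] for j in chosen])
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

rel = np.array([0.9, 0.85, 0.2])          # scores vs. the target
red = np.array([[0.0, 0.8, 0.1],
                [0.8, 0.0, 0.1],
                [0.1, 0.1, 0.0]])         # scores between candidates
print(mrmr(rel, red, 2))                  # candidate 1 is redundant with 0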

  12. Input modelling for subchannel analysis of CANFLEX fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jun, Ji Su; Suk, Ho Chun [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-06-01

    This report describes the input modelling for subchannel analysis of the CANFLEX fuel bundle using the CASS (Candu thermalhydraulic Analysis by Subchannel approacheS) code, which has been developed for subchannel analysis of CANDU fuel channels. CASS can give different calculation results depending on the user's input modelling. Hence, this report provides the background information for the input modelling and the accuracy of the input data, to give confidence in the calculation results. (author). 11 refs., 3 figs., 4 tabs.

  13. REFLECTIONS ON THE INOPERABILITY INPUT-OUTPUT MODEL

    NARCIS (Netherlands)

    Dietzenbacher, Erik; Miller, Ronald E.

    2015-01-01

    We argue that the inoperability input-output model is a straightforward - albeit potentially very relevant - application of the standard input-output model. In addition, we propose two less standard input-output approaches as alternatives to take into consideration when analyzing the effects of disa

  14. Queuing model with variable input rates, mistake service and impatient customers%输入率可变且有差错服务及不耐烦顾客的排队模型分析

    Institute of Scientific and Technical Information of China (English)

    潘全如

    2012-01-01

    When the system's customer capacity is fixed, whether customers who arrive actually enter the system for service has an enormous influence on the sales industry. Starting from the influence of queue length on the customer input rate, we study a queuing model in which the input rate, the service accuracy rate and the intensity of impatient customers all depend on the queue length, and show that the flow of customers entering the system is a Poisson process and that the number of customers in the system is a birth-death process. We also obtain the stationary queue-length distribution, the probability of loss due to customers not entering the system on arrival, the mean number of customers who leave out of impatience, the system's service error rate per unit time, the loss probability for customers unable to join the queue because of the system's limited capacity, and several other indicators. We conclude that a higher input rate does not necessarily bring the system more profit, nor does a lower service accuracy rate necessarily mean lower earnings. We also obtain the system capacity and service speed that maximize enterprise profit, providing a valuable reference for the sales industry to improve its sales performance.
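The birth-death structure claimed in the abstract gives the stationary queue-length distribution as a simple product formula. A sketch with hypothetical rates, in which the input rate falls and the total departure rate (service plus impatience) rises with queue length:

```python
import numpy as np

def stationary_distribution(lam, mu):
    """Stationary distribution of a finite birth-death chain:
    lam[n] = arrival rate in state n (n = 0..K-1),
    mu[n]  = total departure rate from state n+1 (service + reneging).
    Detailed balance gives p[n+1]/p[n] = lam[n]/mu[n]; normalise to sum to 1."""
    ratios = np.concatenate([[1.0], np.cumprod(np.asarray(lam) / np.asarray(mu))])
    return ratios / ratios.sum()

K = 5
lam = [4.0 / (1 + n) for n in range(K)]   # input rate drops with queue length
mu = [2.0 + 0.5 * n for n in range(K)]    # service + impatience grows with queue
p = stationary_distribution(lam, mu)
loss_prob = p[-1]                          # an arriving customer finds the system full
print(p, loss_prob)
```

From `p`, the other indicators in the abstract (mean reneging rate, error rate per unit time, expected profit) follow by weighting each state by its rate.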

  15. Speaker Input Variability Does Not Explain Why Larger Populations Have Simpler Languages.

    Directory of Open Access Journals (Sweden)

    Mark Atkinson

    A learner's linguistic input is more variable if it comes from a greater number of speakers. Higher speaker input variability has been shown to facilitate the acquisition of phonemic boundaries, since data drawn from multiple speakers provides more information about the distribution of phonemes in a speech community. It has also been proposed that speaker input variability may have a systematic influence on individual-level learning of morphology, which can in turn influence the group-level characteristics of a language. Languages spoken by larger groups of people have less complex morphology than those spoken in smaller communities. While a mechanism by which the number of speakers could have such an effect is yet to be convincingly identified, differences in speaker input variability, which is thought to be larger in larger groups, may provide an explanation. By hindering the acquisition, and hence faithful cross-generational transfer, of complex morphology, higher speaker input variability may result in structural simplification. We assess this claim in two experiments which investigate the effect of such variability on language learning, considering its influence on a learner's ability to segment a continuous speech stream and acquire a morphologically complex miniature language. We ultimately find no evidence to support the proposal that speaker input variability influences language learning and so cannot support the hypothesis that it explains how population size determines the structural properties of language.

  16. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In an optimal input design problem, the desired input signal depends on the unknown system that is to be identified. In this paper, an input design approach that is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used to design the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of the constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Structure Analysis of Typical Fuzzy Controllers with Unequally Spaced Fuzzy Sets for Input and Output Variables

    Institute of Scientific and Technical Information of China (English)

    FAN Xingzhe; ZHANG Naiyao; LINing

    2001-01-01

    In this paper, a kind of typical fuzzy controllers is defined, which have two inputs (e and △c) and one output (△u); triangular, symmetric and full-overlapped membership functions for the input variables; singleton and symmetric membership functions for the output variable; linear fuzzy control rules; the Sum-Product inference method, and the weighted mean method for defuzzification. For this kind of typical fuzzy controllers we have analyzed their analytical structure, limiting structure and local stability.

  18. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
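The contrast drawn here, NLIF spike trains being less noisy than Poisson, can be checked directly: a suprathreshold noisy leaky integrate-and-fire neuron fires with sub-Poisson count statistics (Fano factor below 1, versus exactly 1 for a Poisson process). All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def nlif_spike_train(drive, t_max=5.0, dt=1e-4, tau=0.02,
                     v_thresh=1.0, noise=0.2, seed=0):
    """Noisy leaky integrate-and-fire neuron, Euler-Maruyama integration:
    dV = (drive - V) * dt/tau + noise * sqrt(dt/tau) * xi; reset to 0 at threshold."""
    rng = np.random.default_rng(seed)
    v, spikes = 0.0, []
    for k in range(int(t_max / dt)):
        v += dt / tau * (drive - v) + noise * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_thresh:
            spikes.append(k * dt)
            v = 0.0
    return np.array(spikes)

def fano_factor(spikes, t_max, window=0.1):
    """Variance-to-mean ratio of windowed spike counts: 1 for Poisson, <1 if regular."""
    counts, _ = np.histogram(spikes, bins=int(t_max / window), range=(0.0, t_max))
    return float(counts.var() / counts.mean())

spikes = nlif_spike_train(drive=2.0)
print(len(spikes), fano_factor(spikes, t_max=5.0))
```

With the drive above threshold, firing is nearly periodic and the Fano factor falls well below the Poisson value of 1, which is the statistical difference the study feeds into the V1 model.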

  19. Analytical delay models for RLC interconnects under ramp input

    Institute of Scientific and Technical Information of China (English)

    REN Yinglei; MAO Junfa; LI Xiaochun

    2007-01-01

    Analytical delay models for Resistance-Inductance-Capacitance (RLC) interconnects with ramp input are presented for different situations, which include the overdamped, underdamped and critical response cases. The errors of delay estimation using the analytical models proposed in this paper are less than 3% in comparison to the SPICE-computed delay. These models are meaningful for the delay analysis of actual circuits, in which the input signal is a ramp rather than an ideal step input.

  20. Soil-Related Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    A. J. Smith

    2004-09-09

    This report presents one of the analyses that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the details of the conceptual model as well as the mathematical model and the required input parameters. The biosphere model is one of a series of process models supporting the postclosure Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A schematic representation of the documentation flow for the Biosphere input to TSPA is presented in Figure 1-1. This figure shows the evolutionary relationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the biosphere abstraction products for TSPA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (TWP) (BSC 2004 [DIRS 169573]). This figure is included to provide an understanding of how this analysis report contributes to biosphere modeling in support of the license application, and is not intended to imply that access to the listed documents is required to understand the contents of this report. This report, ''Soil-Related Input Parameters for the Biosphere Model'', is one of the five analysis reports that develop input parameters for use in the ERMYN model. This report is the source documentation for the six biosphere parameters identified in Table 1-1. The purpose of this analysis was to develop the biosphere model parameters associated with the accumulation and depletion of radionuclides in the soil. These parameters support the calculation of radionuclide concentrations in soil from on-going irrigation or ash deposition and, as a direct consequence, radionuclide concentration in other environmental media that are affected by radionuclide concentrations in soil. The analysis was performed in accordance with the TWP (BSC 2004 [DIRS 169573]) where the governing procedure

  1. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  2. Attitude Control Considering Variable Input Saturation Limit for a Spacecraft Equipped with Flywheels

    Institute of Scientific and Technical Information of China (English)

    TIAN Lin; XU Shijie

    2012-01-01

    A new attitude controller is proposed for spacecraft whose actuator has a variable input saturation limit. There are three identical flywheels orthogonally mounted on board. Each rotor is driven by a brushless DC motor (BLDCM). Models of spacecraft attitude dynamics and flywheel rotor driving motor electromechanics are discussed in detail. The controller design is similar to saturation limit linear assignment. An auxiliary parameter and a boundary coefficient are imported into the controller to guarantee system stability and improve control performance. A time-varying and state-dependent flywheel output torque saturation limit model is established. Stability of the closed-loop control system and asymptotic convergence of system states are proved via Lyapunov methods and the LaSalle invariance principle. Boundedness of the auxiliary parameter ensures that the control objective can be achieved, while the boundary parameter's value makes a balance between system control performance and flywheel utilization efficiency. Compared with existing controllers, the newly developed controller with variable torque saturation limit can bring smoother control and faster system response. Numerical simulations validate the effectiveness of the controller.

  3. SIMPLE MODEL FOR THE INPUT IMPEDANCE OF RECTANGULAR MICROSTRIP ANTENNA

    Directory of Open Access Journals (Sweden)

    Celal YILDIZ

    1998-03-01

    A very simple model for the input impedance of a coax-fed rectangular microstrip patch antenna is presented. It is based on the cavity model and the equivalent resonant circuits. The theoretical input impedance results obtained from this model are in good agreement with the experimental results available in the literature. This model is well suited for computer-aided design (CAD).
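Near a resonance, the cavity model reduces the patch to a parallel RLC equivalent circuit, so the input impedance is simply the inverse of the summed branch admittances. A sketch with hypothetical element values (feed-probe inductance and other refinements of the paper's model are omitted):

```python
import numpy as np

def rlc_input_impedance(f, R, L, C):
    """Input impedance of a parallel RLC resonant circuit, the equivalent
    circuit that a cavity model assigns to each patch mode near resonance."""
    w = 2 * np.pi * f
    y = 1 / R + 1j * (w * C - 1 / (w * L))   # admittance of the parallel branches
    return 1 / y

# Hypothetical element values chosen to resonate near 2.45 GHz.
C = 1e-12                                  # 1 pF
f0 = 2.45e9
L = 1 / ((2 * np.pi * f0) ** 2 * C)        # L fixed by the resonance condition
R = 50.0                                   # resonant (radiation) resistance
z = rlc_input_impedance(f0, R, L, C)
print(z)
```

At resonance the reactive branches cancel and the impedance is purely resistive and equal to R; off resonance the magnitude drops, which is the locus a matched coax feed point is chosen on.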

  4. Storm-impact scenario XBeach model inputs and results

    Science.gov (United States)

    Mickey, Rangley; Long, Joseph W.; Thompson, David M.; Plant, Nathaniel G.; Dalyander, P. Soupy

    2017-01-01

    The XBeach model input and output of topography and bathymetry resulting from simulation of storm-impact scenarios at the Chandeleur Islands, LA, as described in USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009), are provided here. For further information regarding model input generation and visualization of model output topography and bathymetry refer to USGS Open-File Report 2017–1009 (https://doi.org/10.3133/ofr20171009).

  5. Graphical user interface for input output characterization of single variable and multivariable highly nonlinear systems

    Directory of Open Access Journals (Sweden)

    Shahrukh Adnan Khan M. D.

    2017-01-01

    This paper presents a Graphical User Interface (GUI) software utility for the input/output characterization of single-variable and multivariable nonlinear systems by obtaining the sinusoidal-input describing function (SIDF) of the plant. The software utility is developed in the MATLAB R2011a environment. The developed GUI holds no restriction on the nonlinearity type, arrangement and system order, provided that the output(s) of the system is obtainable either through simulation or experiments. An insight into the GUI and its features is presented in this paper, and example problems from both single-variable and multivariable cases are demonstrated. The formulation of the input/output behavior of the system is discussed, and the nucleus of the MATLAB commands underlying the user interface has been outlined. The industries that would benefit from this software utility include, but are not limited to, aerospace, defense technology, robotics and automotive.
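The sinusoidal-input describing function such a utility computes can be cross-checked against a closed form for a simple nonlinearity. A sketch in Python rather than MATLAB, comparing the analytic SIDF of an ideal unit-slope saturation with a numerically extracted first-harmonic gain (the numeric route is what generalizes to arbitrary simulated systems):

```python
import numpy as np

def sidf_saturation(amplitude, limit):
    """Closed-form SIDF of an ideal unit-slope saturation at +/-limit:
    N(A) = 1 for A <= limit, else (2/pi)(asin(s) + s*sqrt(1-s^2)), s = limit/A."""
    if amplitude <= limit:
        return 1.0
    s = limit / amplitude
    return (2 / np.pi) * (np.arcsin(s) + s * np.sqrt(1 - s * s))

def sidf_numeric(nonlinearity, amplitude, n=100000):
    """First-harmonic gain from the Fourier integral, for any odd nonlinearity:
    feed A*sin(t) through, project the output back onto sin(t), divide by A."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    y = nonlinearity(amplitude * np.sin(t))
    return 2 * np.mean(y * np.sin(t)) / amplitude

def sat(x):
    """Ideal unit-slope saturation at +/-1."""
    return np.clip(x, -1.0, 1.0)

print(sidf_saturation(2.0, 1.0), sidf_numeric(sat, 2.0))
```

The two routes agree closely; swapping `sat` for any other odd static nonlinearity, or for samples of a simulated plant output, is how a GUI of this kind builds its amplitude-dependent gain curves.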

  6. Not All Children Agree: Acquisition of Agreement when the Input Is Variable

    Science.gov (United States)

    Miller, Karen

    2012-01-01

    In this paper we investigate the effect of variable input on the acquisition of grammar. More specifically, we examine the acquisition of the third person singular marker -s on the auxiliary "do" in comprehension and production in two groups of children who are exposed to similar varieties of English but that differ with respect to adult…

  7. Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment

    Science.gov (United States)

    Alejo, Rafael; Piquer-Píriz, Ana

    2016-01-01

    The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…

  9. A GENERAL APPROACH BASED ON AUTOCORRELATION TO DETERMINE INPUT VARIABLES OF NEURAL NETWORKS FOR TIME SERIES FORECASTING

    Institute of Scientific and Technical Information of China (English)

    HUANG Wei; NAKAMORI Yoshiteru; WANG Shouyang

    2004-01-01

    Input selection is probably one of the most critical decision issues in neural network design, because it has a great impact on forecasting performance. Among the many applications of artificial neural networks to finance, time series forecasting is perhaps one of the most challenging issues. Considering the features of neural networks, we propose a general approach called the Autocorrelation Criterion (AC) to determine the input variables for a neural network. The purpose is to seek optimal lag periods, which are more predictive and less correlated with each other. AC is a data-driven approach in that there is no prior assumption about the models for the time series under study. It therefore has extensive applications and avoids lengthy experimentation and tinkering in input selection. We apply the approach to the determination of input variables for foreign exchange rate forecasting and conduct comparisons between AC and an information-based in-sample model selection criterion. The experimental results show that AC outperforms the information-based in-sample model selection criterion.
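Ranking candidate lags by autocorrelation can be sketched in a few lines. This is a simplified stand-in for the paper's Autocorrelation Criterion (it keeps the "more predictive" part but ignores the "less correlated with each other" part), applied to a toy AR(1) series where lag 1 should dominate:

```python
import numpy as np

def autocorrelation(x, lag):
    """Sample autocorrelation of series x at the given positive lag."""
    x = np.asarray(x, float) - np.mean(x)
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def select_lags(x, max_lag, n_lags):
    """Rank candidate lags by |autocorrelation| and keep the strongest."""
    scored = sorted(range(1, max_lag + 1),
                    key=lambda k: abs(autocorrelation(x, k)), reverse=True)
    return sorted(scored[:n_lags])

# Toy AR(1) series with coefficient 0.8; lag 1 carries the most information.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
print(select_lags(x, max_lag=10, n_lags=3))
```

The selected lag indices would then index the lagged series used as the network's input vector.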

  10. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez, E-mail: valter.costa@usp.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants or process industries is very important, because an early diagnosis allows the fault to be corrected, preventing production stoppages, improving operator safety and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, the set of input variables of an artificial neural network so as to monitor the greatest number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, we selected for the network's input set the variables nuclear power, primary circuit flow rate, control/safety rod position and pressure difference across the reactor core, because almost all of the monitored variables are related to these variables, or their behavior results from the interaction of two or more of them. The nuclear power is related to the rise and fall of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the rise and fall of temperatures; the primary circuit flow rate transports energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables. (author)

  11. Preisach models of hysteresis driven by Markovian input processes

    Science.gov (United States)

    Schubert, Sven; Radons, Günter

    2017-08-01

    We study the response of Preisach models of hysteresis to stochastically fluctuating external fields. We perform numerical simulations, which indicate that analytical expressions derived previously for the autocorrelation functions and power spectral densities of the Preisach model with uncorrelated input also hold asymptotically if the external field shows exponentially decaying correlations. As a consequence, the mechanisms causing long-term memory and 1/f noise in Preisach models with uncorrelated inputs still apply in the presence of fast-decaying input correlations. We collect additional evidence for the importance of the previously introduced effective Preisach density, even for Preisach models with correlated inputs. Additionally, we present some results for the output of the Preisach model with uncorrelated input using analytical methods. It is found, for instance, that in order to produce the same long-time tails in the output, the elementary hysteresis loops of large width need to have a higher weight for the generic Preisach model than for the symmetric Preisach model. Further, we find the autocorrelation functions and power spectral densities to be monotonically decreasing, independently of the choice of input and Preisach density.
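The elementary hysteresis loops mentioned above are the hysterons of the Preisach model. A minimal scalar sketch with three hypothetical hysterons, showing the memory effect (after a large excursion the output does not retrace its rising branch):

```python
import numpy as np

def preisach_output(input_series, alphas, betas, weights):
    """Scalar Preisach model: a weighted sum of hysterons r_{alpha,beta} that
    switch up when the input reaches alpha and down when it reaches beta
    (alpha > beta). All hysterons start in the -1 state; inputs between the
    thresholds leave a hysteron's state unchanged (this is the memory)."""
    states = -np.ones(len(weights))
    out = []
    for u in input_series:
        states = np.where(u >= alphas, 1.0,
                          np.where(u <= betas, -1.0, states))
        out.append(float(np.dot(weights, states)))
    return out

# Three hypothetical hysterons; a rising then falling input traces a loop.
alphas = np.array([0.2, 0.5, 0.8])
betas = np.array([-0.2, -0.5, -0.8])
weights = np.array([1.0, 1.0, 1.0])
up = preisach_output([0.0, 0.3, 0.6, 0.9], alphas, betas, weights)
down = preisach_output([0.9, 0.6, 0.3, 0.0], alphas, betas, weights)
print(up, down)
```

On the way up the output climbs step by step, but after the 0.9 excursion it stays pinned high all the way back to 0, since no hysteron's lower threshold is reached; driving such a model with a stochastic input is the setting the paper analyzes.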

  12. Model-Free importance indicators for dependent input

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, A.; Ratto, M.; Tarantola, S.

    2001-07-01

    A number of methods are available to assess uncertainty importance in the predictions of a simulation model for orthogonal sets of uncertain input factors. However, in many practical cases input factors are correlated. Even for these cases it is still possible to compute the correlation ratio and the partial (or incremental) importance measure, two popular sensitivity measures proposed in the recent literature on the subject. Unfortunately, the existing indicators of importance have limitations in terms of their use in sensitivity analysis of model output. Correlation ratios are indeed effective for priority setting (i.e. to find out which input factor needs better determination) but not, for instance, for the identification of the subset of the most important input factors, or for model simplification. In such cases other types of indicators are required that can cope with the simultaneous occurrence of correlation and interaction (a property of the model) among the input factors. In (1) the limitations of current measures of importance were discussed and a general approach was identified to quantify uncertainty importance for correlated inputs in terms of different betting contexts. This work was later submitted to the Journal of the American Statistical Association. However, the computational cost of such an approach is still high, as typically happens when dealing with correlated input factors. In this paper we explore how suitable designs could reduce the numerical load of the analysis. (Author) 11 refs.
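
The correlation ratio mentioned above is the variance of the conditional expectation divided by the total output variance, Var(E[Y|Xi]) / Var(Y). A simple binning estimate follows; the bin count and the toy test function are illustrative choices, not from the paper.

```python
import numpy as np

def correlation_ratio(x, y, bins=20):
    """Estimate eta^2 = Var(E[Y|X]) / Var(Y), the correlation ratio used as
    an importance measure, by binning x into equal-probability strata."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

rng = np.random.default_rng(2)
x1 = rng.normal(size=10000)
x2 = rng.normal(size=10000)
y = 2.0 * x1 + 0.1 * x2          # x1 dominates the output variance
print(correlation_ratio(x1, y), correlation_ratio(x2, y))
```

A high ratio flags a factor worth determining better (priority setting); as the abstract notes, this alone does not identify important subsets when factors are correlated and interacting.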

  13. Modeling Pacific Decadal Variability

    Science.gov (United States)

    Schneider, N.

    2002-05-01

    Hypotheses for decadal variability rely on the large thermal inertia of the ocean to sequester heat and provide the long memory of the climate system. Understanding decadal variability requires the study of the generation of ocean anomalies at decadal frequencies, the evolution of oceanic signals, and the response of the atmosphere to oceanic perturbations. A sample of studies relevant for Pacific decadal variability will be reviewed in this presentation. The ocean integrates air-sea flux anomalies that result from internal atmospheric variability or broad-band coupled processes such as ENSO, or are an intrinsic part of the decadal feedback loop. Anomalies of Ekman pumping lead to deflections of the ocean thermocline and accompanying changes of the ocean circulation; perturbations of surface layer heat and fresh water budgets cause anomalies of T/S characteristics of water masses. The former process leads to decadal variability due to the dynamical adjustment of the mid latitude gyres or thermocline circulation; the latter accounts for the low frequency climate variations by the slow propagation of anomalies in the thermocline from the mid-latitude outcrops to the equatorial upwelling regions. Coupled modeling studies and ocean model hindcasts suggest that the adjustment of the North Pacific gyres to variation of Ekman pumping causes low frequency variations of surface temperature in the Kuroshio-Oyashio extension region. These changes appear predictable a few years in advance, and affect the local upper ocean heat budget and precipitation. The majority of low frequency variance is explained by the ocean's response to stochastic atmospheric forcing, the additional variance explained by mid-latitude ocean to atmosphere feedbacks appears to be small. The coupling of subtropical and tropical regions by the equator-ward motion in the thermocline can support decadal anomalies by changes of its speed and path, or by transporting water mass anomalies to the equatorial

  14. Effect of variable heat input on the heat transfer characteristics in an Organic Rankine Cycle system

    Directory of Open Access Journals (Sweden)

    Aboaltabooq Mahdi Hatf Kadhum

    2016-01-01

    Full Text Available This paper analyzes the heat transfer characteristics of an ORC evaporator applied on a diesel engine, using measured data from experimental work such as flue gas mass flow rate and flue gas temperature. A mathematical model was developed with regard to the preheater, boiler and superheater zones of a counter-flow evaporator. Each of these zones has been subdivided into a number of cells. The hot source of the ORC cycle was modeled. The study examines how variable heat input affects the ORC system's heat transfer characteristics, with special emphasis on the evaporator. The results show that the refrigerant's heat transfer coefficient has its highest value at 100% load of the diesel engine and decreases as the load decreases. On the exhaust gas side, the heat transfer coefficient likewise decreases with decreasing load. The refrigerant's heat transfer coefficient increases gradually with the evaporator's tube length in the preheater zone, then increases rapidly in the boiler zone, followed by a decrease in the superheater zone. The exhaust gases' heat transfer coefficient increases with the evaporator's tube length in all zones. The results were compared with results by other authors and were found to be in agreement.

  16. Space market model space industry input-output model

    Science.gov (United States)

    Hodgin, Robert F.; Marchesini, Roberto

    1987-01-01

    The goal of the Space Market Model (SMM) is to develop an information resource for the space industry. The SMM is intended to contain information appropriate for decision making in the space industry. The objectives of the SMM are to: (1) assemble information related to the development of the space business; (2) construct an adequate description of the emerging space market; (3) disseminate the information on the space market to forecasters and planners in government agencies and private corporations; and (4) provide timely analyses and forecasts of critical elements of the space market. An Input-Output model of market activity is proposed which is capable of transforming raw data into useful information for decision makers and policy makers dealing with the space sector.

  17. Identification of multiple inputs single output errors-in-variables system using cumulant

    Institute of Scientific and Technical Information of China (English)

    Haihui Long; Jiankang Zhao

    2014-01-01

    A higher-order cumulant-based weighted least square (HOCWLS) and a higher-order cumulant-based iterative least square (HOCILS) are derived for multiple inputs single output (MISO) errors-in-variables (EIV) systems from noisy input/output data. Whether the noises of the input/output of the system are white or colored, the proposed algorithms can be insensitive to these noises and yield unbiased estimates. To realize adaptive parameter estimates, a higher-order cumulant-based recursive least square (HOCRLS) method is also studied. Convergence analysis of the HOCRLS is conducted by using the stochastic process theory and the stochastic martingale theory. It indicates that the parameter estimation error of HOCRLS consistently converges to zero under a generalized persistent excitation condition. The usefulness of the proposed algorithms is assessed through numerical simulations.

  18. Method and apparatus for smart battery charging including a plurality of controllers each monitoring input variables

    Science.gov (United States)

    Hammerstrom, Donald J.

    2013-10-15

    A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger and the battery charger is connected to a power supply. A plurality of controllers in communication with one another are provided, each of the controllers monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of the subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of either the set of objectives, the charging constraints, or both, using an algorithm that accounts for each of the preferred charge rates of the controllers and/or does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the actual charge rate.

  19. Assessment of input uncertainty by seasonally categorized latent variables using SWAT

    Science.gov (United States)

    Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...

  20. Wide Input Range Power Converters Using a Variable Turns Ratio Transformer

    DEFF Research Database (Denmark)

    Ouyang, Ziwei; Andersen, Michael A. E.

    2016-01-01

    A new integrated transformer with variable turns ratio is proposed to enable dc-dc converters operating over a wide input voltage range. The integrated transformer employs a new geometry of magnetic core with “four legs”, two primary windings with orthogonal arrangement, and “8” shape connection...... of diagonal secondary windings, in order to make the transformer turns ratio adjustable by controlling the phase between the two current excitations subjected to the two primary windings. Full-bridge boost dc-dc converter is employed with the proposed transformer to demonstrate the feasibility of the variable...

  1. Estimation of the input parameters in the Feller neuronal model

    Science.gov (United States)

    Ditlevsen, Susanne; Lansky, Petr

    2006-06-01

    The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time (FPT) through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the FPT is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

  2. Quality assurance of weather data for agricultural system model input

    Science.gov (United States)

    It is well known that crop production and hydrologic variation on watersheds is weather related. Rarely, however, is meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  3. Modeling Shared Variables in VHDL

    DEFF Research Database (Denmark)

    Madsen, Jan; Brage, Jens P.

    1994-01-01

    A set of concurrent processes communicating through shared variables is an often used model for hardware systems. This paper presents three modeling techniques for representing such shared variables in VHDL, depending on the acceptable constraints on accesses to the variables. Also a set of guide......

  4. Risk assessment of finishing beef cattle in feedlot: slaughter weights and correlation amongst input variables

    Directory of Open Access Journals (Sweden)

    Paulo Santana Pacheco

    2014-02-01

    Full Text Available The objective of this study was to evaluate the risk associated with finishing crossbred Charolais × Nellore steers in feedlot at different slaughter weights (425, 467 or 510 kg), considering or disregarding the correlation amongst random input variables. Data were collected from 2004 to 2010 and used in the simulation of the financial indicator Net Present Value (NPV). Animals slaughtered at 425, 467 or 510 kg were fed diets containing a roughage:concentrate ratio of 60:40 for 30, 65 and 94 days, respectively. In the simulation of NPV, Latin Hypercube sampling was used, running 2000 iterations. An analysis of stochastic dominance of first and second orders was carried out, as well as the asymptotic Kolmogorov-Smirnov test (to check for differences between pairs of cumulative distribution curves), followed by sensitivity analysis using stepwise multivariate regression. Simulations of NPV considering the correlation amongst the input variables produced more consistent estimates of this financial indicator than simulations that disregarded it. The risk analysis showed that a 467 kg slaughter weight presented the lowest risk for finishing cattle in feedlots when compared with 425 and 510 kg. The most important variables influencing the NPV are the prices of feeder and finished steers, initial and final weights, concentrate and roughage costs, and the minimum rate of attractiveness; therefore, farmers should pay particular attention to these variables when deciding whether or not to use a feedlot to finish beef cattle.
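
Latin Hypercube sampling stratifies each input's range so that every equal-probability interval is sampled exactly once. The following sketch shows the sampling step and a toy NPV calculation; the prices, costs, and discount rate are hypothetical, illustrative numbers, not the paper's data, and the input correlation the study emphasizes is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n, d):
    """Stratified uniform samples in [0, 1)^d: each column contains exactly
    one sample per equal-probability stratum (a basic Latin Hypercube)."""
    perm = np.argsort(rng.uniform(size=(n, d)), axis=0)  # random permutation per column
    return (perm + rng.uniform(size=(n, d))) / n

n = 2000
s = latin_hypercube(n, 3)

# hypothetical input ranges (illustrative only)
price_feeder = 4.0 + 1.0 * s[:, 0]     # buying price, $/kg live weight
price_finished = 4.2 + 1.2 * s[:, 1]   # selling price, $/kg live weight
daily_cost = 2.0 + 1.0 * s[:, 2]       # feed cost, $/head/day

# 467 kg scenario from the abstract: 425 kg in, 65 days on feed
w_in, w_out, days, rate = 425.0, 467.0, 65, 0.06
cash_out = w_in * price_feeder + days * daily_cost
cash_in = w_out * price_finished
npv = cash_in / (1.0 + rate) ** (days / 365.0) - cash_out
print(npv.mean())
```

The resulting NPV sample can then feed the risk analysis (cumulative distributions, stochastic dominance) described in the abstract.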

  5. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rautenstrauch

    2004-09-10

    This analysis is one of 10 reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN) biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. Inhalation Exposure Input Parameters for the Biosphere Model is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling, and the plan for development of the biosphere abstraction products for TSPA, as identified in the Technical Work Plan for Biosphere Modeling and Expert Support (BSC 2004 [DIRS 169573]). This analysis report defines and justifies values of mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of ERMYN to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception.

  6. How sensitive are estimates of carbon fixation in agricultural models to input data?

    Directory of Open Access Journals (Sweden)

    Tum Markus

    2012-02-01

    Full Text Available Abstract Background Process based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity is of major interest when they are chosen for use. It is important to assess the effect of input dataset quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results For our case study analysis we selected two different process based models: the Environmental Policy Integrated Climate (EPIC) and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern in response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison.

  7. How sensitive are estimates of carbon fixation in agricultural models to input data?

    Science.gov (United States)

    Tum, Markus; Strauss, Franziska; McCallum, Ian; Günther, Kurt; Schmid, Erwin

    2012-02-01

    Process based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity is of major interest when they are chosen for use. It is important to assess the effect of input dataset quality on model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. For our case study analysis we selected two different process based models: the Environmental Policy Integrated Climate (EPIC) and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent pattern in response to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison.

  8. The use of synthetic input sequences in time series modeling

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Dair Jose de [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Letellier, Christophe [CORIA/CNRS UMR 6614, Universite et INSA de Rouen, Av. de l' Universite, BP 12, F-76801 Saint-Etienne du Rouvray cedex (France); Gomes, Murilo E.D. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil); Aguirre, Luis A. [Programa de Pos-Graduacao em Engenharia Eletrica, Universidade Federal de Minas Gerais, Av. Antonio Carlos 6627, 31.270-901 Belo Horizonte, MG (Brazil)], E-mail: aguirre@cpdee.ufmg.br

    2008-08-04

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.

  9. The use of synthetic input sequences in time series modeling

    Science.gov (United States)

    de Oliveira, Dair José; Letellier, Christophe; Gomes, Murilo E. D.; Aguirre, Luis A.

    2008-08-01

    In many situations time series models obtained from noise-like data settle to trivial solutions under iteration. This Letter proposes a way of producing a synthetic (dummy) input, that is included to prevent the model from settling down to a trivial solution, while maintaining features of the original signal. Simulated benchmark models and a real time series of RR intervals from an ECG are used to illustrate the procedure.
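
The idea can be sketched as follows: a model fitted to noise-like data decays to a trivial fixed point under free iteration, while iterating it with a synthetic input keeps the output statistically alive. Using shuffled model residuals as the dummy input is one plausible choice for this sketch, not necessarily the authors' exact construction.

```python
import numpy as np

rng = np.random.default_rng(4)

# noise-like data: an AR(2) process driven by white noise
N = 2000
e = rng.normal(size=N)
x = np.zeros(N)
for t in range(2, N):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]

# least-squares fit of an AR(2) model to the data
Phi = np.column_stack([x[1:-1], x[:-2]])
a, b = np.linalg.lstsq(Phi, x[2:], rcond=None)[0]
resid = x[2:] - Phi @ np.array([a, b])

def free_run(dummy_input):
    """Iterate the fitted model, adding the given input at each step."""
    y = list(x[:2])
    for t in range(2, N):
        y.append(a * y[-1] + b * y[-2] + dummy_input[t])
    return np.array(y)

y_plain = free_run(np.zeros(N))   # settles to the trivial fixed point
dummy = np.concatenate([[0.0, 0.0], rng.permutation(resid)])  # synthetic input
y_dummy = free_run(dummy)         # retains the variance of the original signal
print(np.abs(y_plain[-100:]).max(), y_dummy[-100:].std())
```

Shuffling the residuals destroys their temporal order but preserves their distribution, so the driven model keeps fluctuating instead of collapsing.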

  10. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-10

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment for the license application (TSPA-LA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA-LA, as identified in the ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]) (TWP). This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA). This report is one of the five reports that develop input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the conceptual model and the mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed description of the model input parameters. The output of this report is used as direct input in the ''Nominal Performance Biosphere Dose Conversion Factor Analysis'' and in the ''Disruptive Event Biosphere Dose Conversion Factor Analysis'' that calculate the values of biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios, respectively. The purpose of this analysis was to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or in volcanic ash). The analysis

  11. Neural network with a self-selection ability of input variables for a nonlinear system identification; Nyuryoku hensu no jiko sentaku noryoku wo sonaeta hisenkei system dotei yo neural network

    Energy Technology Data Exchange (ETDEWEB)

    Kondo, T. [School of Medical Sciences, The University of Tokushima., Tokushima (Japan)

    1997-08-31

    When a neural network is applied to an identification problem with complex structure, the network becomes larger in scale and more complex as the number of input variables increases, because high-order effects of the input variables must be represented within the network. In this study, a neural network with a self-selection ability for its input variables is proposed. This network can remove neurons related to unnecessary input variables from its interior according to a prediction-error criterion, and thus constructs a network using only the neurons related to useful input variables, even when the input contains unnecessary variables. In this paper, its effectiveness is demonstrated by comparison with identification results obtained by a conventional neural network and an improved GMDH method. The network was also applied to a short-term forecasting problem of air pollution concentration, to compare its estimation accuracy with those of other prediction models. 18 refs., 10 figs., 3 tabs.

  12. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    Science.gov (United States)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametric fitted nonlinear additive autoregressive models with external inputs. Therefore, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinear controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate the cohorts that could lead to a stratification of hypertension risk in OSAS patients.

  13. Model reduction of nonlinear systems subject to input disturbances

    KAUST Repository

    Ndoye, Ibrahima

    2017-07-10

    The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.

  14. Variable input observer for structural health monitoring of high-rate systems

    Science.gov (United States)

    Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob

    2017-02-01

    The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintain structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and a reduction of the convergence time. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
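
The delay-selection step can be sketched with a histogram estimate of mutual information: the embedding delay is taken as the first local minimum of I(x_t; x_{t-τ}) over the lag τ. The binning, lag range, and test signal below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def first_minimum_delay(x, max_lag=80):
    """Embedding delay: first local minimum of I(x_t; x_{t-tau}) over tau."""
    mi = [mutual_information(x[:-tau], x[tau:]) for tau in range(1, max_lag + 1)]
    for i in range(1, len(mi) - 1):
        if mi[i] < mi[i - 1] and mi[i] < mi[i + 1]:
            return i + 1          # mi[i] corresponds to lag i + 1
    return max_lag

# demo on a noise-free sinusoid, 200 samples per period
t = np.linspace(0.0, 40.0 * np.pi, 4000)
x = np.sin(t)
d = first_minimum_delay(x)
print(d)  # for a sinusoid, typically near a quarter period

# sanity check of the MI estimate on dependent vs. independent data
z, w = rng.normal(size=(2, 5000))
print(mutual_information(z, z) > mutual_information(z, w))
```

The embedding dimension would then be chosen with the false-nearest-neighbors test, which is omitted here.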

  15. Inhalation Exposure Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2006-06-05

    This analysis is one of the technical reports that support the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), referred to in this report as the biosphere model. ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the conceptual model as well as the mathematical model and its input parameters. This report documents development of input parameters for the biosphere model that are related to atmospheric mass loading and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for a Yucca Mountain repository. ''Inhalation Exposure Input Parameters for the Biosphere Model'' is one of five reports that develop input parameters for the biosphere model. A graphical representation of the documentation hierarchy for the biosphere model is presented in Figure 1-1 (based on BSC 2006 [DIRS 176938]). This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and how this analysis report contributes to biosphere modeling. This analysis report defines and justifies values of atmospheric mass loading for the biosphere model. Mass loading is the total mass concentration of resuspended particles (e.g., dust, ash) in a volume of air. Mass loading values are used in the air submodel of the biosphere model to calculate concentrations of radionuclides in air inhaled by a receptor and concentrations in air surrounding crops. Concentrations in air to which the receptor is exposed are then used in the inhalation submodel to calculate the dose contribution to the receptor from inhalation of contaminated airborne particles. Concentrations in air surrounding plants are used in the plant submodel to calculate the concentrations of radionuclides in foodstuffs contributed from uptake by foliar interception. This

  16. Multi input single output model predictive control of non-linear bio-polymerization process

    Energy Technology Data Exchange (ETDEWEB)

    Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)

    2015-05-15

    This paper focuses on multi input single output (MISO) model predictive control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid (mechanistic-FANN) model of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. A state space model was used, in which the inputs were the reactor temperatures and reactor impeller speeds and the outputs were the polymer molecular weight (M{sub n}) and the polymer polydispersity index. The state space model for the MISO system was created using the System Identification Toolbox of Matlab™ and used in the MISO MPC. Model predictive control (MPC) was applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that the MPC is able to track the reference trajectory and gives optimum movement of the manipulated variables.
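    The control idea in this abstract (a linear state-space model inside a receding-horizon MPC loop) can be sketched in a few lines. The matrices, horizon, and setpoint below are hypothetical, not the paper's identified polymerization model, and the controller is the unconstrained textbook form solved in closed form:

    ```python
    import numpy as np

    # Hypothetical 2-input, 1-output discrete state-space plant (assumed values):
    # x[k+1] = A x[k] + B u[k],  y[k] = C x[k]
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([[0.1, 0.05], [0.0, 0.1]])
    C = np.array([[1.0, 0.0]])

    def mpc_step(x, r, N=10, lam=1e-4):
        """Return the first optimal input of an N-step plan tracking setpoint r."""
        n, m = B.shape
        # Stacked prediction model over the horizon: Y = F x + G U
        F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
        G = np.zeros((N, N * m))
        for i in range(N):
            for j in range(i + 1):
                G[i, j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i - j) @ B
        # Minimize ||F x + G U - r||^2 + lam ||U||^2 (closed-form least squares)
        H = G.T @ G + lam * np.eye(N * m)
        U = np.linalg.solve(H, G.T @ (np.full((N, 1), r) - F @ x))
        return U[:m]                       # receding horizon: apply first move only

    x = np.zeros((2, 1))
    for _ in range(50):                    # closed-loop simulation toward r = 1
        u = mpc_step(x, 1.0)
        x = A @ x + B @ u
    print(float(C @ x))                    # output settles near the setpoint 1
    ```

    Re-solving the least-squares problem at every step and applying only the first move is what makes this "receding horizon" control; constraints on inputs would turn the closed-form solve into a quadratic program.
    
    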

  17. Agricultural and Environmental Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    K. Rasmuson; K. Rautenstrauch

    2004-09-14

    This analysis is one of 10 technical reports that support the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN) (i.e., the biosphere model). It documents development of agricultural and environmental input parameters for the biosphere model, and supports the use of the model to develop biosphere dose conversion factors (BDCFs). The biosphere model is one of a series of process models supporting the total system performance assessment (TSPA) for the repository at Yucca Mountain. The ERMYN provides the TSPA with the capability to perform dose assessments. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships between the major activities and their products (the analysis and model reports) that were planned in ''Technical Work Plan for Biosphere Modeling and Expert Support'' (BSC 2004 [DIRS 169573]). The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes the ERMYN and its input parameters.

  18. Determining avalanche modelling input parameters using terrestrial laser scanning technology

    OpenAIRE

    2013-01-01

    International audience; In dynamic avalanche modelling, data about the volumes and areas of the snow released, mobilized and deposited are key input parameters, as well as the fracture height. The fracture height can sometimes be measured in the field, but it is often difficult to access the starting zone due to difficult or dangerous terrain and avalanche hazards. More complex is determining the areas and volumes of snow involved in an avalanche. Such calculations require high-resolution spa...

  19. Land Building Models: Uncertainty in and Sensitivity to Input Parameters

    Science.gov (United States)

    2013-08-01

    Louisiana Coastal Area Ecosystem Restoration Projects Study, Vol. 3, Final integrated feasibility study and... Nourishment Module, Chapter 8. In Coastal Louisiana Ecosystem Assessment and Restoration (CLEAR) Model of Louisiana Coastal Area (LCA) Comprehensive... Land Building Models: Uncertainty in and Sensitivity to Input Parameters, by Ty V. Wamsley (ERDC/CHL CHETN-VI-44, August 2013). PURPOSE: The purpose of this Coastal and Hydraulics Engineering Technical Note (CHETN) is to document a

  20. State-shared model for multiple-input multiple-output systems

    Institute of Scientific and Technical Information of China (English)

    Zhenhua TIAN; Karlene A. HOO

    2005-01-01

    This work proposes a method to construct a state-shared model for multiple-input multiple-output (MIMO) systems. A state-shared model is defined as a linear time invariant state-space structure that is driven by measurement signals (the plant outputs and the manipulated variables) but shared by different multiple input/output models. The genesis of the state-shared model is a particular reduced non-minimal realization. Any such realization necessarily fulfills the requirement that the output of the state-shared model is an asymptotically correct estimate of the output of the plant, provided the process model is selected appropriately. The approach is demonstrated on a nonlinear MIMO system: a physiological model of the calcium fluxes that control muscle contraction and relaxation in human cardiac myocytes.

  1. Influence of magnetospheric inputs definition on modeling of ionospheric storms

    Science.gov (United States)

    Tashchilin, A. V.; Romanova, E. B.; Kurkin, V. I.

    Usually, empirical models specify the neutral-atmosphere and magnetosphere parameters for numerical modeling of ionospheric storms. The statistical nature of these models makes them impractical for simulating an individual storm, so the empirical models must be corrected using additional assumptions. This work investigates the influence of magnetospheric inputs, such as the distributions of electric potential and of the number and energy fluxes of precipitating electrons, on the results of ionospheric storm simulations. To this end, for the strong geomagnetic storm of September 25, 1998, hourly global distributions of these magnetospheric inputs from September 20 to 27 were calculated by the magnetogram inversion technique (MIT). Two variants of the ionospheric response to this magnetic storm were then simulated with a 3-D ionospheric model, using the MIT data and the empirical models of electric fields (Sojka et al., 1986) and electron precipitation (Hardy et al., 1985). Comparison of the results showed that, for high-latitude and subauroral stations, the daily variations of electron density calculated with the MIT data are closer to observations than those from the empirical models. In addition, using the MIT data reveals some peculiarities in the daily variations of electron density during a strong geomagnetic storm. References: Sojka J.J., Rasmussen C.E., Schunk R.W., J. Geophys. Res., 1986, N10, p. 11281. Hardy D.A., Gussenhoven M.S., Holeman E.A., J. Geophys. Res., 1985, N5, p. 4229.

  2. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

    Full Text Available In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing the different variables affecting supply chain performance, and causal relationships among these variables have been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of these variables on the performance of a case supply chain over a period of 10 years.

  3. Assessing and propagating uncertainty in model inputs in corsim

    Energy Technology Data Exchange (ETDEWEB)

    Molina, G.; Bayarri, M. J.; Berger, J. O.

    2001-07-01

    CORSIM is a large simulator for vehicular traffic, and is being studied with respect to its ability to successfully model and predict behavior of traffic in a 36 block section of Chicago. Inputs to the simulator include information about street configuration, driver behavior, traffic light timing, turning probabilities at each corner and distributions of traffic ingress into the system. This work is described in more detail in the article Fast Simulators for Assessment and Propagation of Model Uncertainty also in these proceedings. The focus of this conference poster is on the computational aspects of this problem. In particular, we address the description of the full conditional distributions needed for implementation of the MCMC algorithm and, in particular, how the constraints can be incorporated; details concerning the run time and convergence of the MCMC algorithm; and utilisation of the MCMC output for prediction and uncertainty analysis concerning the CORSIM computer model. As this last is the ultimate goal, it is worth emphasizing that the incorporation of all uncertainty concerning inputs can significantly affect the model predictions. (Author)
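    The MCMC machinery mentioned in this record can be illustrated with a minimal random-walk Metropolis sampler. The data (k observed turns out of n vehicles) and the uniform prior are invented for illustration; CORSIM itself is not involved:

    ```python
    import numpy as np

    # Hypothetical sketch: Metropolis sampling of the posterior of a turning
    # probability p, given k observed turns out of n vehicles and a uniform prior.
    rng = np.random.default_rng(0)
    k, n = 30, 100                        # illustrative observed data

    def log_post(p):
        if not 0.0 < p < 1.0:
            return -np.inf                # outside the support: reject
        return k * np.log(p) + (n - k) * np.log(1.0 - p)

    p, chain = 0.5, []
    for _ in range(20000):
        prop = p + rng.normal(0, 0.05)    # random-walk proposal
        if np.log(rng.uniform()) < log_post(prop) - log_post(p):
            p = prop                      # accept
        chain.append(p)
    post = np.array(chain[5000:])         # drop burn-in
    print(post.mean())                    # near the Beta posterior mean (k+1)/(n+2)
    ```

    The analytic posterior here is Beta(k+1, n-k+1), so the sampler can be checked exactly; in the CORSIM setting the posterior has no closed form, which is why MCMC is needed.
    
    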

  4. Evaluating the uncertainty of input quantities in measurement models

    Science.gov (United States)

    Possolo, Antonio; Elster, Clemens

    2014-06-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) gives guidance about how values and uncertainties should be assigned to the input quantities that appear in measurement models. This contribution offers a concrete proposal for how that guidance may be updated in light of the advances in the evaluation and expression of measurement uncertainty that were made in the course of the twenty years that have elapsed since the publication of the GUM, and also considering situations that the GUM does not yet contemplate. Our motivation is the ongoing conversation about a new edition of the GUM. While generally we favour a Bayesian approach to uncertainty evaluation, we also recognize the value that other approaches may bring to the problems considered here, and focus on methods for uncertainty evaluation and propagation that are widely applicable, including to cases that the GUM has not yet addressed. In addition to Bayesian methods, we discuss maximum-likelihood estimation, robust statistical methods, and measurement models where values of nominal properties play the same role that input quantities play in traditional models. We illustrate these general-purpose techniques in concrete examples, employing data sets that are realistic but that also are of conveniently small sizes. The supplementary material available online lists the R computer code that we have used to produce these examples (stacks.iop.org/Met/51/3/339/mmedia). Although we strive to stay close to clause 4 of the GUM, which addresses the evaluation of uncertainty for input quantities, we depart from it as we review the classes of measurement models that we believe are generally useful in contemporary measurement science. We also considerably expand and update the treatment that the GUM gives to Type B evaluations of uncertainty: reviewing the state-of-the-art, disciplined approach to the elicitation of expert knowledge, and its encapsulation in probability distributions that are usable in

  5. Kernel Principal Component Analysis for Stochastic Input Model Generation (PREPRINT)

    Science.gov (United States)

    2010-08-17

    Fig. 13 shows contours of saturation at 0.2 PVI: the MC mean (a) and variance (b) from experimental samples, and the MC mean (c) and variance (d) from PC realizations. PVI represents dimensionless time and is computed as PVI = ∫ Q dt/Vp. The stochastic input model provides a fast way to generate many realizations, which are consistent, in a useful sense, with the experimental data.

  6. Hybrid Unifying Variable Supernetwork Model

    Institute of Scientific and Technical Information of China (English)

    LIU; Qiang; FANG; Jin-qing; LI; Yong

    2015-01-01

    In order to compare the new phenomena of topology change, evolution, hybrid ratio and network characteristics of the unified hybrid network theoretical model with those of the unified hybrid supernetwork model, this paper constructs a unified hybrid variable supernetwork model (HUVSM). The first layer introduces a hybrid ratio dr, the

  7. Water Yield and Sediment Yield Simulations for Teba Catchment in Spain Using SWRRB Model: Ⅰ. Model Input and Simulation Experiment

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Water yield and sediment yield in the Teba catchment, Spain, were simulated using the SWRRB (Simulator for Water Resources in Rural Basins) model. The model is composed of 198 mathematical equations. About 120 input items (variables) were supplied for the simulation, including meteorological and climatic factors, hydrologic factors, topographic factors, parent materials, soils, vegetation, human activities, etc. The simulated results included surface runoff, subsurface runoff, sediment, peak flow, evapotranspiration, soil water, total biomass, etc. Careful and thorough input data preparation and repeated simulation experiments are the key to obtaining accurate results. In this work the simulation accuracy for annual water yield prediction reached 83.68%.

  8. Performance Comparison of Sub Phonetic Model with Input Signal Processing

    Directory of Open Access Journals (Sweden)

    Dr E. Ramaraj

    2006-01-01

    Full Text Available The quest for a better model of signal transformation for speech has driven the development of better signal representations and algorithms. This article explores the word model built as a concatenation of state-dependent senones as an alternative to the phoneme. The research aims to combine the senone with Input Signal Processing (ISP), an algorithm that has been applied to phonemes with considerable success, to compare the performance of senone with ISP against phoneme with ISP, and to supply the result analysis. The research model is implemented on the SPHINX IV [4] speech engine owing to its flexibility toward new algorithms, its robustness and its performance.

  9. Estimation of Soil Carbon Input in France: An Inverse Modelling Approach

    Institute of Scientific and Technical Information of China (English)

    J.MEERSMANS; M.P.MARTIN; E.LACARCE; T.G.ORTON; S.DE BAETS; M.GOURRAT; N.P.A.SABY

    2013-01-01

    Development of a quantitative understanding of soil organic carbon (SOC) dynamics is vital for managing soil to sequester carbon (C) and maintain fertility, thereby contributing to food security and climate change mitigation. There are well-established process-based models that can be used to simulate SOC stock evolution; however, there are few plant residue C input values, and those that exist represent a limited range of environments. This limitation in a fundamental model component (i.e., C input) constrains the reliability of current SOC stock simulations. This study aimed to estimate crop-specific and environment-specific plant-derived soil C input values for agricultural sites in France based on data from 700 sites selected from a recently established French soil monitoring network (the RMQS database). Measured SOC stock values from this large-scale soil database were used to constrain an inverse RothC modelling approach to derive estimated C input values consistent with the stocks. This approach allowed us to estimate significant crop-specific C input values (P < 0.05) for 14 out of 17 crop types, in the range from 1.84 ± 0.69 t C ha-1 year-1 (silage corn) to 5.15 ± 0.12 t C ha-1 year-1 (grassland/pasture). Furthermore, incorporating climate variables improved the predictions: the C input of 4 crop types could be predicted as a function of temperature and that of 8 as a function of precipitation. This study offers an approach to meet the urgent need for crop-specific and environment-specific C input values in order to improve the reliability of SOC stock prediction.
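    The inverse modelling idea can be sketched with a one-pool stand-in for RothC: simulate the SOC stock as a function of a candidate annual C input and keep the input whose simulated stock best reproduces the measured one. All rate constants and stocks below are hypothetical:

    ```python
    import numpy as np

    # One-pool stand-in for RothC: dC/dt = I - k*C with an annual time step,
    # where I is the unknown annual C input (t C/ha/yr). All values are assumed.
    def simulate_stock(I, k=0.04, c0=20.0, years=100):
        c = c0
        for _ in range(years):
            c = c + I - k * c
        return c

    measured = 55.0                        # "measured" SOC stock, t C/ha
    grid = np.linspace(0.5, 6.0, 1101)     # candidate C inputs
    errors = [(simulate_stock(I) - measured) ** 2 for I in grid]
    I_hat = grid[int(np.argmin(errors))]
    print(round(I_hat, 2))                 # near the equilibrium value k*measured = 2.2
    ```

    A grid search is used here for transparency; the study's approach additionally propagates the uncertainty of the estimated inputs, which a point search does not capture.
    
    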

  10. Comparison of friction stir welding heat transfer analysis methods and parametric study on unspecified input variables

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Sung Wook; Jang, Beom Seon [Seoul National University, Seoul (Korea, Republic of)

    2014-10-15

    A three-dimensional numerical model was constructed to analyze the heat transfer of friction stir welding using Fluent and ANSYS multi-physics. The analysis result was used to calculate welding deformation and residual stress. Before the numerical simulation, several simplifying assumptions were applied to the model. Three different methods of heat transfer analysis were employed, and several assumptions were applied to each heat source model. In this work, several parametric studies were performed for certain unspecified variables. The calculated temperature data were compared with experimental data from relevant studies. Additionally, the advantages and disadvantages of the three heat transfer analysis methods were compared.

  11. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    Science.gov (United States)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.
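    One widely used global sensitivity screen (not necessarily the method of this study) is to Monte Carlo sample the input factors and compute standardized regression coefficients of a model output. A toy flood-depth model and illustrative factor ranges are assumed below:

    ```python
    import numpy as np

    # Hypothetical sketch: sample three "input factors" of a toy flood-depth
    # model, then use standardized regression coefficients (SRC) as a cheap
    # first-order sensitivity measure.
    rng = np.random.default_rng(1)
    n = 5000
    roughness = rng.uniform(0.02, 0.06, n)   # Manning's n (assumed range)
    inflow    = rng.uniform(50, 150, n)      # boundary inflow (assumed range)
    dem_err   = rng.normal(0.0, 0.1, n)      # DEM elevation error (assumed)

    # Toy model: depth rises with inflow and roughness, shifted by DEM error.
    depth = 0.05 * inflow**0.6 + 8.0 * roughness + dem_err

    X = np.column_stack([roughness, inflow, dem_err])
    Xs = (X - X.mean(0)) / X.std(0)          # standardize inputs and output
    ys = (depth - depth.mean()) / depth.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    print(dict(zip(["roughness", "inflow", "dem_err"], np.round(src, 2))))
    ```

    For a near-linear model the squared SRCs sum to about 1 and approximate first-order variance contributions; the choice of sampling bounds directly shapes the resulting sensitivities, which is exactly the caveat this abstract raises.
    
    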

  12. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    Full Text Available Abstract The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The Current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  13. Environmental Transport Input Parameters for the Biosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-06-27

    This analysis report is one of the technical reports documenting the Environmental Radiation Model for Yucca Mountain Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the geologic repository at Yucca Mountain. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows relationships among the reports developed for biosphere modeling and biosphere abstraction products for the TSPA, as identified in the ''Technical Work Plan: for Biosphere Modeling and Expert Support'' (TWP) (BSC 2003 [163602]). Some documents in Figure 1-1 may be under development and not available when this report is issued. This figure provides an understanding of how this report contributes to biosphere modeling in support of the license application (LA), but access to the listed documents is not required to understand the contents of this report. This report is one of the reports that develops input parameter values for the biosphere model. The ''Biosphere Model Report'' (BSC 2003 [160699]) describes the conceptual model, the mathematical model, and the input parameters. The purpose of this analysis is to develop biosphere model parameter values related to radionuclide transport and accumulation in the environment. These parameters support calculations of radionuclide concentrations in the environmental media (e.g., soil, crops, animal products, and air) resulting from a given radionuclide concentration at the source of contamination (i.e., either in groundwater or volcanic ash). The analysis was performed in accordance with the TWP (BSC 2003 [163602]). This analysis develops values of parameters associated with many features, events, and processes (FEPs) applicable to the reference biosphere (DTN: M00303SEPFEPS2.000 [162452]), which are addressed in the biosphere model (BSC 2003 [160699]). The treatment of these FEPs is described in BSC (2003 [160699

  14. A Study on the Fuzzy-Logic-Based Solar Power MPPT Algorithms Using Different Fuzzy Input Variables

    Directory of Open Access Journals (Sweden)

    Jaw-Kuen Shiau

    2015-04-01

    Full Text Available Maximum power point tracking (MPPT) is one of the key functions of the solar power management system in solar energy deployment. This paper investigates the design of fuzzy-logic-based solar power MPPT algorithms using different fuzzy input variables. Six fuzzy MPPT algorithms, based on different input variables, were considered in this study, namely (i) slope (of solar power versus solar voltage) and changes of the slope; (ii) slope and variation of the power; (iii) variation of power and variation of voltage; (iv) variation of power and variation of current; (v) sum of conductance and increment of the conductance; and (vi) sum of angles of arctangent of the conductance and arctangent of increment of the conductance. Algorithms (i)–(iv) have two input variables each while algorithms (v) and (vi) use a single input variable. The fuzzy logic MPPT function is deployed using a buck-boost power converter. This paper presents the details of the determinations and considerations of the fuzzy rules, as well as advantages and disadvantages of each MPPT algorithm based upon photovoltaic (PV) cell properties. The range of the input variable of algorithm (vi) is finite and the maximum power point condition is well defined in steady condition, and therefore it can be used for multipurpose controller design. Computer simulations are conducted to verify the design.
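    Two of the fuzzy input variables listed above are easy to compute from successive P-V samples; the sketch below uses a deliberately simple linear I-V curve (an assumption, not a real PV characteristic) so the MPP location is known in advance:

    ```python
    # Sketch of two fuzzy input variables from successive (V, I) samples:
    # the slope dP/dV (algorithm i) and the incremental-conductance sum
    # I/V + dI/dV (algorithm v). Both are zero exactly at the MPP.
    def mppt_inputs(v0, i0, v1, i1):
        slope = (v1 * i1 - v0 * i0) / (v1 - v0)     # finite-difference dP/dV
        cond_sum = i1 / v1 + (i1 - i0) / (v1 - v0)  # I/V + dI/dV
        return slope, cond_sum

    # Toy PV curve: I = 5 - 0.25*V  =>  P = 5V - 0.25 V^2, so the MPP is at V = 10.
    def current(v):
        return 5.0 - 0.25 * v

    s_left, c_left = mppt_inputs(7.9, current(7.9), 8.0, current(8.0))
    s_right, c_right = mppt_inputs(12.0, current(12.0), 12.1, current(12.1))
    print(s_left > 0 and c_left > 0)     # left of MPP: both positive -> raise V
    print(s_right < 0 and c_right < 0)   # right of MPP: both negative -> lower V
    ```

    In the fuzzy controllers these signed quantities are the crisp inputs that the membership functions partition into "negative big", "zero", "positive big", and so on.
    
    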

  15. Measurement of Laser Weld Temperatures for 3D Model Input.

    Energy Technology Data Exchange (ETDEWEB)

    Dagel, Daryl; GROSSETETE, GRANT; Maccallum, Danny O.

    2016-10-01

    Laser welding is a key joining process used extensively in the manufacture and assembly of critical components for several weapons systems. Sandia National Laboratories advances the understanding of the laser welding process through coupled experimentation and modeling. This report summarizes the experimental portion of the research program, which focused on measuring temperatures and thermal history of laser welds on steel plates. To increase confidence in measurement accuracy, researchers utilized multiple complementary techniques to acquire temperatures during laser welding. This data serves as input to and validation of 3D laser welding models aimed at predicting microstructure and the formation of defects and their impact on weld-joint reliability, a crucial step in rapid prototyping of weapons components.

  16. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees, the so-called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
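    The equal input model has a simple closed-form transition matrix: every substitution event lands in state j with probability π_j, so with an unnormalized clock P(t) = e^(-t) I + (1 - e^(-t)) 1 π^T. A minimal sketch (the rate normalization used in practice for F81 is omitted here):

    ```python
    import numpy as np

    # Equal input model transition matrix for branch length t:
    # P_ij(t) = pi_j * (1 - exp(-t)) + delta_ij * exp(-t)
    def equal_input_P(pi, t):
        pi = np.asarray(pi, dtype=float)
        k = len(pi)
        return np.exp(-t) * np.eye(k) + (1.0 - np.exp(-t)) * np.tile(pi, (k, 1))

    pi = np.array([0.1, 0.2, 0.3, 0.4])   # stationary distribution (4 states = F81)
    P = equal_input_P(pi, 0.5)
    print(np.allclose(P.sum(axis=1), 1.0))                            # rows are distributions
    print(np.allclose(pi @ P, pi))                                    # pi is stationary
    print(np.allclose(equal_input_P(pi, 50.0), np.tile(pi, (4, 1))))  # t -> inf limit
    ```

    With uniform π this reduces to the Jukes-Cantor model, matching the generalization described in the abstract.
    
    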

  17. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
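    One of the fitting techniques surveyed, the method of moments, is a two-line computation for common parametric families; the sketch below fits a gamma distribution to synthetic data (the true parameters are invented for the check):

    ```python
    import numpy as np

    # Method-of-moments fit of a gamma distribution: for mean m and variance v,
    # shape = m^2 / v and scale = v / m.
    rng = np.random.default_rng(2)
    data = rng.gamma(shape=3.0, scale=2.0, size=20000)   # synthetic "field data"

    m, v = data.mean(), data.var()
    shape_hat = m * m / v
    scale_hat = v / m
    print(round(shape_hat, 1), round(scale_hat, 1))      # close to the true (3.0, 2.0)
    ```

    Maximum likelihood estimation is usually more efficient statistically, but moment matching needs no iteration and gives good starting values for an MLE solver.
    
    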

  18. Input modeling with phase-type distributions and Markov models theory and applications

    CERN Document Server

    Buchholz, Peter; Felko, Iryna

    2014-01-01

    Containing a summary of several recent results on Markov-based input modeling in a coherent notation, this book introduces and compares algorithms for parameter fitting and gives an overview of available software tools in the area. Due to progress made in recent years with respect to new algorithms to generate PH distributions and Markovian arrival processes from measured data, the models outlined are useful alternatives to other distributions or stochastic processes used for input modeling. Graduate students and researchers in applied probability, operations research and computer science along with practitioners using simulation or analytical models for performance analysis and capacity planning will find the unified notation and up-to-date results presented useful. Input modeling is the key step in model based system analysis to adequately describe the load of a system using stochastic models. The goal of input modeling is to find a stochastic model to describe a sequence of measurements from a real system...
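    The simplest phase-type input model is an Erlang-k distribution (k exponential phases in series), which can be fitted by matching the first two moments of the measured data; k is read off the squared coefficient of variation. A minimal sketch with synthetic measurements:

    ```python
    import numpy as np

    # Moment-matching fit of an Erlang-k phase-type distribution: Erlang-k has
    # squared coefficient of variation CV^2 = 1/k, so k ~ round(1 / CV^2) and
    # each of the k phases gets rate k / mean.
    def fit_erlang(samples):
        m, v = np.mean(samples), np.var(samples)
        cv2 = v / (m * m)
        k = max(1, round(1.0 / cv2))
        rate = k / m
        return k, rate

    rng = np.random.default_rng(3)
    data = rng.gamma(shape=4, scale=0.5, size=50000)   # true Erlang-4, phase rate 2
    k, rate = fit_erlang(data)
    print(k, round(rate, 1))                           # recovers k = 4, rate near 2
    ```

    This only works for data with CV^2 <= 1; more variable data (CV^2 > 1) calls for hyperexponential or general phase-type fitting, which is where the algorithms surveyed in the book come in.
    
    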

  19. Modelling Analysis of Forestry Input-Output Elasticity in China

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2016-01-01

    Full Text Available Based on an extended economic model and spatial econometrics, this paper analyzes the spatial distribution and interdependence of forestry production in China and calculates the input-output elasticities of forestry production. The results show that significant spatial correlation exists in forestry production in China, with the spatial distribution mainly manifested as agglomeration. The output elasticity of the labor force is 0.6649 and that of capital is 0.8412, while the contribution of land is significantly negative. Labor and capital are thus the main determinants of province-level forestry production in China. Research on province-level forestry production should therefore not ignore spatial effects, and the policy-making process should take into consideration the interactions between provinces in forestry production. This study provides scientific and technical support for forestry production.
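    Elasticities like the 0.6649 and 0.8412 reported above are typically the coefficients of a log-log (Cobb-Douglas) production regression. The sketch below recovers known elasticities from synthetic data; it is not the paper's spatial model (the spatial lag terms are omitted), and all numbers are illustrative:

    ```python
    import numpy as np

    # Cobb-Douglas elasticity estimation: regress log(output) on log(labor)
    # and log(capital); the slopes are the input-output elasticities.
    rng = np.random.default_rng(4)
    n = 300
    labor = rng.uniform(1, 100, n)
    capital = rng.uniform(1, 100, n)
    # Synthetic production data with known elasticities 0.66 and 0.84.
    output = 2.0 * labor**0.66 * capital**0.84 * np.exp(rng.normal(0, 0.05, n))

    X = np.column_stack([np.ones(n), np.log(labor), np.log(capital)])
    coef, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
    print(np.round(coef[1:], 2))     # elasticity estimates, near [0.66, 0.84]
    ```

    A spatial econometric version would add a spatially lagged dependent variable or error term to this regression, which changes the estimator but not the elasticity interpretation of the slopes.
    
    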

  20. Modelling Implicit Communication in Multi-Agent Systems with Hybrid Input/Output Automata

    Directory of Open Access Journals (Sweden)

    Marta Capiluppi

    2012-10-01

    Full Text Available We propose an extension of Hybrid I/O Automata (HIOAs to model agent systems and their implicit communication through perturbation of the environment, like localization of objects or radio signals diffusion and detection. To this end we decided to specialize some variables of the HIOAs whose values are functions both of time and space. We call them world variables. Basically they are treated similarly to the other variables of HIOAs, but they have the function of representing the interaction of each automaton with the surrounding environment, hence they can be output, input or internal variables. Since these special variables have the role of simulating implicit communication, their dynamics are specified both in time and space, because they model the perturbations induced by the agent to the environment, and the perturbations of the environment as perceived by the agent. Parallel composition of world variables is slightly different from parallel composition of the other variables, since their signals are summed. The theory is illustrated through a simple example of agents systems.

  1. A Probabilistic Collocation Method Based Statistical Gate Delay Model Considering Process Variations and Multiple Input Switching

    CERN Document Server

    Kumar, Y Satish; Talarico, Claudio; Wang, Janet; 10.1109/DATE.2005.31

    2011-01-01

    Since the advent of new nanotechnologies, the variability of gate delay due to process variations has become a major concern. This paper proposes a new gate delay model that includes the impact of both process variations and multiple input switching. The proposed model uses an orthogonal-polynomial-based probabilistic collocation method to construct an analytical delay equation from circuit timing performance. In our experiments, the approach has less than 0.2% error on the mean gate delay and less than 3% error on the standard deviation.

  2. Variable cluster analysis method for building neural network model

    Institute of Scientific and Technical Information of China (English)

    王海东; 刘元东

    2004-01-01

    To address the problem that input variables should be reduced as much as possible while still fully explaining the output variables when building a neural network model of a complicated system, a variable selection method based on cluster analysis was investigated. A similarity coefficient describing the mutual relation of variables was defined, and the methods of highest contribution rate, part replacing whole, and variable replacement were put forward and derived from information theory. Software for neural network modelling based on cluster analysis, which provides several methods for defining the variable similarity coefficient, clustering system variables and evaluating variable clusters, was developed and applied to build a neural network forecast model of cement clinker quality. The results show that the network scale, training time and prediction accuracy are all satisfactory, and the practical application demonstrates that the method of selecting variables for neural networks is feasible and effective.

  3. Input-output capital coefficients for energy technologies. [Input-output model]

    Energy Technology Data Exchange (ETDEWEB)

    Tessmer, R.G. Jr.

    1976-12-01

    Input-output capital coefficients are presented for five electric and seven non-electric energy technologies. They describe the durable goods and structures purchases (at a 110 sector level of detail) that are necessary to expand productive capacity in each of twelve energy source sectors. Coefficients are defined in terms of 1967 dollar purchases per 10/sup 6/ Btu of output from new capacity, and original data sources include Battelle Memorial Institute, the Harvard Economic Research Project, The Mitre Corp., and Bechtel Corp. The twelve energy sectors are coal, crude oil and gas, shale oil, methane from coal, solvent refined coal, refined oil products, pipeline gas, coal combined-cycle electric, fossil electric, LWR electric, HTGR electric, and hydroelectric.
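The coefficient definition above (1967-dollar purchases of durable goods and structures per 10^6 Btu of new capacity) lends itself to a simple sketch. The goods, sectors, and rates below are invented for illustration and are not values from the cited data sources:

```python
# Illustrative input-output capital coefficients:
# 1967 dollars of each durable good or structure required
# per 1e6 Btu of new productive capacity in an energy sector.
capital_coeffs = {
    "structures":       {"coal": 0.8, "LWR electric": 3.5},
    "turbines/engines": {"coal": 0.1, "LWR electric": 1.2},
}

def capital_requirements(coeffs, expansion):
    """Total purchases of each good implied by a capacity expansion plan
    (expansion maps energy sector -> new capacity in 1e6 Btu)."""
    return {
        good: sum(rate * expansion.get(sector, 0.0)
                  for sector, rate in by_sector.items())
        for good, by_sector in coeffs.items()
    }

demand = capital_requirements(capital_coeffs, {"coal": 100.0, "LWR electric": 50.0})
print(demand)  # {'structures': 255.0, 'turbines/engines': 70.0}
```

The real tables work at a 110-sector level of goods detail and twelve energy sectors; the calculation is the same matrix-vector product, only larger.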

  4. Behavioral and Electrophysiological Evidence for Early and Automatic Detection of Phonological Equivalence in Variable Speech Inputs

    Science.gov (United States)

    Kharlamov, Viktor; Campbell, Kenneth; Kazanina, Nina

    2011-01-01

    Speech sounds are not always perceived in accordance with their acoustic-phonetic content. For example, an early and automatic process of perceptual repair, which ensures conformity of speech inputs to the listener's native language phonology, applies to individual input segments that do not exist in the native inventory or to sound sequences that…

  5. Quasi-supervised scoring of human sleep in polysomnograms using augmented input variables.

    Science.gov (United States)

    Yaghouby, Farid; Sunderam, Sridhar

    2015-04-01

    The limitations of manual sleep scoring make computerized methods highly desirable. Scoring errors can arise from human rater uncertainty or inter-rater variability. Sleep scoring algorithms either come as supervised classifiers that need scored samples of each state to be trained, or as unsupervised classifiers that use heuristics or structural clues in unscored data to define states. We propose a quasi-supervised classifier that models observations in an unsupervised manner but mimics a human rater wherever training scores are available. EEG, EMG, and EOG features were extracted in 30 s epochs from human-scored polysomnograms recorded from 42 healthy human subjects (18-79 years) and archived in an anonymized, publicly accessible database. Hypnograms were modified so that: (1) some states are scored but not others; (2) samples of all states are scored, but not for transitional epochs; and (3) two raters with 67% agreement are simulated. A framework for quasi-supervised classification was devised in which unsupervised statistical models (specifically, Gaussian mixtures and hidden Markov models) are estimated from unlabeled training data, but the training samples are augmented with variables whose values depend on available scores. Classifiers were fitted to signal features incorporating partial scores, and used to predict scores for complete recordings. Performance was assessed using Cohen's kappa statistic. The quasi-supervised classifier performed significantly better than an unsupervised model, and sometimes as well as a completely supervised model, despite receiving only partial scores. The quasi-supervised algorithm addresses the need for classifiers that mimic the scoring patterns of human raters while compensating for their limitations.
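The augmentation idea can be sketched with a minimal, stdlib-only EM routine for a two-state, one-dimensional Gaussian mixture in which epochs carrying a rater's score have their responsibilities clamped to that score. The feature values, label fraction, and component parameters below are illustrative assumptions, not the study's actual polysomnogram features:

```python
import math
import random

def quasi_supervised_gmm(x, labels, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture in which epochs that
    carry a rater's score (labels[i] in {0, 1}) have their responsibilities
    clamped to that score; unscored epochs (labels[i] is None) are handled
    in the usual unsupervised way."""
    mu = [min(x), max(x)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        resp = []
        for xi, li in zip(x, labels):
            if li is not None:                      # scored epoch: clamp
                r = [0.0, 0.0]
                r[li] = 1.0
            else:                                   # unscored epoch: E-step
                p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                     * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                     for k in (0, 1)]
                s = sum(p) or 1e-300
                r = [pk / s for pk in p]
            resp.append(r)
        for k in (0, 1):                            # M-step
            nk = sum(r[k] for r in resp) or 1e-300
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k]) ** 2
                         for r, xi in zip(resp, x)) / nk + 1e-6
            pi[k] = nk / len(x)
    return mu, var, pi

rng = random.Random(0)
data, labels = [], []
for _ in range(300):
    state = rng.random() < 0.5                      # hidden sleep state
    data.append(rng.gauss(4.0 if state else 0.0, 1.0))
    # only ~20% of epochs carry a rater's score
    labels.append(int(state) if rng.random() < 0.2 else None)
mu, var, pi = quasi_supervised_gmm(data, labels)
```

With no labels this reduces to plain unsupervised EM; with all labels it is fully supervised, so the partial-label case interpolates between the two, which is the point of the quasi-supervised framing.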

  6. Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean

    Science.gov (United States)

    Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.

    2016-04-01

    Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty in this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with peaks of up to ~17 Gg/month released into the Southern Ocean during the Last Glacial Maximum (LGM).

  7. Variability of energy input into selected subsystems of the human-glove-tool system: a theoretical study.

    Science.gov (United States)

    Hermann, Tomasz; Dobry, Marian Witalis

    2017-05-31

    This article presents an application of the energy method to assess the energy input introduced into two subsystems of the human-glove-tool system. To achieve this aim, a physical model of the system was developed, consisting of dynamic models of the human body and the glove described in Standard No. ISO 10068:2012 and a model of a hand-held power tool. The energy input introduced into the subsystems, i.e., the human body and the glove, was analysed in the energy domain and involved calculating three component energy inputs of forces. The energy model was solved using numerical simulation implemented in the MATLAB/Simulink environment. This procedure demonstrates that the vibration energy was distributed quite differently in the internal structure of the two subsystems. The results suggest that the operating frequency of the tool has a significant impact on the level of energy input transmitted into both subsystems.

  8. Formulation of a hybrid calibration approach for a physically based distributed model with NEXRAD data input

    Science.gov (United States)

    Di Luzio, Mauro; Arnold, Jeffrey G.

    2004-10-01

    This paper describes the background, formulation and results of an hourly input-output calibration approach proposed for the Soil and Water Assessment Tool (SWAT) watershed model, presented for 24 representative storm events occurring between 1994 and 2000 in the Blue River watershed (1233 km², located in Oklahoma). This effort is the first follow-up to participation in the National Weather Service Distributed Modeling Intercomparison Project (DMIP), an opportunity to apply, for the first time within the SWAT modeling framework, routines for hourly stream flow prediction based on gridded precipitation (NEXRAD) data input. Previous SWAT model simulations, uncalibrated and with moderate manual calibration (only the water balance over the calibration period), were provided for the entire set of watersheds and associated outlets for the comparison designed in the DMIP project. The extended goal of this follow-up was to verify the model's efficiency in simulating hourly hydrographs, calibrating each storm event using the formulated approach: a combination of manual and automatic calibration (the Shuffled Complex Evolution method), with input parameter values allowed to vary only within their physical ranges. While the model provided reasonable water budget results with minimal calibration, event simulations with the revised calibration were significantly improved. The combination of NEXRAD precipitation data input, the soil water balance and runoff equations, and the calibration strategy described in the paper appears to adequately describe the storm events. The presented application and the formulated calibration method are initial steps toward improving the hourly simulation of the SWAT loading variables associated with storm flow, such as sediment and pollutants, and the success of Total Maximum Daily Load (TMDL) projects.

  9. Unitary input DEA model to identify beef cattle production systems typologies

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2012-08-01

    Full Text Available The cow-calf beef production sector in Brazil has a wide variety of operating systems, which suggests identifying and characterizing homogeneous production regions, with consequent implementation of actions to achieve sustainability. In this paper we measured the performance of 21 modal livestock production systems in their cow-calf phase, considering husbandry and production variables. The proposed approach is based on data envelopment analysis (DEA): we used a unitary input DEA model, with apparent input orientation, together with the efficiency measurements generated by the inverted DEA frontier. We identified five typologies of modal production systems using the iso-efficiency layers approach. The results showed that knowledge and process management are the most important factors for improving the efficiency of beef cattle production systems.

  10. Input determination for neural network models in water resources applications. Part 2. Case study: forecasting salinity in a river

    Science.gov (United States)

    Bowden, Gavin J.; Maier, Holger R.; Dandy, Graeme C.

    2005-01-01

    This paper is the second of a two-part series in this issue that presents a methodology for determining an appropriate set of model inputs for artificial neural network (ANN) models in hydrologic applications. The first paper presented two input determination methods. The first method utilises a measure of dependence known as the partial mutual information (PMI) criterion to select significant model inputs. The second method utilises a self-organising map (SOM) to remove redundant input variables, and a hybrid genetic algorithm (GA) and general regression neural network (GRNN) to select the inputs that have a significant influence on the model's forecast. In the first paper, both methods were applied to synthetic data sets and were shown to lead to a set of appropriate ANN model inputs. To verify the proposed techniques, it is important that they are applied to a real-world case study. In this paper, the PMI algorithm and the SOM-GAGRNN are used to find suitable inputs to an ANN model for forecasting salinity in the River Murray at Murray Bridge, South Australia. The proposed methods are also compared with two methods used in previous studies, for the same case study. The two proposed methods were found to lead to more parsimonious models with a lower forecasting error than the models developed using the methods from previous studies. To verify the robustness of each of the ANNs developed using the proposed methodology, a real-time forecasting simulation was conducted. This validation data set consisted of independent data from a six-year period from 1992 to 1998. The ANN developed using the inputs identified by the stepwise PMI algorithm was found to be the most robust for this validation set. The PMI scores obtained using the stepwise PMI algorithm revealed useful information about the order of importance of each significant input.
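The dependence-based selection step can be illustrated with a simple histogram mutual-information ranking on made-up series. Note that this plain MI ranking is only a simplified stand-in for the paper's PMI criterion, which additionally conditions each candidate on the residuals of the already-selected inputs:

```python
import math
import random
from collections import Counter

def mutual_information(x, y, bins=8):
    """Histogram estimate of the mutual information (in nats) between
    two continuous series, discretised into equal-width bins."""
    def discretise(v):
        lo, hi = min(v), max(v)
        return [min(bins - 1, int((vi - lo) / (hi - lo + 1e-12) * bins))
                for vi in v]
    dx, dy = discretise(x), discretise(y)
    n = len(x)
    cx, cy = Counter(dx), Counter(dy)
    cxy = Counter(zip(dx, dy))
    return sum(c / n * math.log((c / n) / ((cx[a] / n) * (cy[b] / n)))
               for (a, b), c in cxy.items())

rng = random.Random(42)
salinity_lag1 = [rng.gauss(0, 1) for _ in range(1000)]   # informative input
unrelated = [rng.gauss(0, 1) for _ in range(1000)]       # irrelevant input
target = [s + rng.gauss(0, 0.3) for s in salinity_lag1]  # forecast target

candidates = {"salinity_lag1": salinity_lag1, "unrelated": unrelated}
ranked = sorted(candidates,
                key=lambda k: -mutual_information(candidates[k], target))
print(ranked[0])  # salinity_lag1
```

In the stepwise PMI algorithm the same kind of score is recomputed at each step against the residual unexplained by the inputs chosen so far, which is what makes the selection parsimonious.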

  11. A generic method for automatic translation between input models for different versions of simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Serfontein, Dawid E., E-mail: Dawid.Serfontein@nwu.ac.za [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)

    2014-05-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of, for instance, nuclear regulators to verify the accuracy of such translated files can therefore be very difficult and cumbersome, and translation errors may not be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance its accuracy and reliability. The developed algorithm also automatically creates a verification log file that permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  12. Concomitant variables in finite mixture models

    NARCIS (Netherlands)

    Wedel, M

    The standard mixture model, the concomitant variable mixture model, the mixture regression model and the concomitant variable mixture regression model all enable simultaneous identification and description of groups of observations. This study reviews the different ways in which dependencies among

  13. The stability of input structures in a supply-driven input-output model: A regional analysis

    Energy Technology Data Exchange (ETDEWEB)

    Allison, T.

    1994-06-01

    Disruptions in the supply of strategic resources or other crucial factor inputs often present significant problems for planners and policymakers. The problem may be particularly significant at the regional level, where higher levels of product specialization mean supply restrictions are more likely to affect leading regional industries. To maintain economic stability in the event of a supply restriction, regional planners may therefore need to evaluate the importance of market versus non-market systems for allocating the remaining supply of the disrupted resource to the region's leading consuming industries. This paper reports on research that has attempted to show that large short-term changes on the supply side do not lead to substantial changes in input coefficients and therefore do not require abandoning the concept of the production function, as has been suggested (Oosterhaven, 1988). The supply-driven model was tested for six sectors of the economy of Washington State and found to yield new input coefficients whose values were in most cases close approximations of their original values, even with substantial changes in supply. Average coefficient changes from a 50% output reduction in these six sectors were in the vast majority of cases (297 of a total of 315) less than ±2.0% of their original values, excluding coefficient changes for the restricted input. Given these small changes, the most important issue for the validity of the supply-driven input-output model may therefore be the empirical question of the extent to which these coefficient changes are acceptable as being within the limits of approximation.
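The supply-driven (Ghosh) accounting that such stability tests start from can be sketched with a toy two-sector economy. The flow table below is invented for illustration and is not the Washington State data:

```python
def ghosh_outputs(flows, outputs, primary_inputs):
    """Supply-driven input-output model for a 2-sector toy economy.
    flows[i][j] is the sale from sector i to sector j; allocation
    coefficients are b_ij = flows[i][j] / x_i, and gross outputs solve
    x = v (I - B)^(-1), where v is the primary-input (value-added) row."""
    b = [[z / xi for z in row] for row, xi in zip(flows, outputs)]
    # invert (I - B) by hand for the 2x2 case
    a11, a12 = 1.0 - b[0][0], -b[0][1]
    a21, a22 = -b[1][0], 1.0 - b[1][1]
    det = a11 * a22 - a12 * a21
    inv = [[a22 / det, -a12 / det],
           [-a21 / det, a11 / det]]
    v = primary_inputs                      # row vector times the inverse
    return [v[0] * inv[0][j] + v[1] * inv[1][j] for j in (0, 1)]

flows = [[20.0, 30.0],   # sector 1 sales to sectors 1 and 2
         [40.0, 10.0]]   # sector 2 sales to sectors 1 and 2
outputs = [100.0, 100.0]
value_added = [40.0, 60.0]                  # consistent: x_j minus column sums

base = ghosh_outputs(flows, outputs, value_added)       # recovers ~[100, 100]
cut = ghosh_outputs(flows, outputs, [20.0, 60.0])       # 50% supply cut, ~[70, 90]
print(base, cut)
```

The stability question in the paper is then whether the input coefficients recomputed from the post-shock flows stay close to their original values, which in this Ghosh framework they do by construction of the fixed allocation coefficients.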

  14. Development of ANFIS models for air quality forecasting and input optimization for reducing the computational cost and time

    Science.gov (United States)

    Prasad, Kanchan; Gorai, Amit Kumar; Goyal, Pramila

    2016-03-01

    This study develops an adaptive neuro-fuzzy inference system (ANFIS) for forecasting daily air pollution concentrations of five air pollutants [sulphur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO), ozone (O3) and particulate matter (PM10)] in the atmosphere of a megacity (Howrah). Air pollution in the city is rising in parallel with the economy, and thus observing, forecasting and controlling air pollution becomes increasingly important because of its health impact. ANFIS serves as a basis for constructing a set of fuzzy IF-THEN rules, with appropriate membership functions to generate the stipulated input-output pairs. The ANFIS model predictor considers the values of meteorological factors (pressure, temperature, relative humidity, dew point, visibility, wind speed, and precipitation) and the previous day's pollutant concentration in different combinations as the inputs to predict the one-day-ahead and same-day air pollution concentrations. The concentration values of the five air pollutants and seven meteorological parameters for Howrah during 2009 to 2011 were used to develop the ANFIS model. Collinearity tests were conducted to eliminate redundant input variables, and a forward selection (FS) method was used to select different subsets of input variables. Applying collinearity tests and FS techniques reduces the number of input variables and subsets, which helps reduce computational cost and time. The performance of the models was evaluated on the basis of four statistical indices (coefficient of determination, normalized mean square error, index of agreement, and fractional bias).
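The collinearity screen and the selection step can be sketched as follows, on invented meteorological series. A full FS procedure would refit the forecasting model at each step, whereas this sketch simply drops near-collinear inputs and ranks the survivors by correlation with the target:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def screen_and_rank(inputs, target, r_max=0.9):
    """Drop the later member of any highly collinear pair of inputs,
    then rank the survivors by |correlation| with the target."""
    kept = []
    for name, series in inputs.items():
        if all(abs(pearson(series, inputs[other])) < r_max for other in kept):
            kept.append(name)
    return sorted(kept, key=lambda k: -abs(pearson(inputs[k], target)))

temperature = [21.0, 23.0, 25.0, 24.0, 28.0, 30.0, 27.0, 31.0]
dew_point   = [t - 2.0 for t in temperature]            # collinear with temperature
wind_speed  = [3.0, 1.0, 4.0, 2.0, 5.0, 2.0, 6.0, 1.0]  # weakly related
pm10        = [t + 0.5 * w for t, w in zip(temperature, wind_speed)]

inputs = {"temperature": temperature, "dew_point": dew_point,
          "wind_speed": wind_speed}
ranked = screen_and_rank(inputs, pm10)
print(ranked)  # ['temperature', 'wind_speed']
```

Here `dew_point` is eliminated by the collinearity screen before any ranking takes place, which is exactly the computational saving the abstract describes.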

  15. Neural Network Based Scheduling for Variable-Length Packets in Gigabit Router with Crossbar Switch Fabric and Input Queuing

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A high-speed and effective packet scheduling method is crucial to the performance of gigabit routers. This paper studies the variable-length packet scheduling problem in a gigabit router with a crossbar switch fabric and input queuing, and proposes a scheduling method based on a neural network. A scheduling system structure suited to the variable-length packet case is presented first, then some scheduling rules are given, and finally an optimal scheduling method using a Hopfield neural network is proposed based on these rules. The paper further discusses how the proposed method can be realized in hardware. Simulation results show the effectiveness of the proposed method.

  16. Wage Differentials among Workers in Input-Output Models.

    Science.gov (United States)

    Filippini, Luigi

    1981-01-01

    Using an input-output framework, the author derives hypotheses on wage differentials based on the assumption that human capital (in this case, education) will explain workers' wage differentials. The hypothetical wage differentials are tested on data from the Italian economy. (RW)

  17. High Temperature Test Facility Preliminary RELAP5-3D Input Model Description

    Energy Technology Data Exchange (ETDEWEB)

    Bayless, Paul David [Idaho National Laboratory

    2015-12-01

    A RELAP5-3D input model is being developed for the High Temperature Test Facility at Oregon State University. The current model is described in detail. Further refinements will be made to the model as final as-built drawings are released and when system characterization data are available for benchmarking the input model.

  18. A DNA-based system for selecting and displaying the combined result of two input variables

    DEFF Research Database (Denmark)

    Liu, Huajie; Wang, Jianbang; Song, S

    2015-01-01

    Oligonucleotide-based technologies for biosensing or bio-regulation produce huge amounts of rich high-dimensional information. There is a consequent need for flexible means to combine diverse pieces of such information to form useful derivative outputs, and to display those immediately. Here we demonstrate this capability in a DNA-based system that takes two input numbers, represented in DNA strands, and returns the result of their multiplication, writing this as a number in a display. Unlike a conventional calculator, this system operates by selecting the result from a library of solutions rather...

  19. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    Science.gov (United States)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively in many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input- and output-oriented CCR models (CCR-I and CCR-O) of DEA and the duality formulation with average input and output vectors. Three input variables (collection length in metres, weekly collection time in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analysed. In conclusion, only three of the 23 roads studied are efficient, achieving an efficiency score of 1, while the remaining 20 roads are managed inefficiently.

  20. Improving land cover classification using input variables derived from a geographically weighted principal components analysis

    Science.gov (United States)

    Comber, Alexis J.; Harris, Paul; Tsutsumida, Narumasa

    2016-09-01

    This study demonstrates the use of a geographically weighted principal components analysis (GWPCA) of remote sensing imagery to improve land cover classification accuracy. A principal components analysis (PCA) is commonly applied in remote sensing but generates global, spatially-invariant results. GWPCA is a local adaptation of PCA that locally transforms the image data, and in doing so can describe spatial change in the structure of the multi-band imagery, thus directly reflecting the fact that many landscape processes are spatially heterogeneous. In this research the GWPCA localised loadings of MODIS data are used as textural inputs, along with GWPCA localised ranked scores and the image bands themselves, to three supervised classification algorithms. Using a reference data set for land cover to the west of Jakarta, Indonesia, the classification procedure was assessed via training and validation data splits of 80/20, repeated 100 times. For each classification algorithm, the inclusion of the GWPCA loadings data was found to significantly improve classification accuracy. Further, more moderate improvements in accuracy were found by additionally including GWPCA ranked scores as textural inputs, data that provide information on spatial anomalies in the imagery. The critical importance of considering both the spatial structure and the spatial anomalies of the imagery in the classification is discussed, together with the transferability of the new method to other studies. Research topics for method refinement are also suggested.
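The core GWPCA computation (a PCA of a distance-weighted covariance matrix evaluated at each location) can be sketched for two bands along a one-dimensional transect. The data, Gaussian kernel, and bandwidth are illustrative assumptions:

```python
import math

def gwpca_leading_loading(coords, data, at, bandwidth=1.5, n_iter=200):
    """Leading principal-component loading of `data` (rows of [band1, band2])
    at location `at`, using a Gaussian distance kernel and power iteration
    on the locally weighted 2x2 covariance matrix."""
    w = [math.exp(-0.5 * ((c - at) / bandwidth) ** 2) for c in coords]
    sw = sum(w)
    means = [sum(wi * row[k] for wi, row in zip(w, data)) / sw for k in (0, 1)]
    cov = [[sum(wi * (row[i] - means[i]) * (row[j] - means[j])
                for wi, row in zip(w, data)) / sw
            for j in (0, 1)] for i in (0, 1)]
    v = [1.0, 0.3]                                  # asymmetric start vector
    for _ in range(n_iter):                         # power iteration
        v = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = math.hypot(*v)
        v = [vi / norm for vi in v]
    return v

# Synthetic transect: the two bands are positively related in the west and
# negatively related in the east -- a global PCA would average this away.
coords, data = [], []
for i in range(40):
    x = i / 4.0                                     # locations 0 .. 9.75
    band1 = math.sin(i) * 2.0
    band2 = band1 if x < 5.0 else -band1
    coords.append(x)
    data.append([band1, band2])

west = gwpca_leading_loading(coords, data, at=0.0)
east = gwpca_leading_loading(coords, data, at=9.75)
print(west, east)  # the loading signs flip across the transect
```

The sign flip in the leading loadings is precisely the kind of local structure that, used as a textural input, the global PCA cannot supply to the classifier.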

  1. Variable Structure Controller with Time-Varying Switching Surface under the Bound of Input using Evolution Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Min Jung; Choi, Young Kiu [Pusan National University (Korea); Kim, Hyun Sik [Agency for Defense Development (Korea); Jeon, Seong Jeub [Pukyong National University (Korea)

    1999-04-01

    Variable structure control is well known to be a robust control algorithm, and evolution strategies are effective search algorithms for optimization problems. In this paper, we propose a variable structure controller with a time-varying switching surface. We calculate the maximum value of the switching surface gradient under the bound on the input. To enhance robustness, we choose a time-varying switching surface gradient of third-order polynomial form, and an evolution strategy is used to optimize the parameters of the switching surface gradient. Finally, the proposed method is applied to position tracking control for a BLDC motor. Experimental results show that the proposed method is more useful than conventional variable structure control. (author). 8 refs., 16 figs., 1 tab.
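The search step can be sketched with a minimal (1+1) evolution strategy. The cost function below is a made-up stand-in for the paper's criterion, simply fitting the third-order polynomial coefficients of a switching-surface gradient to a desired profile under an assumed bound:

```python
import random

def one_plus_one_es(cost, x0, sigma=0.5, iters=300, seed=7):
    """Minimal (1+1) evolution strategy with multiplicative step-size control."""
    rng = random.Random(seed)
    x, fx = list(x0), cost(x0)
    for _ in range(iters):
        y = [xi + rng.gauss(0.0, sigma) for xi in x]
        fy = cost(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.1           # success: widen the search
        else:
            sigma *= 0.95          # failure: narrow it
    return x, fx

GRAD_MAX = 2.0                     # bound implied by the input limit (illustrative)

def surface_cost(c):
    """Squared error between a third-order switching-surface gradient
    c0 + c1*t + c2*t^2 + c3*t^3 and a desired profile, with a penalty
    whenever the gradient exceeds the assumed bound."""
    total = 0.0
    for k in range(21):
        t = k / 20.0
        g = c[0] + c[1] * t + c[2] * t ** 2 + c[3] * t ** 3
        desired = GRAD_MAX * t     # ramp up to the maximum gradient
        total += (g - desired) ** 2
        if abs(g) > GRAD_MAX:
            total += 10.0 * (abs(g) - GRAD_MAX) ** 2
    return total

start = [1.0, -1.0, 1.0, -1.0]
best, best_cost = one_plus_one_es(surface_cost, start)
print(round(best_cost, 3))
```

The paper uses a richer evolution strategy, but the accept-if-no-worse loop and the step-size adaptation shown here are the essential mechanics.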

  2. Analysis of the Model Checkers' Input Languages for Modeling Traffic Light Systems

    Directory of Open Access Journals (Sweden)

    Pathiah A. Samat

    2011-01-01

    Full Text Available Problem statement: Model checking is an automated verification technique that can be used for verifying properties of a system. A number of model checking systems have been developed over the last few years. However, no guideline is available for selecting the most suitable model checker for modeling a particular system. Approach: In this study, we compare the use of four model checkers: SMV, SPIN, UPPAAL and PRISM for modeling a distributed control system. In particular, we examine the capabilities of the input languages of these model checkers for modeling this type of system; limitations and differences of their input languages are compared and analysed using a set of questions. Results: The results of the study show that although the input languages of these model checkers have many similarities, they also have a significant number of differences. The results also show that one model checker may be more suitable than others for verifying this type of system. Conclusion: Users need to choose the right model checker for the problem to be verified.

  3. INPUT MODELLING USING STATISTICAL DISTRIBUTIONS AND ARENA SOFTWARE

    Directory of Open Access Journals (Sweden)

    Elena Iuliana GINGU (BOTEANU

    2015-05-01

    Full Text Available The paper presents a method of properly choosing probability distributions for failure times in a flexible manufacturing system. Several well-known distributions often provide good approximations in practice; the commonly used continuous distributions are the Uniform, Triangular, Beta, Normal, Lognormal, Weibull, and Exponential. This article studies how to use the Input Analyzer in the simulation language Arena to fit probability distributions to data, or to evaluate how well a particular distribution fits. The objective was to select the most appropriate statistical distributions and to estimate parameter values of failure times for each machine of a real manufacturing line.
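Outside Arena, the same fit-and-evaluate idea can be sketched in a few stdlib lines: fit candidate distributions to the failure-time data by maximum likelihood and keep the one with the smallest Kolmogorov-Smirnov distance to the empirical CDF. The failure-time data here are simulated, and only two candidates are compared for brevity:

```python
import math
import random

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and `cdf`."""
    xs = sorted(data)
    n = len(xs)
    return max(max((i + 1) / n - cdf(x), cdf(x) - i / n)
               for i, x in enumerate(xs))

def fit_exponential(data):
    lam = len(data) / sum(data)                 # MLE: 1 / sample mean
    return lambda x: 1.0 - math.exp(-lam * x)

def fit_uniform(data):
    lo, hi = min(data), max(data)
    return lambda x: min(1.0, max(0.0, (x - lo) / (hi - lo)))

rng = random.Random(3)
failure_times = [rng.expovariate(0.1) for _ in range(500)]   # mean ~10 h

fits = {"exponential": fit_exponential(failure_times),
        "uniform": fit_uniform(failure_times)}
scores = {name: ks_statistic(failure_times, cdf) for name, cdf in fits.items()}
print(min(scores, key=scores.get))  # exponential
```

The Input Analyzer automates exactly this comparison across its whole catalogue of distributions and reports goodness-of-fit statistics alongside the fitted parameters.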

  4. A pre-calibration approach to select optimum inputs for hydrological models in data-scarce regions

    Science.gov (United States)

    Tarawneh, Esraa; Bridge, Jonathan; Macdonald, Neil

    2016-10-01

    This study uses the Soil and Water Assessment Tool (SWAT) model to quantitatively compare available input datasets in a data-poor dryland environment (Wala catchment, Jordan; 1743 km²). Eighteen scenarios combining best available land-use, soil and weather datasets (1979-2002) are considered to construct SWAT models. Data include local observations and global reanalysis data products. Uncalibrated model outputs assess the variability in model performance derived from input data sources only. Model performance against discharge and sediment load data are compared using r², Nash-Sutcliffe efficiency (NSE), root mean square error standard deviation ratio (RSR) and percent bias (PBIAS). The NSE statistic varies from 0.56 to -12 and 0.79 to -85 for the best- and poorest-performing scenarios against observed discharge and sediment data respectively. Global weather inputs yield considerable improvements on discontinuous local datasets, whilst local soil inputs perform considerably better than global-scale mapping. The methodology provides a rapid, transparent and transferable approach to aid selection of the most robust suite of input data.
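Three of the four statistics used for the comparison are easy to reproduce; a sketch with their usual definitions (r² omitted), on made-up discharge series:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of obs."""
    mo = sum(obs) / len(obs)
    return 1.0 - sum((o - s) ** 2 for o, s in zip(obs, sim)) \
               / sum((o - mo) ** 2 for o in obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    mo = sum(obs) / len(obs)
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim))) \
         / math.sqrt(sum((o - mo) ** 2 for o in obs))

def pbias(obs, sim):
    """Percent bias; positive values mean the model underestimates."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

observed  = [5.0, 7.0, 12.0, 30.0, 18.0, 9.0, 6.0]   # illustrative discharge
simulated = [6.0, 6.5, 10.0, 26.0, 20.0, 10.0, 5.0]
print(round(nse(observed, simulated), 3),
      round(rsr(observed, simulated), 3),
      round(pbias(observed, simulated), 2))  # 0.943 0.239 4.02
```

Values like NSE = -85 arise when the squared-error sum dwarfs the variance of the observations, i.e. the simulation is far worse than simply predicting the observed mean.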

  5. Experimental demonstration of continuous variable cloning with phase-conjugate inputs

    DEFF Research Database (Denmark)

    Sabuncu, Metin; Andersen, Ulrik Lund; Leuchs, G.

    2007-01-01

    We report the first experimental demonstration of continuous variable cloning of phase-conjugate coherent states as proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)]. In contrast to this proposal, the cloning transformation is accomplished using only linear optical components, homodyne detection, and feedforward. As a result of combining phase conjugation with a joint measurement strategy, superior cloning is demonstrated with cloning fidelities reaching 89%.

  6. The MARINA model (Model to Assess River Inputs of Nutrients to seAs)

    NARCIS (Netherlands)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-01-01

    Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA)...

  7. Input Response of Neural Network Model with Lognormally Distributed Synaptic Weights

    Science.gov (United States)

    Nagano, Yoshihiro; Karakida, Ryo; Watanabe, Norifumi; Aoyama, Atsushi; Okada, Masato

    2016-07-01

    Neural assemblies in the cortical microcircuit can sustain irregular spiking activity without external inputs. On the other hand, neurons exhibit rich evoked activities driven by sensory stimulus, and both activities are reported to contribute to cognitive functions. We studied the external input response of the neural network model with lognormally distributed synaptic weights. We show that the model can achieve irregular spontaneous activity and population oscillation depending on the presence of external input. The firing rate distribution was maintained for the external input, and the order of firing rates in evoked activity reflected that in spontaneous activity. Moreover, there were bistable regions in the inhibitory input parameter space. The bimodal membrane potential distribution, which is a characteristic feature of the up-down state, was obtained under such conditions. From these results, we can conclude that the model displays various evoked activities due to the external input and is biologically plausible.

  8. Motivation Monitoring and Assessment Extension for Input-Process-Outcome Game Model

    Science.gov (United States)

    Ghergulescu, Ioana; Muntean, Cristina Hava

    2014-01-01

    This article proposes a Motivation Assessment-oriented Input-Process-Outcome Game Model (MotIPO), which extends the Input-Process-Outcome game model with game-centred and player-centred motivation assessments performed right from the beginning of the game-play. A feasibility case-study involving 67 participants playing an educational game and…

  10. Using the Iterative Input variable Selection (IIS) algorithm to assess the relevance of ENSO teleconnections patterns on hydro-meteorological processes at the catchment scale

    Science.gov (United States)

    Beltrame, Ludovica; Carbonin, Daniele; Galelli, Stefano; Castelletti, Andrea

    2014-05-01

    Population growth, water scarcity and climate change are three major factors making the understanding of variations in water availability increasingly important. Reliable medium-to-long range forecasts of streamflows are therefore essential to the development of water management policies. To this purpose, recent modelling efforts have been dedicated to seasonal and inter-annual streamflow forecasts based on the teleconnection between "at-site" hydro-meteorological processes and low-frequency climate fluctuations, such as the El Niño Southern Oscillation (ENSO). This work proposes a novel procedure for first detecting the impact of ENSO on hydro-meteorological processes at the catchment scale, and then assessing the potential of ENSO indicators for building medium-to-long range statistical streamflow prediction models. At the core of this procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant forcings of streamflow variability and to derive predictive models based on the selected inputs. The procedure is tested on the Columbia (USA) and Williams (Australia) Rivers, where the ENSO influence has been well documented, and then adopted on the unexplored Red River basin (Vietnam). Results show that the IIS outcomes on the Columbia and Williams Rivers are consistent with previous studies, and that ENSO indicators can be effectively used to enhance the capabilities of streamflow forecast models. The experiments on the Red River basin show that the ENSO influence is less pronounced there, inducing little effect on the basin's hydro-meteorological processes.
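The selection loop at the heart of such procedures can be illustrated with a much-simplified greedy stand-in: rank candidate inputs by how much a univariate fit reduces the residual variance, keep the best, and iterate on the residuals. This is only a sketch (the published IIS algorithm uses model-based ranking, not simple univariate fits), and the toy "catchment" data and variable names are invented.

```python
import random

random.seed(0)

def univariate_fit(x, r):
    # Least-squares intercept/slope of residual r on one candidate input x.
    n = len(x)
    mx, mr = sum(x) / n, sum(r) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxr = sum((xi - mx) * (ri - mr) for xi, ri in zip(x, r))
    b = sxr / sxx if sxx else 0.0
    return mr - b * mx, b

def greedy_input_selection(candidates, y, k):
    # At each step keep the candidate whose univariate fit most reduces
    # the residual sum of squares, then fit and subtract it.
    residual, selected = list(y), []
    for _ in range(k):
        best = None
        for name, x in candidates.items():
            if name in selected:
                continue
            a, b = univariate_fit(x, residual)
            sse = sum((r - (a + b * xi)) ** 2 for xi, r in zip(x, residual))
            if best is None or sse < best[0]:
                best = (sse, name, a, b)
        _, name, a, b = best
        selected.append(name)
        residual = [r - (a + b * xi) for xi, r in zip(candidates[name], residual)]
    return selected

# Toy catchment: streamflow driven by rainfall and an ENSO index;
# the third candidate is pure noise and should never be picked.
n = 200
enso = [random.gauss(0, 1) for _ in range(n)]
rain = [random.gauss(0, 1) for _ in range(n)]
junk = [random.gauss(0, 1) for _ in range(n)]
flow = [2.0 * r + 0.8 * e + random.gauss(0, 0.1) for e, r in zip(enso, rain)]

chosen = greedy_input_selection({"enso": enso, "rain": rain, "junk": junk}, flow, 2)
```

With these coefficients the rainfall series is selected first and the ENSO index second, while the irrelevant noise series is rejected, mirroring the role IIS plays in screening candidate teleconnection indicators.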

  11. Error model identification of inertial navigation platform based on errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    Liu Ming; Liu Yu; Su Baoku

    2009-01-01

    Because the real input acceleration cannot be obtained during error model identification of an inertial navigation platform, both the input and output data contain noise. In this case, the conventional regression model and the least squares (LS) method will yield biased estimates. Based on the models of inertial navigation platform error and observation error, the errors-in-variables (EV) model and the total least squares (TLS) method are proposed to identify the error model of the inertial navigation platform. The estimation precision is improved, and the result is better than that of the LS method based on the conventional regression model. Simulation results illustrate the effectiveness of the proposed method.
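The bias the abstract refers to is easy to reproduce in one dimension. Below is a minimal sketch, assuming a single-slope model with equal noise variances on input and output: the ordinary LS slope is attenuated by the input noise, while the closed-form TLS (orthogonal-regression) slope is not. The data are synthetic; this is not the platform error model itself.

```python
import math
import random

random.seed(1)

def ls_and_tls_slopes(x, y):
    # Ordinary least-squares slope, and the closed-form total-least-squares
    # (orthogonal regression) slope, both on centred data. The TLS formula
    # assumes equal noise variances on x and y.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ls = sxy / sxx
    tls = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return ls, tls

true_slope = 2.0
x_true = [i / 40 for i in range(400)]
x_obs = [v + random.gauss(0, 1.5) for v in x_true]                 # noisy input
y_obs = [true_slope * v + random.gauss(0, 1.5) for v in x_true]    # noisy output

ls_est, tls_est = ls_and_tls_slopes(x_obs, y_obs)
```

The LS estimate is pulled toward zero by the noise on the regressor (the classic errors-in-variables attenuation), whereas the TLS estimate stays close to the true slope, which is exactly the motivation for the EV/TLS identification in the paper.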

  12. Meteorological input for atmospheric dispersion models: an inter-comparison between new generation models

    Energy Technology Data Exchange (ETDEWEB)

    Busillo, C.; Calastrini, F.; Gualtieri, G. [Lab. for Meteorol. and Environ. Modell. (LaMMA/CNR-IBIMET), Florence (Italy); Carpentieri, M.; Corti, A. [Dept. of Energetics, Univ. of Florence (Italy); Canepa, E. [INFM, Dept. of Physics, Univ. of Genoa (Italy)

    2004-07-01

    The behaviour of atmospheric dispersion models is strongly influenced by the meteorological input, especially for new generation models. More sophisticated meteorological pre-processors require more extensive and more reliable data. This is true in particular when short-term simulations are performed, whereas in long-term modelling detailed data are less important. In Europe no meteorological data standards exist, so testing and evaluating the results of new generation dispersion models is particularly important in order to obtain information on the reliability of model predictions. (orig.)

  13. Rainfall variability modelling in Rwanda

    Science.gov (United States)

    Nduwayezu, E.; Kanevski, M.; Jaboyedoff, M.

    2012-04-01

    Support to climate change adaptation is a priority in many international organisations' meetings. But is the international approach to adaptation appropriate to field realities in developing countries? In Rwanda, the main problems will be heavy rain and/or a long dry season. Four rainfall seasons have been identified, corresponding to the four thermal seasons of the southern hemisphere: the normal season (summer), the rainy season (autumn), the dry season (winter) and the normo-rainy season (spring). The decrease in rainfall from west to east, especially in October (spring) and February (summer), suggests an "Atlantic monsoon influence", while the homogeneous spatial rainfall distribution suggests an "Inter-tropical front" mechanism. The torrential rainfall that occurs every year in Rwanda disrupts circulation for many days, damages houses and, more seriously, causes heavy loss of life. All districts are affected by bad weather (heavy rain), but the costs of such events are highest in mountain districts. The objective of the current research is to evaluate the potential rainfall risk by applying advanced geospatial modelling tools in Rwanda: geostatistical predictions and simulations, machine learning algorithms (different types of neural networks) and GIS. The research will include rainfall variability mapping and probabilistic analyses of extreme events.

  14. Catchment2Coast: making the link between coastal resource variability and river inputs

    CSIR Research Space (South Africa)

    Monteiro, P

    2003-07-01

    Full Text Available boundaries, including the traditional ones between river catchments and their adjacent coastal ecosystems. The Catchment2Coast Project aims to address these gaps in knowledge using a combination of numerical modelling and field observations... refinement of numerical modelling platforms, affording them the necessary degree of rigour and reliability for more generic usage as management and policy development support tools. The project uses as a case study the Incomati River–Maputo Bay system...

  15. Sediment records of highly variable mercury inputs to mountain lakes in Patagonia during the past millennium

    Science.gov (United States)

    Ribeiro Guevara, S.; Meili, M.; Rizzo, A.; Daga, R.; Arribére, M.

    2010-04-01

    High Hg levels in the pristine lacustrine ecosystems of the Nahuel Huapi National Park, a protected zone situated in the Andes of Northern Patagonia, Argentina, have initiated further investigations on Hg cycling and source identification. Here we report Hg records in sedimentary sequences to identify atmospheric sources during the past millennium. In addition to global transport and deposition, a potential atmospheric Hg source to be considered is the local emissions associated with volcanic activity, because the Park is situated in the Southern Volcanic Zone. Two sediment cores were extracted from Lake Tonček, a small, high-altitude system reflecting mainly direct inputs associated with atmospheric contributions, and Lake Moreno Oeste, a much larger and deeper lake having an extended watershed covered mostly by native forest. The sedimentary sequences were dated based on both 210Pb and 137Cs profiles. In addition, tephra layers were identified and geochemically characterized for chronological application and to investigate any association of volcanic eruptions with Hg records. Hg concentrations in sediments were measured along with 32 other elements, as well as organic matter, subfossil chironomids, and biogenic silica. Observed background Hg concentrations, determined from the sequence domains with lower values, ranged from 50 to 100 ng g-1 dry weight (DW), whereas the surficial layers reached 200 to 500 ng g-1 DW. In addition to this traditional pattern, however, two deep domains in both sequences showed dramatically increased Hg levels reaching 400 to 650 ng g-1 DW; the upper dated to the 18th to 19th centuries, and the lower around the 13th century. These concentrations are not only elevated in the present profiles but also many-fold above the background values determined in other fresh water sediments, as were also the Hg fluxes, reaching 120 to 150 μg m-2 y-1 in Lake Tonček. No correlation was observed between Hg concentrations and the contents of

  16. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    Science.gov (United States)

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821
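A toy version of the idea: exhaustively score small AND/OR formulas of binary features against a continuous response and keep the one with the smallest squared error. This is only a sketch of the concept; the actual LOBICO method solves an integer program and can optimize around chosen sensitivity/specificity operating points. The feature names and "drug response" data below are made up.

```python
from itertools import combinations

def sse_of_mask(y, mask):
    # Best piecewise-constant prediction: one value where the formula is
    # true, another where it is false; return the squared error.
    on = [v for v, m in zip(y, mask) if m]
    off = [v for v, m in zip(y, mask) if not m]
    c_on = sum(on) / len(on) if on else 0.0
    c_off = sum(off) / len(off) if off else 0.0
    return sum((v - c_on) ** 2 for v in on) + sum((v - c_off) ** 2 for v in off)

def best_logic_model(features, y):
    # Exhaustive search over single features and 2-term AND/OR formulas.
    candidates = [(name, mask) for name, mask in features.items()]
    for a, b in combinations(features, 2):
        fa, fb = features[a], features[b]
        candidates.append((f"{a} AND {b}", [p and q for p, q in zip(fa, fb)]))
        candidates.append((f"{a} OR {b}", [p or q for p, q in zip(fa, fb)]))
    return min(candidates, key=lambda c: sse_of_mask(y, c[1]))[0]

# Toy cell-line panel: response is high only when mutA and mutB co-occur.
mutA = [1, 1, 0, 0, 1, 0, 1, 0]
mutB = [1, 0, 1, 0, 1, 1, 1, 0]
mutC = [0, 1, 0, 1, 1, 0, 0, 1]
resp = [0.9 if a and b else 0.1 for a, b in zip(mutA, mutB)]

formula = best_logic_model({"mutA": mutA, "mutB": mutB, "mutC": mutC}, resp)
```

Scoring against the continuous response, rather than a binarized one, is what the abstract credits for the robustness of the inferred formulas.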

  18. Identifying best-fitting inputs in health-economic model calibration: a Pareto frontier approach.

    Science.gov (United States)

    Enns, Eva A; Cipriano, Lauren E; Simons, Cyrena T; Kong, Chung Yin

    2015-02-01

    To identify best-fitting input sets using model calibration, individual calibration target fits are often combined into a single goodness-of-fit (GOF) measure using a set of weights. Decisions in the calibration process, such as which weights to use, influence which sets of model inputs are identified as best-fitting, potentially leading to different health economic conclusions. We present an alternative approach to identifying best-fitting input sets based on the concept of Pareto-optimality. A set of model inputs is on the Pareto frontier if no other input set simultaneously fits all calibration targets as well or better. We demonstrate the Pareto frontier approach in the calibration of 2 models: a simple, illustrative Markov model and a previously published cost-effectiveness model of transcatheter aortic valve replacement (TAVR). For each model, we compare the input sets on the Pareto frontier to an equal number of best-fitting input sets according to 2 possible weighted-sum GOF scoring systems, and we compare the health economic conclusions arising from these different definitions of best-fitting. For the simple model, outcomes evaluated over the best-fitting input sets according to the 2 weighted-sum GOF schemes were virtually nonoverlapping on the cost-effectiveness plane and resulted in very different incremental cost-effectiveness ratios ($79,300 [95% CI 72,500-87,600] v. $139,700 [95% CI 79,900-182,800] per quality-adjusted life-year [QALY] gained). Input sets on the Pareto frontier spanned both regions ($79,000 [95% CI 64,900-156,200] per QALY gained). The TAVR model yielded similar results. Choices in generating a summary GOF score may result in different health economic conclusions. The Pareto frontier approach eliminates the need to make these choices by using an intuitive and transparent notion of optimality as the basis for identifying best-fitting input sets. © The Author(s) 2014.
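The frontier itself is straightforward to compute once each input set has a vector of per-target errors: keep exactly those sets that no other set dominates. A minimal sketch; the set names and error values are invented.

```python
def pareto_frontier(error_sets):
    # An input set is on the frontier if no other set fits every
    # calibration target at least as well and at least one strictly
    # better (lower error = better fit on that target).
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return sorted(name for name, errs in error_sets.items()
                  if not any(dominates(other, errs)
                             for o_name, other in error_sets.items()
                             if o_name != name))

# Hypothetical goodness-of-fit errors on two calibration targets.
errors = {
    "set1": (0.10, 0.50),  # best on target 1
    "set2": (0.50, 0.10),  # best on target 2
    "set3": (0.30, 0.30),  # compromise; still non-dominated
    "set4": (0.60, 0.60),  # dominated by set3
}
frontier = pareto_frontier(errors)
```

No weights appear anywhere in this computation, which is the point of the paper: the frontier sidesteps the weighted-sum GOF choices entirely.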

  19. Limited dependent variable models for panel data

    NARCIS (Netherlands)

    Charlier, E.

    1997-01-01

    Many economic phenomena require limited dependent variable models for an appropriate treatment. In addition, panel data models allow the inclusion of unobserved individual-specific effects. These models are combined in this thesis. Distributional assumptions in the limited dependent variable models are

  20. BEYOND SEM: GENERAL LATENT VARIABLE MODELING

    National Research Council Canada - National Science Library

    Muthén, Bengt O

    2002-01-01

    This article gives an overview of statistical analysis with latent variables. Using traditional structural equation modeling as a starting point, it shows how the idea of latent variables captures a wide variety of statistical concepts...

  1. Effects of reconstruction filter and input parameter variability on object detectability in CT imaging

    Science.gov (United States)

    Boedeker, Kirsten L.

    The purpose of this work is to investigate and quantify the effects of technical parameter variability and reconstruction algorithm on image quality and object detectability. To accomplish this, metrics of both noise and signal to noise ratio (SNR) are explored and then applied in object detection tasks using a computer aided diagnosis (CAD) system. The noise power spectrum (NPS) is investigated as a noise metric in that it describes both the magnitude of noise and the spatial characteristics of noise that are introduced by the reconstruction algorithm. The NPS was found to be much more robust than the conventional standard deviation metric. The noise equivalent quanta (NEQ) is also studied as a tool for comparing effects of acquisition parameters (esp. mAs) on noise and, as NEQ is not influenced by reconstruction filter or other post-processing, its utility for comparison across different techniques and manufacturers is demonstrated. The Ideal Bayesian Observer (IBO) and Non-Prewhitening Matched Filter (NPWMF) are investigated as SNR metrics under a variety of acquisition and reconstruction conditions. The signal and noise processes of image formation were studied individually, which allowed for analysis of their separate effects on the overall SNR. The SNR metrics were found to characterize the influence of reconstruction filter and technical parameter variability with high sensitivity. To correlate the above SNR metrics with detection, signal images were combined with noise images and passed to a CAD system. A simulated lung nodule detection task was performed on a series of objects of increasing contrast. The average minimum contrast detected and corresponding IBO and NPWMF SNR values were recorded over 100 trials for each reconstruction filter and technical parameter condition studied. Among the trends discovered, it was found that detectability scales with SNR as mAs is varied. Furthermore, the CAD system appears to under-perform when sharp algorithms are
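The NPS computation the thesis relies on can be sketched in one dimension: take the discrete Fourier transform of a zero-mean noise trace and normalize the squared magnitude. The normalization below is one common 1-D convention, not necessarily the one used in the work (which deals with 2-D CT noise images); the Parseval check confirms that integrating the spectrum over frequency recovers the noise variance.

```python
import cmath
import random

random.seed(2)

def dft(x):
    # Direct O(n^2) discrete Fourier transform (fine for short traces).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def nps_1d(noise, pixel_size):
    # 1-D noise power spectrum: NPS(k) = (pixel_size / N) * |DFT(noise - mean)|^2
    n = len(noise)
    mean = sum(noise) / n
    spec = dft([v - mean for v in noise])
    return [pixel_size / n * abs(c) ** 2 for c in spec]

n, dx = 64, 0.5                        # 64 samples, 0.5 mm sampling
noise = [random.gauss(0, 1) for _ in range(n)]
spectrum = nps_1d(noise, dx)

# Parseval check: integrating the NPS over frequency (df = 1/(N*dx))
# recovers the noise variance.
mean = sum(noise) / n
variance = sum((v - mean) ** 2 for v in noise) / n
integral = sum(spectrum) / (n * dx)
```

Unlike a single standard-deviation number, the spectrum also shows *where* in frequency the reconstruction filter places the noise, which is why the thesis finds the NPS more robust.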

  2. Translation of CODEV Lens Model To IGES Input File

    Science.gov (United States)

    Wise, T. D.; Carlin, B. B.

    1986-10-01

    The design of modern optical systems is not a trivial task; even more difficult is the requirement for an opticker to accurately describe the physical constraints implicit in his design so that a mechanical designer can correctly mount the optical elements. Typical concerns include setback of baffles, obstruction of clear apertures by mounting hardware, location of the image plane with respect to fiducial marks, and the correct interpretation of systems having odd geometry. The presence of multiple coordinate systems (optical, mechanical, system test, and spacecraft) only exacerbates an already difficult situation. A number of successful optical design programs, such as CODEV (1), have come into existence over the years while the development of Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) has allowed a number of firms to install "paperless" design systems. In such a system, a part which is entered by keyboard, or pallet, is made into a real physical piece on a milling machine which has received its instructions from the design system. However, a persistent problem is the lack of a link between the optical design programs and the mechanical CAD programs. This paper will describe a first step which has been taken to bridge this gap. Starting with the neutral plot file generated by the CODEV optical design program, we have been able to produce a file suitable for input to the ANVIL (2) and GEOMOD (3) software packages, using the International Graphics Exchange Standard (IGES) interface. This is accomplished by software of our design, which runs on a VAX (4) system. A description of the steps to be taken in transferring a design will be provided. We shall also provide some examples of designs on which this technique has been used successfully. Finally, we shall discuss limitations of the existing software and suggest some improvements which might be undertaken.

  3. Predicting coral species richness: the effect of input variables, diversity and scale.

    Science.gov (United States)

    Richards, Zoe T; Hobbs, Jean-Paul A

    2014-01-01

    Coral reefs are facing a biodiversity crisis due to increasing human impacts; consequently, one third of reef-building corals have an elevated risk of extinction. Logistic challenges prevent broad-scale species-level monitoring of hard corals; hence it has become critical that effective proxy indicators of species richness are established. This study tests how accurately three potential proxy indicators (generic richness on belt transects, generic richness on point-intercept transects and percent live hard coral cover on point-intercept transects) predict coral species richness at three different locations and two analytical scales. Generic richness (measured on a belt transect) was found to be the most effective predictor variable, with significant positive linear relationships across locations and scales. Percent live hard coral cover consistently performed poorly as an indicator of coral species richness. This study advances the practical framework for optimizing coral reef monitoring programs and empirically demonstrates that generic richness offers an effective way to predict coral species richness with a moderate level of precision. While the accuracy of species richness estimates will decrease in communities dominated by species-rich genera (e.g. Acropora), generic richness provides a useful measure of phylogenetic diversity and incorporating this metric into monitoring programs will increase the likelihood that changes in coral species diversity can be detected.
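The proxy test reduces to an ordinary least-squares fit of species richness on generic richness plus a coefficient of determination. A minimal sketch; the transect numbers below are invented for illustration, not taken from the study.

```python
def linear_fit_r2(x, y):
    # Least-squares fit y = a + b*x and the coefficient of determination.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((u - mx) ** 2 for u in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((v - (a + b * u)) ** 2 for u, v in zip(x, y))
    ss_tot = sum((v - my) ** 2 for v in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical survey numbers: generic vs. species richness per transect.
genus_richness = [5, 8, 10, 12, 15, 18, 20, 24]
species_richness = [12, 19, 26, 30, 37, 44, 50, 61]

a, b, r2 = linear_fit_r2(genus_richness, species_richness)
```

A strongly positive slope with high R² is the pattern the study reports for belt-transect generic richness; the same fit on percent coral cover would be the comparison that shows a poor proxy.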

  4. Predicting coral species richness: the effect of input variables, diversity and scale.

    Directory of Open Access Journals (Sweden)

    Zoe T Richards

    Full Text Available Coral reefs are facing a biodiversity crisis due to increasing human impacts, consequently, one third of reef-building corals have an elevated risk of extinction. Logistic challenges prevent broad-scale species-level monitoring of hard corals; hence it has become critical that effective proxy indicators of species richness are established. This study tests how accurately three potential proxy indicators (generic richness on belt transects, generic richness on point-intercept transects and percent live hard coral cover on point-intercept transects predict coral species richness at three different locations and two analytical scales. Generic richness (measured on a belt transect was found to be the most effective predictor variable, with significant positive linear relationships across locations and scales. Percent live hard coral cover consistently performed poorly as an indicator of coral species richness. This study advances the practical framework for optimizing coral reef monitoring programs and empirically demonstrates that generic richness offers an effective way to predict coral species richness with a moderate level of precision. While the accuracy of species richness estimates will decrease in communities dominated by species-rich genera (e.g. Acropora, generic richness provides a useful measure of phylogenetic diversity and incorporating this metric into monitoring programs will increase the likelihood that changes in coral species diversity can be detected.

  5. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  6. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence Livermore National Laboratory

    2006-01-27

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
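Of the operations the paper focuses on, aggregation and disaggregation of spatial data are the easiest to make concrete. A minimal sketch with block means and value repetition; real energy-data work would use area weights, covariates, or kriging rather than plain repetition, and the grid here is a toy.

```python
def aggregate(grid, block):
    # Block-average a fine 2-D grid to a coarse one (aggregation).
    rows, cols = len(grid), len(grid[0])
    return [[sum(grid[r][c] for r in range(i, i + block)
                 for c in range(j, j + block)) / block ** 2
             for j in range(0, cols, block)]
            for i in range(0, rows, block)]

def disaggregate(coarse, block):
    # Expand a coarse grid back to fine resolution by value repetition:
    # the simplest mean-preserving disaggregation.
    return [[coarse[i // block][j // block]
             for j in range(len(coarse[0]) * block)]
            for i in range(len(coarse) * block)]

fine = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
coarse = aggregate(fine, 2)
back = disaggregate(coarse, 2)

total_fine = sum(map(sum, fine))
total_back = sum(map(sum, back))
```

The conservation check (totals unchanged after a round trip) is the kind of consistency property that distinguishes a defensible rescaling from the "incorrect representation of spatial components" the paper warns about.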

  7. A robust hybrid model integrating enhanced inputs based extreme learning machine with PLSR (PLSR-EIELM) and its application to intelligent measurement.

    Science.gov (United States)

    He, Yan-Lin; Geng, Zhi-Qiang; Xu, Yuan; Zhu, Qun-Xiong

    2015-09-01

    In this paper, a robust hybrid model integrating an enhanced-inputs-based extreme learning machine with partial least squares regression (PLSR-EIELM) is proposed. The proposed PLSR-EIELM model can overcome two main flaws of the extreme learning machine (ELM): the intractable problem of determining the optimal number of hidden-layer neurons, and the over-fitting phenomenon. First, a traditional extreme learning machine (ELM) is selected. Second, the weights between the input layer and the hidden layer are assigned randomly, and the nonlinear transformation of the independent variables is obtained from the output of the hidden-layer neurons. In particular, the original input variables are regarded as enhanced inputs; the enhanced inputs and the nonlinearly transformed variables are then tied together as the whole set of independent variables. In this way, PLSR can be carried out to identify the PLS components not only from the nonlinearly transformed variables but also from the original input variables, which removes the correlation among the independent variables with respect to the expected outputs. Finally, the optimal relationship model between the whole set of independent variables and the expected outputs can be achieved using PLSR. Thus, the PLSR-EIELM model is developed. The PLSR-EIELM model then served as an intelligent measurement tool for the key variables of the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. The experimental results show that the predictive accuracy of PLSR-EIELM is stable, indicating that PLSR-EIELM has good robustness. Moreover, compared with ELM, PLSR, hierarchical ELM (HELM), and PLSR-ELM, PLSR-EIELM achieves much smaller relative prediction errors in these two applications.
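The enhanced-inputs construction can be sketched directly: push the inputs through a random tanh hidden layer, concatenate the hidden outputs with the original inputs, and solve for the output weights. One deliberate substitution in this sketch: plain ridge-regularised least squares stands in for the PLSR step of the actual method, and all sizes and data are illustrative.

```python
import math
import random

random.seed(3)

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_enhanced_fit(X, y, hidden, ridge=1e-6):
    # Random tanh hidden layer; hidden outputs are concatenated with the
    # original inputs ("enhanced inputs") plus a bias, and the output
    # weights are solved by ridge least squares (PLSR in the real method).
    d = len(X[0])
    W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(hidden)]
    bias = [random.gauss(0, 1) for _ in range(hidden)]
    def features(row):
        h = [math.tanh(sum(w * v for w, v in zip(W[j], row)) + bias[j])
             for j in range(hidden)]
        return row + h + [1.0]
    Phi = [features(r) for r in X]
    p = len(Phi[0])
    A = [[sum(Phi[r][i] * Phi[r][j] for r in range(len(Phi)))
          + (ridge if i == j else 0.0) for j in range(p)] for i in range(p)]
    rhs = [sum(Phi[r][i] * y[r] for r in range(len(Phi))) for i in range(p)]
    beta = solve(A, rhs)
    return lambda row: sum(c * f for c, f in zip(beta, features(row)))

# Toy process: output mixes a linear term with a mild nonlinearity.
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(120)]
y = [2 * a - b + 0.3 * math.tanh(3 * a * b) for a, b in X]
model = elm_enhanced_fit(X, y, hidden=20)
errors = [abs(model(r) - t) for r, t in zip(X, y)]
```

Because the raw inputs sit alongside the random features, the linear part of the target is captured exactly and the hidden layer only has to mop up the nonlinearity, which is the intuition behind treating the originals as "enhanced inputs".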

  8. "Updates to Model Algorithms & Inputs for the Biogenic ...

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observations. This has resulted in improvements in model evaluations of modeled isoprene, NOx, and O3. The National Exposure Research Laboratory (NERL) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of the EPA mission to protect human health and the environment. The AMAD research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales, for forecasting air quality and for assessing changes in air quality and air pollutant exposures as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollution problem, but also in developing emission control policies and regulations for air quality improvements.

  9. Evapotranspiration Input Data for the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset contains monthly reference evapotranspiration (ETo) data for the Central Valley Hydrologic Model (CVHM). The Central Valley encompasses an...

  10. Using Crowd Sensed Data as Input to Congestion Model

    DEFF Research Database (Denmark)

    Lehmann, Anders; Gross, Allan

    2016-01-01

    Emission of airborne pollutants and climate gases from the transport sector is a growing problem, both in industrialised and developing countries. Planning of urban transport systems is essential to minimise the environmental, health and economic impact of congestion in the transport system. To get accurate and timely information on traffic congestion, and by extension information on air pollution, near-real-time traffic models are needed. We present in this paper an implementation of the Restricted Stochastic User Equilibrium model that is capable of modelling congestion for very large urban traffic systems in less than an hour. The model is implemented in an open source database system, for easy interfacing with GIS resources and crowd-sensed transportation data.

  11. Input-dependent wave attenuation in a critically-balanced model of cortex.

    Directory of Open Access Journals (Sweden)

    Xiao-Hu Yan

    Full Text Available A number of studies have suggested that many properties of brain activity can be understood in terms of critical systems. However, it is still not known how the long-range susceptibilities characteristic of criticality arise in the living brain from its local connectivity structures. Here we prove that a dynamically critically-poised model of cortex acquires an infinitely-long-ranged susceptibility in the absence of input. When an input is presented, the susceptibility attenuates exponentially as a function of distance, with an increasing spatial attenuation constant (i.e., decreasing range) the larger the input. This is in direct agreement with recent results that show that waves of local field potential activity evoked by single spikes in primary visual cortex of cat and macaque attenuate with a characteristic length that also increases with decreasing contrast of the visual stimulus. A susceptibility that changes spatial range with input strength can be thought to implement an input-dependent spatial integration: when the input is large, no additional evidence is needed in addition to the local input; when the input is weak, evidence needs to be integrated over a larger spatial domain to reach a decision. Such input-strength-dependent strategies have been demonstrated in visual processing. Our results suggest that input-strength-dependent spatial integration may be a natural feature of a critically-balanced cortical network.
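The attenuation behaviour described here suggests a simple measurement: fit the length constant lambda of w(d) = w0 * exp(-d / lambda) by log-linear regression of amplitude on distance. The numbers below are synthetic, with a short range standing in for a strong input and a long range for a weak one; the distances and ranges are not taken from the paper.

```python
import math

def length_constant(distances, amplitudes):
    # Estimate lambda in w(d) = w0 * exp(-d / lambda) via a log-linear
    # least-squares fit: log w is linear in d with slope -1/lambda.
    logs = [math.log(a) for a in amplitudes]
    n = len(distances)
    md, ml = sum(distances) / n, sum(logs) / n
    slope = (sum((d - md) * (l - ml) for d, l in zip(distances, logs))
             / sum((d - md) ** 2 for d in distances))
    return -1.0 / slope

dists = [0.0, 0.5, 1.0, 1.5, 2.0]                      # mm from evoking site
strong = [math.exp(-d / 0.4) for d in dists]           # strong input: short range
weak = [math.exp(-d / 1.2) for d in dists]             # weak input: long range

lam_strong = length_constant(dists, strong)
lam_weak = length_constant(dists, weak)
```

The ordering lam_strong < lam_weak is the model's signature: weaker inputs integrate evidence over a larger spatial domain.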

  12. High Flux Isotope Reactor system RELAP5 input model

    Energy Technology Data Exchange (ETDEWEB)

    Morris, D.G.; Wendel, M.W.

    1993-01-01

    A thermal-hydraulic computational model of the High Flux Isotope Reactor (HFIR) has been developed using the RELAP5 program. The purpose of the model is to provide a state-of-the-art thermal-hydraulic simulation tool for analyzing selected hypothetical accident scenarios for a revised HFIR Safety Analysis Report (SAR). The model includes (1) a detailed representation of the reactor core and other vessel components, (2) three heat exchanger/pump cells, (3) pressurizing pumps and letdown valves, and (4) secondary coolant system (with less detail than the primary system). Data from HFIR operation, component tests, tests in facility mockups and the HFIR, HFIR-specific experiments, and other pertinent experiments performed independent of HFIR were used to construct the model and validate it to the extent permitted by the data. The detailed version of the model has been used to simulate loss-of-coolant accidents (LOCAs), while the abbreviated version has been developed for the operational transients that allow use of a less detailed nodalization. Analysis of station blackout with core long-term decay heat removal via natural convection has been performed using the core and vessel portions of the detailed model.

  13. Regional input-output models and the treatment of imports in the European System of Accounts

    OpenAIRE

    Kronenberg, Tobias

    2011-01-01

    Input-output models are often used in regional science due to their versatility and their ability to capture many of the distinguishing features of a regional economy. Input-output tables are available for all EU member countries, but they are hard to find at the regional level, since many regional governments lack the resources or the will to produce reliable, survey-based regional input-output tables. Therefore, in many cases researchers adopt nonsurvey techniques to derive regional input-output tables.

  14. Large uncertainty in soil carbon modelling related to carbon input calculation method

    DEFF Research Database (Denmark)

    Keel, Sonja; Leifeld, Jens; Mayer, Jochen

    2017-01-01

    The application of dynamic models to report changes in soil organic carbon (SOC) stocks, for example as part of greenhouse gas inventories, is becoming increasingly important. Most of these models rely on input data from harvest residues or decaying plant parts and also organic fertilizer, together referred to as soil carbon (C) inputs. The soil C inputs from plants are derived from measured agricultural yields using allometric equations. Here we compared the results of five previously published equations. Our goal was to test whether the choice of method is critical for modelling soil C and, if so, which of these equations is most suitable for Swiss conditions. For this purpose we used the five equations to calculate soil C inputs based on yield data from a Swiss long-term cropping experiment. Estimated annual soil C inputs from various crops were averaged over 28 years and four fertilizer...

  15. Cardinality-dependent Variability in Orthogonal Variability Models

    DEFF Research Database (Denmark)

    Mærsk-Møller, Hans Martin; Jørgensen, Bo Nørregaard

    2012-01-01

    During our work on developing and running a software product line for eco-sustainable greenhouse-production software tools, which currently has three product members, we have identified a need for extending the notation of the Orthogonal Variability Model (OVM) to support what we refer to as cardinality-dependent variability.

  16. Scientific and technical advisory committee review of the nutrient inputs to the watershed model

    Science.gov (United States)

    The following is a report by a STAC Review Team concerning the methods and documentation used by the Chesapeake Bay Partnership for evaluation of nutrient inputs to Phase 6 of the Chesapeake Bay Watershed Model. The “STAC Review of the Nutrient Inputs to the Watershed Model” (previously referred to...

  17. From LCC to LCA Using a Hybrid Input Output Model – A Maritime Case Study

    DEFF Research Database (Denmark)

    Kjær, Louise Laumann; Pagoropoulos, Aris; Hauschild, Michael Zwicky;

    2015-01-01

    As companies try to embrace life cycle thinking, Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) have proven to be powerful tools. In this paper, an Environmental Input-Output model is used for analysis as it enables an LCA using the same economic input data as LCC. This approach helps...

  18. User requirements for hydrological models with remote sensing input

    Energy Technology Data Exchange (ETDEWEB)

    Kolberg, Sjur

    1997-10-01

    Monitoring the seasonal snow cover is important for several purposes. This report describes user requirements for hydrological models utilizing remotely sensed snow data. The information is mainly provided by operational users through a questionnaire. The report is primarily intended as a basis for other work packages within the Snow Tools project which aim at developing new remote sensing products for use in hydrological models. The HBV model is the only model mentioned by users in the questionnaire. It is widely used in Northern Scandinavia and Finland, in the fields of hydroelectric power production, flood forecasting and general monitoring of water resources. The current implementation of HBV is not based on remotely sensed data. Even the presently used HBV implementation may benefit from remotely sensed data. However, several improvements can be made to hydrological models to include remotely sensed snow data. Among these the most important are a distributed version, a more physical approach to the snow depletion curve, and a way to combine data from several sources. 1 ref.

  19. Tracking cellular telephones as an input for developing transport models

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2010-08-01

    Full Text Available This paper considers the potential of tracking cellular telephones and using the data to populate transport and other models. We report here on one of the pilots, known as DYNATRACK (Dynamic Daily Path Tracking), a larger experiment conducted in 2007 with a more heterogeneous group of commuters...

  20. Physics input for modelling superfluid neutron stars with hyperon cores

    CERN Document Server

    Gusakov, M E; Kantor, E M

    2014-01-01

    Observations of massive ($M \\approx 2.0~M_\\odot$) neutron stars (NSs), PSRs J1614-2230 and J0348+0432, rule out most of the models of nucleon-hyperon matter employed in NS simulations. Here we construct three possible models of nucleon-hyperon matter consistent with the existence of $2~M_\\odot$ pulsars as well as with semi-empirical nuclear matter parameters at saturation, and semi-empirical hypernuclear data. Our aim is to calculate for these models all the parameters necessary for modelling dynamics of hyperon stars (such as equation of state, adiabatic indices, thermodynamic derivatives, relativistic entrainment matrix, etc.), making them available for a potential user. To this aim a general non-linear hadronic Lagrangian involving $\\sigma\\omega\\rho\\phi\\sigma^\\ast$ meson fields, as well as quartic terms in vector-meson fields, is considered. A universal scheme for calculation of the $\\ell=0,1$ Landau Fermi-liquid parameters and relativistic entrainment matrix is formulated in the mean-field approximation. ...

  1. Variable Fidelity Aeroelastic Toolkit - Structural Model Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is a methodology to incorporate variable fidelity structural models into steady and unsteady aeroelastic and aeroservoelastic analyses in...

  2. Human task animation from performance models and natural language input

    Science.gov (United States)

    Esakov, Jeffrey; Badler, Norman I.; Jung, Moon

    1989-01-01

    Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.

  3. Tumor Growth Model with PK Input for Neuroblastoma Drug Development

    Science.gov (United States)

    2015-09-01

    [Abstract not available; the record contains grant-support residue. Recoverable details: NCI award (9/2012-4/30/2017), Anticancer Drug Pharmacology in Very Young Children; DOD W81XWH-14-1-0103 / CA130396 (Stewart, 9/1/2014-8/31/2016), Tumor Growth Model with PK Input; V Foundation Translational award (Stewart, 11/1/2012-10/31/2015), Identification & preclinical testing.]

  4. Comparison of different snow model formulations and their responses to input uncertainties in the Upper Indus Basin

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Forsythe, Nathan; O'Donnell, Greg; Rutter, Nick; Bardossy, Andras

    2017-04-01

    Snow and glacier melt in the mountainous Upper Indus Basin (UIB) sustain water supplies, irrigation networks, hydropower production and ecosystems in extensive downstream lowlands. Understanding hydrological and cryospheric sensitivities to climatic variability and change in the basin is therefore critical for local, national and regional water resources management. Assessing these sensitivities using numerical modelling is challenging, due to limitations in the quality and quantity of input and evaluation data, as well as uncertainties in model structures and parameters. This study explores how these uncertainties in inputs and process parameterisations affect distributed simulations of ablation in the complex climatic setting of the UIB. The role of model forcing uncertainties is explored using combinations of local observations, remote sensing and reanalysis - including the high resolution High Asia Refined Analysis - to generate multiple realisations of spatiotemporal model input fields. Forcing a range of model structures with these input fields then provides an indication of how different ablation parameterisations respond to uncertainties and perturbations in climatic drivers. Model structures considered include simple, empirical representations of melt processes through to physically based, full energy balance models with multi-physics options for simulating snowpack evolution (including an adapted version of FSM). Analysing model input and structural uncertainties in this way provides insights for methodological choices in climate sensitivity assessments of data-sparse, high mountain catchments. Such assessments are key for supporting water resource management in these catchments, particularly given the potential complications of enhanced warming through elevation effects or, in the case of the UIB, limited understanding of how and why local climate change signals differ from broader patterns.

  5. Influence of input matrix representation on topic modelling performance

    CSIR Research Space (South Africa)

    De Waal, A

    2010-11-01

    Full Text Available For a topic model, perplexity is an appropriate measure. It provides an indication of the model's ability to generalise by measuring the exponential of the negative mean log-likelihood of words in a held-out test set of the corpus. The exploratory abilities of the latent ... The phrases are clearly more intelligible than single words in many cases, thus demonstrating the qualitative advantage of the proposed method. (Footnote: for the CRAN corpus, each subset of chunks includes the top 1000 chunks with the highest...)
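
    Perplexity, as used to evaluate topic models, is conventionally the exponential of the negative mean per-word log-likelihood on the held-out set. A minimal sketch; the uniform-model example is ours, included only to check the arithmetic:

```python
import math

def perplexity(word_log_likelihoods):
    """Perplexity of a held-out set: the exponential of the negative
    mean per-word log-likelihood (natural logs assumed)."""
    n = len(word_log_likelihoods)
    return math.exp(-sum(word_log_likelihoods) / n)

# Sanity check: a uniform model over a 100-word vocabulary assigns every
# word log-likelihood log(1/100), so its perplexity is exactly 100.
ll = [math.log(1 / 100)] * 50
print(perplexity(ll))  # → 100.0 (up to floating-point error)
```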

  6. Handbook of latent variable and related models

    CERN Document Server

    Lee, Sik-Yum

    2011-01-01

    This Handbook covers latent variable models, which are a flexible class of models for modeling multivariate data to explore relationships among observed and latent variables.
    - Covers a wide class of important models
    - Models and statistical methods described provide tools for analyzing a wide spectrum of complicated data
    - Includes illustrative examples with real data sets from business, education, medicine, public health and sociology
    - Demonstrates the use of a wide variety of statistical, computational, and mathematical techniques

  7. Remote sensing inputs to landscape models which predict future spatial land use patterns for hydrologic models

    Science.gov (United States)

    Miller, L. D.; Tom, C.; Nualchawee, K.

    1977-01-01

    A tropical forest area of Northern Thailand provided a test case of the application of the approach in more natural surroundings. Remote sensing imagery subjected to proper computer analysis has been shown to be a very useful means of collecting spatial data for the science of hydrology. Remote sensing products provide direct input to hydrologic models and practical data bases for planning large and small-scale hydrologic developments. Combining the available remote sensing imagery together with available map information in the landscape model provides a basis for substantial improvements in these applications.

  8. Statistical selection of multiple-input multiple-output nonlinear dynamic models of spike train transformation.

    Science.gov (United States)

    Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W

    2007-01-01

    A multiple-input multiple-output nonlinear dynamic model of spike train to spike train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods for selecting significant inputs (self-terms) and interactions between inputs (cross-terms) of this Volterra kernel-based model. In our approach, model structure was determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients were then pruned based on the Wald test. Results showed that the reduced kernel models, which contained far fewer coefficients than the full Volterra kernel model, gave good fits to novel data. These models can be used to analyze the functional interactions between neurons during behavior.
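
    The forward stepwise selection mentioned above can be sketched as a greedy search that repeatedly adds the regressor giving the largest drop in residual sum of squares. This is a simplified illustration, not the paper's exact procedure: the Wald-test pruning step is omitted, and the stopping rule is an assumed tolerance on RSS improvement:

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3, tol=1e-6):
    """Greedy forward selection: repeatedly add the column of X that most
    reduces the residual sum of squares of an ordinary least-squares fit."""
    selected, remaining = [], list(range(X.shape[1]))
    best_rss = np.sum((y - y.mean()) ** 2)
    while remaining and len(selected) < max_terms:
        rss_by_col = {}
        for j in remaining:
            cols = selected + [j]
            beta, rss, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss_by_col[j] = rss[0] if rss.size else np.sum((y - X[:, cols] @ beta) ** 2)
        j_best = min(rss_by_col, key=rss_by_col.get)
        if best_rss - rss_by_col[j_best] < tol:
            break  # no term improves the fit enough
        selected.append(j_best)
        remaining.remove(j_best)
        best_rss = rss_by_col[j_best]
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 1] - 1.5 * X[:, 3]       # only columns 1 and 3 matter
print(sorted(forward_stepwise(X, y)))    # → [1, 3]
```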

  9. Evaluating the effects of model structure and meteorological input data on runoff modelling in an alpine headwater basin

    Science.gov (United States)

    Schattan, Paul; Bellinger, Johannes; Förster, Kristian; Schöber, Johannes; Huttenlau, Matthias; Kirnbauer, Robert; Achleitner, Stefan

    2017-04-01

    Modelling water resources in snow-dominated mountainous catchments is challenging due to both short concentration times and a highly variable contribution of snow melt in space and time from complex terrain. A number of model setups exist, ranging from physically based models to conceptual models which do not attempt to represent the natural processes in a physically meaningful way. Within the flood forecasting system for the Tyrolean Inn River, two serially linked hydrological models with differing process representation are used. Non-glacierized catchments are modelled by a semi-distributed water balance model (HQsim) based on the HRU approach. A fully-distributed energy and mass balance model (SES), purpose-built for snow- and ice-melt, is used for highly glacierized headwater catchments. Previous work revealed uncertainties and limitations within the models' structures regarding (i) the representation of snow processes in HQsim, (ii) the runoff routing of SES, and (iii) the spatial resolution of the meteorological input data in both models. To overcome these limitations, a "strengths driven" model coupling is applied. Instead of linking the models serially, a vertical one-way coupling of models has been implemented: the fully-distributed snow modelling of SES is combined with the semi-distributed HQsim structure, allowing the coupled model to benefit from the soil and runoff routing schemes in HQsim. A Monte Carlo-based modelling experiment was set up to evaluate the resulting differences in runoff prediction due to the improved model coupling and a refined spatial resolution of the meteorological forcing. The experiment design follows a gradient of spatial discretisation of hydrological processes and meteorological forcing data, with a total of six different model setups for the alpine headwater basin of the Fagge River in the Tyrolean Alps. In general, all setups show a good performance for this particular basin. It is therefore planned to include other basins with differing...

  10. A Core Language for Separate Variability Modeling

    DEFF Research Database (Denmark)

    Iosif-Lazăr, Alexandru Florin; Wasowski, Andrzej; Schaefer, Ina

    2014-01-01

    Separate variability modeling adds variability to a modeling language without requiring modifications of the language or the supporting tools. We define a core language for separate variability modeling using a single kind of variation point to define transformations of software artifacts in object ... hierarchical dependencies between variation points via copying and flattening. Thus, we reduce a model with intricate dependencies to a flat executable model transformation consisting of simple unconditional local variation points. The core semantics is extremely concise: it boils down to two operational rules...

  11. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    Full Text Available The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of input parameters at the scale of individual buildings for a case study, the level of detail of input data is progressively downgraded until the condition in which a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. Such a result allows for extending the usability of the model to the meso-scale, also in different countries, depending on the availability of aggregated building data.

  12. Experimental falsification of Leggett's nonlocal variable model.

    Science.gov (United States)

    Branciard, Cyril; Ling, Alexander; Gisin, Nicolas; Kurtsiefer, Christian; Lamas-Linares, Antia; Scarani, Valerio

    2007-11-23

    Bell's theorem guarantees that no model based on local variables can reproduce quantum correlations. Also, some models based on nonlocal variables, if subject to apparently "reasonable" constraints, may fail to reproduce quantum physics. In this Letter, we introduce a family of inequalities, which use a finite number of measurement settings, and which therefore allow testing Leggett's nonlocal model versus quantum physics. Our experimental data falsify Leggett's model and are in agreement with quantum predictions.

  13. Bayesian nonlinear structural FE model and seismic input identification for damage assessment of civil structures

    Science.gov (United States)

    Astroza, Rodrigo; Ebrahimian, Hamed; Li, Yong; Conte, Joel P.

    2017-09-01

    A methodology is proposed to update mechanics-based nonlinear finite element (FE) models of civil structures subjected to unknown input excitation. The approach allows joint estimation of unknown time-invariant model parameters of a nonlinear FE model of the structure and the unknown time histories of input excitations, using spatially-sparse output response measurements recorded during an earthquake event. The unscented Kalman filter, which circumvents the computation of FE response sensitivities with respect to the unknown model parameters and unknown input excitations by using a deterministic sampling approach, is employed as the estimation tool. The use of measurement data obtained from arrays of heterogeneous sensors, including accelerometers, displacement sensors, and strain gauges is investigated. Based on the estimated FE model parameters and input excitations, the updated nonlinear FE model can be interrogated to detect, localize, classify, and assess damage in the structure. Numerically simulated response data of a three-dimensional 4-story 2-by-1 bay steel frame structure with six unknown model parameters subjected to unknown bi-directional horizontal seismic excitation, and a three-dimensional 5-story 2-by-1 bay reinforced concrete frame structure with nine unknown model parameters subjected to unknown bi-directional horizontal seismic excitation are used to illustrate and validate the proposed methodology. The results of the validation studies show the excellent performance and robustness of the proposed algorithm to jointly estimate unknown FE model parameters and unknown input excitations.

  14. Decision variables analysis for structured modeling

    Institute of Scientific and Technical Information of China (English)

    潘启树; 赫东波; 张洁; 胡运权

    2002-01-01

    Structured modeling is the most commonly used modeling method, but it is not very adaptive to significant changes in environmental conditions. Therefore, Decision Variables Analysis (DVA), a new modeling method, is proposed to deal with linear programming modeling in changing environments. In variant linear programming, the most complicated relationships are those among decision variables. DVA classifies the decision variables into different levels using different index sets, and divides a model into different elements so that any change can only have its effect on part of the whole model. DVA takes into consideration the complicated relationships among decision variables at different levels, and can therefore successfully solve any modeling problem in dramatically changing environments.

  15. Multi-bump solutions in a neural field model with external inputs

    Science.gov (United States)

    Ferreira, Flora; Erlhagen, Wolfram; Bicho, Estela

    2016-07-01

    We study the conditions for the formation of multiple regions of high activity or "bumps" in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria to the input width and distance, which relate to the synaptic couplings that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or when the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine for a given coupling function the maximum number of localized inputs that can be stored in a given finite interval.
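
    The integro-differential field equation studied above can be sketched numerically as an Amari-type model, du/dt = -u + w * f(u) + I, with a localized Gaussian input. The oscillatory coupling kernel, the Heaviside firing-rate function, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal Amari-type neural field sketch: du/dt = -u + w * f(u) + I
L, N, dt = 40.0, 400, 0.05
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

def kernel(x, b=0.3):
    # Oscillatory lateral excitation/inhibition coupling (assumed form)
    return np.exp(-b * np.abs(x)) * (b * np.sin(np.abs(x)) + np.cos(x))

def f(u, theta=0.5):
    return (u > theta).astype(float)  # Heaviside firing-rate function

w = kernel(x)
I = 1.0 * np.exp(-x**2 / 2.0)  # localized Gaussian input centred at x = 0
u = np.zeros(N)
for _ in range(400):  # forward-Euler integration
    conv = dx * np.convolve(f(u), w, mode="same")
    u += dt * (-u + conv + I)

# The field forms a localized region of high activity near the input centre
assert abs(x[np.argmax(u)]) < 2.0
```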

  16. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders

    2004-01-01

    METHODOLOGY
    - The Omni-Presence of Latent Variables: introduction; 'true' variable measured with error; hypothetical constructs; unobserved heterogeneity; missing values and counterfactuals; latent responses; generating flexible distributions; combining information; summary
    - Modeling Different Response Processes: introduction; generalized linear models; extensions of generalized linear models; latent response formulation; modeling durations or survival; summary and further reading
    - Classical Latent Variable Models: introduction; multilevel regression models; factor models and item response...

  17. Input-output model for MACCS nuclear accident impacts estimation¹

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
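
    The Input-Output impact logic underlying such loss estimates can be illustrated with a toy two-sector Leontief model. The coefficients and demands below are invented for illustration; this is not the REAcct model itself:

```python
import numpy as np

# Two-sector Leontief sketch: x = (I - A)^-1 d gives the gross output
# needed to satisfy final demand d, including all indirect requirements.
A = np.array([[0.2, 0.3],    # technical coefficients: input from sector i
              [0.1, 0.4]])   # required per unit of output of sector j
d = np.array([100.0, 50.0])  # baseline final demand

leontief_inv = np.linalg.inv(np.eye(2) - A)
x_base = leontief_inv @ d

# A disruption that wipes out 10 units of sector 0's final demand
d_shock = d - np.array([10.0, 0.0])
x_shock = leontief_inv @ d_shock
loss = x_base - x_shock
print(loss.sum())  # total output loss exceeds the 10-unit direct loss
```

The gap between the direct 10-unit demand loss and the total output loss is the indirect (supply-chain) effect that Input-Output-based estimation captures.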

  18. Geochemical inputs for hydrological models of deep-lying sedimentary units: Loss of mineral hydration water

    Science.gov (United States)

    Graf, D. L.; Anderson, D. E.

    1981-12-01

    Hydrological models that treat phenomena occurring deep in sedimentary piles, such as petroleum maturation and retention of chemical and radioactive waste, may require time spans of at least several million years. Many input quantities classically treated as constants will be variables on this time scale. Models sophisticated enough to include transport contributions from such processes as chemical diffusion, mineral dehydration and shale membrane behavior require considerable knowledge about regional geological history as well as the pertinent mineralogical and geochemical relationships. Simple dehydrations such as those of gypsum and halloysite occur at sharply-defined temperatures but, as with all mineral dehydration reactions, the equilibrium temperature is strongly dependent on the pore-fluid salinity and degree of overpressuring encountered in the subsurface. The dehydrations of analcime and smectite proceed by reactions involving other sedimentary minerals. The smectite reaction is crystallographically complex, yielding a succession of mixed-layered illite/smectites, and on the U.S.A. Gulf of Mexico coast continues over several million years at a particular stratigraphic interval.

  19. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ clearly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  20. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Science.gov (United States)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-06-01

    Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by the acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is considered as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ clearly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
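
    The leaky integrate-and-fire (LIF) model used as the response system above can be sketched in a few lines. Parameter values are illustrative, and a constant current stands in for the estimated temporal input parameters:

```python
def lif_spike_times(i_input, t_max=200.0, dt=0.1, tau=10.0,
                    v_th=1.0, v_reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron,
    tau dV/dt = -V + I; returns spike times for a constant input current."""
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v + i_input) / tau
        if v >= v_th:        # threshold crossing: emit spike, reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

# A stronger input drives a higher firing rate; a subthreshold input
# (steady-state voltage below v_th) never fires.
assert len(lif_spike_times(2.0)) > len(lif_spike_times(1.2)) > 0
assert lif_spike_times(0.5) == []
```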

  1. Input-to-output transformation in a model of the rat hippocampal CA1 network

    OpenAIRE

    Olypher, Andrey V; Lytton, William W; Prinz, Astrid A.

    2012-01-01

    Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically an...

  2. Regional Input Output Models and the FLQ Formula: A Case Study of Finland

    OpenAIRE

    Tony Flegg; Paul White

    2008-01-01

    This paper examines the use of location quotients (LQs) in constructing regional input-output models. Its focus is on the augmented FLQ formula (AFLQ) proposed by Flegg and Webber, 2000, which takes regional specialization explicitly into account. In our case study, we examine data for 20 Finnish regions, ranging in size from very small to very large, in order to assess the relative performance of the AFLQ formula in estimating regional imports, total intermediate inputs and output multiplier...
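
    For readers unfamiliar with the method, the basic (non-augmented) FLQ adjustment of national coefficients can be sketched as follows. The employment figures and the δ value are hypothetical, and the formula shown is the standard FLQ rather than the AFLQ variant examined in the paper:

```python
import numpy as np

def flq_coefficients(a_nat, emp_reg, emp_nat, delta=0.3):
    """Scale national input coefficients down to a region with the FLQ
    formula (hedged sketch; delta is the unknown regional parameter).

    emp_reg / emp_nat: sectoral employment vectors, region and nation.
    """
    slq = (emp_reg / emp_reg.sum()) / (emp_nat / emp_nat.sum())
    lam = (np.log2(1.0 + emp_reg.sum() / emp_nat.sum())) ** delta
    cilq = slq[:, None] / slq[None, :]    # CILQ_ij = SLQ_i / SLQ_j
    flq = np.minimum(cilq * lam, 1.0)     # cap at 1: never scale up
    return a_nat * flq

a_nat = np.array([[0.20, 0.10],
                  [0.30, 0.25]])          # hypothetical national table
emp_nat = np.array([500.0, 500.0])
emp_reg = np.array([40.0, 10.0])          # small, specialized region
a_reg = flq_coefficients(a_nat, emp_reg, emp_nat)
```

The λ term shrinks coefficients more for smaller regions, reflecting their greater propensity to import; the AFLQ additionally rescales purchasing sectors with SLQ above one.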

  3. Interregional spillovers in Spain: an estimation using an interregional input-output model

    OpenAIRE

    Llano, Carlos

    2009-01-01

    In this note we introduce the 1995 Spanish Interregional Input-Output Model, which was estimated using a wide set of one-region input-output tables and interregional trade matrices, the latter estimated for each sector from interregional transport flows. Based on this framework, and by means of the hypothetical regional extraction method, the interregional backward and feedback effects are computed, capturing the pull effect of every region on the rest of Spain through their sectoral relations withi...
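
    The hypothetical extraction idea can be illustrated on a toy three-sector table: extract a sector (zero its rows and columns of the coefficient matrix) and measure the output lost in the remaining sectors. The numbers below are invented for illustration:

```python
import numpy as np

def extraction_loss(A, f, idx):
    """Hypothetical extraction: output lost elsewhere when the sectors in
    `idx` are removed (their rows and columns of A set to zero).
    A hedged sketch of the method named in the abstract."""
    n = A.shape[0]
    x = np.linalg.solve(np.eye(n) - A, f)      # Leontief solution
    A_ext = A.copy()
    A_ext[idx, :] = 0.0
    A_ext[:, idx] = 0.0
    x_ext = np.linalg.solve(np.eye(n) - A_ext, f)
    keep = [i for i in range(n) if i not in set(idx)]
    return x[keep].sum() - x_ext[keep].sum()

A = np.array([[0.1, 0.2, 0.0],
              [0.1, 0.1, 0.3],
              [0.2, 0.0, 0.1]])                # toy coefficient matrix
f = np.array([100.0, 50.0, 80.0])              # final demand
loss = extraction_loss(A, f, idx=[2])
```

A large loss indicates strong backward linkages from the extracted sector (or region block, when `idx` covers all of a region's sectors) to the rest of the system.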

  4. Sensitivity of Global Modeling Initiative chemistry and transport model simulations of radon-222 and lead-210 to input meteorological data

    Directory of Open Access Journals (Sweden)

    D. B. Considine

    2005-01-01

    Full Text Available We have used the Global Modeling Initiative chemistry and transport model to simulate the radionuclides radon-222 and lead-210 using three different sets of input meteorological information: 1. Output from the Goddard Space Flight Center Global Modeling and Assimilation Office GEOS-STRAT assimilation; 2. Output from the Goddard Institute for Space Studies GISS II' general circulation model; and 3. Output from the National Center for Atmospheric Research MACCM3 general circulation model. We intercompare these simulations with observations to determine the variability resulting from the different meteorological data used to drive the model, and to assess the agreement of the simulations with observations at the surface and in the upper troposphere/lower stratosphere region. The observational datasets we use are primarily climatologies developed from multiple years of observations. In the upper troposphere/lower stratosphere region, climatological distributions of lead-210 were constructed from ~25 years of aircraft and balloon observations compiled into the US Environmental Measurements Laboratory RANDAB database. Taken as a whole, no simulation stands out as superior to the others. However, the simulation driven by the NCAR MACCM3 meteorological data compares better with lead-210 observations in the upper troposphere/lower stratosphere region. Comparisons of simulations made with and without convection show that the role played by convective transport and scavenging in the three simulations differs substantially. These differences may have implications for evaluation of the importance of very short-lived halogen-containing species on stratospheric halogen budgets.
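
    As background, the radon-222/lead-210 tracer pair rests on a simple decay chain. A minimal sketch, treating the short-lived intermediate daughters as instantaneous (their half-lives are minutes, versus 3.8 days for Rn-222) and ignoring transport and scavenging entirely, is:

```python
import numpy as np

# Decay constants in 1/day (half-lives: Rn-222 ~3.82 d, Pb-210 ~22.3 yr)
LAMBDA_RN = np.log(2) / 3.824
LAMBDA_PB = np.log(2) / (22.3 * 365.25)

def step(rn, pb, dt=0.1):
    """One explicit-Euler step for normalized number concentrations:
    Rn-222 decays away, Pb-210 is produced by that decay and itself
    decays very slowly."""
    d_rn = -LAMBDA_RN * rn
    d_pb = LAMBDA_RN * rn - LAMBDA_PB * pb
    return rn + dt * d_rn, pb + dt * d_pb

rn, pb = 1.0, 0.0
for _ in range(1000):        # integrate 100 days
    rn, pb = step(rn, pb)
```

After ~100 days essentially all of the initial radon has converted to lead-210, which then persists for decades; in the full model these source/sink terms are carried on every grid box alongside advection, convection, and wet scavenging.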

  5. Econometric Model Estimation and Sensitivity Analysis of Inputs for Mandarin Production in Mazandaran Province of Iran

    Directory of Open Access Journals (Sweden)

    Majid Namdari

    2011-05-01

    Full Text Available This study examines the energy consumption of inputs and outputs used in mandarin production and the relationship between energy inputs and yield in Mazandaran, Iran. The marginal physical product (MPP) method was used to analyze the sensitivity of mandarin yield to energy inputs, and the returns to scale of the econometric model were calculated. Data were collected from 110 mandarin orchards selected by random sampling. The results indicated that total energy input was 77501.17 MJ/ha. The energy use efficiency, energy productivity, and net energy of mandarin production were 0.77, 0.41 kg/MJ, and -17651.17 MJ/ha, respectively. About 41% of the total energy input used in mandarin production was indirect, while about 59% was direct. Econometric estimation revealed that the impact of human labor energy (0.37) was the highest among the inputs to mandarin production. The results also showed that direct, indirect, renewable, and non-renewable energy forms had a positive and statistically significant impact on output level. Sensitivity analysis showed that an additional 1 MJ of human labor, farmyard manure, or chemical fertilizer energy would increase yield by 2.05, 1.80, and 1.26 kg, respectively. The MPP values of direct and renewable energy were also higher.
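
    The econometric approach described, a Cobb-Douglas production function fitted in log-log form with MPP derived from the estimated elasticities, can be sketched on synthetic data. The generating coefficients below are invented, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 110                                 # orchards, as in the study design
labor = rng.uniform(500, 1500, n)       # hypothetical energy inputs, MJ/ha
fert = rng.uniform(1000, 3000, n)
yield_ = 5.0 * labor**0.37 * fert**0.2 * rng.lognormal(0, 0.05, n)

# Cobb-Douglas fit: ln(y) = a + b1*ln(labor) + b2*ln(fertilizer)
X = np.column_stack([np.ones(n), np.log(labor), np.log(fert)])
coef, *_ = np.linalg.lstsq(X, np.log(yield_), rcond=None)

# Marginal physical product of input i: MPP_i = b_i * mean(y) / mean(x_i)
mpp_labor = coef[1] * yield_.mean() / labor.mean()
```

The fitted exponents are the output elasticities (their sum gives returns to scale), and each MPP converts an elasticity into kilograms of yield per extra MJ of that input.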

  6. Dynamic Modeling of a Roller Chain Drive System Considering the Flexibility of Input Shaft

    Institute of Scientific and Technical Information of China (English)

    XU Lixin; YANG Yuhu; CHANG Zongyu; LIU Jianping

    2010-01-01

    Roller chain drives are widely used in various high-speed, high-load power transmission applications, but their complex dynamic behavior is not well researched. Most studies have focused only on the vibration of the chain's tight span, and many factors are neglected in these models. In this paper, a mathematical model is developed to calculate the dynamic response of a roller chain drive operating at constant or variable speed. The model includes the complete chain transmission with two sprockets and the necessary tight and slack spans. The effect of the flexibility of the input shaft on the dynamic response of the chain system is taken into account, as well as the elastic deformation of the chain, the inertial forces, gravity, and the torque on the driven shaft. The nonlinear equations of motion are derived using Lagrange equations and solved numerically. Given the center distance and the two initial position angles of the teeth on the driving and driven sprockets corresponding to the first seated roller on each side of the tight span, the dynamics of any roller chain drive with two sprockets and two spans can be analyzed by this procedure. Finally, a numerical example is given, and the validity of the procedure is demonstrated by analyzing the dynamic behavior of a typical roller chain drive. The model can simulate the transverse and longitudinal vibration of the chain spans and the torsional vibration of the sprockets, providing an effective method for analyzing the dynamic characteristics of chain drive systems.

  7. Time series analysis as input for clinical predictive modeling: Modeling cardiac arrest in a pediatric ICU

    Directory of Open Access Journals (Sweden)

    Kennedy Curtis E

    2011-10-01

    Full Text Available Background: Thousands of children experience cardiac arrest events every year in pediatric intensive care units, and most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds who are at high risk of cardiac arrest. There are, however, no models to predict cardiac arrest in pediatric intensive care units, where the risk of an arrest is 10 times higher than in standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests; characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that allows time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods: We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results: Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8
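
    Steps 4 through 6 of the outline can be sketched as a sliding-window feature extractor over a vital-sign series. The heart-rate series and window sizes below are hypothetical:

```python
import numpy as np

def window_features(signal, width, stride):
    """Cut a vital-sign series into fixed-width windows and derive
    time-series features (level, variability, trend) as latent
    variables for a downstream classifier."""
    feats = []
    t = np.arange(width)
    for start in range(0, len(signal) - width + 1, stride):
        w = signal[start:start + width]
        slope = np.polyfit(t, w, 1)[0]     # deterioration trend
        feats.append((w.mean(), w.std(), slope))
    return np.array(feats)

# Hypothetical heart rate: stable for an hour, then steadily rising
hr = np.concatenate([np.full(60, 120.0), 120 + 0.5 * np.arange(60)])
F = window_features(hr, width=30, stride=10)
```

Each row of `F` is one time window; the slope column turns a visually obvious deterioration into a number a standard prediction model can consume.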

  8. An integrated model for the assessment of global water resources – Part 1: Model description and input meteorological forcing

    Directory of Open Access Journals (Sweden)

    N. Hanasaki

    2008-07-01

    than ±1 mo in 19 of the 27 basins and less than ±2 mo in 25 basins. The performance was similar to the best available precedent studies with closure of energy and water. The input meteorological forcing component and the integrated model provide a framework with which to assess global water resources, with the potential application to investigate the subannual variability in water resources.

  9. An integrated model for the assessment of global water resources Part 1: Model description and input meteorological forcing

    Science.gov (United States)

    Hanasaki, N.; Kanae, S.; Oki, T.; Masuda, K.; Motoya, K.; Shirakawa, N.; Shen, Y.; Tanaka, K.

    2008-07-01

    basins and less than ±2 mo in 25 basins. The performance was similar to the best available precedent studies with closure of energy and water. The input meteorological forcing component and the integrated model provide a framework with which to assess global water resources, with the potential application to investigate the subannual variability in water resources.

  10. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    Energy Technology Data Exchange (ETDEWEB)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    2016-08-01

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model must have low computational cost while retaining the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
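
    The POD stage of such a pipeline can be sketched with an SVD of a snapshot matrix; the synthetic "flow" below stands in for large-eddy simulation output:

```python
import numpy as np

# Proper orthogonal decomposition of a snapshot matrix whose columns are
# flow states at successive times, via the SVD.
rng = np.random.default_rng(2)
n_grid, n_snap = 400, 60
x = np.linspace(0, 2 * np.pi, n_grid)
t = np.linspace(0, 1, n_snap)
# Synthetic "flow": two coherent structures plus small-scale noise
snapshots = (np.outer(np.sin(x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * t))
             + 0.01 * rng.standard_normal((n_grid, n_snap)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                  # keep the two dominant POD modes
basis = U[:, :r]                       # reduced-order spatial basis
coeffs = basis.T @ snapshots           # modal time coefficients
energy = (s[:r] ** 2).sum() / (s ** 2).sum()
```

A system identification step, for example fitting a linear state-space model to `coeffs` together with the control inputs, would then supply the input-output dynamics that the paper combines with POD.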

  11. Advancements in Wind Integration Study Input Data Modeling: The Wind Integration National Dataset (WIND) Toolkit

    Science.gov (United States)

    Hodge, B.; Orwig, K.; McCaa, J. R.; Harrold, S.; Draxl, C.; Jones, W.; Searight, K.; Getman, D.

    2013-12-01

    Regional wind integration studies in the United States, such as the Western Wind and Solar Integration Study (WWSIS), Eastern Wind Integration and Transmission Study (EWITS), and Eastern Renewable Generation Integration Study (ERGIS), perform detailed simulations of the power system to determine the impact of high wind and solar energy penetrations on power systems operations. Some of the specific aspects examined include: infrastructure requirements, impacts on grid operations and conventional generators, ancillary service requirements, as well as the benefits of geographic diversity and forecasting. These studies require geographically broad and temporally consistent wind and solar power production input datasets that realistically reflect the ramping characteristics, spatial and temporal correlations, and capacity factors of wind and solar power plant production, and are time-synchronous with load profiles. The original western and eastern wind datasets were generated independently for 2004-2006 using numerical weather prediction (NWP) models run on a ~2 km grid with 10-minute resolution. Each utilized its own site selection process to augment existing wind plants with simulated sites of high development potential. The original dataset also included day-ahead simulated forecasts. These datasets were the first of their kind and many lessons were learned from their development. For example, the modeling approach used generated periodic false ramps that later had to be removed due to unrealistic impacts on ancillary service requirements. For several years, stakeholders have been requesting an updated dataset that: 1) covers more recent years; 2) spans four or more years to better evaluate interannual variability; 3) uses improved methods to minimize false ramps and spatial seams; 4) better incorporates solar power production inputs; and 5) is more easily accessible. To address these needs, the U.S. 
Department of Energy (DOE) Wind and Solar Programs have funded two

  12. Input-to-output transformation in a model of the rat hippocampal CA1 network.

    Science.gov (United States)

    Olypher, Andrey V; Lytton, William W; Prinz, Astrid A

    2012-01-01

    Here we use computational modeling to gain new insights into the transformation of inputs in hippocampal field CA1. We considered input-output transformation in CA1 principal cells of the rat hippocampus, with activity synchronized by population gamma oscillations. Prior experiments have shown that such synchronization is especially strong for cells within one millimeter of each other. We therefore simulated a one-millimeter patch of CA1 with 23,500 principal cells. We used morphologically and biophysically detailed neuronal models, each with more than 1000 compartments and thousands of synaptic inputs. Inputs came from binary patterns of spiking neurons from field CA3 and entorhinal cortex (EC). On average, each presynaptic pattern initiated action potentials in the same number of CA1 principal cells in the patch. We considered pairs of similar and pairs of distinct patterns. In all the cases CA1 strongly separated input patterns. However, CA1 cells were considerably more sensitive to small alterations in EC patterns compared to CA3 patterns. Our results can be used for comparison of input-to-output transformations in normal and pathological hippocampal networks.

  13. Modeling the short-run effect of fiscal stimuli on GDP : A new semi-closed input-output model

    NARCIS (Netherlands)

    Chen, Quanrun; Dietzenbacher, Erik; Los, Bart; Yang, Cuihong

    2016-01-01

    In this study, we propose a new semi-closed input-output model, which reconciles input-output analysis with modern consumption theories. It can simulate changes in household consumption behavior when exogenous stimulus policies lead to higher disposable income levels. It is useful for quantifying
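
    The general idea of closing an input-output model with respect to households, of which the semi-closed model is a refinement, can be sketched as a Type II multiplier calculation; all coefficients below are invented:

```python
import numpy as np

def type_ii_multipliers(A, wage_coef, cons_share):
    """Close an input-output model with respect to households (a hedged
    sketch of the idea behind semi-closed models): append a household
    row of wage coefficients and a column of consumption shares, so
    induced consumption effects enter the Leontief inverse."""
    n = A.shape[0]
    A_closed = np.zeros((n + 1, n + 1))
    A_closed[:n, :n] = A
    A_closed[n, :n] = wage_coef        # income earned per unit output
    A_closed[:n, n] = cons_share       # spending per unit income
    L = np.linalg.inv(np.eye(n + 1) - A_closed)
    return L[:n, :n].sum(axis=0)       # Type II output multipliers

A = np.array([[0.15, 0.25],
              [0.20, 0.05]])           # toy coefficient matrix
m_open = np.linalg.inv(np.eye(2) - A).sum(axis=0)   # Type I multipliers
m_closed = type_ii_multipliers(A, wage_coef=[0.3, 0.4],
                               cons_share=[0.5, 0.3])
```

Closing the model raises every multiplier because induced household spending feeds back into the system; the semi-closed model of the paper refines the fixed `cons_share` column with a behavioral consumption function.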

  14. Random Effect and Latent Variable Model Selection

    CERN Document Server

    Dunson, David B

    2008-01-01

    Presents various methods for accommodating model uncertainty in random effects and latent variable models. This book focuses on frequentist likelihood ratio and score tests for zero variance components. It also focuses on Bayesian methods for random effects selection in linear mixed effects and generalized linear mixed models

  15. Sampling Weights in Latent Variable Modeling

    Science.gov (United States)

    Asparouhov, Tihomir

    2005-01-01

    This article reviews several basic statistical tools needed for modeling data with sampling weights that are implemented in Mplus Version 3. These tools are illustrated in simulation studies for several latent variable models including factor analysis with continuous and categorical indicators, latent class analysis, and growth models. The…

  16. A Model for Gathering Stakeholder Input for Setting Research Priorities at the Land-Grant University.

    Science.gov (United States)

    Kelsey, Kathleen Dodge; Pense, Seburn L.

    2001-01-01

    A model for collecting and using stakeholder input on research priorities is a modification of Guba and Lincoln's model, involving preevaluation preparation, stakeholder identification, information gathering and analysis, interpretive filtering, and negotiation and consensus. A case study at Oklahoma State University illustrates its applicability…

  17. Improving the Performance of Water Demand Forecasting Models by Using Weather Input

    NARCIS (Netherlands)

    Bakker, M.; Van Duist, H.; Van Schagen, K.; Vreeburg, J.; Rietveld, L.

    2014-01-01

    Literature shows that water demand forecasting models which use water demand as single input, are capable of generating a fairly accurate forecast. However, at changing weather conditions the forecasting errors are quite large. In this paper three different forecasting models are studied: an Adaptiv
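
    The effect of adding a weather input can be illustrated with a toy comparison between a persistence forecast (demand history only) and a regression that also uses temperature. The demand model below is invented, not the paper's adaptive forecaster:

```python
import numpy as np

rng = np.random.default_rng(7)
days = 400
t = np.arange(days)
temp = 15 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, days)
demand = 100 + 3.0 * temp + rng.normal(0, 1, days)  # toy weather-driven demand

train, test = slice(0, 300), slice(300, days)

# Model 1: demand history only (persistence forecast: tomorrow = today)
pred_persist = demand[299:days - 1]

# Model 2: regression with the weather input
X = np.column_stack([np.ones(300), temp[train]])
b, *_ = np.linalg.lstsq(X, demand[train], rcond=None)
pred_weather = b[0] + b[1] * temp[test]

rmse_persist = np.sqrt(np.mean((demand[test] - pred_persist) ** 2))
rmse_weather = np.sqrt(np.mean((demand[test] - pred_weather) ** 2))
```

In this toy setup the weather-aware model has a much smaller error on days when temperature swings, which mirrors the paper's observation that demand-only forecasts degrade under changing weather conditions.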

  19. Good Modeling Practice for PAT Applications: Propagation of Input Uncertainty and Sensitivity Analysis

    DEFF Research Database (Denmark)

    Sin, Gürkan; Gernaey, Krist; Eliasson Lantz, Anna

    2009-01-01

    The uncertainty and sensitivity analysis are evaluated for their usefulness as part of the model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as case study. The input...
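
    A generic version of this workflow, Monte Carlo propagation of input uncertainty followed by a standardized-regression-coefficient (SRC) sensitivity ranking, can be sketched on a toy Monod-type model (an invented stand-in, not the paper's cultivation model):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2000
# Assumed input distributions (hypothetical means and uncertainties)
mu_max = rng.normal(0.25, 0.025, N)
Ks = rng.normal(1.0, 0.2, N)
Yxs = rng.normal(0.5, 0.01, N)

def model(mu_max, Ks, Yxs, S=5.0):
    """Toy Monod-type growth output, standing in for the full model."""
    return Yxs * mu_max * S / (Ks + S)

# Monte Carlo propagation of the input uncertainty
y = model(mu_max, Ks, Yxs)

# Sensitivity: standardized regression coefficients of output on inputs
X = np.column_stack([mu_max, Ks, Yxs])
Z = (X - X.mean(0)) / X.std(0)
b, *_ = np.linalg.lstsq(np.column_stack([np.ones(N), Z]),
                        (y - y.mean()) / y.std(), rcond=None)
src = b[1:]          # one SRC per input; sign gives direction of effect
```

The spread of `y` quantifies the propagated output uncertainty, and ranking inputs by `|src|` identifies which parameter uncertainties dominate it, which is the decision the abstract's workflow supports.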

  20. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Comparisons and checks of some primary system data were made against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor at its current state with an operating power of 3300 MW{sub th}. One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of the new Westinghouse-Atom SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratory under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in the year 2000, but a new version 1.8.6 was distributed recently. Conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package, developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose when running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  1. GEN-IV BENCHMARKING OF TRISO FUEL PERFORMANCE MODELS UNDER ACCIDENT CONDITIONS MODELING INPUT DATA

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Laboratory

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  2. A Model for Positively Correlated Count Variables

    DEFF Research Database (Denmark)

    Møller, Jesper; Rubak, Ege Holger

    2010-01-01

    An α-permanental random field is, briefly speaking, a model for a collection of non-negative integer-valued random variables with positive associations. Though such models possess many appealing probabilistic properties, many statisticians seem unaware of α-permanental random fields and their poten...

  3. Optimal input shaping for Fisher identifiability of control-oriented lithium-ion battery models

    Science.gov (United States)

    Rothenberger, Michael J.

    This dissertation examines the fundamental challenge of optimally shaping input trajectories to maximize parameter identifiability of control-oriented lithium-ion battery models. Identifiability is a property from information theory that determines the solvability of parameter estimation for mathematical models using input-output measurements. This dissertation creates a framework that exploits the Fisher information metric to quantify the level of battery parameter identifiability, optimizes this metric through input shaping, and facilitates faster and more accurate estimation. The popularity of lithium-ion batteries is growing significantly in the energy storage domain, especially for stationary and transportation applications. While these cells have excellent power and energy densities, they are plagued with safety and lifespan concerns. These concerns are often resolved in the industry through conservative current and voltage operating limits, which reduce the overall performance and still lack robustness in detecting catastrophic failure modes. New advances in automotive battery management systems mitigate these challenges through the incorporation of model-based control to increase performance, safety, and lifespan. To achieve these goals, model-based control requires accurate parameterization of the battery model. While many groups in the literature study a variety of methods to perform battery parameter estimation, a fundamental issue of poor parameter identifiability remains apparent for lithium-ion battery models. This fundamental challenge of battery identifiability is studied extensively in the literature, and some groups are even approaching the problem of improving the ability to estimate the model parameters. The first approach is to add additional sensors to the battery to gain more information that is used for estimation. 
The other main approach is to shape the input trajectories to increase the amount of information that can be gained from input
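
    The Fisher-information idea behind this framework can be sketched on a toy first-order model (an invented stand-in, not the dissertation's battery model): compute output sensitivities for a candidate input current and form the information matrix, whose conditioning reflects how identifiable the parameters are under that input.

```python
import numpy as np

def fisher_information(i, R0, tau, dt=1.0, sigma=0.01):
    """FIM for parameters (R0, tau) of a toy model v_t = R0 * z_t with
    z' = (i - z)/tau, using finite-difference output sensitivities.
    sigma is the assumed measurement noise on v."""
    def output(R0, tau):
        z, v = 0.0, []
        for ik in i:
            z += dt * (ik - z) / tau
            v.append(R0 * z)
        return np.array(v)
    eps = 1e-5
    s1 = (output(R0 + eps, tau) - output(R0 - eps, tau)) / (2 * eps)
    s2 = (output(R0, tau + eps) - output(R0, tau - eps)) / (2 * eps)
    S = np.column_stack([s1, s2])      # output sensitivity matrix
    return S.T @ S / sigma ** 2

i_const = np.ones(200)                            # flat input current
i_rich = np.sign(np.sin(0.3 * np.arange(200)))    # richer excitation
F_const = fisher_information(i_const, R0=0.05, tau=20.0)
F_rich = fisher_information(i_rich, R0=0.05, tau=20.0)
```

Input shaping in the dissertation's sense amounts to optimizing a scalar summary of this matrix (determinant, trace, or smallest eigenvalue) over the space of admissible current profiles.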

  4. Recurrent network models for perfect temporal integration of fluctuating correlated inputs.

    Directory of Open Access Journals (Sweden)

    Hiroshi Okamoto

    2009-06-01

    Full Text Available Temporal integration of input is essential to the accumulation of information in various cognitive and behavioral processes, and gradually increasing neuronal activity, typically occurring within a range of seconds, is considered to reflect such computation by the brain. Some psychological evidence suggests that temporal integration by the brain is nearly perfect, that is, the integration is non-leaky, and the output of a neural integrator is accurately proportional to the strength of input. Neural mechanisms of perfect temporal integration, however, remain largely unknown. Here, we propose a recurrent network model of cortical neurons that perfectly integrates partially correlated, irregular input spike trains. We demonstrate that the rate of this temporal integration changes proportionately to the probability of spike coincidences in synaptic inputs. We analytically prove that this highly accurate integration of synaptic inputs emerges from integration of the variance of the fluctuating synaptic inputs, when their mean component is kept constant. Highly irregular neuronal firing and spike coincidences are the major features of cortical activity, but they have been separately addressed so far. Our results suggest that the efficient protocol of information integration by cortical networks essentially requires both features and hence is heterotic.
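
    The abstract's central claim, accumulation driven by input variance (spike coincidences) while the mean rate is held fixed, can be checked with a toy common-mother construction of correlated spike trains:

```python
import numpy as np

rng = np.random.default_rng(6)

def coincidence_rate(c, p=0.1, n_steps=200000):
    """Fraction of time bins in which two rate-p binary trains with
    common-source correlation c both spike. Each child train copies a
    common 'mother' train with probability c, else spikes on its own,
    so its mean rate stays p for every c."""
    mother = rng.random(n_steps) < p
    def child():
        own = rng.random(n_steps) < p
        take = rng.random(n_steps) < c
        return np.where(take, mother, own)
    a, b = child(), child()
    return np.mean(a & b)

r_low, r_high = coincidence_rate(0.0), coincidence_rate(0.8)
```

The marginal firing rate is identical in both cases, yet the coincidence probability (and hence the input variance that the proposed integrator reads out) is several times larger for the correlated trains.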

  5. Computation of reduced energy input current stimuli for neuron phase models.

    Science.gov (United States)

    Anyalebechi, Jason; Koelling, Melinda E; Miller, Damon A

    2014-01-01

    A regularly spiking neuron can be studied using a phase model. The effect of an input stimulus current on the phase time derivative is captured by a phase response curve. This paper adapts a technique previously applied to conductance-based models to discover optimal input stimulus currents for phase models. First, the neuron phase response θ(t) due to an input stimulus current i(t) is computed using a phase model. The resulting θ(t) is taken to be a reference phase r(t). Second, an optimal input stimulus current i*(t) is computed to minimize a weighted sum of the square-integral 'energy' of i*(t) and the tracking error between the reference phase r(t) and the phase response due to i*(t). The balance between the conflicting requirements of energy and tracking-error minimization is controlled by a single parameter. The generated optimal current i*(t) is then compared to the input current i(t) that was used to generate the reference phase r(t). This technique was applied to two neuron phase models; in each case, the current i*(t) generates a phase response similar to the reference phase r(t), and the optimal current i*(t) has a lower 'energy' than the square-integral of i(t). For constant i(t), the optimal current i*(t) need not be constant in time. In fact, i*(t) is large (possibly even larger than i(t)) in regions where the phase response curve indicates a stronger sensitivity to the input stimulus current, and smaller in regions of reduced sensitivity.

  6. Integrating models that depend on variable data

    Science.gov (United States)

    Banks, A. T.; Hill, M. C.

    2016-12-01

    Models of human-Earth systems are often developed with the goal of predicting the behavior of one or more dependent variables from multiple independent variables, processes, and parameters. Often dependent variable values range over many orders of magnitude, which complicates evaluation of the fit of the dependent variable values to observations. Many metrics and optimization methods have been proposed to address dependent variable variability, with little consensus being achieved. In this work, we evaluate two such methods: log transformation (based on the dependent variable being log-normally distributed with a constant variance) and error-based weighting (based on a multi-normal distribution with variances that tend to increase as the dependent variable value increases). Error-based weighting has the advantage of encouraging model users to carefully consider data errors, such as measurement and epistemic errors, while log-transformations can be a black box for typical users. Placing the log-transformation into the statistical perspective of error-based weighting has not formerly been considered, to the best of our knowledge. To make the evaluation as clear and reproducible as possible, we use multiple linear regression (MLR). Simulations are conducted with MatLab. The example represents stream transport of nitrogen with up to eight independent variables. The single dependent variable in our example has values that range over 4 orders of magnitude. Results are applicable to any problem for which individual or multiple data types produce a large range of dependent variable values. For this problem, the log transformation produced good model fit, while some formulations of error-based weighting worked poorly. Results support previous suggestions that error-based weighting derived from a constant coefficient of variation overemphasizes low values and degrades model fit to high values. Applying larger weights to the high values is inconsistent with the log
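
    The error-based weighting discussed here can be illustrated with a toy weighted-least-squares fit in which the measurement error grows in proportion to the dependent variable (a constant coefficient of variation); the model and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(1, 10, 500)
y_true = 2.0 + 3.0 * x
sigma = 0.2 * y_true                   # error proportional to the value
y_obs = y_true + sigma * rng.standard_normal(x.size)

X = np.column_stack([np.ones_like(x), x])

# Ordinary least squares: every observation weighted equally
b_ols, *_ = np.linalg.lstsq(X, y_obs, rcond=None)

# Error-based weighting: scale each row by 1/sigma_i so all residuals
# have (approximately) unit variance
w = 1.0 / sigma
b_wls, *_ = np.linalg.lstsq(X * w[:, None], y_obs * w, rcond=None)
```

Both estimators are unbiased here, but the 1/sigma weights make the weighted fit statistically efficient; the paper's point is that choosing those sigmas from a constant coefficient of variation effectively downweights the large values, much as a log transformation does.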

  7. Queueing model for an ATM multiplexer with unequal input/output link capacities

    Science.gov (United States)

    Long, Y. H.; Ho, T. K.; Rad, A. B.; Lam, S. P. S.

    1998-10-01

    We present a queueing model for an ATM multiplexer with unequal input/output link capacities in this paper. This model can be used to analyze the buffer behavior of an ATM multiplexer which multiplexes low-speed input links into a high-speed output link. For this queueing model, we assume that the input and output slot times are not equal, which is quite different from most analyses of discrete-time queues for ATM multiplexers/switches. In the queueing analysis, we adopt a correlated arrival process represented by the Discrete-time Batch Markovian Arrival Process. The analysis is based upon the M/G/1-type queue technique, which enables easy numerical computation. Queue length distributions observed at different epochs and the queue length distribution seen by an arbitrary arriving cell when it enters the buffer are given.
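
The slot-time mismatch can be mimicked with a toy simulation (Bernoulli arrivals rather than the paper's DBMAP analysis, and invented parameters): because the output link is several times faster, more than one cell can leave the buffer per input slot.

```python
import random

random.seed(42)

# Toy discrete-time multiplexer (not the paper's DBMAP/M/G/1 analysis):
# n_in low-speed input links feed one output link that is `speedup`
# times faster, so up to `speedup` cells leave per input slot.
n_in, speedup, p_arrival = 8, 4, 0.3      # invented parameters; load = 0.6
buffer_size = 64
queue = dropped = offered = 0
occupancy_sum = 0

for _ in range(100_000):                  # one iteration = one input slot
    arrivals = sum(random.random() < p_arrival for _ in range(n_in))
    offered += arrivals
    dropped += max(arrivals - (buffer_size - queue), 0)
    queue = min(queue + arrivals, buffer_size)
    queue = max(queue - speedup, 0)       # faster output drains the buffer
    occupancy_sum += queue

loss_ratio = dropped / offered
mean_queue = occupancy_sum / 100_000
```

With load well below one, both the mean queue length and the loss ratio stay small; the paper derives the corresponding distributions analytically instead of by simulation.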

  8. Investigations of the sensitivity of a coronal mass ejection model (ENLIL) to solar input parameters

    DEFF Research Database (Denmark)

    Falkenberg, Thea Vilstrup; Vršnak, B.; Taktakishvili, A.;

    2010-01-01

    ...(CMEs), but in order to predict the caused effects, we need to be able to model their propagation from their origin in the solar corona to the point of interest, e.g., Earth. Many such models exist, but to understand the models in detail we must understand the primary input parameters. Here we investigate the parameter space of the ENLILv2.5b model using the CME event of 25 July 2004. ENLIL is a time‐dependent 3‐D MHD model that can simulate the propagation of cone‐shaped interplanetary coronal mass ejections (ICMEs) through the solar system. Excepting the cone parameters (radius, position, and initial velocity), all remaining parameters are varied, resulting in more than 20 runs investigated here. The output parameters considered are velocity, density, magnetic field strength, and temperature. We find that the largest effects on the model output are the input parameters of upper limit...

  9. Green Input-Output Model for Power Company Theoretical & Application Analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the theory of marginal opportunity cost, a green input-output table and models for a power company are put forward in this paper. For application purposes, analyses of integrated planning, cost, and pricing for the power company are also given.

  10. The economic impact of multifunctional agriculture in Dutch regions: An input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2013-01-01

    Multifunctional agriculture is a broad concept lacking a precise definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model was constructed for multifunctional agriculture

  11. The economic impact of multifunctional agriculture in The Netherlands: A regional input-output model

    NARCIS (Netherlands)

    Heringa, P.W.; Heide, van der C.M.; Heijman, W.J.M.

    2012-01-01

    Multifunctional agriculture is a broad concept lacking a precise and uniform definition. Moreover, little is known about the societal importance of multifunctional agriculture. This paper is an empirical attempt to fill this gap. To this end, an input-output model is constructed for multifunctional

  12. Characteristic operator functions for quantum input-plant-output models and coherent control

    Science.gov (United States)

    Gough, John E.

    2015-01-01

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance of the definition to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.

  13. Using a Joint-Input, Multi-Product Formulation to Improve Spatial Price Equilibrium Models

    OpenAIRE

    Bishop, Phillip M.; Pratt, James E.; Novakovic, Andrew M.

    1994-01-01

    Mathematical programming models, as typically formulated for international trade applications, may contain certain implied restrictions that lead to solutions that can be shown to be technically infeasible or, if feasible, not actually an equilibrium. An alternative formulation is presented which allows joint inputs and multi-products, with pure transshipment and product substitution forms of arbitrage.

  14. A neuromorphic model of motor overflow in focal hand dystonia due to correlated sensory input

    Science.gov (United States)

    Sohn, Won Joon; Niu, Chuanxin M.; Sanger, Terence D.

    2016-10-01

    Objective. Motor overflow is a common and frustrating symptom of dystonia, manifested as unintentional muscle contraction that occurs during an intended voluntary movement. Although it is suspected that motor overflow is due to cortical disorganization in some types of dystonia (e.g. focal hand dystonia), it remains elusive which mechanisms could initiate and, more importantly, perpetuate motor overflow. We hypothesize that distinct motor elements have low risk of motor overflow if their sensory inputs remain statistically independent. But when provided with correlated sensory inputs, pre-existing crosstalk among sensory projections will grow under spike-timing-dependent-plasticity (STDP) and eventually produce irreversible motor overflow. Approach. We emulated a simplified neuromuscular system comprising two anatomically distinct digital muscles innervated by two layers of spiking neurons with STDP. The synaptic connections between layers included crosstalk connections. The input neurons received either independent or correlated sensory drive during 4 days of continuous excitation. The emulation is critically enabled and accelerated by our neuromorphic hardware created in previous work. Main results. When driven by correlated sensory inputs, the crosstalk synapses gained weight and produced prominent motor overflow; the growth of crosstalk synapses resulted in enlarged sensory representation reflecting cortical reorganization. The overflow failed to recede when the inputs resumed their original uncorrelated statistics. In the control group, no motor overflow was observed. Significance. Although our model is a highly simplified and limited representation of the human sensorimotor system, it allows us to explain how correlated sensory input to anatomically distinct muscles is by itself sufficient to cause persistent and irreversible motor overflow. Further studies are needed to locate the source of correlation in sensory input.
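
A drastically simplified sketch of the hypothesized mechanism (two channels, pairwise nearest-step STDP, invented rates; nothing like the paper's neuromorphic hardware): when the two sensory inputs are correlated, spikes on the crosstalk pathway reliably precede the output spike and the crosstalk weight grows, while uncorrelated inputs leave it near zero.

```python
import random

random.seed(7)

def crosstalk_weight(correlated, steps=50_000, rate=0.05, a=0.01):
    """Track one crosstalk synapse under pairwise STDP (toy model).

    The output unit fires one step after its driving input (input 2);
    the crosstalk synapse comes from input 1. Pre-before-post within
    one step potentiates, post-before-pre depresses (equal amplitudes).
    """
    w = 0.0
    prev_pre = prev_in2 = prev_post = 0
    for _ in range(steps):
        in2 = 1 if random.random() < rate else 0
        pre = in2 if correlated else (1 if random.random() < rate else 0)
        post = prev_in2                 # output spikes one step after input 2
        if prev_pre and post:           # pre just before post -> potentiate
            w += a
        if prev_post and pre:           # post just before pre -> depress
            w -= a
        prev_pre, prev_in2, prev_post = pre, in2, post
    return w

w_correlated = crosstalk_weight(True)
w_uncorrelated = crosstalk_weight(False)
```

Under correlated drive, potentiation events occur at the shared spike rate while depression events occur only by chance, so the weight grows steadily; this is the asymmetry the abstract attributes to correlated sensory input.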

  15. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike.

  16. Resonance model for non-perturbative inputs to gluon distributions in the hadrons

    CERN Document Server

    Ermolaev, B I; Troyan, S I

    2015-01-01

    We construct non-perturbative inputs for the elastic gluon-hadron scattering amplitudes in the forward kinematic region for both polarized and non-polarized hadrons. We use the optical theorem to relate invariant scattering amplitudes to the gluon distributions in the hadrons. By analyzing the structure of the UV and IR divergences, we can determine theoretical conditions on the non-perturbative inputs, and use these to construct the results in a generalized Basic Factorization framework using a simple Resonance Model. These results can then be related to the K_T and Collinear Factorization expressions, and the corresponding constraints can be extracted.

  17. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and, the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from ''Case 5'' of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. ''Case 5'' of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to ''effects of the numerical calculation method rather than the physical model''[IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison

  19. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP......) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure, and a structure-exploiting interior-point method, respectively. The computational cost per iteration...... is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation...
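
The alternating scheme can be illustrated on a toy box-constrained quadratic program (a generic ADMM sketch with invented numbers, standing in for the paper's Riccati-based LQCP solver): an unconstrained quadratic step, a projection onto the input limits, and a dual update.

```python
import numpy as np

# min 0.5 u'Hu + g'u  subject to  lb <= u <= ub, solved by ADMM.
H = np.diag([2.0, 2.0])
g = np.array([-2.0, -10.0])                  # unconstrained minimizer: (1, 5)
lb = np.array([-1.0, -1.0])
ub = np.array([2.0, 2.0])

rho = 1.0
u = np.zeros(2)
z = np.zeros(2)
y = np.zeros(2)
K = np.linalg.inv(H + rho * np.eye(2))       # factor once, reuse each iteration

for _ in range(200):
    u = K @ (rho * (z - y) - g)              # unconstrained quadratic step
    z = np.clip(u + y, lb, ub)               # projection onto the input limits
    y = y + u - z                            # dual update (consensus residual)

solution = z                                 # converges to (1, 2)
```

The structure mirrors the paper's algorithm: the expensive subproblem (here a cached matrix solve, there an extended LQCP solved by Riccati iteration) is alternated with a cheap projection onto the input and input-rate limits.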

  20. Large uncertainty in soil carbon modelling related to carbon input calculation method

    Science.gov (United States)

    Keel, Sonja G.; Leifeld, Jens; Taghizadeh-Toosi, Arezoo; Olesen, Jørgen E.

    2016-04-01

    A model-based inventory for carbon (C) sinks and sources in agricultural soils is being established for Switzerland. As part of this project, five frequently used allometric equations that estimate soil C inputs based on measured yields are compared. To evaluate the different methods, we calculate soil C inputs for a long-term field trial in Switzerland. The DOK experiment (bio-Dynamic, bio-Organic, and conventional (German: Konventionell)) compares five different management systems that are applied to identical crop rotations. Average calculated soil C inputs vary largely between allometric equations and range from 1.6 t C ha-1 yr-1 to 2.6 t C ha-1 yr-1. Among the most important crops in Switzerland, the uncertainty is largest for barley (difference between highest and lowest estimate: 3.0 t C ha-1 yr-1). For the unfertilized control treatment, the estimated soil C inputs vary less between allometric equations than for the treatment that received mineral fertilizer and farmyard manure. Most likely, this is due to the higher yields in the latter treatment, i.e. the difference between methods might be amplified because yields differ more. To evaluate the influence of these allometric equations on soil C dynamics we simulate the DOK trial for the years 1977-2004 using the model C-TOOL (Taghizadeh-Toosi et al. 2014) and the five different soil C input calculation methods. Across all treatments, C-TOOL simulates a decrease in soil C in line with the experimental data. This decline, however, varies between allometric equations (-2.4 t C ha-1 to -6.3 t C ha-1 for the years 1977-2004) and has the same order of magnitude as the difference between treatments. In summary, the method to estimate soil C inputs is identified as a significant source of uncertainty in soil C modelling. Choosing an appropriate allometric equation to derive the input data is thus a critical step when setting up a model-based national soil C inventory. References: Taghizadeh-Toosi A et al. (2014) C

  1. Gaussian mixture model of heart rate variability.

    Directory of Open Access Journals (Sweden)

    Tommaso Costa

    Full Text Available Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart rate variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models, showing the plausibility of the Gaussian mixture parameters.
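
The modelling idea can be sketched with a plain EM fit of a three-component Gaussian mixture to a synthetic RR-interval-like sample (all parameter values are invented; the paper's HRV recordings are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for an RR-interval sample: three well-separated
# Gaussian regimes with invented weights, means (seconds) and spreads.
true_w = np.array([0.5, 0.3, 0.2])
true_mu = np.array([0.70, 0.95, 1.25])
true_sd = np.array([0.04, 0.05, 0.07])
comp = rng.choice(3, size=4000, p=true_w)
x = rng.normal(true_mu[comp], true_sd[comp])

# Plain EM for a K-component one-dimensional Gaussian mixture.
K = 3
w = np.full(K, 1.0 / K)
mu = np.array([0.7, 1.0, 1.3])                # initial guesses chosen by eye
sd = np.full(K, x.std())

for _ in range(300):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted moment updates
    nk = r.sum(axis=0)
    w = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

With three components the fitted weights, means and spreads recover the generating values, which is the kind of three-Gaussian description of stationary HRV statistics the paper reports.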

  2. Bayesian variable selection for latent class models.

    Science.gov (United States)

    Ghosh, Joyee; Herring, Amy H; Siega-Riz, Anna Maria

    2011-09-01

    In this article, we develop a latent class model with class probabilities that depend on subject-specific covariates. One of our major goals is to identify important predictors of latent classes. We consider methodology that allows estimation of latent classes while allowing for variable selection uncertainty. We propose a Bayesian variable selection approach and implement a stochastic search Gibbs sampler for posterior computation to obtain model-averaged estimates of quantities of interest such as marginal inclusion probabilities of predictors. Our methods are illustrated through simulation studies and application to data on weight gain during pregnancy, where it is of interest to identify important predictors of latent weight gain classes.

  3. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells.

    Science.gov (United States)

    Vidne, Michael; Ahmadian, Yashar; Shlens, Jonathon; Pillow, Jonathan W; Kulkarni, Jayant; Litke, Alan M; Chichilnisky, E J; Simoncelli, Eero; Paninski, Liam

    2012-08-01

    Synchronized spontaneous firing among retinal ganglion cells (RGCs), on timescales faster than visual responses, has been reported in many studies. Two candidate mechanisms of synchronized firing include direct coupling and shared noisy inputs. In neighboring parasol cells of primate retina, which exhibit rapid synchronized firing that has been studied extensively, recent experimental work indicates that direct electrical or synaptic coupling is weak, but shared synaptic input in the absence of modulated stimuli is strong. However, previous modeling efforts have not accounted for this aspect of firing in the parasol cell population. Here we develop a new model that incorporates the effects of common noise, and apply it to analyze the light responses and synchronized firing of a large, densely-sampled network of over 250 simultaneously recorded parasol cells. We use a generalized linear model in which the spike rate in each cell is determined by the linear combination of the spatio-temporally filtered visual input, the temporally filtered prior spikes of that cell, and unobserved sources representing common noise. The model accurately captures the statistical structure of the spike trains and the encoding of the visual stimulus, without the direct coupling assumption present in previous modeling work. Finally, we examined the problem of decoding the visual stimulus from the spike train given the estimated parameters. The common-noise model produces Bayesian decoding performance as accurate as that of a model with direct coupling, but with significantly more robustness to spike timing perturbations.

  4. Estimation of sectoral prices in the BNL energy input--output model

    Energy Technology Data Exchange (ETDEWEB)

    Tessmer, R.G. Jr.; Groncki, P.; Boyce, G.W. Jr.

    1977-12-01

    Value-added coefficients have been incorporated into Brookhaven's Energy Input-Output Model so that one can calculate the implicit price at which each sector sells its output to interindustry and final-demand purchasers. Certain adjustments to historical 1967 data are required because of the unique structure of the model. Procedures are also described for projecting energy-sector coefficients in future years that are consistent with exogenously specified energy prices.

  5. Global Behaviors of a Chemostat Model with Delayed Nutrient Recycling and Periodically Pulsed Input

    Directory of Open Access Journals (Sweden)

    Kai Wang

    2010-01-01

    Full Text Available The dynamic behaviors in a chemostat model with delayed nutrient recycling and periodically pulsed input are studied. By introducing a new analysis technique, the sufficient and necessary conditions on the permanence and extinction of the microorganisms are obtained. Furthermore, by using the Liapunov function method, the sufficient condition on the global attractivity of the model is established. Finally, an example is given to demonstrate the effectiveness of the results in this paper.
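
The flavour of a periodically pulsed input can be shown with a far simpler impulsive system than the paper's delayed-recycling model (illustrative values throughout): pure washout between instantaneous nutrient doses, whose post-pulse level converges to a periodic fixed point.

```python
import math

# Nutrient s is washed out at dilution rate D between pulses, and an
# instantaneous dose S0 is added every T time units. The post-pulse
# sequence s_{n+1} = s_n * exp(-D*T) + S0 converges to the periodic
# fixed point S0 / (1 - exp(-D*T)).
D, T, S0 = 0.5, 2.0, 1.0
decay = math.exp(-D * T)

s = 0.0
post_pulse = []
for _ in range(60):          # 60 pulse periods
    s = s * decay + S0       # exponential washout, then the pulse
    post_pulse.append(s)

fixed_point = S0 / (1.0 - decay)
```

Convergence to such a periodic solution is the elementary analogue of the global attractivity result the abstract establishes for the full model.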

  6. Use of Generalised Linear Models to quantify rainfall input uncertainty to hydrological modelling in the Upper Nile

    Science.gov (United States)

    Kigobe, M.; McIntyre, N.; Wheater, H. S.

    2009-04-01

    Interest in the application of climate and hydrological models in the Nile basin has risen in the recent past; however, the first drawback for most efforts has been the estimation of historic precipitation patterns. In this study we have applied stochastic models to infill and extend observed data sets to generate inputs for hydrological modelling. Several stochastic climate models within the Generalised Linear Modelling (GLM) framework have been applied to reproduce spatial and temporal patterns of precipitation in the Kyoga basin. A logistic regression model (describing rainfall occurrence) and a gamma distribution (describing rainfall amounts) are used to model rainfall patterns. The parameters of the models are functions of spatial and temporal covariates, and are fitted to the observed rainfall data using log-likelihood methods. Using the fitted model, multi-site rainfall sequences over the Kyoga basin are generated stochastically as a function of the dominant seasonal, climatic and geographic controls. The rainfall sequences generated are then used to drive a semi-distributed hydrological model using the Soil and Water Assessment Tool (SWAT). The sensitivity of runoff to uncertainty associated with missing precipitation records is thus tested. In an application to the Lake Kyoga catchment, the performance of the hydrological model depends strongly on the spatial representation of the input precipitation patterns, model parameterisation and the performance of the GLM stochastic models used to generate the input rainfall. The results obtained so far show that stochastic models can be developed for several climatic regions within the Kyoga basin and that, given identification of a stochastic rainfall model, input uncertainty due to precipitation can be usefully quantified. The ways forward for rainfall modelling and hydrological simulation in Uganda and the Upper Nile are discussed. 
Key Words: Precipitation, Generalised Linear Models, Input Uncertainty, Soil Water
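
The two-stage GLM structure described above can be sketched as a daily rainfall generator (all coefficients are invented, not fitted to Kyoga data): a logistic model for wet/dry occurrence with a seasonal covariate, and a gamma distribution with a log-linked mean for wet-day amounts.

```python
import numpy as np

rng = np.random.default_rng(11)

days = np.arange(365)
season = np.cos(2 * np.pi * (days - 60) / 365.0)   # hypothetical seasonal cycle

# Occurrence: logistic regression on the seasonal covariate.
b0, b1 = -0.4, 1.2                                 # invented coefficients
p_wet = 1.0 / (1.0 + np.exp(-(b0 + b1 * season)))

# Amounts: gamma-distributed wet-day totals with a log link on the mean.
shape = 0.8                                        # gamma shape parameter
mean_amt = np.exp(1.5 + 0.5 * season)              # mean wet-day rainfall (mm)

wet = rng.random(365) < p_wet                      # simulate occurrence
rain = np.where(wet, rng.gamma(shape, mean_amt / shape), 0.0)
```

A sequence like `rain` (simulated at many sites with spatial covariates, in the paper's case) is what would then be fed to the SWAT model to propagate precipitation uncertainty into runoff.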

  7. Modelling groundwater discharge areas using only digital elevation models as input data

    Energy Technology Data Exchange (ETDEWEB)

    Brydsten, Lars [Umeaa Univ. (Sweden). Dept. of Biology and Environmental Science

    2006-10-15

    Advanced geohydrological models require data on topography, soil distribution in three dimensions, vegetation, land use, and bedrock fracture zones. To model present geohydrological conditions, these factors can be gathered with different techniques. If a future geohydrological condition is modelled in an area with positive shore displacement (say 5,000 or 10,000 years), some of these factors can be difficult to measure. This could include the development of wetlands and the filling of lakes. If the goal of the model is to predict the distribution of groundwater recharge and discharge areas in the landscape, the most important factor is topography. The question is how much topography alone can explain the distribution of geohydrological objects in the landscape. A simplified description of the distribution of geohydrological objects in the landscape is that groundwater recharge areas occur at local elevation curvatures and discharge occurs in lakes, brooks, and low-lying slopes. Areas in-between these make up discharge areas during wet periods and recharge areas during dry periods. A model that could predict this pattern using only topography data needs to be able to predict high ridges and future lakes and brooks. This study uses GIS software with four different functions using digital elevation models as input data: geomorphometrical parameters to predict landscape ridges, basin fill for predicting lakes, flow accumulation for predicting future waterways, and topographical wetness indexes for dividing in-between areas based on degree of wetness. An area between the village of and Forsmarks' Nuclear Power Plant has been used to calibrate the model. The area is within the SKB 10-metre Elevation Model (DEM) and has a high-resolution orienteering map for wetlands. Wetlands are assumed to be groundwater discharge areas. Five hundred points were randomly distributed across the wetlands. These are potential discharge points. Model parameters were chosen with the

  8. Regional disaster impact analysis: comparing input-output and computable general equilibrium models

    Science.gov (United States)

    Koks, Elco E.; Carrera, Lorenzo; Jonkeren, Olaf; Aerts, Jeroen C. J. H.; Husby, Trond G.; Thissen, Mark; Standardi, Gabriele; Mysiak, Jaroslav

    2016-08-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches that combine one or both of them with noneconomic methods. While both IO and CGE models are widely used, they are mainly compared on theoretical grounds. Few studies have compared disaster impacts of different model types in a systematic way and for the same geographical area, using similar input data. Such a comparison is valuable from both a scientific and policy perspective, as the magnitude and the spatial distribution of the estimated losses are likely to vary with the chosen modelling approach (IO, CGE, or hybrid). Hence, regional disaster impact loss estimates resulting from a range of models facilitate better decisions and policy making. Therefore, this study analyses the economic consequences for a specific case study, using three regional disaster impact models: two hybrid IO models and a CGE model. The case study concerns two flood scenarios in the Po River basin in Italy. Modelling results indicate that the difference in estimated total (national) economic losses and the regional distribution of those losses may vary by up to a factor of 7 between the three models, depending on the type of recovery path. Total economic impact, comprising all Italian regions, is negative in all models, though.
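
The IO side of the comparison reduces to the Leontief system: total output x satisfies x = Ax + d, so x = (I - A)^(-1) d, and a disaster that cuts final demand d propagates through the same inverse. A back-of-envelope sketch with three hypothetical sectors and invented coefficients (no CGE behaviour, no recovery path):

```python
import numpy as np

# Invented interindustry coefficient matrix A and final demand d0.
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.05]])
d0 = np.array([100.0, 80.0, 50.0])       # pre-disaster final demand
L = np.linalg.inv(np.eye(3) - A)         # Leontief inverse

x0 = L @ d0                              # pre-disaster total output
d1 = d0 * np.array([1.00, 0.70, 0.90])   # flood cuts demand in sectors 2 and 3
x1 = L @ d1

direct_loss = (d0 - d1).sum()            # lost final demand only
total_loss = (x0 - x1).sum()             # direct plus indirect output loss
```

The total loss exceeds the direct loss because of interindustry multipliers; CGE models soften this through price adjustment and substitution, which is one source of the factor-of-7 spread the study reports.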

  9. Effect of correlated inputs on DO (dissolved oxygen) uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Brown, L.C.; Song, Q.

    1988-06-01

    Although uncertainty analysis has been discussed in recent water-quality-modeling literature, much of the work has assumed that all input variables and parameters are mutually independent. The objective of this paper is to evaluate the importance of correlation among the model inputs in the study of model-output uncertainty. The model used for demonstrating the influence of input-variable correlation is the Streeter-Phelps dissolved oxygen equation. The model forms the basis of many of the water-quality models currently in use and the relationships between model inputs and output-state variables are well understood.
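
The question can be posed in a small Monte Carlo sketch (parameter values and the correlation are illustrative, not taken from the paper): how does correlation between the deoxygenation rate kd and the reaeration rate kr change the uncertainty of the Streeter-Phelps oxygen deficit?

```python
import numpy as np

rng = np.random.default_rng(5)

def do_deficit(kd, kr, L0=10.0, D0=1.0, t=2.0):
    """Streeter-Phelps DO deficit at time t (days); L0 = initial BOD."""
    return (kd * L0 / (kr - kd) * (np.exp(-kd * t) - np.exp(-kr * t))
            + D0 * np.exp(-kr * t))

def deficit_sd(rho, n=50_000):
    """Sd of the deficit when (kd, kr) are bivariate normal with corr rho."""
    mean = np.array([0.3, 0.7])          # mean kd, kr (1/day), invented
    sd = np.array([0.05, 0.10])
    cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    kd, kr = rng.multivariate_normal(mean, cov, size=n).T
    return do_deficit(kd, kr).std()

sd_independent = deficit_sd(0.0)
sd_correlated = deficit_sd(0.8)
```

Because the deficit increases with kd but decreases with kr, positively correlated errors partially cancel, so ignoring the correlation (assuming independence) overstates the output uncertainty here; the paper makes the general point that such correlations should not be ignored.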

  10. Modelling variability in hospital bed occupancy.

    Science.gov (United States)

    Harrison, Gary W; Shafer, Andrea; Mackay, Mark

    2005-11-01

    A stochastic version of the Harrison-Millard multistage model of the flow of patients through a hospital division is developed in order to model correctly not only the average but also the variability in occupancy levels, since it is the variability that makes planning difficult and high percent occupancy levels increase the risk of frequent overflows. The model is fit to one year of data from the medical division of an acute care hospital in Adelaide, Australia. Admissions can be modeled as a Poisson process with rates varying by day of the week and by season. Methods are developed to use the entire annual occupancy profile to estimate transition rate parameters when admission rates are not constant and to estimate rate parameters that vary by day of the week and by season, which are necessary for the model variability to be as large as in the data. The final model matches well the mean, standard deviation and autocorrelation function of the occupancy data and also six months of data not used to estimate the parameters. Repeated simulations are used to construct percentiles of the daily occupancy distributions and thus identify ranges of normal fluctuations and those that are substantive deviations from the past, and also to investigate the trade-offs between frequency of overflows and the percent occupancy for both fixed and flexible bed allocations. Larger divisions can achieve more efficient occupancy levels than smaller ones with the same frequency of overflows. Seasonal variations are more significant than day-of-the-week variations and variable discharge rates are more significant than variable admission rates in contributing to overflows.
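
A toy discrete-time version of such an occupancy model (rates invented, far simpler than the multistage Harrison-Millard structure) already shows the key outputs: Poisson admissions varying by day of week, geometric lengths of stay, and occupancy percentiles from repeated daily simulation.

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented admission rates Mon..Sun and a discharge probability giving
# a mean stay of 8 days (each patient leaves next day with prob p_dis).
adm_rate = np.array([14, 13, 13, 12, 12, 8, 9], dtype=float)
p_dis = 0.125

days = 2 * 365
occ = np.empty(days)
n = 100                               # initial occupancy
for d in range(days):
    n += rng.poisson(adm_rate[d % 7])     # day-of-week Poisson admissions
    n -= rng.binomial(n, p_dis)           # independent geometric discharges
    occ[d] = n

occ = occ[365:]                       # discard the first year as burn-in
mean_occ = occ.mean()
p95 = np.percentile(occ, 95)          # planning threshold, cf. overflow risk
```

The gap between the mean and the 95th percentile is exactly the variability the paper argues must be modelled: planning to the mean alone would imply frequent overflows.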

  11. Consolidating soil carbon turnover models by improved estimates of belowground carbon input

    Science.gov (United States)

    Taghizadeh-Toosi, Arezoo; Christensen, Bent T.; Glendining, Margaret; Olesen, Jørgen E.

    2016-09-01

    World soil carbon (C) stocks are third only to those in the ocean and earth crust, and represent twice the amount currently present in the atmosphere. Therefore, any small change in the amount of soil organic C (SOC) may affect carbon dioxide (CO2) concentrations in the atmosphere. Dynamic models of SOC help reveal the interaction among soil carbon systems, climate and land management, and they are also frequently used to help assess SOC dynamics. Those models often use allometric functions to calculate soil C inputs in which the amount of C in both above and below ground crop residues are assumed to be proportional to crop harvest yield. Here we argue that simulating changes in SOC stocks based on C input that are proportional to crop yield is not supported by data from long-term experiments with measured SOC changes. Rather, there is evidence that root C inputs are largely independent of crop yield, but crop specific. We discuss implications of applying fixed belowground C input regardless of crop yield on agricultural greenhouse gas mitigation and accounting.

  12. Application of a Linear Input/Output Model to Tankless Water Heaters

    Energy Technology Data Exchange (ETDEWEB)

    Butcher T.; Schoenbauer, B.

    2011-12-31

    In this study, the applicability of a linear input/output model to gas-fired, tankless water heaters has been evaluated. This simple model assumes that the relationship between input and output, averaged over both active draw and idle periods, is linear. This approach is being applied to boilers in other studies and offers the potential to make a small number of simple measurements to obtain the model parameters. These parameters can then be used to predict performance under complex load patterns. Both condensing and non-condensing water heaters have been tested under a very wide range of load conditions. It is shown that this approach can be used to reproduce performance metrics, such as the energy factor, and can be used to evaluate the impacts of alternative draw patterns and conditions.
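A minimal sketch of fitting such a linear input/output model from a handful of averaged measurements; the synthetic data, standby loss, and efficiency values are assumptions for illustration, not the paper's measurements:

```python
def fit_linear_io(outputs, inputs):
    """Least-squares fit of the assumed linear relation
    input = a + b * output, where a acts like a standby loss
    and 1/b like a marginal efficiency."""
    n = len(outputs)
    mx = sum(outputs) / n
    my = sum(inputs) / n
    b = sum((x - mx) * (y - my) for x, y in zip(outputs, inputs)) / \
        sum((x - mx) ** 2 for x in outputs)
    a = my - b * mx
    return a, b

# synthetic water-heater data: 95% marginal efficiency, small idle loss
outs = [0.0, 5.0, 10.0, 20.0]           # average delivered power (kW)
ins_ = [0.2 + o / 0.95 for o in outs]   # average gas input (kW)
a, b = fit_linear_io(outs, ins_)
efficiency_at_8kw = 8.0 / (a + b * 8.0)
```

With exactly linear data the fit recovers the assumed parameters, and the fitted line then predicts efficiency under draw patterns that were never measured, which is the appeal of the approach.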

  13. Interpolation of climate variables and temperature modeling

    Science.gov (United States)

    Samanta, Sailesh; Pal, Dilip Kumar; Lohar, Debasish; Pal, Babita

    2012-01-01

    Geographic Information Systems (GIS) and modeling are becoming powerful tools in agricultural research and natural resource management. This study proposes an empirical methodology for modeling and mapping monthly and annual air temperature using remote sensing and GIS techniques. The study area is Gangetic West Bengal and its neighborhood in eastern India, where a number of weather systems occur throughout the year. Gangetic West Bengal is a region of strongly heterogeneous surface conditions with several weather disturbances. This paper also examines statistical approaches for interpolating climatic data over large regions, providing different interpolation techniques for climate variables used in agricultural research. Three interpolation approaches, namely inverse distance weighted averaging, thin-plate smoothing splines, and co-kriging, are evaluated for a 4° × 4° area covering the eastern part of India. Land use/land cover, soil texture, and a digital elevation model are used as the independent variables for temperature modeling. Multiple regression analysis with the standard method is used to add independent variables into the regression equation. Prediction of mean temperature for the monsoon season is better than for the winter season. Finally, standard deviation errors are evaluated by comparing the predicted and observed temperatures of the area. For further improvement, the inclusion of distance from the coastline and seasonal wind patterns as additional independent variables is recommended.
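Of the interpolation approaches evaluated, inverse distance weighted averaging is the simplest to sketch; the station coordinates and temperatures below are made up for illustration:

```python
def idw(known, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from a list of
    (xi, yi, zi) observations: weights are 1 / distance**power."""
    num = den = 0.0
    for xi, yi, zi in known:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi                 # exact hit: return the station value
        w = 1.0 / d2 ** (power / 2.0)
        num += w * zi
        den += w
    return num / den

stations = [(0, 0, 24.0), (1, 0, 26.0), (0, 1, 22.0)]  # (x, y, temp in °C)
t = idw(stations, 0.5, 0.5)
```

IDW honours the observed values exactly at station locations and stays within the observed range elsewhere, but unlike co-kriging it cannot exploit covariates such as elevation or land cover.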

  14. Modeling Variability in Immunocompetence and Immunoresponsiveness

    NARCIS (Netherlands)

    Ask, B.; Waaij, van der E.H.; Bishop, S.C.

    2008-01-01

    The purposes of this paper were to 1) develop a stochastic model that would reflect observed variation between animals and across ages in immunocompetence and responsiveness; and 2) illustrate consequences of this variability for the statistical power of genotype comparisons and selection. A stochas

  15. Analytical modeling of the input admittance of an electric drive for stability analysis purposes

    Science.gov (United States)

    Girinon, S.; Baumann, C.; Piquet, H.; Roux, N.

    2009-07-01

    Embedded HVDC electric distribution networks face difficult power-quality and stability issues. To help resolve these problems, this paper develops an analytical model of an electric drive. This self-contained model includes an inverter, its regulation loops and the PMSM. After comparing the model with its equivalent (abc) full model, the study focuses on frequency analysis. The association with an input filter allows the stability of the whole assembly to be assessed by means of the Routh-Hurwitz criterion.
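The Routh-Hurwitz criterion mentioned at the end of the abstract reduces to a short table construction. This is a generic sketch (it reports "unstable" on zero pivots rather than handling the special cases), not the paper's drive model:

```python
def routh_stable(coeffs):
    """Routh-Hurwitz test: True iff every root of
    coeffs[0]*s^n + coeffs[1]*s^(n-1) + ... + coeffs[n]
    lies strictly in the left half-plane. Assumes coeffs[0] > 0;
    zero-pivot special cases are not handled and count as unstable."""
    n = len(coeffs) - 1
    width = n // 2 + 2
    row0 = (list(coeffs[0::2]) + [0.0] * width)[:width]
    row1 = (list(coeffs[1::2]) + [0.0] * width)[:width]
    table = [row0, row1]
    for _ in range(n - 1):
        upper, lower = table[-2], table[-1]
        if lower[0] == 0.0:
            return False
        new = [(lower[0] * upper[j + 1] - upper[0] * lower[j + 1]) / lower[0]
               for j in range(width - 1)] + [0.0]
        table.append(new)
    # stable iff the first column of the Routh array is all positive
    return all(row[0] > 0.0 for row in table)
```

For example, (s+1)^3 = s^3 + 3s^2 + 3s + 1 passes the test, while s^3 + s^2 + 2s + 8 fails because a sign change appears in the first column.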

  16. New Results on Robust Model Predictive Control for Time-Delay Systems with Input Constraints

    Directory of Open Access Journals (Sweden)

    Qing Lu

    2014-01-01

    Full Text Available This paper investigates the problem of model predictive control for a class of nonlinear systems subject to state delays and input constraints. The time-varying delay is considered with both upper and lower bounds. A new model is proposed to approximate the delay, and the uncertainty is of polytopic type. For the state-feedback MPC design objective, we formulate an optimization problem. Under model transformation, a new model predictive controller is designed such that the robust asymptotic stability of the closed-loop system can be guaranteed. Finally, the applicability of the presented results is demonstrated by a practical example.

  17. A hippocampal cognitive prosthesis: multi-input, multi-output nonlinear modeling and VLSI implementation.

    Science.gov (United States)

    Berger, Theodore W; Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; LaCoss, Jeff; Wills, Jack; Hampson, Robert E; Deadwyler, Sam A; Granacki, John J

    2012-03-01

    This paper describes the development of a cognitive prosthesis designed to restore the ability to form new long-term memories typically lost after damage to the hippocampus. The animal model used is delayed nonmatch-to-sample (DNMS) behavior in the rat, and the "core" of the prosthesis is a biomimetic multi-input/multi-output (MIMO) nonlinear model that provides the capability for predicting spatio-temporal spike train output of hippocampus (CA1) based on spatio-temporal spike train inputs recorded presynaptically to CA1 (e.g., CA3). We demonstrate the capability of the MIMO model for highly accurate predictions of CA1 coded memories that can be made on a single-trial basis and in real-time. When hippocampal CA1 function is blocked and long-term memory formation is lost, successful DNMS behavior also is abolished. However, when MIMO model predictions are used to reinstate CA1 memory-related activity by driving spatio-temporal electrical stimulation of hippocampal output to mimic the patterns of activity observed in control conditions, successful DNMS behavior is restored. We also outline the design in very-large-scale integration for a hardware implementation of a 16-input, 16-output MIMO model, along with spike sorting, amplification, and other functions necessary for a total system, when coupled together with electrode arrays to record extracellularly from populations of hippocampal neurons, that can serve as a cognitive prosthesis in behaving animals.

  18. Forecasting Urban Water Demand via Machine Learning Methods Coupled with a Bootstrap Rank-Ordered Conditional Mutual Information Input Variable Selection Method

    Science.gov (United States)

    Adamowski, J. F.; Quilty, J.; Khalil, B.; Rathinasamy, M.

    2014-12-01

    This paper explores forecasting short-term urban water demand (UWD) (using only historical records) through a variety of machine learning techniques coupled with a novel input variable selection (IVS) procedure. The proposed IVS technique, termed bootstrap rank-ordered conditional mutual information for real-valued signals (brCMIr), is multivariate, nonlinear, nonparametric, and probabilistic. The brCMIr method was tested in a case study using water demand time series for two urban water supply system pressure zones in Ottawa, Canada, to select the most important historical records for use with each machine learning technique in order to generate forecasts of average and peak UWD for the respective pressure zones at lead times of 1, 3, and 7 days ahead. All lead time forecasts are computed using Artificial Neural Networks (ANN) as the base model, and are compared with Least Squares Support Vector Regression (LSSVR), as well as a novel machine learning method for UWD forecasting: the Extreme Learning Machine (ELM). Results from one-way analysis of variance (ANOVA) and Tukey Honest Significant Difference (HSD) tests indicate that the LSSVR and ELM models are the best machine learning techniques to pair with brCMIr. However, ELM has significant computational advantages over LSSVR (and ANN) and provides a new and promising technique to explore in UWD forecasting.
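A crude, binned plug-in estimate of mutual information conveys the idea behind ranking candidate inputs by shared information. The real brCMIr procedure is bootstrap-based, conditional, and far more careful than this stand-in, which is purely illustrative:

```python
import math
import random
from collections import Counter

def mutual_info(xs, ys, bins=8):
    """Plug-in mutual information (nats) between two real-valued series
    after equal-width binning: sum p(a,b) log(p(a,b) / (p(a) p(b)))."""
    def binned(values):
        lo, hi = min(values), max(values)
        width = (hi - lo) / bins or 1.0
        return [min(int((v - lo) / width), bins - 1) for v in values]
    bx, by = binned(xs), binned(ys)
    n = len(xs)
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

# candidate input xs carries information about ys; zs is an irrelevant input
rng = random.Random(0)
xs = [rng.random() for _ in range(2000)]
zs = [rng.random() for _ in range(2000)]
ys = [x + 0.2 * rng.random() for x in xs]
```

Ranking candidate lags by such a score, then conditioning on the inputs already selected so redundant lags are skipped, is the general shape of IVS procedures like the one described above.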

  19. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation.

    Science.gov (United States)

    Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S

    2017-10-01

    The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
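The majority voting strategy used to integrate the ensemble members can be sketched directly; the three classifiers and their label vectors below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine label predictions from several classifiers (one list per
    classifier, aligned by sample) into a single majority-vote label list."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# three hypothetical feature-subset classifiers voting on four lesions
# (1 = melanoma, 0 = benign); each inner list is one classifier's output
ensemble = [[1, 0, 1, 0],
            [1, 1, 0, 0],
            [0, 1, 1, 0]]
final = majority_vote(ensemble)   # one fused label per lesion
```

The ensemble only helps when the members disagree in useful ways, which is why the paper manipulates the input feature subsets to generate that diversity.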

  20. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.

  1. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

    Adam ePonzi

    2012-03-01

    Full Text Available The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent responses, which could be utilized by the animal in behavior.

  2. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.

  3. Stable isotopes and Digital Elevation Models to study nutrient inputs in high-Arctic lakes

    Science.gov (United States)

    Calizza, Edoardo; Rossi, David; Costantini, Maria Letizia; Careddu, Giulio; Rossi, Loreto

    2016-04-01

    Ice cover, run-off from the watershed, aquatic and terrestrial primary productivity, and guano deposition from birds are key factors controlling nutrient and organic matter inputs in high-Arctic lakes. All these factors are expected to be significantly affected by climate change. Quantifying these controls is a key baseline step to understand what combination of factors subtends the biological productivity in Arctic lakes and will drive their ecological response to environmental change. Based on Digital Elevation Models, drainage maps, and C and N elemental content and stable isotope analysis in sediments, aquatic vegetation and a dominant macroinvertebrate species (Lepidurus arcticus Pallas 1973) belonging to Tvillingvatnet, Storvatnet and Kolhamna, three lakes located in North Spitsbergen (Svalbard), we propose an integrated approach for the analysis of (i) nutrient and organic matter inputs in lakes; (ii) the role of catchment hydro-geomorphology in determining inter-lake differences in the isotopic composition of sediments; and (iii) effects of diverse nutrient inputs on the isotopic niche of Lepidurus arcticus. Given its high run-off and large catchment, organic deposits in Tvillingvatnet were dominated by terrestrial inputs, whereas inputs were mainly of aquatic origin in Storvatnet, a lowland lake with low potential run-off. In Kolhamna, organic deposits seem to be dominated by inputs from birds, which actually colonise the area. Isotopic signatures were similar between samples within each lake, representing precise tracers for studies on the effect of climate change on biogeochemical cycles in lakes. The isotopic niche of L. arcticus reflected differences in sediments between lakes, suggesting a bottom-up effect of each lake's hydro-geomorphology on the nutrients assimilated by this species. The presented approach proved to be an effective research pathway for the identification of factors subtending to nutrient and organic matter inputs and transfer

  4. Estimating input parameters from intracellular recordings in the Feller neuronal model

    Science.gov (United States)

    Bibbona, Enrico; Lansky, Petr; Sirovich, Roberta

    2010-03-01

    We study the estimation of the input parameters in a Feller neuronal model from a trajectory of the membrane potential sampled at discrete times. These input parameters are identified with the drift and the infinitesimal variance of the underlying stochastic diffusion process with multiplicative noise. The state space of the process is restricted from below by an inaccessible boundary. Further, the model is characterized by the presence of an absorbing threshold, the first hitting of which determines the length of each trajectory and which constrains the state space from above. We compare, both in the presence and in the absence of the absorbing threshold, the efficiency of different known estimators. In addition, we propose an estimator for the drift term, which is proved to be more efficient than the others, at least in the explored range of the parameters. The presence of the threshold makes the estimates of the drift term biased, and two methods to correct it are proposed.
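A minimal sketch of the setting: simulate a Feller-type path by Euler-Maruyama and recover the drift input parameter with a simple moment estimator. The parameter values, the particular estimator, and the assumption that the time constant tau is known are all illustrative simplifications, and no absorbing threshold is modelled here:

```python
import math
import random

def simulate_feller(mu, tau, sigma, x0=10.0, dt=0.001, n=200000, seed=3):
    """Euler-Maruyama path of dX = (mu - X/tau) dt + sigma*sqrt(X) dW,
    clipped just above zero to respect the inaccessible lower boundary."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    sdt = math.sqrt(dt)
    for _ in range(n):
        x += (mu - x / tau) * dt \
             + sigma * math.sqrt(max(x, 0.0)) * sdt * rng.gauss(0.0, 1.0)
        x = max(x, 1e-9)
        path.append(x)
    return path

def estimate_mu(path, tau, dt=0.001):
    """Moment estimator for the input parameter mu (tau assumed known):
    E[dX | X] = (mu - X/tau) dt, so mu ~ mean(dX/dt + X/tau)."""
    terms = [(path[i + 1] - path[i]) / dt + path[i] / tau
             for i in range(len(path) - 1)]
    return sum(terms) / len(terms)
```

In the paper's problem the absorbing threshold truncates each trajectory, which is exactly what biases naive estimators like this one and motivates the proposed corrections.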

  5. A diffusion model for drying of a heat sensitive solid under multiple heat input modes.

    Science.gov (United States)

    Sun, Lan; Islam, Md Raisul; Ho, J C; Mujumdar, A S

    2005-09-01

    To obtain optimal drying kinetics as well as quality of the dried product in a batch dryer, the energy required may be supplied by combining different modes of heat transfer. In this work, using potato slice as a model heat sensitive drying object, experimental studies were conducted using a batch heat pump dryer designed to permit simultaneous application of conduction and radiation heat. Four heat input schemes were compared: pure convection, radiation-coupled convection, conduction-coupled convection and radiation-conduction-coupled convection. A two-dimensional drying model was developed assuming the drying rate to be controlled by liquid water diffusion. Both drying rates and temperatures within the slab during drying under all these four heat input schemes showed good accord with measurements. Radiation-coupled convection is the recommended heat transfer scheme from the viewpoint of high drying rate and low energy consumption.

  6. On the redistribution of existing inputs using the spherical frontier dea model

    Directory of Open Access Journals (Sweden)

    José Virgilio Guedes de Avellar

    2010-04-01

    Full Text Available The Spherical Frontier DEA Model (SFM) (Avellar et al., 2007) was developed to be used when one wants to fairly distribute a new and fixed input to a group of Decision Making Units (DMUs). SFM's basic idea is to distribute this new and fixed input in such a way that every DMU will be placed on an efficiency frontier with a spherical shape. We use SFM to analyze the problems that appear when one wants to redistribute an already existing input among a group of DMUs such that the total sum of this input remains constant. We also analyze the case in which this total sum may vary.

  7. Better temperature predictions in geothermal modelling by improved quality of input parameters

    DEFF Research Database (Denmark)

    Fuchs, Sven; Bording, Thue Sylvester; Balling, N.

    2015-01-01

    Thermal modelling is used to examine the subsurface temperature field and geothermal conditions at various scales (e.g. sedimentary basins, deep crust) and in the framework of different problem settings (e.g. scientific or industrial use). In such models, knowledge of rock thermal properties ... region (model dimension: 135 x 115 km, depth: 20 km). Results clearly show that (i) the use of location-specific well-log derived rock thermal properties and (ii) the consideration of laterally varying input data (reflecting changes of thermofacies in the project area) significantly improves...

  8. A New Ensemble of Perturbed-Input-Parameter Simulations by the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Covey, C; Brandon, S; Bremer, P T; Domyancis, D; Garaizar, X; Johannesson, G; Klein, R; Klein, S A; Lucas, D D; Tannahill, J; Zhang, Y

    2011-10-27

    Uncertainty quantification (UQ) is a fundamental challenge in the numerical simulation of Earth's weather and climate, and other complex systems. It entails much more than attaching defensible error bars to predictions: in particular it includes assessing low-probability but high-consequence events. To achieve these goals with models containing a large number of uncertain input parameters, structural uncertainties, etc., raw computational power is needed. An automated, self-adapting search of the possible model configurations is also useful. Our UQ initiative at the Lawrence Livermore National Laboratory has produced the most extensive set to date of simulations from the US Community Atmosphere Model. We are examining output from about 3,000 twelve-year climate simulations generated with a specialized UQ software framework, and assessing the model's accuracy as a function of 21 to 28 uncertain input parameter values. Most of the input parameters we vary are related to the boundary layer, clouds, and other sub-grid scale processes. Our simulations prescribe surface boundary conditions (sea surface temperatures and sea ice amounts) to match recent observations. Fully searching this 21+ dimensional space is impossible, but sensitivity and ranking algorithms can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. Bayesian statistical constraints, employing a variety of climate observations as metrics, also seem promising. Observational constraints will be important in the next step of our project, which will compute sea surface temperatures and sea ice interactively, and will study climate change due to increasing atmospheric carbon dioxide.

  9. Minimal state space realisation of continuous-time linear time-variant input-output models

    Science.gov (United States)

    Goos, J.; Pintelon, R.

    2016-04-01

    In the linear time-invariant (LTI) framework, the transformation from an input-output equation into state space representation is well understood. Several canonical forms exist that realise the same dynamic behaviour. If the coefficients become time-varying, however, the LTI transformation no longer holds. We prove by induction that there exists a closed-form expression for the observability canonical state space model, using binomial coefficients.
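For the LTI baseline the abstract starts from, the passage from an input-output equation to a state-space realisation can be sketched with a second-order companion form. The coefficients and the forward-Euler simulation are illustrative; the paper's contribution concerns the time-varying observability form, which is not shown here:

```python
def simulate_companion(a0, a1, b0, u, dt):
    """Forward-Euler simulation of y'' + a1*y' + a0*y = b0*u via the
    companion-form state-space realisation
        x' = A x + B u,   y = C x,
    with A = [[0, 1], [-a0, -a1]], B = [0, b0]^T, C = [1, 0],
    and states x1 = y, x2 = y'."""
    x1 = x2 = 0.0
    ys = []
    for uk in u:
        ys.append(x1)
        x1, x2 = (x1 + dt * x2,
                  x2 + dt * (-a0 * x1 - a1 * x2 + b0 * uk))
    return ys

# unit-step response of a hypothetical system; the DC gain should be b0/a0
step = simulate_companion(a0=4.0, a1=2.0, b0=8.0, u=[1.0] * 20000, dt=0.001)
```

The example settles at the DC gain b0/a0 = 2 with the overshoot expected of a damping ratio of 0.5; in the time-varying case the companion matrices pick up extra derivative terms, which is where the binomial-coefficient expression of the paper enters.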

  10. Integrated Flight Mechanic and Aeroelastic Modelling and Control of a Flexible Aircraft Considering Multidimensional Gust Input

    Science.gov (United States)

    2000-05-01

    INTEGRATED FLIGHT MECHANIC AND AEROELASTIC MODELLING AND CONTROL OF A FLEXIBLE AIRCRAFT CONSIDERING MULTIDIMENSIONAL GUST INPUT. Patrick Teufel, Martin Hanel. (Record garbled in extraction; the surviving fragments mention a matrix of two-dimensional spectrum functions for the lateral separation distance, developed by Eichenbaum and described by Bessel functions, and cite Journal of Aircraft, Vol. 30, No. 5, Sept.-Oct. 1993.)

  11. The Role of Spatio-Temporal Resolution of Rainfall Inputs on a Landscape Evolution Model

    Science.gov (United States)

    Skinner, C. J.; Coulthard, T. J.

    2015-12-01

    Landscape Evolution Models are important experimental tools for understanding the long-term development of landscapes. Designed to simulate timescales ranging from decades to millennia, they are usually driven by precipitation inputs that are lumped, both spatially across the drainage basin, and temporally to daily, monthly, or even annual rates. This is based on an assumption that the spatial and temporal heterogeneity of the rainfall will equalise over the long timescales simulated. However, recent studies (Coulthard et al., 2012) have shown that such models are sensitive to event magnitudes, with exponential increases in sediment yields generated by linear increases in flood event size at a basin scale. This suggests that there may be a sensitivity to the spatial and temporal scales of rainfall used to drive such models. This study uses the CAESAR-Lisflood Landscape Evolution Model to investigate the impact of spatial and temporal resolution of rainfall input on model outputs. The sediment response to a range of temporal (15 min to daily) and spatial (5 km to 50 km) resolutions over three different drainage basin sizes was observed. The results showed the model was sensitive to both, generating up to 100% differences in modelled sediment yields with smaller spatial and temporal resolution precipitation. Larger drainage basins also showed a greater sensitivity to both spatial and temporal resolution. Furthermore, analysis of the distribution of erosion and deposition patterns suggested that small temporal and spatial resolution inputs increased erosion in drainage basin headwaters and deposition in the valley floors. Both of these findings may have implications for existing models and approaches for simulating landscape development.

  12. Evolution of accretion disc flow in cataclysmic variables. 3. Outburst properties of constant and uniform-α model discs

    Energy Technology Data Exchange (ETDEWEB)

    Lin, D.N.C.; Faulkner, J. (Lick Observatory, Santa Cruz, CA (USA); California Univ., Santa Cruz (USA). Board of Studies in Astronomy and Astrophysics); Papaloizou, J. (Queen Mary Coll., London (UK). Dept. of Applied Mathematics)

    1985-01-01

    The investigation of accretion disc models relevant to cataclysmic-variable systems is continued. This paper examines the stability and evolution of some simple accretion disc models in which the viscosity is prescribed by an ad hoc uniform-α model. It is primarily concerned with systems in which the mass-input rate from the secondary to the disc around the primary is assumed to be constant. However, initial calculations with variable mass-input rates are also performed. The time-dependent visual magnitude light-curves are constructed for cataclysmic binaries with a range of disc size, primary mass, mass-input rate, and magnitude of viscosity.

  13. Analyzing the sensitivity of a flood risk assessment model towards its input data

    Science.gov (United States)

    Glas, Hanne; Deruyter, Greet; De Maeyer, Philippe; Mandal, Arpita; James-Williamson, Sherene

    2016-11-01

    The Small Island Developing States are characterized by an unstable economy and low-lying, densely populated cities, resulting in a high vulnerability to natural hazards. Flooding affects more people than any other hazard. To limit the consequences of these hazards, adequate risk assessments are indispensable. Satisfactory input data for these assessments are hard to acquire, especially in developing countries. Therefore, in this study, a methodology was developed and evaluated to test the sensitivity of a flood model towards its input data in order to determine a minimum set of indispensable data. In a first step, a flood damage assessment model was created for the case study of Annotto Bay, Jamaica. This model generates a damage map for the region based on the flood extent map of the 2001 inundations caused by Tropical Storm Michelle. Three damages were taken into account: building, road and crop damage. Twelve scenarios were generated, each with a different combination of input data, testing one of the three damage calculations for its sensitivity. One main conclusion was that population density, in combination with an average number of people per household, is a good parameter in determining the building damage when exact building locations are unknown. Furthermore, the importance of roads for an accurate visual result was demonstrated.

  14. [Bivariate statistical model for calculating phosphorus input loads to the river from point and nonpoint sources].

    Science.gov (United States)

    Chen, Ding-Jiang; Sun, Si-Yang; Jia, Ying-Na; Chen, Jia-Bo; Lü, Jun

    2013-01-01

Based on the hydrological differences between point source (PS) and nonpoint source (NPS) pollution processes and the major mechanisms governing in-stream retention, a bivariate statistical model was developed relating river phosphorus load to river water flow rate and temperature. Using four model coefficients calibrated and validated against in-stream monitoring data, monthly phosphorus input loads to the river from PS and NPS can be easily determined by the model. Compared to current hydrological methods, this model takes the in-stream retention process and the upstream inflow term into consideration; it thus improves knowledge of phosphorus pollution processes and can meet the requirements of both district-based and watershed-based water quality management patterns. Using this model, the total phosphorus (TP) input load to the Changle River in Zhejiang Province was calculated. Results indicated that the annual total TP input load was (54.6 ± 11.9) t·a⁻¹ in 2004-2009, with upstream water inflow, PS and NPS contributing 5% ± 1%, 12% ± 3% and 83% ± 3%, respectively. The cumulative NPS TP input load during the high flow periods in summer (i.e., June, July, August and September) accounted for 50% ± 9% of the annual amount, increasing the risk of algal blooms in downstream water bodies. The annual in-stream TP retention load was (4.5 ± 0.1) t·a⁻¹, or 9% ± 2% of the total input load. The cumulative in-stream TP retention load during the summer periods (i.e., June-September) accounted for 55% ± 2% of the annual amount, indicating that in-stream retention plays an important role in seasonal TP transport and transformation processes. This bivariate statistical model only requires commonly available in-stream monitoring data (i.e., river phosphorus load, water flow rate and temperature), with no special software knowledge required; it thus offers researchers and managers a cost-effective tool for
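The abstract does not give the model's functional form, so the sketch below is a hypothetical four-coefficient analogue of the idea: point-source input is roughly flow-independent, nonpoint-source input scales with flow as a power law, and in-stream retention grows with water temperature. Coefficients `a`, `b`, `c`, `k` and all inputs are assumptions for illustration.

```python
import math

def river_p_load(flow, temp_c, a, b, c, k):
    """Monthly P load (t) reaching the outlet: (PS + NPS) minus retention.
    a: constant PS input; b, c: NPS power-law coefficients;
    k: temperature sensitivity of in-stream retention. All hypothetical."""
    ps = a                                   # point source: flow-independent
    nps = b * flow ** c                      # nonpoint source: flow-driven
    retained = 1.0 - math.exp(-k * temp_c)   # fraction retained in-stream
    return (ps + nps) * (1.0 - retained)

# Higher summer flow and temperature raise NPS input but also retention.
load = river_p_load(flow=120.0, temp_c=25.0, a=0.5, b=0.02, c=1.3, k=0.01)
```

A real application would fit the four coefficients to monthly monitoring data, as the paper does.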

  15. Dispersion modeling of accidental releases of toxic gases - Sensitivity study and optimization of the meteorological input

    Science.gov (United States)

    Baumann-Stanzer, K.; Stenzel, S.

    2009-04-01

Several air dispersion models are available for prediction and simulation of the hazard areas associated with accidental releases of toxic gases. Most of the model packages (commercial or free of charge) include a chemical database, an intuitive graphical user interface (GUI) and automated graphical output for effective presentation of results. The models are designed especially for analyzing different accidental toxic release scenarios ("worst-case scenarios"), preparing emergency response plans and optimal countermeasures, as well as for real-time risk assessment and management. Uncertainties in the meteorological input, together with incorrect estimates of the source, play a critical role in the model results. The research project RETOMOD (reference scenarios calculations for toxic gas releases - model systems and their utility for the fire brigade) was conducted by the Central Institute for Meteorology and Geodynamics (ZAMG) in cooperation with the Vienna fire brigade, OMV Refining & Marketing GmbH and Synex Ries & Greßlehner GmbH. RETOMOD was funded by the KIRAS safety research program at the Austrian Ministry of Transport, Innovation and Technology (www.kiras.at). The main tasks of this project were: 1. Sensitivity study and optimization of the meteorological input for modeling of the hazard areas (human exposure) during accidental toxic releases. 2. Comparison of several model packages (based on reference scenarios) in order to estimate their utility for fire brigades. This presentation gives a short introduction to the project and presents the results of task 1 (meteorological input). The results of task 2 are presented by Stenzel and Baumann-Stanzer in this session. For the aims of this project, the observation-based analysis and forecasting system INCA (Integrated Nowcasting through Comprehensive Analysis), developed at the Central Institute for Meteorology and Geodynamics (ZAMG), was used. INCA data were calculated with 1 km horizontal resolution and

  16. A time-resolved model of the mesospheric Na layer: constraints on the meteor input function

    Directory of Open Access Journals (Sweden)

    J. M. C. Plane

    2004-01-01

Full Text Available A time-resolved model of the Na layer in the mesosphere/lower thermosphere region is described, in which the continuity equations for the major sodium species Na, Na+ and NaHCO3 are solved explicitly, and the other short-lived species are treated in steady state. It is shown that the diurnal variation of the Na layer can only be modelled satisfactorily if sodium species are permanently removed below about 85 km, both through the dimerization of NaHCO3 and the uptake of sodium species on meteoric smoke particles that are assumed to have formed from the recondensation of vaporized meteoroids. When the sensitivity of the Na layer to the meteoroid input function is considered, an inconsistent picture emerges. The ratio of the column abundance of Na+ to Na is shown to increase strongly with the average meteoroid velocity, because the Na is injected at higher altitudes. Comparison with a limited set of Na+ measurements indicates that the average meteoroid velocity is probably less than about 25 km s⁻¹, in agreement with velocity estimates from conventional meteor radars, and considerably slower than recent observations made by wide-aperture incoherent scatter radars. The Na column abundance is shown to be very sensitive to the meteoroid mass input rate, and to the rate of vertical transport by eddy diffusion. Although the magnitude of the eddy diffusion coefficient in the 80–90 km region is uncertain, there is a consensus between recent models using parameterisations of gravity wave momentum deposition that the average value is less than 3×10⁵ cm² s⁻¹. This requires that the global meteoric mass input rate be less than about 20 t d⁻¹, which is closest to estimates from incoherent scatter radar observations. Finally, the diurnal variation in the meteoroid input rate only slightly perturbs the Na layer, because the residence time of Na in the layer is several days, so diurnal effects are effectively averaged out.

  17. An Integrated Hydrologic Bayesian Multi-Model Combination Framework: Confronting Input, parameter and model structural uncertainty in Hydrologic Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Sorooshian, S

    2006-05-05

This paper presents a new technique, the Integrated Bayesian Uncertainty Estimator (IBUNE), to account explicitly for the major uncertainties of hydrologic rainfall-runoff predictions. The uncertainties from the input (forcing) data, mainly the precipitation observations, and from the model parameters are reduced through a Markov chain Monte Carlo (MCMC) scheme named the Shuffled Complex Evolution Metropolis (SCEM) algorithm, which has been extended to include a precipitation error model. Afterwards, the Bayesian Model Averaging (BMA) scheme is employed to further improve the prediction skill and uncertainty estimation using multiple model outputs. A series of case studies using three rainfall-runoff models to predict the streamflow in the Leaf River basin, Mississippi, are used to examine the necessity and usefulness of this technique. The results suggest that ignoring either input forcing errors or model structural uncertainty leads to unrealistic model simulations, with associated uncertainty bounds that do not consistently capture and represent the real-world behavior of the watershed.
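The BMA combination step described above can be sketched as a weighted mixture of model predictions. In IBUNE the weights and per-model variances come from an estimation procedure; here they are fixed, assumed values for illustration.

```python
import numpy as np

def bma_combine(predictions, weights, variances):
    """Combine model simulations into a predictive mean and variance.
    predictions: (n_models, n_times); weights sum to 1; variances per model."""
    w = np.asarray(weights, float)
    preds = np.asarray(predictions, float)
    mean = w @ preds                          # weighted predictive mean
    # law of total variance: between-model spread plus within-model variance
    between = w @ (preds - mean) ** 2
    within = w @ np.asarray(variances, float)
    return mean, between + within

preds = [[10.0, 12.0], [14.0, 11.0], [12.0, 13.0]]   # three model streamflows
mean, var = bma_combine(preds, weights=[0.5, 0.3, 0.2],
                        variances=[1.0, 2.0, 1.5])
```

The between-model term is what lets the combined uncertainty bound widen where the models disagree, which is the behavior the abstract says single-source approaches fail to capture.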

  18. Simple nonlinear models suggest variable star universality

    CERN Document Server

    Lindner, John F; Kia, Behnam; Hippke, Michael; Learned, John G; Ditto, William L

    2015-01-01

    Dramatically improved data from observatories like the CoRoT and Kepler spacecraft have recently facilitated nonlinear time series analysis and phenomenological modeling of variable stars, including the search for strange (aka fractal) or chaotic dynamics. We recently argued [Lindner et al., Phys. Rev. Lett. 114 (2015) 054101] that the Kepler data includes "golden" stars, whose luminosities vary quasiperiodically with two frequencies nearly in the golden ratio, and whose secondary frequencies exhibit power-law scaling with exponent near -1.5, suggesting strange nonchaotic dynamics and singular spectra. Here we use a series of phenomenological models to make plausible the connection between golden stars and fractal spectra. We thereby suggest that at least some features of variable star dynamics reflect universal nonlinear phenomena common to even simple systems.

  19. Dissecting magnetar variability with Bayesian hierarchical models

    CERN Document Server

    Huppenkothen, D; Hogg, D W; Murray, I; Frean, M; Elenbaas, C; Watts, A L; Levin, Y; van der Horst, A J; Kouveliotou, C

    2015-01-01

    Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behaviour, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favoured models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture afte...
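The decomposition idea above, a burst modelled as a superposition of spike-like components with a simple functional form, can be sketched as follows. The fast-rise/exponential-decay shape and all parameters are assumptions for illustration; in the actual hierarchical model the number of spikes is itself inferred.

```python
import numpy as np

def spike(t, t0, amp, rise, decay):
    """One spike: exponential rise before t0, exponential decay after."""
    return np.where(t < t0,
                    amp * np.exp((t - t0) / rise),
                    amp * np.exp(-(t - t0) / decay))

def burst_model(t, spike_params, background):
    """Superposition of spikes on a constant background."""
    return background + sum(spike(t, *p) for p in spike_params)

t = np.linspace(0.0, 1.0, 1000)
params = [(0.2, 5.0, 0.01, 0.05),   # (t0, amplitude, rise, decay) -- assumed
          (0.35, 3.0, 0.02, 0.1)]
counts = burst_model(t, params, background=0.5)
```

Fitting would then amount to inferring the spike parameters, and their number, from an observed light curve.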

  20. Estimating net present value variability for deterministic models

    NARCIS (Netherlands)

    van Groenendaal, W.J.H.

    1995-01-01

    For decision makers the variability in the net present value (NPV) of an investment project is an indication of the project's risk. So-called risk analysis is one way to estimate this variability. However, risk analysis requires knowledge about the stochastic character of the inputs. For large, long
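The risk analysis the abstract refers to can be sketched as Monte Carlo propagation: sample the stochastic inputs, run the deterministic NPV model for each draw, and summarize the spread. The cash-flow distribution below is an assumption for illustration.

```python
import random

def npv(cash_flows, rate):
    """Net present value of a cash-flow sequence (t = 0, 1, 2, ...)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def npv_distribution(n_draws, rate, seed=42):
    """Monte Carlo NPV samples; yearly revenues assumed N(300, 50^2)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_draws):
        flows = [-1000.0] + [rng.gauss(300.0, 50.0) for _ in range(5)]
        samples.append(npv(flows, rate))
    return samples

sims = npv_distribution(10_000, rate=0.08)
mean = sum(sims) / len(sims)
sd = (sum((x - mean) ** 2 for x in sims) / len(sims)) ** 0.5
```

The standard deviation `sd` is the variability measure a decision maker would read as project risk; the catch the abstract points at is that it is only as good as the assumed input distributions.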

  1. Long-term solar UV radiation reconstructed by ANN modelling with emphasis on spatial characteristics of input data

    Science.gov (United States)

    Feister, U.; Junk, J.; Woldt, M.; Bais, A.; Helbig, A.; Janouch, M.; Josefsson, W.; Kazantzidis, A.; Lindfors, A.; den Outer, P. N.; Slaper, H.

    2008-06-01

Artificial Neural Networks (ANN) are efficient tools to derive solar UV radiation from measured meteorological parameters such as global radiation, aerosol optical depths and atmospheric column ozone. The ANN model has been tested with different combinations of data from the two sites Potsdam and Lindenberg, and used to reconstruct solar UV radiation at eight European sites more than 100 years into the past. Special emphasis will be given to the discussion of small-scale characteristics of input data to the ANN model. Annual totals of UV radiation derived from reconstructed daily UV values reflect interannual variations and long-term patterns that are compatible with the variability and changes of the measured input data, in particular global dimming until about 1980/1990, subsequent global brightening, volcanic eruption effects such as that of Mt. Pinatubo, and the long-term ozone decline since the 1970s. Patterns of annual erythemal UV radiation are very similar at sites located at latitudes close to each other, but different patterns occur between UV radiation at sites in different latitude regions.

  2. Long-term solar UV radiation reconstructed by ANN modelling with emphasis on spatial characteristics of input data

    Directory of Open Access Journals (Sweden)

    U. Feister

    2008-06-01

Full Text Available Artificial Neural Networks (ANN) are efficient tools to derive solar UV radiation from measured meteorological parameters such as global radiation, aerosol optical depths and atmospheric column ozone. The ANN model has been tested with different combinations of data from the two sites Potsdam and Lindenberg, and used to reconstruct solar UV radiation at eight European sites more than 100 years into the past. Special emphasis will be given to the discussion of small-scale characteristics of input data to the ANN model.

    Annual totals of UV radiation derived from reconstructed daily UV values reflect interannual variations and long-term patterns that are compatible with variabilities and changes of measured input data, in particular global dimming by about 1980/1990, subsequent global brightening, volcanic eruption effects such as that of Mt. Pinatubo, and the long-term ozone decline since the 1970s. Patterns of annual erythemal UV radiation are very similar at sites located at latitudes close to each other, but different patterns occur between UV radiation at sites in different latitude regions.

  3. Efficient uncertainty quantification of a fully nonlinear and dispersive water wave model with random inputs

    DEFF Research Database (Denmark)

    Bigoni, Daniele; Engsig-Karup, Allan Peter; Eskilsson, Claes

    2016-01-01

    of the evolution of waves. The model is analyzed using random sampling techniques and nonintrusive methods based on generalized polynomial chaos (PC). These methods allow us to accurately and efficiently estimate the probability distribution of the solution and require only the computation of the solution...... at different points in the parameter space, allowing for the reuse of existing simulation software. The choice of the applied methods is driven by the number of uncertain input parameters and by the fact that finding the solution of the considered model is computationally intensive. We revisit experimental...
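The nonintrusive polynomial-chaos approach mentioned above can be sketched in one dimension: treat the expensive solver as a black box, evaluate it at sampled points of a standard-normal input, and fit probabilists' Hermite coefficients by least squares. The toy `model` below stands in for the computationally intensive wave solver.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def model(x):                      # toy stand-in for the expensive simulator
    return x ** 2

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)      # samples of the uncertain input
y = model(x)                       # black-box model evaluations

degree = 4
V = hermevander(x, degree)         # design matrix of He_0 .. He_4 at samples
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)

# PC moments from orthogonality: mean = c_0, variance = sum_{k>=1} c_k^2 * k!
pc_mean = coeffs[0]
pc_var = sum(coeffs[k] ** 2 * math.factorial(k) for k in range(1, degree + 1))
```

Because the He_k are orthogonal under the Gaussian measure, the mean and variance of the output fall out of the coefficients directly, with no further model runs; this reuse of existing simulation software is the "nonintrusive" selling point.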

  4. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples.
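An illustration of the Gröbner-basis route to an input-output equation (this is the general elimination idea, not the paper's specific algorithm): for a toy two-compartment model with observed state y and hidden state x2, a lexicographic order that ranks x2 and its derivative first produces basis elements free of the hidden state.

```python
from sympy import symbols, groebner

# y, y1, y2: output and its first/second derivatives; u, u1: input and its
# derivative; x2, x21: unobserved state and its derivative; a..d: parameters.
a, b, c, d = symbols('a b c d')
y, y1, y2, u, u1, x2, x21 = symbols('y y1 y2 u u1 x2 x21')

eqs = [
    y1 + a*y - b*x2 - u,       # y'  = -a*y + b*x2 + u
    x21 - c*y + d*x2,          # x2' =  c*y - d*x2
    y2 + a*y1 - b*x21 - u1,    # differentiated output equation
]

# lex order with x2, x21 ranked first eliminates them from part of the basis
G = groebner(eqs, x2, x21, y2, y1, y, u1, u, order='lex')
io_eqs = [g for g in G.exprs if not (g.free_symbols & {x2, x21})]
```

Here `io_eqs` should contain the input-output relation y'' + (a+d)y' + (ad-bc)y = u' + du, whose coefficients are the quantities examined for structural identifiability.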

  5. Dynamics of a Stage Structured Pest Control Model in a Polluted Environment with Pulse Pollution Input

    Directory of Open Access Journals (Sweden)

    Bing Liu

    2013-01-01

Full Text Available By using a pollution model and impulsive delay differential equations, we formulate a pest control model with stage structure for the natural enemy in a polluted environment, with a constant periodic pollutant input and pest killing at different fixed moments, and investigate the dynamics of such a system. We assume that only the natural enemies are affected by pollution, and the pests are killed by a method that does not harm the natural enemies. Sufficient conditions for the global attractivity of the natural enemy-extinction periodic solution and the permanence of the system are obtained. Numerical simulations are presented to confirm our theoretical results.
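The impulsive structure can be sketched numerically as follows. All functional forms and parameter values are assumed for illustration, not taken from the paper: between impulses the pest, natural enemy and pollutant evolve continuously (Euler steps); at fixed moments a pollutant pulse enters, and at other fixed moments a fraction of the pest is killed without harming the enemy.

```python
def simulate(days=60.0, dt=0.001, pulse_every=10.0, kill_every=5.0):
    """Toy impulsive pest-enemy-pollutant system (all coefficients assumed)."""
    pest, enemy, pollutant = 10.0, 5.0, 0.0
    n = int(days / dt)
    pulse_k, kill_k = int(pulse_every / dt), int(kill_every / dt)
    for k in range(1, n + 1):
        # continuous part: logistic pest growth minus predation; enemy growth
        # damped by pollutant; first-order pollutant decay
        dp = pest * (0.8 * (1 - pest / 50.0) - 0.1 * enemy)
        de = enemy * (1 - enemy / 30.0) * (0.05 * pest - 0.2 * pollutant - 0.1)
        dc = -0.3 * pollutant
        pest += dt * dp
        enemy += dt * de
        pollutant += dt * dc
        if k % pulse_k == 0:          # periodic pollutant input (impulse)
            pollutant += 1.0
        if k % kill_k == 0:           # impulsive pest control
            pest *= 0.5
    return pest, enemy, pollutant

pest, enemy, pollutant = simulate()
```

The qualitative questions in the paper, extinction of the natural enemy versus permanence, correspond to whether such trajectories collapse to zero or stay bounded away from it.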

  6. System Identification for Nonlinear FOPDT Model with Input-Dependent Dead-Time

    DEFF Research Database (Denmark)

    Sun, Zhen; Yang, Zhenyu

    2011-01-01

An on-line iterative method of system identification for a kind of nonlinear FOPDT system is proposed in the paper. The considered nonlinear FOPDT model is an extension of the standard FOPDT model in the sense that its dead time depends on the input signal and the other parameters are time dependent. In order to identify these parameters in an online manner, the considered system is discretized first. Then, the nonlinear FOPDT identification problem is formulated as a stochastic Mixed Integer Non-Linear Programming problem, and an identification algorithm is proposed by combining the Branch...
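A discretized nonlinear FOPDT response of the kind described can be sketched as a first-order lag whose dead time depends on the input level. The specific dependence d(u) and the parameter values below are assumptions for illustration; in the paper they are identified online.

```python
import math

def simulate_fopdt(u_seq, gain=2.0, tau=5.0, dt=1.0):
    """y[k+1] = a*y[k] + (1 - a)*K*u[k - d(u[k])], with a = exp(-dt/tau)
    and an input-dependent integer delay d (assumed form)."""
    a = math.exp(-dt / tau)
    y = [0.0]
    for k in range(len(u_seq) - 1):
        d = 1 + int(2 * u_seq[k])          # assumed: larger input, longer delay
        u_delayed = u_seq[k - d] if k - d >= 0 else 0.0
        y.append(a * y[-1] + (1 - a) * gain * u_delayed)
    return y

# unit step at k = 5: with d = 3 for u = 1, the response starts at k = 9
y = simulate_fopdt([0.0] * 5 + [1.0] * 45)
```

The integer delay is what makes the identification problem mixed-integer: the delay map must be searched over discrete values while the gain and time constant are estimated continuously.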

  7. Predicting input impedance and efficiency of graphene reconfigurable dipoles using a simple circuit model

    CERN Document Server

    Tamagnone, Michele

    2014-01-01

An analytical circuit model able to predict the input impedance of reconfigurable graphene plasmonic dipoles is presented. A suitable definition of plasmonic characteristic impedance, employing natural currents, is used for consistent modeling of the antenna-load connection in the circuit. In its purely analytical form, the model shows good agreement with full-wave simulations, and explains the remarkable tuning properties of graphene antennas. Furthermore, using a single full-wave simulation and scaling laws, additional parasitic elements can be determined over a vast parametric space, leading to very accurate modeling. Finally, we show that the modeling approach also allows fair estimation of the radiation efficiency. The approach applies as well to thin plasmonic antennas realized using noble metals or semiconductors.

  8. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    Science.gov (United States)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

Batch processes are always characterized by nonlinear and uncertain system properties; therefore, a conventional single model may be ill-suited. A local learning strategy soft sensor based on a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. A set of input variable sets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable set. When new test data arrive, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine these local GPR models to get the final prediction result. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.

  9. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    Science.gov (United States)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
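The comparative evaluation above rests on aggregating fine-time-step series to coarser target scales. A minimal sketch with synthetic data (10 six-minute steps per hour, 240 per day):

```python
import numpy as np

def aggregate(series, steps_per_block):
    """Block-average a fine-time-step series to a coarser time step."""
    arr = np.asarray(series, float)
    n = len(arr) // steps_per_block * steps_per_block   # drop incomplete block
    return arr[:n].reshape(-1, steps_per_block).mean(axis=1)

# one synthetic day of 6-min "flow" values
six_min_flow = np.sin(np.linspace(0, 2 * np.pi, 240)) + 2.0
hourly = aggregate(six_min_flow, 10)    # 24 hourly means
daily = aggregate(six_min_flow, 240)    # 1 daily mean
```

Running a model at a 6-min step and scoring it on `hourly` or `daily` aggregates is the kind of cross-scale comparison the study performs over its 2400 events.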

  10. Scaling precipitation input to spatially distributed hydrological models by measured snow distribution

    Directory of Open Access Journals (Sweden)

    Christian Vögeli

    2016-12-01

Full Text Available Accurate knowledge of snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or managing water supply and hydro-power. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is typically driven by spatial interpolations of observations from automatic weather stations (AWS), leading to errors in the spatial distribution of atmospheric forcing. With recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and accuracy. In this work, maps of the snow depth distribution, calculated from summer and winter digital surface models based on Airborne Digital Sensors (ADS), are used to scale precipitation input data, with the aim to improve the accuracy of simulation of the spatial distribution of snow with Alpine3D. A simple method to scale and redistribute precipitation is presented and its performance is analysed. The scaling method is only applied if it is snowing; for rainfall the precipitation is distributed by interpolation, with a simple air temperature threshold used for the determination of the precipitation phase. It was found that the accuracy of the spatial snow distribution could be improved significantly for the simulated domain. The standard deviation of the absolute snow depth error is reduced by up to a factor of 3.4, to less than 20 cm. The mean absolute error in snow distribution was reduced when using representative input sources for the simulation domain. For inter-annual scaling, the model performance could also be improved, even when using a remote sensing dataset from a different winter. In conclusion, by using remote sensing data to process precipitation input, complex processes such as preferential snow deposition and snow relocation due to wind or avalanches can be substituted, and the modelling performance of the spatial snow distribution is improved.
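The scaling idea can be sketched as follows: during snowfall (air temperature below a threshold), redistribute the interpolated precipitation field according to a measured snow-depth map while conserving the domain total; during rainfall, leave the interpolated field unchanged. The threshold value and fields are illustrative, not the study's.

```python
import numpy as np

def scale_precip(precip, snow_depth_map, air_temp, threshold_c=1.0):
    """Redistribute a precipitation field by a measured snow-depth pattern."""
    precip = np.asarray(precip, float)
    if air_temp >= threshold_c:        # rain: keep the interpolated field
        return precip
    weights = snow_depth_map / snow_depth_map.mean()   # relative snow pattern
    scaled = precip * weights
    # conserve total precipitation over the domain while redistributing it
    return scaled * precip.sum() / scaled.sum()

precip = np.full((4, 4), 2.0)                       # mm, interpolated from AWS
snow = np.array([[0.1, 0.2, 0.4, 0.3]] * 4) * 5.0   # measured snow-depth map
snowfall = scale_precip(precip, snow, air_temp=-3.0)
```

Conserving the domain total is one plausible design choice; it ensures the scaling changes where snow falls, not how much water enters the simulation.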

  11. Input impedance and reflection coefficient in fractal-like models of asymmetrically branching compliant tubes.

    Science.gov (United States)

    Brown, D J

    1996-07-01

    A mathematical model is described, based on linear transmission line theory, for the computation of hydraulic input impedance spectra in complex, dichotomously branching networks similar to mammalian arterial systems. Conceptually, the networks are constructed from a discretized set of self-similar compliant tubes whose dimensions are described by an integer power law. The model allows specification of the branching geometry, i.e., the daughter-parent branch area ratio and the daughter-daughter area asymmetry ratio, as functions of vessel size. Characteristic impedances of individual vessels are described by linear theory for a fully constrained thick-walled elastic tube. Besides termination impedances and fluid density and viscosity, other model parameters included relative vessel length and phase velocity, each as a function of vessel size (elastic nonuniformity). The primary goal of the study was to examine systematically the effect of fractal branching asymmetry, both degree and location within the network, on the complex input impedance spectrum and reflection coefficient. With progressive branching asymmetry, fractal model spectra exhibit some of the features inherent in natural arterial systems such as the loss of prominent, regularly-occurring maxima and minima; the effect is most apparent at higher frequencies. Marked reduction of the reflection coefficient occurs, due to disparities in wave path length, when branching is asymmetric. Because of path length differences, branching asymmetry near the system input has a far greater effect on minimizing spectrum oscillations and reflections than downstream asymmetry. Fractal-like constructs suggest a means by which arterial trees of realistic complexity might be described, both structurally and functionally.
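The recursive computation described above can be sketched with the standard lossy transmission-line formula: each tube's input impedance follows from its characteristic impedance, propagation term and length, with the load given by its two daughters in parallel. The geometry ratios, terminal resistance and parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def tube_input_impedance(Zc, gamma, length, Z_load):
    """Input impedance of one lossy line segment terminated by Z_load."""
    t = np.tanh(gamma * length)
    return Zc * (Z_load + Zc * t) / (Zc + Z_load * t)

def network_impedance(Zc, length, gamma, depth, area_ratio=1.2, asym=0.4):
    """Dichotomous tree: daughters share parent_area*area_ratio, split
    asym : (1 - asym); Zc scales inversely with area (assumed)."""
    if depth == 0:
        return 3.0 * Zc                       # assumed terminal resistance
    Zd1 = network_impedance(Zc / (area_ratio * asym), 0.8 * length,
                            gamma, depth - 1, area_ratio, asym)
    Zd2 = network_impedance(Zc / (area_ratio * (1 - asym)), 0.8 * length,
                            gamma, depth - 1, area_ratio, asym)
    Z_load = 1.0 / (1.0 / Zd1 + 1.0 / Zd2)    # daughters in parallel
    return tube_input_impedance(Zc, gamma, length, Z_load)

# complex propagation constant (attenuation + i*phase) at one frequency
Zin = network_impedance(Zc=1.0, length=1.0, gamma=0.1 + 0.5j, depth=6)
```

Sweeping `gamma` over frequency and varying `asym` is how the path-length disparities discussed above translate into smoother impedance spectra.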

  12. The MARINA model (Model to Assess River Inputs of Nutrients to seAs): Model description and results for China.

    Science.gov (United States)

    Strokal, Maryna; Kroeze, Carolien; Wang, Mengru; Bai, Zhaohai; Ma, Lin

    2016-08-15

Chinese agriculture has been developing fast towards industrial food production systems that discharge nutrient-rich wastewater into rivers. As a result, nutrient export by rivers has been increasing, resulting in coastal water pollution. We developed a Model to Assess River Inputs of Nutrients to seAs (MARINA) for China. The MARINA model quantifies river export of nutrients by source at the sub-basin scale as a function of human activities on land. MARINA is a downscaled version for China of the Global NEWS-2 (Nutrient Export from WaterSheds) model, with an improved approach for nutrient losses from animal production and population. We use the model to quantify dissolved inorganic and organic nitrogen (N) and phosphorus (P) export by six large rivers draining into the Bohai Gulf (Yellow, Hai, Liao), Yellow Sea (Yangtze, Huai) and South China Sea (Pearl) in 1970, 2000 and 2050. We also addressed uncertainties in the MARINA model. Between 1970 and 2000, river export of dissolved N and P increased by a factor of 2-8, depending on the sea and nutrient form; thus the risk of coastal eutrophication increased. Direct losses of manure to rivers contributed 60-78% of nutrient inputs to the Bohai Gulf and 20-74% of nutrient inputs to the other seas in 2000. Sewage is an important source of dissolved inorganic P, and synthetic fertilizers of dissolved inorganic N. Over half of the nutrients exported by the Yangtze and Pearl rivers originated from human activities in downstream and middlestream sub-basins. The Yellow River exported up to 70% of dissolved inorganic N and P from downstream sub-basins and of dissolved organic N and P from middlestream sub-basins. Rivers draining into the Bohai Gulf carry less water, and thus transport fewer nutrients. For the future we calculate further increases in river export of nutrients. The MARINA model quantifies the main sources of coastal water pollution at the sub-basin scale. This information can contribute to formulation of

  13. Nutrient inputs to the Laurentian Great Lakes by source and watershed estimated using SPARROW watershed models

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.

    2011-01-01

    Nutrient input to the Laurentian Great Lakes continues to cause problems with eutrophication. To reduce the extent and severity of these problems, target nutrient loads were established and Total Maximum Daily Loads are being developed for many tributaries. Without detailed loading information it is difficult to determine if the targets are being met and how to prioritize rehabilitation efforts. To help address these issues, SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed for estimating loads and sources of phosphorus (P) and nitrogen (N) from the United States (U.S.) portion of the Great Lakes, Upper Mississippi, Ohio, and Red River Basins. Results indicated that recent U.S. loadings to Lakes Michigan and Ontario are similar to those in the 1980s, whereas loadings to Lakes Superior, Huron, and Erie decreased. Highest loads were from tributaries with the largest watersheds, whereas highest yields were from areas with intense agriculture and large point sources of nutrients. Tributaries were ranked based on their relative loads and yields to each lake. Input from agricultural areas was a significant source of nutrients, contributing ∼33-44% of the P and ∼33-58% of the N, except for areas around Superior with little agriculture. Point sources were also significant, contributing ∼14-44% of the P and 13-34% of the N. Watersheds around Lake Erie contributed nutrients at the highest rate (similar to intensively farmed areas in the Midwest) because they have the largest nutrient inputs and highest delivery ratio.

  14. Self-Triggered Model Predictive Control for Linear Systems Based on Transmission of Control Input Sequences

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2016-01-01

Full Text Available A networked control system (NCS) is a control system in which components such as plants and controllers are connected through communication networks. Self-triggered control is well known as one of the control methods in NCSs; it is a method in which, for sampled-data control systems, both the control input and the aperiodic sampling interval (i.e., the transmission interval) are computed simultaneously. In this paper, a self-triggered model predictive control (MPC) method for discrete-time linear systems with disturbances is proposed. In the conventional MPC method, only the first element of the control input sequence obtained by solving the finite-time optimal control problem is sent and applied to the plant. In the proposed method, the first several elements of the obtained control input sequence are sent to the plant, and each element is sequentially applied to the plant. The number of elements is decided according to the effect of disturbances; in other words, transmission intervals can be controlled. Finally, the effectiveness of the proposed method is shown by numerical simulations.
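A minimal sketch of the transmission idea: solve the finite-horizon problem once, then send only the first m elements of the optimal input sequence, with m reduced when the measured disturbance level is large. The scalar system and the m-selection rule below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def optimal_sequence(x0, a, b, N, r=0.1):
    """Unconstrained finite-horizon LQ for x[k+1] = a*x[k] + b*u[k] (scalar),
    minimizing sum_k x[k]^2 + r*u[k]^2 via stacked least squares."""
    F = np.array([a ** k for k in range(1, N + 1)])    # free response
    G = np.zeros((N, N))                               # forced response
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    H = G.T @ G + r * np.eye(N)
    return np.linalg.solve(H, -(G.T @ F) * x0)

def elements_to_send(disturbance_level, N, w_max=1.0):
    """Assumed rule: larger disturbances -> shorter open-loop intervals."""
    frac = 1.0 - min(abs(disturbance_level) / w_max, 1.0)
    return max(1, int(round(1 + frac * (N - 1))))

U = optimal_sequence(x0=5.0, a=1.2, b=1.0, N=10)
m = elements_to_send(0.3, N=10)
packet = U[:m]        # transmitted and applied sequentially at the plant
```

Sending `packet` instead of only `U[0]` is what spaces out network transmissions; shrinking `m` under large disturbances limits how long the loop runs open.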

  15. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    Science.gov (United States)

    Zhang, Lixiao; Hu, Qiuhong; Zhang, Fan

    2014-01-01

Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on an input-output model can be a useful tool for robust energy policy making.
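The embodied-energy accounting behind such an analysis is the standard Leontief calculation: with inter-sector requirements matrix A and direct energy intensities e, the total (direct plus indirect) intensities are eps = e(I - A)^-1, and embodied consumption of a final-demand vector y is eps·y. The 3-sector numbers below are illustrative, not Beijing's tables.

```python
import numpy as np

A = np.array([[0.1, 0.2, 0.1],     # inputs of sector i per unit output of j
              [0.3, 0.1, 0.2],
              [0.1, 0.1, 0.3]])
e = np.array([2.0, 0.5, 0.8])      # direct energy use per unit output
y = np.array([100.0, 50.0, 80.0])  # final demand per sector

L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse (I + A + A^2 + ...)
eps = e @ L                        # total (embodied) energy intensities
direct = e @ y                     # direct energy of the final demand
embodied = eps @ y                 # direct plus indirect energy
indirect_share = 1.0 - direct / embodied
```

The growing `indirect_share` reported in the abstract (48% to 76%) is exactly this quantity computed from successive years' tables.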

  16. [Modeling of the Indonesian Consumer Price Index using a multi-input intervention model

    KAUST Repository

    Novianti, P.W.

    2017-01-24

    There are several events which are expected to affect the CPI's fluctuation, i.e., the 1997/1998 financial crisis, fuel price increases, base-year changes, the independence of Timor-Timur (October 1999), and the tsunami disaster in Aceh (December 2004). During the research period, there were eight fuel price increases and four base-year changes. The objective of this research is to obtain a multi-input intervention model which can describe the magnitude and duration of the effect of each event on the CPI. Most intervention research to date considers only a single-input intervention, either a step or a pulse function. A multi-input intervention was used in the Indonesian CPI case because several events are expected to affect the CPI. Based on the results, those events did affect the CPI. Additionally, other events, such as Eid in January 1999 and events in April 2002, July 2003, December 2005, and September 2008, affected the CPI as well. In general, those events had a positive effect on the CPI, except for the events of April 2002 and July 2003, which had negative effects.
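A minimal sketch of the step and pulse intervention inputs such a model combines; the series length, intervention dates, and effect sizes below are invented for illustration only:

```python
import numpy as np

def step(n, t0):
    """Step intervention input: 0 before time t0, 1 from t0 onward
    (a permanent shift, e.g. a base-year change)."""
    s = np.zeros(n)
    s[t0:] = 1.0
    return s

def pulse(n, t0):
    """Pulse intervention input: 1 only at time t0
    (a transient shock, e.g. a one-off event)."""
    p = np.zeros(n)
    p[t0] = 1.0
    return p

n = 12
baseline = np.full(n, 100.0)   # hypothetical CPI-like baseline
# Two invented interventions: a permanent +5 shift at t=4, a one-period -2 dip at t=8.
y = baseline + 5.0 * step(n, 4) - 2.0 * pulse(n, 8)
assert y[3] == 100.0 and y[4] == 105.0 and y[8] == 103.0 and y[9] == 105.0
```

A multi-input intervention model fits one such term (with its own magnitude and duration) per event, on top of an ARIMA noise component.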

  17. Input-output modeling for urban energy consumption in Beijing: dynamics and comparison.

    Directory of Open Access Journals (Sweden)

    Lixiao Zhang

    Input-output analysis has been proven to be a powerful instrument for estimating embodied (direct plus indirect) energy usage through economic sectors. Using 9 economic input-output tables of years 1987, 1990, 1992, 1995, 1997, 2000, 2002, 2005, and 2007, this paper analyzes energy flows for the entire city of Beijing and its 30 economic sectors, respectively. Results show that the embodied energy consumption of Beijing increased from 38.85 million tonnes of coal equivalent (Mtce) to 206.2 Mtce over the past twenty years of rapid urbanization; the share of indirect energy consumption in total energy consumption increased from 48% to 76%, suggesting the transition of Beijing from a production-based and manufacturing-dominated economy to a consumption-based and service-dominated economy. Real estate development has shown to be a major driving factor of the growth in indirect energy consumption. The boom and bust of construction activities have been strongly correlated with the increase and decrease of system-side indirect energy consumption. Traditional heavy industries remain the most energy-intensive sectors in the economy. However, the transportation and service sectors have contributed most to the rapid increase in overall energy consumption. The analyses in this paper demonstrate that a system-wide approach such as that based on input-output model can be a useful tool for robust energy policy making.

  18. Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US

    Directory of Open Access Journals (Sweden)

    T. Myers

    2012-04-01

    The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the contiguous United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (the Goddard Earth Observing System-Chem (GEOS-Chem) model and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.

  19. A synaptic input portal for a mapped clock oscillator model of neuronal electrical rhythmic activity

    Science.gov (United States)

    Zariffa, José; Ebden, Mark; Bardakjian, Berj L.

    2004-09-01

    Neuronal electrical oscillations play a central role in a variety of situations, such as epilepsy and learning. The mapped clock oscillator (MCO) model is a general model of transmembrane voltage oscillations in excitable cells. In order to investigate the behaviour of neuronal oscillator populations, we present a neuronal version of the model. The neuronal MCO includes an extra input portal, the synaptic portal, which can reflect the biological relationships in a chemical synapse between the frequency of the presynaptic action potentials and the postsynaptic resting level, which in turn affects the frequency of the postsynaptic potentials. We propose that the synaptic input-output relationship must include a power function in order to be able to reproduce physiological behaviour such as resting level saturation. One linear and two power functions (Butterworth and sigmoidal) are investigated, using the case of an inhibitory synapse. The linear relation was not able to produce physiologically plausible behaviour, whereas both power functions were appropriate. The resulting neuronal MCO model can be tailored to a variety of neuronal cell types, and can be used to investigate complex population behaviour, such as the influence of network topology and stochastic resonance.
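As a sketch of why a saturating relation is needed, the snippet below uses a hypothetical sigmoidal mapping from presynaptic firing frequency to postsynaptic resting level for an inhibitory synapse; all parameter values and the function itself are illustrative assumptions, not the MCO model's actual synaptic portal:

```python
import math

def resting_level(freq, r_min=-70.0, r_max=-55.0, f_half=20.0, k=0.3):
    """Hypothetical sigmoidal input-output relation: presynaptic firing
    frequency (Hz) maps to a saturating postsynaptic resting level (mV).
    For an inhibitory synapse, higher frequency drives the level toward
    r_min; a linear relation would instead grow without bound."""
    s = 1.0 / (1.0 + math.exp(-k * (freq - f_half)))   # sigmoid in (0, 1)
    return r_max - (r_max - r_min) * s

# Saturation at both extremes, which a linear relation cannot reproduce.
assert abs(resting_level(0.0) - (-55.0)) < 1.0
assert abs(resting_level(100.0) - (-70.0)) < 1.0
```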

  20. Investigation of effects of varying model inputs on mercury deposition estimates in the Southwest US

    Directory of Open Access Journals (Sweden)

    T. Myers

    2013-01-01

    The Community Multiscale Air Quality (CMAQ) model version 4.7.1 was used to simulate mercury wet and dry deposition for a domain covering the continental United States (US). The simulations used MM5-derived meteorological input fields and the US Environmental Protection Agency (EPA) Clean Air Mercury Rule (CAMR) emissions inventory. Using sensitivity simulations with different boundary conditions and tracer simulations, this investigation focuses on the contributions of boundary concentrations to deposited mercury in the Southwest (SW) US. Concentrations of oxidized mercury species along the boundaries of the domain, in particular the upper layers of the domain, can make significant contributions to the simulated wet and dry deposition of mercury in the SW US. In order to better understand the contributions of boundary conditions to deposition, inert tracer simulations were conducted to quantify the relative amount of an atmospheric constituent transported across the boundaries of the domain at various altitudes and to quantify the amount that reaches and potentially deposits to the land surface in the SW US. Simulations using alternate sets of boundary concentrations, including estimates from global models (the Goddard Earth Observing System-Chem (GEOS-Chem) model and the Global/Regional Atmospheric Heavy Metals (GRAHM) model), and alternate meteorological input fields (for different years) are analyzed in this paper. CMAQ dry deposition in the SW US is sensitive to differences in the atmospheric dynamics and atmospheric mercury chemistry parameterizations between the global models used for boundary conditions.

  1. Good modeling practice for PAT applications: propagation of input uncertainty and sensitivity analysis.

    Science.gov (United States)

    Sin, Gürkan; Gernaey, Krist V; Lantz, Anna Eliasson

    2009-01-01

    Uncertainty and sensitivity analysis are evaluated for their usefulness as part of model-building within Process Analytical Technology applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from assumptions of the model was propagated using the Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. Moreover, the uncertainty in the biomass, glucose, ammonium and base-consumption predictions was found to be low compared to the large uncertainty observed in the antibiotic and off-gas CO(2) predictions. The output uncertainty was observed to be lower during the exponential growth phase and higher in the stationary and death phases - meaning the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (Standardized Regression Coefficients, Morris and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10) out of a total of 56 were mainly responsible for the output uncertainty. Among these significant parameters, one finds parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass transfer. Overall, uncertainty and sensitivity analysis are found promising for helping to build reliable mechanistic models and to interpret the model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control purposes.
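The Monte Carlo propagation and Standardized Regression Coefficients (SRC) workflow can be sketched on a toy model; the linear model and parameter distributions below are invented stand-ins (the actual case study used a 56-parameter fermentation model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Monte Carlo sampling of three uncertain inputs (invented distributions):
# the toy output depends strongly on p1, weakly on p2, and not at all on p3.
p = rng.normal(loc=[1.0, 5.0, 2.0], scale=[0.1, 0.5, 0.2], size=(n, 3))
y = 3.0 * p[:, 0] + 0.2 * p[:, 1] + rng.normal(0.0, 0.01, n)

# Standardized Regression Coefficients: regress standardized output
# on standardized inputs; |SRC| ranks each input's share of output uncertainty.
Z = (p - p.mean(axis=0)) / p.std(axis=0)
yz = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Z, yz, rcond=None)

order = np.argsort(-np.abs(src))
assert order[0] == 0          # p1 dominates the output uncertainty
assert abs(src[2]) < 0.1      # p3 contributes essentially nothing
```

This mirrors the study's finding: a ranking by |SRC| typically singles out a small subset of parameters as responsible for most of the output uncertainty.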

  2. LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints

    Science.gov (United States)

    Swei, Sean S.M.; Ayoubi, Mohammad A.

    2017-01-01

    This paper presents a study of the fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and the dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.

  3. Determination of growth rates as an input of the stock discount valuation models

    Directory of Open Access Journals (Sweden)

    Momčilović Mirela

    2013-01-01

    When determining the value of stocks with different stock discount valuation models, one of the important inputs is the expected growth rate of dividends, earnings, cash flows and other relevant parameters of the company. The growth rate can be determined in three basic ways: by extrapolation of historical data, from the professional assessments of the analysts who follow the company's business, and from the fundamental indicators of the company. The aim of this paper is to present the theoretical basis and practical application of the stated methods for growth rate determination, and to indicate their advantages and deficiencies.
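The fundamental-indicator approach mentioned above is commonly computed as the retention ratio times the return on equity; a minimal sketch with invented firm numbers:

```python
def fundamental_growth(net_income, dividends, equity):
    """Sustainable growth rate from fundamentals:
    g = retention ratio x return on equity (ROE)."""
    retention = 1.0 - dividends / net_income   # share of earnings reinvested
    roe = net_income / equity                  # return on equity
    return retention * roe

# Hypothetical firm: earns 100, pays out 40 in dividends, equity of 500.
g = fundamental_growth(100.0, 40.0, 500.0)
assert abs(g - 0.12) < 1e-9   # 60% retained x 20% ROE = 12% growth
```

Such a g then feeds a discount valuation model (e.g., a constant-growth dividend discount model) as its expected growth input.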

  4. A leech model for homeostatic plasticity and motor network recovery after loss of descending inputs.

    Science.gov (United States)

    Lane, Brian J

    2016-04-01

    Motor networks below the site of spinal cord injury (SCI) and their reconfiguration after loss of central inputs are poorly understood but remain of great interest in SCI research. Harley et al. (J Neurophysiol 113: 3610-3622, 2015) report a striking locomotor recovery paradigm in the leech Hirudo verbana with features that are functionally analogous to SCI. They propose that this well-established neurophysiological system could potentially be repurposed to provide a complementary model to investigate basic principles of homeostatic compensation relevant to SCI research.

  5. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J.; Winkler, J.; Christensen, D.; Hancock, E.

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.
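The least-squares fit of an analytical solution to a measured absorption curve can be sketched as follows; the first-order response used here and all parameter values are simplifying assumptions for illustration, not the actual EMPD solution or field data:

```python
import numpy as np
from scipy.optimize import curve_fit

def absorption(t, m_tot, tau):
    """Hypothetical analytical response: moisture absorbed after a step
    change in relative humidity approaches m_tot (kg) with time
    constant tau (h), i.e. a first-order exponential approach."""
    return m_tot * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 48.0, 49)                  # hours after the RH step
true = absorption(t, 3.5, 8.0)                  # synthetic "measured" curve
noisy = true + np.random.default_rng(1).normal(0.0, 0.02, t.size)

# Least-squares fit recovers the buffering parameters from the noisy curve.
(m_fit, tau_fit), _ = curve_fit(absorption, t, noisy, p0=[1.0, 1.0])
assert abs(m_fit - 3.5) < 0.1 and abs(tau_fit - 8.0) < 0.5
```

The study's actual fit determines three independent EMPD parameters, but the procedure (fit an analytical response to the measured absorption curve) is the same shape as this two-parameter sketch.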

  6. Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Hancock, E. [Mountain Energy Partnership, Longmont, CO (United States)

    2014-08-01

    Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputs for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow-on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.

  7. Comparison of input parameters regarding rock mass in analytical solution and numerical modelling

    Science.gov (United States)

    Yasitli, N. E.

    2016-12-01

    Characteristics of stress redistribution around a tunnel excavated in rock are of prime importance for an efficient tunnelling operation and for maintaining stability. It is well known that rock mass properties, together with the in-situ stress field and tunnel geometry, are the most important factors affecting stability. Induced stresses and the resulting deformation around a tunnel can be approximated by means of analytical solutions and numerical modelling. However, the success of these methods depends on assumptions and input parameters that must be representative of the rock mass, whereas laboratory testing yields the mechanical properties of intact rock only. The aim of this paper is to demonstrate the importance of properly representing rock mass properties as input data for analytical solutions and numerical modelling. For this purpose, intact rock data were converted into rock mass data by using the Hoek-Brown failure criterion and empirical relations. Stress-deformation analyses, together with determination of the yield zone thickness, were carried out using analytical solutions and numerical analyses with the FLAC3D programme. The results indicated that incomplete or incorrect design causes stability and economic problems in the tunnel; for this reason, analytical solutions and rock mass data should be used together during tunnel design. In addition, this study shows theoretically that numerical modelling results should be incorporated into tunnel design for the stability and economy of the support.
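The intact-rock-to-rock-mass conversion via the Hoek-Brown criterion typically uses the generalized (2002) relations for the constants mb, s and a from the Geological Strength Index (GSI), the intact-rock constant mi, and the disturbance factor D; a sketch with invented GSI and mi values:

```python
import math

def hoek_brown_params(gsi, mi, D=0.0):
    """Generalized Hoek-Brown (2002) rock-mass constants from
    intact-rock constant mi, Geological Strength Index GSI,
    and disturbance factor D (0 = undisturbed)."""
    mb = mi * math.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-gsi / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

# Invented example: fair-quality rock mass (GSI = 60), mi = 10, undisturbed.
mb, s, a = hoek_brown_params(gsi=60, mi=10)
assert mb < 10                      # rock-mass constant is reduced from intact mi
assert 0.0 < s < 1.0                # s = 1 only for intact rock (GSI = 100)
assert 0.5 <= a < 0.7               # a approaches 0.5 for good-quality rock
```

This is the step the paper describes: laboratory-derived intact properties (mi, UCS) are downgraded to rock-mass inputs before feeding the analytical solution or the FLAC3D model.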

  8. Modelling the soil microclimate: does the spatial or temporal resolution of input parameters matter?

    Directory of Open Access Journals (Sweden)

    Anna Carter

    2016-01-01

    The urgency of predicting future impacts of environmental change on vulnerable populations is advancing the development of spatially explicit habitat models. Continental-scale climate and microclimate layers are now widely available. However, most terrestrial organisms exist within microclimate spaces that are very small relative to the spatial resolution of those layers. We examined the effects of multi-resolution, multi-extent topographic and climate inputs on the accuracy of hourly soil temperature predictions for a small island, generated at a very high spatial resolution (<1 m²) using the mechanistic microclimate model in NicheMapR. Achieving an accuracy comparable to lower-resolution, continental-scale microclimate layers (within about 2–3°C of observed values) required the use of daily weather data as well as high-resolution topographic layers (elevation, slope, aspect, horizon angles), while inclusion of site-specific soil properties did not markedly improve predictions. Our results suggest that large-extent microclimate layers may not provide accurate estimates of microclimate conditions when the spatial extent of a habitat or other area of interest is similar to or smaller than the spatial resolution of the layers themselves. Thus, effort in sourcing model inputs should be focused on obtaining high-resolution terrain data, e.g., via LiDAR or photogrammetry, and local weather information, rather than on in situ sampling of microclimate characteristics.

  9. Model Predictive Control of Linear Systems over Networks with State and Input Quantizations

    Directory of Open Access Journals (Sweden)

    Xiao-Ming Tang

    2013-01-01

    Although there have been many works on the synthesis and analysis of networked control systems (NCSs) with data quantization, most of the results are developed for the case where the quantizer exists in only one of the transmission links (either the sensor-to-controller link or the controller-to-actuator link). This paper investigates synthesis approaches of model predictive control (MPC) for NCSs subject to data quantization in both links. Firstly, a novel model to describe the state and input quantizations of the NCS is addressed by extending the sector bound approach. Further, from the new model, two synthesis approaches of MPC are developed: one parameterizes the infinite-horizon control moves into a single state feedback law, and the other into a free control move followed by the single state feedback law. Finally, stability results that explicitly consider the satisfaction of input and state constraints are presented. A numerical example is given to illustrate the effectiveness of the proposed MPC.

  10. Derivation and analysis on the analytical structure of interval type-2 fuzzy controller with two nonlinear fuzzy sets for each input variable

    Institute of Scientific and Technical Information of China (English)

    Bin-bin LEI; Xue-chao DUAN; Hong BAO; Qian XU

    2016-01-01

    Type-2 fuzzy controllers have mostly been viewed as black-box function generators. Revealing the analytical structure of any type-2 fuzzy controller is important, as it deepens our understanding of how and why a type-2 fuzzy controller functions and lays a foundation for more rigorous system analysis and design. In this study, we derive and analyze the analytical structure of an interval type-2 fuzzy controller that uses the following elements: two nonlinear interval type-2 input fuzzy sets for each variable, four interval type-2 singleton output fuzzy sets, the Zadeh AND operator, and the Karnik-Mendel type reducer. By dividing the input space of the interval type-2 fuzzy controller into 15 partitions, the input-output relationship for each local region is derived. Our derivation shows explicitly that the controller is approximately equivalent to a nonlinear proportional-integral or proportional-derivative controller with variable gains. Furthermore, by comparison with the analytical structure of its type-1 counterpart, potential advantages of the interval type-2 fuzzy controller are analyzed. Finally, the reliability of the analysis results and the effectiveness of the interval type-2 fuzzy controller are verified by a simulation and an experiment.

  11. Generalized linear models for categorical and continuous limited dependent variables

    CERN Document Server

    Smithson, Michael

    2013-01-01

    Introduction and Overview: The Nature of Limited Dependent Variables; Overview of GLMs; Estimation Methods and Model Evaluation; Organization of This Book. Discrete Variables: Binary Variables; Logistic Regression; The Binomial GLM; Estimation Methods and Issues; Analyses in R and Stata; Exercises. Nominal Polytomous Variables: Multinomial Logit Model; Conditional Logit and Choice Models; Multinomial Processing Tree Models; Estimation Methods and Model Evaluation; Analyses in R and Stata; Exercises. Ordinal Categorical Variables: Modeling Ordinal Variables: Common Practice versus Best Practice; Ordinal Model Alternatives; Cumulative Mod

  12. VAM2D: Variably saturated analysis model in two dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Huyakorn, P.S.; Kool, J.B.; Wu, Y.S. (HydroGeoLogic, Inc., Herndon, VA (United States))

    1991-10-01

    This report documents a two-dimensional finite element model, VAM2D, developed to simulate water flow and solute transport in variably saturated porous media. Both flow and transport simulation can be handled concurrently or sequentially. The formulation of the governing equations and the numerical procedures used in the code are presented. The flow equation is approximated using the Galerkin finite element method. Nonlinear soil moisture characteristics and atmospheric boundary conditions (e.g., infiltration, evaporation and seepage faces) are treated using Picard and Newton-Raphson iterations. Hysteresis effects and anisotropy in the unsaturated hydraulic conductivity can be taken into account if needed. The contaminant transport simulation can account for advection, hydrodynamic dispersion, linear equilibrium sorption, and first-order degradation. Transport of a single component or a multi-component decay chain can be handled. The transport equation is approximated using an upstream weighted residual method. Several test problems are presented to verify the code and demonstrate its utility. These problems range from simple one-dimensional to complex two-dimensional and axisymmetric problems. This document has been produced as a user's manual. It contains detailed information on the code structure, along with instructions for input data preparation and sample input and printed output for selected test problems. Also included are instructions for job setup and restarting procedures. 44 refs., 54 figs., 24 tabs.

  13. Prediction of the Nighttime VLF Subionospheric Signal Amplitude by Using Nonlinear Autoregressive with Exogenous Input Neural Network Model

    Science.gov (United States)

    Santosa, H.; Hobara, Y.; Balikhin, M. A.

    2015-12-01

    Very Low Frequency (VLF) waves have been proposed as an approach to study and monitor lower ionospheric conditions. Ionospheric perturbations have been identified in relation to thunderstorm activity, geomagnetic storms, and other factors. The temporal dependence of VLF amplitude generally shows complicated and large daily variability due to a combination of effects from both above the ionosphere (space weather) and below it (atmospheric and crustal processes). Quantitative contributions from the different external sources are not yet well known. Thus, the modelling and prediction of VLF wave amplitude are important for studying the lower ionospheric response to various external parameters and for detecting ionospheric anomalies. The purpose of this study is to model and predict the nighttime average amplitude of VLF wave propagation along the path from the VLF transmitter in Hawaii (NPM) to the receiver in Chofu (CHO), Tokyo, Japan, using a NARX neural network. The constructed model was trained for the target parameter of nighttime average amplitude of the NPM-CHO path. The NARX model, built from daily input variables of various physical parameters such as stratospheric temperature, cosmic rays and total column ozone, achieved good accuracy. As a result, the constructed models are capable of performing accurate multi-step-ahead predictions while maintaining acceptable one-step-ahead prediction accuracy. The predicted daily VLF amplitudes are in good agreement with observed (true) values for one-step-ahead prediction (r = 0.92, RMSE = 1.99), multi-step-ahead 5-day prediction (r = 0.91, RMSE = 1.14) and multi-step-ahead 10-day prediction (r = 0.75, RMSE = 1.74). The developed model indicates the feasibility and reliability of predicting lower ionospheric properties with the NARX neural network approach, and provides physical insights into the responses of the lower ionosphere to various external forcings.
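The evaluation metrics quoted above (Pearson r and RMSE between observed and predicted amplitudes) can be computed as follows; the observed/predicted values are invented for illustration:

```python
import numpy as np

def r_and_rmse(observed, predicted):
    """Pearson correlation coefficient r and root-mean-square error,
    the two metrics used to score the amplitude predictions."""
    r = np.corrcoef(observed, predicted)[0, 1]
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return r, rmse

# Invented nighttime-average amplitudes (dB) for five days.
obs = np.array([40.0, 42.0, 41.0, 44.0, 43.0])
pred = np.array([40.5, 42.0, 41.0, 43.5, 43.0])
r, rmse = r_and_rmse(obs, pred)
assert r > 0.9 and rmse < 1.0
```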

  14. Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions

    Science.gov (United States)

    Tsaur, Ruey-Chyn

    2015-02-01

    In financial markets, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.
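The possibilistic mean-standard deviation quantities underlying such models can be sketched for a triangular fuzzy return rate, using the Carlsson-Fullér definitions (center a, left spread alpha, right spread beta); the numeric spreads below are invented:

```python
import math

def possibilistic_mean_std(a, alpha, beta):
    """Possibilistic mean and standard deviation of a triangular fuzzy
    number (center a, left spread alpha, right spread beta), following
    the Carlsson-Fuller definitions used in possibilistic portfolio models:
    mean = a + (beta - alpha) / 6, variance = (alpha + beta)^2 / 24."""
    mean = a + (beta - alpha) / 6.0
    var = (alpha + beta) ** 2 / 24.0
    return mean, math.sqrt(var)

# Invented fuzzy return rate: about 5%, with 2% downside and 4% upside spread.
m, sd = possibilistic_mean_std(0.05, 0.02, 0.04)
assert abs(m - (0.05 + 0.02 / 6.0)) < 1e-12   # upside skew lifts the mean
assert sd > 0.0
```

In a possibilistic mean-standard deviation portfolio model, these per-security quantities replace the crisp expected return and risk of the classical Markowitz formulation.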

  15. Quantifying Numerical Model Accuracy and Variability

    Science.gov (United States)

    Montoya, L. H.; Lynett, P. J.

    2015-12-01

    The 2011 Tohoku tsunami has changed how tsunami hazard to coastal communities is evaluated. Numerical models are a key component of the methodologies used to estimate tsunami risk, and model predictions are essential for the development of Tsunami Hazard Assessments (THA). Better understanding of model bias and uncertainties, and minimizing them where possible, leads to more accurate and reliable THA. In this study we compare runup height, inundation line and flow velocity field measurements against GeoClaw and Method Of Splitting Tsunami (MOST) predictions in the Sendai plain. Runup elevation and average inundation distance were in general overpredicted by the models. However, both models agree relatively well with each other when predicting maximum sea surface elevation and maximum flow velocities. Furthermore, to explore the variability and uncertainties in numerical models, MOST is used to compare predictions from four different grid resolutions (30 m, 20 m, 15 m and 12 m). Our work shows that predictions of particular products (runup and inundation lines) do not require the use of high-resolution (finer than 30 m) Digital Elevation Maps (DEMs). When predicting runup heights and inundation lines, numerical convergence was achieved with the 30 m resolution grid. On the contrary, poor convergence was found in the flow velocity predictions, particularly the maximum flow velocities at 1 m depth. Also, runup height measurements and elevations from the DEM were used to estimate model bias. The results provided in this presentation will help understand the uncertainties in model predictions and locate possible sources of error within a model.

  16. Effect of variable annual precipitation and nutrient input on nitrogen and phosphorus transport from two Midwestern agricultural watersheds

    Science.gov (United States)

    Kalkhoff, Stephen J.; Hubbard, Laura E.; Tomer, Mark D.; James, D.E.

    2016-01-01

    Precipitation patterns and nutrient inputs affect transport of nitrate (NO3-N) and total phosphorus (TP) from Midwest watersheds. Nutrient concentrations and yields from two subsurface-drained watersheds, the Little Cobb River (LCR) in southern Minnesota and the South Fork Iowa River (SFIR) in northern Iowa, were evaluated during 1996–2007 to document relative differences in the timing and amounts of nutrients transported. Both watersheds are located in the prairie pothole region, but the SFIR exhibits a longer growing season and more livestock production. The SFIR yielded significantly more NO3-N than the LCR watershed (31.2 versus 21.3 kg NO3-N ha−1 yr−1). The SFIR watershed also yielded more TP than the LCR watershed (1.13 versus 0.51 kg TP ha−1 yr−1), despite greater TP concentrations in the LCR. About 65% of NO3-N and 50% of TP loads were transported during April–June, and less than 20% of the annual loads were transported later in the growing season, from July–September. Monthly NO3-N and TP loads peaked in April for the LCR but in June for the SFIR; this difference was attributed to greater snowmelt runoff in the LCR. The annual NO3-N yield increased with increasing annual runoff at a similar rate in both watersheds, but the LCR watershed yielded less annual NO3-N than the SFIR for a similar annual runoff. These two watersheds are within 150 km of one another and have similar dominant agricultural systems, but differences in climate and cropping inputs affected the amounts and timing of nutrient transport.

  17. International trade inoperability input-output model (IT-IIM): theory and application.

    Science.gov (United States)

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking the economic perturbation to each sector as input, the IIM outputs the degree of economic production impact on all industry sectors. The current version of the IIM does not provide a separate analysis for the international trade component of inoperability. If an important port of entry (e.g., the Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the international trade IIM (IT-IIM). The IT-IIM investigates the international trade inoperability of all industry sectors that results from disruptions to a major port of entry. As in traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., an embargo).
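
    The propagation of a direct perturbation through sector interdependencies, as in IIM-style analyses, can be sketched as follows. This is an illustrative toy example with hypothetical numbers, not the article's data: the standard IIM equilibrium relation is q = A* q + c*, solved as q = (I − A*)⁻¹ c*.

```python
import numpy as np

# Toy sketch of the core inoperability input-output (IIM) relation,
# q = A* q + c*, with hypothetical numbers (not from the article):
#   q  - sector inoperabilities (0 = fully operational, 1 = total failure)
#   A* - interdependency matrix (normalized from IO technical coefficients)
#   c* - direct perturbation vector (e.g., reduced capacity at a port of entry)

A_star = np.array([[0.0, 0.2, 0.1],
                   [0.3, 0.0, 0.2],
                   [0.1, 0.1, 0.0]])   # hypothetical interdependencies
c_star = np.array([0.10, 0.0, 0.0])    # 10% direct perturbation to sector 0

# Equilibrium inoperability: q = (I - A*)^{-1} c*
q = np.linalg.solve(np.eye(3) - A_star, c_star)
print(q)  # sectors 1 and 2 become partially inoperable via interdependence
```

    Sectors can then be ranked by q (or by q weighted with sector output) to prioritize them by potential loss, which is the use the abstract describes for the inoperability metrics.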

  18. Multiregional input-output model for the evaluation of Spanish water flows.

    Science.gov (United States)

    Cazcarro, Ignacio; Duarte, Rosa; Sánchez Chóliz, Julio

    2013-01-01

    We construct a multiregional input-output model for Spain in order to evaluate the pressures on water resources, virtual water flows, and water footprints of the regions, and the water impact of trade relationships within Spain and abroad. The study is framed among the interregional input-output models constructed to study water flows and impacts of regions in China, Australia, Mexico, or the UK. To build our database, we reconcile regional IO tables, national and regional accounts of Spain, and trade and water data. Results show an important imbalance between the origin of water resources and their final destination, with significant water pressures in the South, Mediterranean, and some central regions. The most populated and dynamic regions of Madrid and Barcelona are important drivers of water consumption in Spain. The main virtual water exporters are the southern and central agrarian regions: Andalusia, Castile-La Mancha, Castile-Leon, Aragon, and Extremadura, while the main virtual water importers are the industrialized regions of Madrid, the Basque Country, and the Mediterranean coast. The paper shows the different locations of direct and indirect consumers of water in Spain and how the economic trade and consumption patterns of certain areas have significant impacts on the availability of water resources in other, often drier, regions.

  19. A Water-Withdrawal Input-Output Model of the Indian Economy.

    Science.gov (United States)

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effects of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water-withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in agriculture sectors, with direct green water contributing about 1145 BCM, excluding forestry. Apart from 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM, and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to direct water used in agriculture nationally, while scarce groundwater associated with crops is largely contributed by northern states.

  20. Limited fetch revisited: comparison of wind input terms in surface waves modeling

    CERN Document Server

    Pushkarev, Andrei

    2015-01-01

    The results of the numerical solution of the Hasselmann kinetic equation ($HE$) for wind-driven sea spectra in the fetch-limited geometry are presented. Five versions of the source function, including the recently introduced ZRP model, have been studied using the exact expression for Snl and high-frequency implicit dissipation due to wave breaking. Four of the five experiments were done in the absence of spectral peak dissipation for various Sin terms. They demonstrated the dominance of quadruplet wave-wave interaction in the energy balance and the formation of self-similar regimes of unlimited wave energy growth along the fetch. Among them was the ZRP model, which showed especially good agreement with a dozen field observations performed in seas and lakes since 1971. The fifth experiment, with the WAM3 wind input term, used additional spectral peak dissipation and reproduced the results of a previous similar numerical simulation, but was in good agreement with the field experiments only for moderate fetches, demonstrati...

  1. Practical approximation method for firing-rate models of coupled neural networks with correlated inputs

    Science.gov (United States)

    Barreiro, Andrea K.; Ly, Cheng

    2017-08-01

    Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
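
    The fixed-point idea behind such firing-rate approximations can be sketched in a few lines. This is an illustrative toy example, not the authors' exact equations: steady-state rates r of a Wilson-Cowan-type network satisfy the transcendental system r = f(W r + b), which can be solved numerically far faster than Monte Carlo simulation of the underlying stochastic differential equations.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy Wilson-Cowan-type fixed point (hypothetical coupling W and inputs b,
# not the paper's network): solve r = f(W r + b) as a root-finding problem.

def f(x):
    return 1.0 / (1.0 + np.exp(-x))      # sigmoidal transfer function

W = np.array([[0.5, -1.0],
              [1.2, -0.3]])              # hypothetical E-I coupling
b = np.array([0.2, 0.1])                 # mean background inputs

def fixed_point(r):
    return r - f(W @ r + b)

r_star = fsolve(fixed_point, np.full(2, 0.5))   # steady-state rates
print(r_star)
```

    The paper's method additionally accounts for noisy, correlated background inputs, which enlarges the system of transcendental equations, but the computational appeal is the same: root finding instead of long stochastic simulations.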

  2. Loss of GABAergic inputs in APP/PS1 mouse model of Alzheimer's disease

    Directory of Open Access Journals (Sweden)

    Tutu Oyelami

    2014-04-01

    Alzheimer's disease (AD) is characterized by symptoms that include seizures, sleep disruption, loss of memory, and anxiety in patients. Of particular importance is the possibility of preventing the progressive loss of neuronal projections in the disease. Transgenic mice overexpressing EOFAD mutant PS1 (L166P) and mutant APP (APP KM670/671NL, Swedish) (APP/PS1) develop a very early and robust amyloid pathology and display synaptic plasticity impairments and cognitive dysfunction. Here we investigated GABAergic neurotransmission using multi-electrode array (MEA) technology, pharmacological manipulation to quantify the effect of GABA blockers on field excitatory postsynaptic potentials (fEPSPs), and immunostaining of GABAergic neurons. Using MEA technology we confirm impaired LTP induction by high-frequency stimulation in the APP/PS1 hippocampal CA1 region, associated with a reduced alteration of the paired-pulse ratio after LTP induction. Synaptic dysfunction was also observed under manipulation of the external calcium concentration and in the input-output curve. Electrophysiological recordings from brain slices of the CA1 hippocampal area in the presence of cocktails of GABAergic receptor blockers further demonstrated a significant reduction of GABAergic inputs in APP/PS1 mice. Moreover, immunostaining for GAD65, a specific marker of GABAergic neurons, revealed a reduction of GABAergic inputs in the CA1 area of the hippocampus. These results might be linked to the increased seizure sensitivity, premature death, and cognitive dysfunction in this animal model of AD. Further in-depth analysis of GABAergic dysfunction in APP/PS1 mice is required and may open new perspectives for AD therapy by restoring GABAergic function.

  3. Predicting Group-Level Outcome Variables from Variables Measured at the Individual Level: A Latent Variable Multilevel Model

    Science.gov (United States)

    Croon, Marcel A.; van Veldhoven, Marc J. P. M.

    2007-01-01

    In multilevel modeling, one often distinguishes between macro-micro and micro-macro situations. In a macro-micro multilevel situation, a dependent variable measured at the lower level is predicted or explained by variables measured at that lower or a higher level. In a micro-macro multilevel situation, a dependent variable defined at the higher…

  4. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    Science.gov (United States)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the

  5. Model algorithm control using neural networks for input delayed nonlinear control system

    Institute of Scientific and Technical Information of China (English)

    Yuanliang Zhang; Kil To Chong

    2015-01-01

    The performance of the model algorithm control method is partially based on the accuracy of the system's model. It is difficult to obtain a good model of a nonlinear system, especially when the nonlinearity is high. Neural networks have the ability to "learn" the characteristics of a system through nonlinear mapping, representing nonlinear functions as well as their inverse functions. This paper presents a model algorithm control method using neural networks for nonlinear time delay systems. Two neural networks are used in the control scheme: one neural network is trained as the model of the nonlinear time delay system, and the other produces the control inputs. The neural networks are combined with the model algorithm control method to control the nonlinear time delay systems. Three examples are used to illustrate the proposed control method. The simulation results show that the proposed control method has a good control performance for nonlinear time delay systems.

  6. Teams in organizations: from input-process-output models to IMOI models.

    Science.gov (United States)

    Ilgen, Daniel R; Hollenbeck, John R; Johnson, Michael; Jundt, Dustin

    2005-01-01

    This review examines research and theory relevant to work groups and teams typically embedded in organizations and existing over time, although many studies reviewed were conducted in other settings, including the laboratory. Research was organized around a two-dimensional system based on time and the nature of explanatory mechanisms that mediated between team inputs and outcomes. These mechanisms were affective, behavioral, cognitive, or some combination of the three. Recent theoretical and methodological work is discussed that has advanced our understanding of teams as complex, multilevel systems that function over time, tasks, and contexts. The state of both the empirical and theoretical work is compared as to its impact on present knowledge and future directions.

  7. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2017-05-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem through viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  8. Application of soft computing based hybrid models in hydrological variables modeling: a comprehensive review

    Science.gov (United States)

    Fahimi, Farzad; Yaseen, Zaher Mundher; El-shafie, Ahmed

    2016-02-01

    Since the middle of the twentieth century, artificial intelligence (AI) models have been used widely in engineering and science problems. Water resource variable modeling and prediction are among the most challenging issues in water engineering. The artificial neural network (ANN) is a common approach used to tackle this problem through viable and efficient models. Numerous ANN models have been successfully developed to achieve more accurate results. In the current review, different ANN models in water resource applications and hydrological variable predictions are reviewed and outlined. In addition, recent hybrid models and their structures, input preprocessing, and optimization techniques are discussed, and the results are compared with similar previous studies. Moreover, to achieve a comprehensive view of the literature, many articles that applied ANN models together with other techniques are included. Consequently, the coupling procedure, model evaluation, and performance comparison of hybrid models with conventional ANN models are assessed, as well as the taxonomy and structures of hybrid ANN models. Finally, current challenges and recommendations for future research are indicated and new hybrid approaches are proposed.

  9. Modeling variability in porescale multiphase flow experiments

    Energy Technology Data Exchange (ETDEWEB)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, both with constant and with randomly varying injection rates. Stochastic simulations are able to capture the variability in the experiments associated with the varying pump injection rate.

  10. Modeling variability in porescale multiphase flow experiments

    Science.gov (United States)

    Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.

    2017-07-01

    Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.

  11. Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain

    Directory of Open Access Journals (Sweden)

    Jui-Yang Chang

    2012-11-01

    A multivariate autoregressive model with exogenous inputs is developed for describing the cortical interactions excited by direct electrical current stimulation of the cortex. Current stimulation is challenging to model because it excites neurons in multiple locations both near and distant to the stimulation site. The approach presented here models these effects using an exogenous input that is passed through a bank of filters, one for each channel. The filtered input and a random input excite a multivariate autoregressive system describing the interactions between cortical activity at the recording sites. The exogenous input filter coefficients, the autoregressive coefficients, and the random input characteristics are estimated from the measured activity due to current stimulation. The effectiveness of the approach is demonstrated using intracranial recordings from three surgical epilepsy patients. We evaluate models for wakefulness and NREM sleep in these patients, with two stimulation levels in one patient and two stimulation sites in another, resulting in a total of ten datasets. Excellent agreement between measured and model-predicted evoked responses is obtained across all datasets. Furthermore, one-step prediction is used to show that the model also describes dynamics in pre-stimulus and evoked recordings. We also compare integrated information, a measure of intracortical communication thought to reflect the capacity for consciousness, associated with the network model in wakefulness and sleep. As predicted, higher information integration is found in wakefulness than in sleep for all five cases.

  12. Multivariate autoregressive models with exogenous inputs for intracerebral responses to direct electrical stimulation of the human brain.

    Science.gov (United States)

    Chang, Jui-Yang; Pigorini, Andrea; Massimini, Marcello; Tononi, Giulio; Nobili, Lino; Van Veen, Barry D

    2012-01-01

    A multivariate autoregressive (MVAR) model with exogenous inputs (MVARX) is developed for describing the cortical interactions excited by direct electrical current stimulation of the cortex. Current stimulation is challenging to model because it excites neurons in multiple locations both near and distant to the stimulation site. The approach presented here models these effects using an exogenous input that is passed through a bank of filters, one for each channel. The filtered input and a random input excite a MVAR system describing the interactions between cortical activity at the recording sites. The exogenous input filter coefficients, the autoregressive coefficients, and random input characteristics are estimated from the measured activity due to current stimulation. The effectiveness of the approach is demonstrated using intracranial recordings from three surgical epilepsy patients. We evaluate models for wakefulness and NREM sleep in these patients, with two stimulation levels in one patient and two stimulation sites in another, resulting in a total of 10 datasets. Excellent agreement between measured and model-predicted evoked responses is obtained across all datasets. Furthermore, one-step prediction is used to show that the model also describes dynamics in pre-stimulus and evoked recordings. We also compare integrated information, a measure of intracortical communication thought to reflect the capacity for consciousness, associated with the network model in wakefulness and sleep. As predicted, higher information integration is found in wakefulness than in sleep for all five cases.
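
    The estimation principle behind ARX-type models can be illustrated on a single channel. This is a toy sketch with made-up coefficients, not the authors' estimator: y[t] = a1*y[t-1] + a2*y[t-2] + b0*u[t] + noise, fitted by ordinary least squares, the same idea the MVARX model generalizes to multiple channels with a filtered exogenous stimulation input.

```python
import numpy as np

# Toy single-channel ARX model fit by least squares (hypothetical
# coefficients; the MVARX paper generalizes this to many channels).
rng = np.random.default_rng(0)
T = 2000
u = rng.standard_normal(T)                 # exogenous input (stimulation)
a1, a2, b0 = 0.6, -0.3, 0.8                # "true" coefficients to recover

y = np.zeros(T)
for t in range(2, T):
    y[t] = a1*y[t-1] + a2*y[t-2] + b0*u[t] + 0.1*rng.standard_normal()

# Regression matrix: columns are y[t-1], y[t-2], u[t] for t = 2..T-1
X = np.column_stack([y[1:-1], y[:-2], u[2:]])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(theta)  # estimates close to [0.6, -0.3, 0.8]
```

    In the multivariate case the coefficients become matrices and the exogenous term becomes a per-channel filter bank, but the least-squares structure of the estimation problem is analogous.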

  13. Diagnostic analysis of distributed input and parameter datasets in Mediterranean basin streamflow modeling

    Science.gov (United States)

    Milella, Pamela; Bisantino, Tiziana; Gentile, Francesco; Iacobellis, Vito; Trisorio Liuzzi, Giuliana

    2012-11-01

    The paper suggests a methodology, based on performance metrics, to select the optimal set of inputs and parameters for simulating river flow discharges with a semi-distributed hydrologic model. The model is applied at the daily scale in a semi-arid basin of Southern Italy (Carapelle river, basin area: 506 km2), for which rainfall and discharge series for the period 2006-2009 are available. Inputs and parameters were classified into two subsets: the former - spatially distributed - to be selected among different options; the latter - lumped - to be calibrated. Different data sources for (or methodologies to obtain) spatially distributed data were explored for the first subset. In particular, the FAO Penman-Monteith, Hargreaves, and Thornthwaite equations were tested for the evaluation of reference evapotranspiration, which plays a key role in hydrological modeling in semi-arid areas. The availability of LAI maps from different remote sensing sources was exploited to enhance the characterization of the vegetation state and consequently of the spatio-temporal variation in actual evapotranspiration. Different types of pedotransfer functions were used to derive the soil hydraulic parameters of the area. For each configuration of the first subset of data, a manual calibration of the second subset of parameters was carried out. Both the manual calibration of the lumped parameters and the selection of the optimal distributed dataset were based on the calculation and comparison of different performance metrics measuring the distance between observed and simulated discharge data series. Results not only show the best options for estimating reference evapotranspiration, crop coefficients, LAI values, and soil hydraulic properties, but also provide significant insights regarding the use of different performance metrics, including traditional indexes such as RMSE, NSE, and the index of agreement, with the more recent Benchmark

  14. Comparison of several climate indices as inputs in modelling of the Baltic Sea runoff

    Energy Technology Data Exchange (ETDEWEB)

    Hanninen, J.; Vuorinen, I. [Turku Univ. (Finland). Archipelaco Research Inst.], e-mail: jari.hanninen@utu.fi

    2012-11-01

    Using transfer function (TF) models, we have earlier presented a chain of events linking changes in the North Atlantic Oscillation (NAO) to their oceanographical and ecological consequences in the Baltic Sea. Here we tested whether other climate indices as inputs would improve the TF models and our understanding of the Baltic Sea ecosystem. Besides the NAO, the predictors were the Arctic Oscillation (AO), sea-level air pressure at Iceland (SLP), and wind speeds at Hoburg (Gotland). All indices produced good TF models when the total riverine runoff to the Baltic Sea was used as the modelling basis. The AO was not applicable in all study areas, showing a delay of about half a year between climate and runoff events, connected with the freezing and melting times of ice and snow in the northern catchment area of the Baltic Sea. The NAO appeared to be the most useful modelling tool, as its area of applicability was the widest of the tested indices and the time lag between climate and runoff events was the shortest. SLP and Hoburg wind speeds showed largely the same results as the NAO, but with smaller areal applicability. Thus the AO and NAO both contributed most to the general understanding of the climate control of runoff events in the Baltic Sea ecosystem.

  15. Reconstruction of rocks petrophysical properties as input data for reservoir modeling

    Science.gov (United States)

    Cantucci, B.; Montegrossi, G.; Lucci, F.; Quattrocchi, F.

    2016-11-01

    The worldwide increasing energy demand has triggered studies focused on defining the underground energy potential even in areas previously discarded or neglected. Nowadays, geological gas storage (CO2 and/or CH4) and geothermal energy are considered strategic for low-carbon energy development. A widespread and safe application of these technologies needs an accurate characterization of the underground, in terms of geology, hydrogeology, geochemistry, and geomechanics. However, at the prefeasibility study stage, the limited number of available direct measurements of reservoirs and the high costs of reopening closed deep wells must be taken into account. The aim of this work is to overcome these limits by proposing a new methodology to reconstruct vertical profiles, from the surface to the reservoir base, of: (i) thermal capacity, (ii) thermal conductivity, (iii) porosity, and (iv) permeability, through integration of well-log information, petrographic observations on inland outcropping samples, and flow and heat transport modeling. As a case study to test our procedure, we selected a deep structure located in the central Tyrrhenian Sea (Italy). The results obtained are consistent with measured data, confirming the validity of the proposed model. Notwithstanding intrinsic limitations due to manual calibration of the model with measured data, this methodology represents a useful tool for reservoir and geochemical modelers who need to define petrophysical input data for underground modeling before well reopening.

  16. Multivariate sensitivity analysis to measure global contribution of input factors in dynamic models

    Energy Technology Data Exchange (ETDEWEB)

    Lamboni, Matieyendou [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Monod, Herve, E-mail: herve.monod@jouy.inra.f [INRA, Unite MIA (UR341), F78352 Jouy en Josas Cedex (France); Makowski, David [INRA, UMR Agronomie INRA/AgroParisTech (UMR 211), BP 01, F78850 Thiverval-Grignon (France)

    2011-04-15

    Many dynamic models are used for risk assessment and decision support in ecology and crop science. Such models generate time-dependent model predictions, with time either discretised or continuous. Their global sensitivity analysis is usually applied separately on each time output, but Campbell et al. (2006) advocated global sensitivity analyses on the expansion of the dynamics in a well-chosen functional basis. This paper focuses on the particular case when principal components analysis is combined with analysis of variance. In addition to the indices associated with the principal components, generalised sensitivity indices are proposed to synthesize the influence of each parameter on the whole time series output. Index definitions are given when the uncertainty on the input factors is either discrete or continuous and when the dynamic model is either discrete or functional. A general estimation algorithm is proposed, based on classical methods of global sensitivity analysis. The method is applied to a dynamic wheat crop model with 13 uncertain parameters. Three methods of global sensitivity analysis are compared: the Sobol'-Saltelli method, the extended FAST method, and the fractional factorial design of resolution 6.
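
    A first-order variance-based sensitivity index of the kind compared in the paper can be estimated with a simple pick-freeze Monte Carlo scheme. The model below is a toy function (not the wheat crop model): for Y = X1 + 2*X2 with independent standard normal inputs, the analytic first-order Sobol' indices are S1 = 1/5 and S2 = 4/5.

```python
import numpy as np

# Pick-freeze estimator of first-order Sobol' indices on a toy model
# (illustrative only; the paper applies such methods to a 13-parameter
# dynamic wheat crop model and to time-series outputs).
rng = np.random.default_rng(1)
N = 200_000
X  = rng.standard_normal((N, 2))
Xp = rng.standard_normal((N, 2))        # independent resample of the inputs

def model(x):
    return x[:, 0] + 2.0 * x[:, 1]

Y = model(X)

def first_order(i):
    Xi = Xp.copy()
    Xi[:, i] = X[:, i]                  # freeze input i, resample the rest
    Yi = model(Xi)
    return np.cov(Y, Yi)[0, 1] / Y.var(ddof=1)

S1, S2 = first_order(0), first_order(1)
print(S1, S2)  # approximately 0.2 and 0.8
```

    For dynamic models the paper goes further: the time-series output is first expanded on a functional basis (principal components), and indices like these are computed per component and then aggregated into generalised sensitivity indices.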

  17. Bayesian modeling of measurement error in predictor variables

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Glas, Cornelis A.W.

    2003-01-01

    It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, that may be defined at any level of an hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between

  18. Effect of the Shape Factor as an Input Variable on the Extraction Process for a Reversing Continuous Countercurrent Extractor

    Directory of Open Access Journals (Sweden)

    Waigoon RITTIRUT

    2010-06-01

    The effect of the shape factor on concentration profiles in a reversing continuous countercurrent extraction process is reported. Garcinia fruit was selected as a model solid, while sucrose was used as the soluble solid for the diffusion system. The results showed that a slab shape gave better concentration profiles in both the solid and liquid phases than the block shape; the better diffusion mass transfer in the slab shape leads to a better yield. The shape factor results were verified using a model solid system. The phenomena in a reversing continuous countercurrent extraction process under steady-state conditions can be explained via a backmixing-diffusion model, and the model-predicted concentration profiles corresponded well to the measured data. Diffusivities, which are available for simulation purposes, were reported for different thicknesses and operating temperatures. Evaporation remained at an acceptable level for an open system under the specified operating conditions.

  19. Robust unknown input observer design for state estimation and fault detection using linear parameter varying model

    Science.gov (United States)

    Li, Shanzhi; Wang, Haoping; Aitouche, Abdel; Tian, Yang; Christov, Nicolai

    2017-01-01

This paper proposes a robust unknown input observer (UIO) for state estimation and fault detection using a linear parameter varying model. Since disturbances and actuator faults are mixed together in the physical system, it is difficult to isolate the fault from the disturbance. Using a state transformation, the estimation of the original state is associated with the transformed state. By solving linear matrix inequalities (LMIs) and linear matrix equalities (LMEs), the parameters of the UIO can be obtained. The convergence of the UIO is also analysed by Lyapunov theory. Finally, a wind turbine system with disturbance and actuator fault is tested with the proposed method. The simulations demonstrate the effectiveness and performance of the proposed method.

  20. Applying Input-Output Model to Estimate Broader Economic Impact of Transportation Infrastructure Investment

    Science.gov (United States)

    Anas, Ridwan; Tamin, Ofyar; Wibowo, Sony S.

    2016-09-01

The purpose of this study is to identify the relationships between infrastructure improvement and economic growth in the surrounding region. Traditionally, microeconomic and macroeconomic analyses are the most commonly used tools for analyzing the linkage between the transportation sector and economic growth, but they offer few clues to the mechanisms linking transport improvements to broader economic impacts. This study estimates the broader economic benefits of a new transportation infrastructure investment, the Cipularang tollway in West Java province, Indonesia, for the connected region (Bandung district) using an input-output model. The results show a 17% decrease in freight transportation costs and a 1.2% increase in Bandung district's GDP after the opening of the Cipularang tollway.
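The input-output mechanics behind such an estimate reduce to the Leontief inverse. A minimal sketch with an invented 3-sector coefficient matrix, not the Indonesian accounts used in the paper:

```python
import numpy as np

# Leontief input-output model: total output x satisfies x = A x + d, so
# x = (I - A)^{-1} d. The technical-coefficient matrix A and the
# final-demand vector d are hypothetical 3-sector numbers.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.10, 0.20],
    [0.05, 0.10, 0.10],
])
d = np.array([100.0, 150.0, 200.0])

L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse (total requirements)
x = L @ d                             # total sectoral output

# Broader impact: a tollway-induced rise of 10 units in final demand for
# one sector propagates to every sector through the multiplier matrix.
dx = L @ np.array([0.0, 10.0, 0.0])
print(np.round(x, 1), np.round(dx, 2))
```

The own-sector impact `dx[1]` exceeds the direct 10-unit injection because the diagonal of the Leontief inverse is always at least one.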

  1. Applying Input-Output Model to Estimate Broader Economic Impact of Transportation Infrastructure Investment

    Science.gov (United States)

    Anas, Ridwan; Tamin, Ofyar; Wibowo, Sony S.

    2016-08-01

The purpose of this study is to identify the relationships between infrastructure improvement and economic growth in the surrounding region. Traditionally, microeconomic and macroeconomic analyses are the most commonly used tools for analyzing the linkage between the transportation sector and economic growth, but they offer few clues to the mechanisms linking transport improvements to broader economic impacts. This study estimates the broader economic benefits of a new transportation infrastructure investment, the Cipularang tollway in West Java province, Indonesia, for the connected region (Bandung district) using an input-output model. The results show a 17% decrease in freight transportation costs and a 1.2% increase in Bandung district's GDP after the opening of the Cipularang tollway.

  2. Modeling and Controller Design of PV Micro Inverter without Using Electrolytic Capacitors and Input Current Sensors

    Directory of Open Access Journals (Sweden)

    Faa Jeng Lin

    2016-11-01

Full Text Available This paper outlines the modeling and controller design of a novel two-stage photovoltaic (PV) micro inverter (MI) that eliminates the need for an electrolytic capacitor (E-cap) and input current sensor. The proposed MI uses an active-clamped current-fed push-pull DC-DC converter, cascaded with a full-bridge inverter. Three strategies are proposed to cope with the inherent limitations of a two-stage PV MI: (i) high-speed DC bus voltage regulation using an integrator to deal with the 2nd harmonic voltage ripples found in single-phase systems; (ii) inclusion of a small film capacitor in the DC bus to achieve ripple-free PV voltage; (iii) improved incremental conductance (INC) maximum power point tracking (MPPT) without the need for current sensing by the PV module. Simulation and experimental results demonstrate the efficacy of the proposed system.
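For context, the textbook incremental-conductance decision rule that the paper's sensorless MPPT builds on can be sketched as follows. This is the standard current-sensing variant; the paper's current-sensorless modification is not reproduced here, and the PV curve and step size are synthetic.

```python
def inc_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One textbook incremental-conductance (INC) MPPT step.

    At the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        if di > 0.0:            # irradiance rose: MPP moved to higher voltage
            v_ref += step
        elif di < 0.0:
            v_ref -= step
    elif di / dv > -i / v:      # dP/dV > 0: operating left of the MPP
        v_ref += step
    elif di / dv < -i / v:      # dP/dV < 0: operating right of the MPP
        v_ref -= step
    return v_ref

def pv_current(v):
    # Synthetic PV curve with the maximum power point at v = 18 V.
    return (400.0 - (v - 18.0) ** 2) / v

v_prev = 12.0
i_prev = pv_current(v_prev)
v_ref = 12.5
for _ in range(60):
    v = v_ref
    i = pv_current(v)
    v_ref = inc_mppt_step(v, i, v_prev, i_prev, v_ref)
    v_prev, i_prev = v, i

print(round(v_ref, 1))   # settles near the 18 V maximum power point
```

Once converged, the reference voltage oscillates within one step of the MPP, which is the expected steady-state behavior of fixed-step INC.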

  3. Synchronized Beta-Band Oscillations in a Model of the Globus Pallidus-Subthalamic Nucleus Network under External Input

    Science.gov (United States)

    Ahn, Sungwoo; Zauber, S. Elizabeth; Worth, Robert M.; Rubchinsky, Leonid L.

    2016-01-01

    Hypokinetic symptoms of Parkinson's disease are usually associated with excessively strong oscillations and synchrony in the beta frequency band. The origin of this synchronized oscillatory dynamics is being debated. Cortical circuits may be a critical source of excessive beta in Parkinson's disease. However, subthalamo-pallidal circuits were also suggested to be a substantial component in generation and/or maintenance of Parkinsonian beta activity. Here we study how the subthalamo-pallidal circuits interact with input signals in the beta frequency band, representing cortical input. We use conductance-based models of the subthalamo-pallidal network and two types of input signals: artificially-generated inputs and input signals obtained from recordings in Parkinsonian patients. The resulting model network dynamics is compared with the dynamics of the experimental recordings from patient's basal ganglia. Our results indicate that the subthalamo-pallidal model network exhibits multiple resonances in response to inputs in the beta band. For a relatively broad range of network parameters, there is always a certain input strength, which will induce patterns of synchrony similar to the experimentally observed ones. This ability of the subthalamo-pallidal network to exhibit realistic patterns of synchronous oscillatory activity under broad conditions may indicate that these basal ganglia circuits are directly involved in the expression of Parkinsonian synchronized beta oscillations. Thus, Parkinsonian synchronized beta oscillations may be promoted by the simultaneous action of both cortical (or some other) and subthalamo-pallidal network mechanisms. Hence, these mechanisms are not necessarily mutually exclusive. PMID:28066222

  4. The impacts of interannual climate variability and agricultural inputs on water footprint of crop production in an irrigation district of China.

    Science.gov (United States)

    Sun, Shikun; Wu, Pute; Wang, Yubao; Zhao, Xining; Liu, Jing; Zhang, Xiaohong

    2013-02-01

Irrigation plays an increasingly important role in Chinese agriculture. Assessing water resources utilization during the agricultural production process will contribute to improving agricultural water management practices in irrigation districts. The water footprint provides a new approach to assessing agricultural water utilization. The present paper puts forward a modified calculation method to quantify the water footprint of crops. On this basis, the paper calculated the water footprint of major crops in the Hetao irrigation district, China, and then evaluated the factors that caused the variability of the crop water footprint during the study period. Results showed that: 1) the annual average water footprint of integrated-crop production in the Hetao irrigation district was 3.91 m³ kg⁻¹ (90.91% blue water and 9.09% green water); crop production in the Hetao irrigation district mainly relies on blue water; 2) under the combined influences of interannual climate variability and variation in agricultural inputs, the water footprint of integrated-crop production displayed a decreasing trend; 3) the contribution rate of climatic factors to the variation in water footprint was only -6.90%, while the total contribution rate of agricultural input factors was -84.31%. The results suggest that the crop water footprint depends mainly on agricultural management rather than on the regional climate and its variation, and that it could be held at a reasonable level by better management of all agricultural inputs and improved agricultural water use efficiency. Copyright © 2012 Elsevier B.V. All rights reserved.
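The basic water-footprint arithmetic underlying such an assessment is compact. The sketch below uses the standard formulation (crop water use over yield), not the paper's modified method, and the numbers are invented, chosen only to echo the reported blue-water dominance.

```python
def crop_water_footprint(et_green_mm, et_blue_mm, yield_kg_per_ha):
    """Water footprint of a crop (m3 per kg), split into green and blue.

    Standard textbook formulation: 1 mm of evapotranspiration over
    1 ha corresponds to 10 m3 of water.
    """
    cwu_green = 10.0 * et_green_mm        # crop water use, m3/ha
    cwu_blue = 10.0 * et_blue_mm
    wf_green = cwu_green / yield_kg_per_ha
    wf_blue = cwu_blue / yield_kg_per_ha
    return wf_green, wf_blue

# Hypothetical seasonal totals for an irrigation-dominated district.
g, b = crop_water_footprint(et_green_mm=50.0, et_blue_mm=500.0,
                            yield_kg_per_ha=1400.0)
print(round(g + b, 2), round(b / (g + b), 3))   # total WF and blue share
```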

  5. Polishing tool and the resulting TIF for three variable machine parameters as input for the removal simulation

    Science.gov (United States)

    Schneider, Robert; Haberl, Alexander; Rascher, Rolf

    2017-06-01

The trend in the optics industry shows that it is increasingly important to be able to manufacture complex lens geometries with a high level of precision. Above a certain required shape accuracy of optical workpieces, processing changes from two-dimensional to point-contact processing, during which it is very important that the process be as stable as possible. To ensure stability, usually only one process parameter is varied during processing; commonly this parameter is the feed rate, which corresponds to the dwell time. In the research project ArenA-FOi (Application-oriented analysis of resource-saving and energy-efficient design of industrial facilities for the optical industry), a contact-based point-processing procedure is used, and it is examined closely whether varying several process parameters during processing is meaningful. The commercially available ADAPT tool in size R20 from Satisloh AG is used. The behavior of the tool is tested under constant conditions in the MCP 250 CNC by OptoTech GmbH. A series of experiments should enable the TIF (tool influence function) to be determined using three variable parameters. Furthermore, the maximum error frequency that can be processed is calculated as an example for one parameter set and serves as an outlook for further investigations. The test results serve as the basis for the later removal simulation, which must be able to deal with a variable TIF. This topic has already been successfully implemented in another research project of the Institute for Precision Manufacturing and High-Frequency Technology (IPH), and thus this algorithm can be used. The next step is the useful implementation of the collected knowledge. The TIF must be selected on the basis of the measured data; it is important to know the error frequencies to select the optimal TIF. Thus, it is possible to compare the simulated results with real measurement
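The removal simulation such TIF measurements feed is commonly modelled as a convolution of the tool influence function with the dwell-time map. A minimal sketch with a Gaussian stand-in TIF (the measured ADAPT R20 TIF would replace it; a variable TIF would swap kernels region by region):

```python
import numpy as np

def gaussian_tif(size=21, peak=1.0, sigma=4.0):
    """Rotationally symmetric Gaussian stand-in for a measured TIF."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    return peak * np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))

def simulate_removal(dwell, tif):
    """Material removal as the 'same'-size 2-D convolution dwell * TIF."""
    from numpy.fft import fft2, ifft2
    s0 = dwell.shape[0] + tif.shape[0] - 1
    s1 = dwell.shape[1] + tif.shape[1] - 1
    full = np.real(ifft2(fft2(dwell, (s0, s1)) * fft2(tif, (s0, s1))))
    r0, r1 = tif.shape[0] // 2, tif.shape[1] // 2
    return full[r0:r0 + dwell.shape[0], r1:r1 + dwell.shape[1]]

dwell = np.zeros((64, 64))
dwell[32, 32] = 2.0                       # dwell twice as long at one point
removal = simulate_removal(dwell, gaussian_tif())
print(round(float(removal[32, 32]), 3))   # removal scales with dwell time
```

Inverting this relation (computing the dwell map that produces a target removal) is the deconvolution problem that the dwell-time optimisation in such projects solves.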

  6. Scaling precipitation input to distributed hydrological models by measured snow distribution

    Science.gov (United States)

    Voegeli, Christian; Lehning, Michael; Wever, Nander; Bavay, Mathias; Bühler, Yves; Marty, Mauro; Molnar, Peter

    2016-04-01

    Precise knowledge about the snow distribution in alpine terrain is crucial for various applications such as flood risk assessment, avalanche warning or water supply and hydropower. To simulate the seasonal snow cover development in alpine terrain, the spatially distributed, physics-based model Alpine3D is suitable. The model is often driven by spatial interpolations from automatic weather stations (AWS). As AWS are sparsely spread, the data needs to be interpolated, leading to errors in the spatial distribution of the snow cover - especially on subcatchment scale. With the recent advances in remote sensing techniques, maps of snow depth can be acquired with high spatial resolution and vertical accuracy. Here we use maps of the snow depth distribution, calculated from summer and winter digital surface models acquired with the airborne opto-electronic scanner ADS to preprocess and redistribute precipitation input data for Alpine3D to improve the accuracy of spatial distribution of snow depth simulations. A differentiation between liquid and solid precipitation is made, to account for different precipitation patterns that can be expected from rain and snowfall. For liquid precipitation, only large scale distribution patterns are applied to distribute precipitation in the simulation domain. For solid precipitation, an additional small scale distribution, based on the ADS data, is applied. The large scale patterns are generated using AWS measurements interpolated over the domain. The small scale patterns are generated by redistributing the large scale precipitation according to the relative snow depth in the ADS dataset. The determination of the precipitation phase is done using an air temperature threshold. Using this simple approach to redistribute precipitation, the accuracy of spatial snow distribution could be improved significantly. The standard deviation of absolute snow depth error could be reduced by a factor of 2 to less than 20 cm for the season 2011/12. 
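The redistribution step described above, scaling the interpolated solid precipitation by relative snow depth, can be sketched as follows; the snow-depth map is invented, not ADS data.

```python
import numpy as np

def redistribute_solid_precip(p_large, snow_depth):
    """Redistribute large-scale solid precipitation by relative snow depth.

    Each cell's precipitation is scaled by its snow depth relative to the
    domain mean, so the domain-average precipitation is preserved when the
    large-scale field is uniform.
    """
    rel = snow_depth / snow_depth.mean()
    return p_large * rel

p_large = np.full((4, 4), 10.0)              # mm, interpolated from AWS
hs = np.array([[0.5, 1.0, 1.5, 1.0],
               [0.8, 1.2, 1.6, 0.9],
               [0.4, 0.9, 1.3, 1.1],
               [0.6, 1.0, 1.4, 0.8]])        # m, snow-depth map (invented)
p_small = redistribute_solid_precip(p_large, hs)
print(round(float(p_small.mean()), 6))       # domain mean preserved
```

The liquid/solid split would sit in front of this step, e.g. an air-temperature threshold deciding which part of the precipitation gets the small-scale pattern.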

  7. MODELING OF THE PRIORITY SCHEDULING INPUT-LINE GROUP OUTPUT WITH MULTI-CHANNEL IN ATM EXCHANGE SYSTEM

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

In this paper, an extended Kendall model for priority scheduling with input-line group output and multiple channels in an Asynchronous Transfer Mode (ATM) exchange system is proposed, and the mean method is then used to model mathematically the non-typical, non-anticipative PRiority service (PR) model. Compared with the typical non-anticipative PR model, it expresses the characteristics of priority scheduling with input-line group output and multiple channels in an ATM exchange system. The simulation experiment shows that this model can reduce head-of-line (HOL) blocking and improve the performance of an input-queued ATM switch network dramatically. This model has a good development prospect in ATM exchange systems.

  8. Development of a General Form CO2 and Brine Flux Input Model

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sun, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-08-01

    The National Risk Assessment Partnership (NRAP) project is developing a science-based toolset for the quantitative analysis of the potential risks associated with changes in groundwater chemistry from CO2 injection. In order to address uncertainty probabilistically, NRAP is developing efficient, reduced-order models (ROMs) as part of its approach. These ROMs are built from detailed, physics-based process models to provide confidence in the predictions over a range of conditions. The ROMs are designed to reproduce accurately the predictions from the computationally intensive process models at a fraction of the computational time, thereby allowing the utilization of Monte Carlo methods to probe variability in key parameters. This report presents the procedures used to develop a generalized model for CO2 and brine leakage fluxes based on the output of a numerical wellbore simulation. The resulting generalized parameters and ranges reported here will be used for the development of third-generation groundwater ROMs.

  9. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2013-08-01

Full Text Available The assessment of climate change impacts on the risk for pesticide leaching needs careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-west Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-west Sweden, based on monthly change factors for 2070–2099. 30-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes of pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios could provide robust probabilistic estimates of future pesticide losses and assessments of changes in pesticide leaching risks.

  10. Modelling pesticide leaching under climate change: parameter vs. climate input uncertainty

    Directory of Open Access Journals (Sweden)

    K. Steffens

    2014-02-01

Full Text Available Assessing climate change impacts on pesticide leaching requires careful consideration of different sources of uncertainty. We investigated the uncertainty related to climate scenario input and its importance relative to parameter uncertainty of the pesticide leaching model. The pesticide fate model MACRO was calibrated against a comprehensive one-year field data set for a well-structured clay soil in south-western Sweden. We obtained an ensemble of 56 acceptable parameter sets that represented the parameter uncertainty. Nine different climate model projections of the regional climate model RCA3 were available, driven by different combinations of global climate models (GCMs), greenhouse gas emission scenarios and initial states of the GCM. The future time series of weather data used to drive the MACRO model were generated by scaling a reference climate data set (1970–1999) for an important agricultural production area in south-western Sweden, based on monthly change factors for 2070–2099. 30-year simulations were performed for different combinations of pesticide properties and application seasons. Our analysis showed that both the magnitude and the direction of predicted change in pesticide leaching from present to future depended strongly on the particular climate scenario. The effect of parameter uncertainty was of major importance for simulating absolute pesticide losses, whereas the climate uncertainty was relatively more important for predictions of changes of pesticide losses from present to future. The climate uncertainty should be accounted for by applying an ensemble of different climate scenarios. The aggregated ensemble prediction based on both acceptable parameterizations and different climate scenarios has the potential to provide robust probabilistic estimates of future pesticide losses.
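The delta-change scaling used to build the future weather series in both records can be sketched as follows. The monthly change factors here are invented placeholders, not the RCA3-derived ones.

```python
import numpy as np

def apply_change_factors(months, precip_ref, temp_ref,
                         precip_factor, temp_delta):
    """Delta-change scaling of a reference climate series.

    Multiplicative monthly factors for precipitation, additive monthly
    deltas for temperature; `months` gives the month (1-12) of each record.
    """
    pf = precip_factor[months - 1]
    td = temp_delta[months - 1]
    return precip_ref * pf, temp_ref + td

months = np.array([1, 1, 6, 6, 12])            # month of each daily record
p_ref = np.array([2.0, 0.0, 5.0, 1.0, 3.0])    # mm/day, reference series
t_ref = np.array([-1.0, 0.5, 15.0, 17.0, 2.0]) # degC, reference series
pf = np.linspace(1.2, 0.9, 12)                 # wetter winters, drier summers (invented)
td = np.full(12, 2.5)                          # uniform +2.5 K warming (invented)

p_fut, t_fut = apply_change_factors(months, p_ref, t_ref, pf, td)
print(np.round(p_fut, 2), np.round(t_fut, 2))
```

One known limitation of this approach, relevant to the uncertainty discussion above, is that it preserves the reference series' wet-day sequence and variability; only monthly means shift.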

  11. Autonomous attitude coordinated control for spacecraft formation with input constraint, model uncertainties, and external disturbances

    Institute of Scientific and Technical Information of China (English)

    Zheng Zhong; Song Shenmin

    2014-01-01

To synchronize the attitude of a spacecraft formation flying system, three novel autonomous control schemes are proposed to deal with the issue in this paper. The first one is an ideal autonomous attitude coordinated controller, which is applied to address the case with certain models and no disturbance. The second one is a robust adaptive attitude coordinated controller, which aims to tackle the case with external disturbances and model uncertainties. The last one is a filtered robust adaptive attitude coordinated controller, which is used to overcome the case with input constraint, model uncertainties, and external disturbances. The above three controllers do not need any external tracking signal and only require angular velocity and relative orientation between a spacecraft and its neighbors. Besides, the relative information is represented in the body frame of each spacecraft. The controllers are proved to be able to result in asymptotical stability almost everywhere. Numerical simulation results show that the proposed three approaches are effective for attitude coordination in a spacecraft formation flying system.

  12. Nuclear inputs of key iron isotopes for core-collapse modeling and simulation

    CERN Document Server

    Nabi, Jameel-Un

    2014-01-01

    From the modeling and simulation results of presupernova evolution of massive stars, it was found that isotopes of iron, $^{54,55,56}$Fe, play a significant role inside the stellar cores, primarily decreasing the electron-to-baryon ratio ($Y_{e}$) mainly via electron capture processes thereby reducing the pressure support. The neutrinos produced, as a result of these capture processes, are transparent to the stellar matter and assist in cooling the core thereby reducing the entropy. The structure of the presupernova star is altered both by the changes in $Y_{e}$ and the entropy of the core material. Here we present the microscopic calculation of Gamow-Teller strength distributions for isotopes of iron. The calculation is also compared with other theoretical models and experimental data. Presented also are stellar electron capture rates and associated neutrino cooling rates, due to isotopes of iron, in a form suitable for simulation and modeling codes. It is hoped that the nuclear inputs presented here should ...

  13. Modeling first impressions from highly variable facial images.

    Science.gov (United States)

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
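The attribute-based modelling step in this record, a linear regression from objectively measured facial attributes to a trait-factor score, evaluated on unseen faces, can be sketched with synthetic data (the real study used measured attributes from ambient photographs and human ratings):

```python
import numpy as np

rng = np.random.default_rng(1)
n_faces, n_attrs = 500, 20
X = rng.normal(size=(n_faces, n_attrs))              # measured attributes
w_true = rng.normal(size=n_attrs)                    # hidden attribute weights
y = X @ w_true + rng.normal(scale=2.0, size=n_faces) # trait ratings + noise

# Fit ordinary least squares on 400 faces, evaluate on 100 held-out faces.
train, test = slice(0, 400), slice(400, 500)
Xb = np.column_stack([np.ones(n_faces), X])          # add intercept column
w, *_ = np.linalg.lstsq(Xb[train], y[train], rcond=None)

pred = Xb[test] @ w
r2 = 1 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
print(round(float(r2), 2))   # variance accounted for on unseen faces
```

Ranking attributes by the magnitude of their fitted weights (on standardised inputs) mirrors the factor-attribute correlation ranking described in the abstract.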

  14. The sensitivity of conduit flow models to basic input parameters: there is no need for magma trolls!

    Science.gov (United States)

    Thomas, M. E.; Neuberg, J. W.

    2012-04-01

Many conduit flow models now exist and some of these models are becoming extremely complicated, conducted in three dimensions and incorporating the physics of compressible three-phase fluids (magmas), intricate conduit geometries and fragmentation processes, to name but a few examples. These highly specialised models are being used to explain observations of the natural system, and there is a danger that possible explanations may be getting needlessly complex. It is coherent, for instance, to propose the involvement of sub-surface dwelling magma trolls as an explanation for a change in a volcano's eruptive style, but assuming the simplest explanation would prevent such additions, unless they were absolutely necessary. While the understanding of individual, often small-scale conduit processes is increasing rapidly, is this level of detail necessary? How sensitive are these models to small changes in the most basic governing parameters? Can these changes be used to explain observed behaviour? Here we will examine the sensitivity of conduit flow models to changes in the melt viscosity, one of the fundamental inputs to any such model. However, even addressing this elementary issue is not straightforward. There are several viscosity models in existence; how do they differ? Can models that use different viscosity models be realistically compared? Each of these viscosity models is also heavily dependent on the magma composition and/or temperature, and how well are these variables constrained? Magma temperatures and water contents are often assumed as "ball-park" figures, and are very rarely exactly known for the periods of observation the models are attempting to explain, yet they exert a strong control on the melt viscosity. The role of both these variables will be discussed. For example, using one of the available viscosity models, a 20 K decrease in melt temperature results in a greater than 100% increase in the melt viscosity.

  15. Predicting musically induced emotions from physiological inputs: Linear and neural network models

    Directory of Open Access Journals (Sweden)

    Frank A. Russo

    2013-08-01

Full Text Available Listening to music often leads to physiological responses. Do these physiological responses contain sufficient information to infer emotion induced in the listener? The current study explores this question by attempting to predict judgments of 'felt' emotion from physiological responses alone using linear and neural network models. We measured five channels of peripheral physiology from 20 participants – heart rate, respiration, galvanic skin response, and activity in corrugator supercilii and zygomaticus major facial muscles. Using valence and arousal (VA) dimensions, participants rated their felt emotion after listening to each of 12 classical music excerpts. After extracting features from the five channels, we examined their correlation with VA ratings, and then performed multiple linear regression to see if a linear relationship between the physiological responses could account for the ratings. Although linear models predicted a significant amount of variance in arousal ratings, they were unable to do so with valence ratings. We then used a neural network to provide a nonlinear account of the ratings. The network was trained on the mean ratings of eight of the 12 excerpts and tested on the remainder. Performance of the neural network confirms that physiological responses alone can be used to predict musically induced emotion. The nonlinear model derived from the neural network was more accurate than linear models derived from multiple linear regression, particularly along the valence dimension. A secondary analysis allowed us to quantify the relative contributions of inputs to the nonlinear model. The study represents a novel approach to understanding the complex relationship between physiological responses and musically induced emotion.

  16. Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction

    CERN Document Server

    Li, Peng; Li, Xin; Pileggi, Lawrence T; Nassif, Sani R

    2011-01-01

    Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.

  17. Coupling LiDAR and thermal imagery to model the effects of riparian vegetation shade and groundwater inputs on summer river temperature.

    Science.gov (United States)

    Wawrzyniak, Vincent; Allemand, Pascal; Bailly, Sarah; Lejot, Jérôme; Piégay, Hervé

    2017-03-16

In the context of global warming, it is important to understand the drivers controlling river temperature in order to mitigate temperature increases. A modeling approach can be useful for quantifying the respective importance of the different drivers, notably groundwater inputs and riparian shading, which are potentially critical for reducing summer temperature. In this study, we use a one-dimensional deterministic model to predict summer water temperature at an hourly time step over a 21 km reach of the lower Ain River (France). This sinuous gravel-bed river undergoes summer temperature increases with potential impacts on salmonid populations. The model considers heat fluxes at the water-air interface, attenuation of solar radiation by riparian forest, groundwater inputs and hydraulic characteristics of the river. Modeling is performed over two periods of five days during the summers of 2010 and 2011. River properties are obtained from hydraulic modeling based on cross-section profiles and water level surveys. We model shadows of the vegetation on the river surface using LiDAR data. Groundwater inputs are determined using airborne thermal infrared (TIR) images and hydrological data. Results indicate that vegetation and groundwater inputs can mitigate high water temperatures during summer. The riparian shading effect is fairly similar between the two periods (-0.26±0.12°C and -0.31±0.18°C). Groundwater input cooling varies between the two studied periods: when groundwater discharge represents 16% of the river discharge, it cools the river down by 0.68±0.13°C, while the effect is very low (0.11±0.01°C) when groundwater contributes only 2% to the discharge. The effect of shading varies through the day: low in the morning and high during the afternoon and evening, whereas that induced by groundwater inputs is more constant through the day. Overall, the effect of riparian vegetation and groundwater inputs represents about 10% in 2010 and 24% in 2011
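At the point where groundwater enters, the cooling term in such a model reduces to a discharge-weighted energy balance under full mixing. A minimal sketch with hypothetical numbers (the 16% discharge share echoes the record; temperatures and discharges are invented):

```python
def mixed_temperature(q_river, t_river, q_gw, t_gw):
    """Fully mixed water temperature below a groundwater input.

    Discharge-weighted energy balance, assuming equal heat capacity
    and instantaneous mixing.
    """
    return (q_river * t_river + q_gw * t_gw) / (q_river + q_gw)

t_before = 22.0   # degC, summer river temperature upstream of the input
t_gw = 12.0       # degC, groundwater temperature (hypothetical)
q_river = 84.0    # m3/s (hypothetical)
q_gw = 16.0       # m3/s, i.e. 16% of the combined discharge

t_after = mixed_temperature(q_river, t_before, q_gw, t_gw)
print(round(t_before - t_after, 2))   # cooling in degC at the mixing point
```

The cooling scales with both the groundwater fraction and the temperature contrast, which is why the 2%-discharge period in the record shows an order-of-magnitude smaller effect.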

  18. Effects of model input data uncertainty in simulating water resources of a transnational catchment

    Science.gov (United States)

    Camargos, Carla; Breuer, Lutz

    2016-04-01

Landscapes consist of different ecosystem components, and how these components affect water quantity and quality needs to be understood. We start from the assumption that water resources are generated in landscapes and that rural land use (particularly agriculture) has a strong impact on water resources that are used downstream for domestic and industrial supply. Partly located in the north of Luxembourg and partly in the southeast of Belgium, the Haute-Sûre catchment covers about 943 km². Within the catchment, the Haute-Sûre Lake is an important source of drinking water for the Luxembourg population, satisfying 30% of the city's demand. The objective of this study is to investigate the impact of spatial input data uncertainty on water resources simulations for the Haute-Sûre catchment. We apply the SWAT model for the period 2006 to 2012 and use a variety of digital information on soils, elevation and land use at various spatial resolutions. Several objective functions are evaluated, and we consider the resulting parameter uncertainty to quantify an important part of the global uncertainty in model simulations.

  19. Limited fetch revisited: Comparison of wind input terms, in surface wave modeling

    Science.gov (United States)

    Pushkarev, Andrei; Zakharov, Vladimir

    2016-07-01

    Results pertaining to numerical solutions of the Hasselmann kinetic equation (HE) for wind-driven sea spectra, in the fetch-limited geometry, are presented. Five versions of source functions, including the recently introduced ZRP model (Zakharov et al., 2012), have been studied, using the exact expression for Snl and high-frequency implicit dissipation due to wave-breaking. Four of the five experiments were done in the absence of spectral peak dissipation for various Sin terms. They demonstrated the dominance of quadruplet wave-wave interaction in the energy balance and the formation of self-similar regimes of unlimited wave energy growth along the fetch. Among them was the ZRP model, which agreed strongly with dozens of field observations performed in seas and lakes since 1947. The fifth experiment, using the WAM3 wind input term, employed additional spectral peak dissipation and reproduced the results of a previous, similar numerical simulation described in Komen et al. (1994), but only supported the field experiments for moderate fetches, demonstrating total energy saturation at half of the Pierson-Moskowitz limit. An alternative framework for HE numerical simulation is proposed, along with a set of tests allowing one to select physically justified source terms.

  20. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    Science.gov (United States)

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    An influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of an influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR, such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics.
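
The inoperability dynamics described above follow the standard dynamic inoperability input-output model (IIM), in which each sector's inoperability q evolves toward the balance q = A*q + c*. A minimal sketch with three hypothetical sectors (invented numbers, not the NCR data) is:

```python
import numpy as np

# A_star: normalized interdependency matrix; c_star: direct workforce
# perturbation per sector (hypothetical values for illustration only).
A_star = np.array([[0.1, 0.2, 0.0],
                   [0.1, 0.1, 0.3],
                   [0.0, 0.2, 0.1]])
c_star = np.array([0.05, 0.0, 0.0])        # only sector 0 is directly hit
K = np.diag([0.5, 0.3, 0.4])               # sector resilience coefficients
x_planned = np.array([100.0, 80.0, 60.0])  # as-planned outputs ($M)

# Dynamic recursion: q(t+1) = q(t) + K (A* q(t) + c* - q(t))
q = np.zeros(3)
for _ in range(200):
    q = q + K @ (A_star @ q + c_star - q)

# The fixed point of the recursion solves the static IIM q = A* q + c*:
q_eq = np.linalg.solve(np.eye(3) - A_star, c_star)

# Economic loss per sector = as-planned output scaled by inoperability:
loss = x_planned * q
```

The two metrics in the abstract fall out directly: `q` is the inoperability vector and `loss` its monetary counterpart, and they can indeed rank the sectors differently.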

  1. A MULTIYEAR LAGS INPUT-HOLDING-OUTPUT MODEL ON EDUCATION WITH EXCLUDING IDLE CAPITAL

    Institute of Scientific and Technical Information of China (English)

    Xue FU; Xikang CHEN

    2009-01-01

    This paper develops a multiyear-lag Input-Holding-Output (I-H-O) model on education, excluding idle capital, to address the reasonable education structure in support of a sustainable development strategy in China. First, the model considers the multiyear lag of human capital because the lag time of human capital is even longer and more important than that of fixed capital. Second, it considers the idle capital resulting from output decline in education, for example, student decreases in primary school. A new generalized Leontief dynamic inverse is deduced to obtain a positive solution on education when output declines as well as when it expands. After compiling the 2000 I-H-O table on education, the authors adopt a modification-by-step method to treat nonlinear coefficients, and calculate the education scale, the requirement of human capital, and education expenditure from 2005 to 2020. It is found that the structural imbalance of human capital is a serious problem for Chinese economic development.

  2. Three-Verb Clusters in Interference Frisian: A Stochastic Model over Sequential Syntactic Input.

    Science.gov (United States)

    Hoekstra, Eric; Versloot, Arjen

    2016-03-01

    Interference Frisian (IF) is a variety of Frisian, spoken mostly by younger speakers, which is heavily influenced by Dutch. IF exhibits all six logically possible word orders in a cluster of three verbs. This phenomenon has been researched by Koeneman and Postma (2006), who argue for a parameter theory, which leaves frequency differences between the various orders unexplained. Rejecting Koeneman and Postma's parameter theory, but accepting their conclusion that Dutch (and Frisian) data are input for the grammar of IF, we argue that the word order preferences of speakers of IF are determined by frequency and similarity. More specifically, three-verb clusters in IF are sensitive to: their linear left-to-right similarity to two-verb clusters and three-verb clusters in Frisian and in Dutch; and the (estimated) frequency of two- and three-verb clusters in Frisian and Dutch. The model is shown to work best if Dutch and Frisian, and two- and three-verb clusters, have equal impact factors. If different impact factors are taken, the model's predictions do not change substantially, testifying to its robustness. This analysis is in line with recent ideas that the sequential nature of human speech is more important to syntactic processes than commonly assumed, and that less burden need be put on the hierarchical dimension of syntactic structure.

  3. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    Science.gov (United States)

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN), and the simulations are evaluated against observatio...

  4. A pre-calibration approach to selecting optimum inputs for hydrological models in data-scarce regions: a case study in Jordan

    Science.gov (United States)

    Tarawneh, Esraa; Bridge, Jonathan; Macdonald, Neil

    2016-04-01

    This study reports a pre-calibration methodology to select optimum inputs to hydrological models in dryland environments, demonstrated on the semi-arid Wala catchment, Jordan (1743 km2). The Soil and Water Assessment Tool (SWAT) is used to construct eighteen model scenarios combining three land-use, two soil and three weather datasets spanning 1979-2002. Weather datasets include locally-recorded precipitation and temperature data and global reanalysis data products. Soil data comprise a high-resolution map constructed from national soil survey data and a significantly lower-resolution global soil map. Land-use maps are obtained from global and local sources, with some modifications applied to the latter using available descriptive land-use information. Variability in model performance arising from using different dataset combinations is assessed by testing uncalibrated model outputs against discharge and sediment load data using r2, Nash-Sutcliffe Efficiency (NSE), RSR and PBIAS. A ranking procedure identifies best-performing input data combinations and trends among the scenarios. In the case of Wala, Jordan, global weather inputs yield considerable improvements on discontinuous local datasets; conversely, local high-resolution soil mapping data perform considerably better than globally-available soil data. NSE values vary from 0.56 to -12 and from 0.79 to -85 for the best and worst-performing scenarios against observed discharge and sediment data respectively. Full calibration remains an essential step prior to model application. However, the methodology presented provides a transparent, transferable approach to selecting the most robust suite of input data and hence minimising structural biases in model performance arising when calibration proceeds from low-quality initial assumptions.
In regions where data are scarce, their quality is unregulated and survey resources are limited, such methods are essential in improving confidence in models which underpin critical water
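
The four evaluation statistics named above have standard definitions; the sketch below (illustrative, not the study's own code) shows how uncalibrated scenario outputs could be scored and ranked against observations:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 matches the mean, < 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation by the model."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def r2(obs, sim):
    """Squared Pearson correlation coefficient."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Rank two hypothetical input-data scenarios against observed discharge:
obs = np.array([3.0, 5.0, 9.0, 6.0, 4.0])
scenarios = {"global_weather": np.array([3.2, 4.8, 8.5, 6.3, 4.1]),
             "local_weather":  np.array([2.0, 3.5, 6.0, 9.0, 5.5])}
ranking = sorted(scenarios, key=lambda s: nse(obs, scenarios[s]), reverse=True)
```

In practice a composite ranking over all four statistics, as in the paper, would replace the single-metric sort shown here.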

  5. Identification of a Manipulator Model Using the Input Error Method in the Mathematica Program

    Directory of Open Access Journals (Sweden)

    Leszek CEDRO

    2009-06-01

    Full Text Available The problem of parameter identification for a four-degree-of-freedom robot was solved using the Mathematica program. The identification was performed by means of specially developed differential filters [1]. Using the example of a manipulator, we analyze the capabilities of the Mathematica program that can be applied to solve problems related to the modeling, control, simulation and identification of a system [2]. The responses of the identification process for the variables and the values of the quality function are included.

  6. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    Energy Technology Data Exchange (ETDEWEB)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D (in random space) elliptic stochastic partial differential equations.
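
For readers unfamiliar with gPC, the expansion itself can be sketched with plain least-squares regression (the simple non-Bayesian baseline, not the authors' method). For a scalar standard Gaussian input, the probabilists' Hermite polynomials He_k form the gPC basis, and u = exp(xi) has the known coefficients e^{1/2}/k! to check against:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)
xi = rng.standard_normal(2000)   # samples of the standard Gaussian input
u = np.exp(xi)                   # "expensive model" evaluations

P = 8                            # truncation order of the gPC expansion
Psi = hermevander(xi, P)         # design matrix of He_0(xi), ..., He_P(xi)
c, *_ = np.linalg.lstsq(Psi, u, rcond=None)

# Analytically, exp(xi) = e^{1/2} * sum_k He_k(xi) / k!,
# so the recovered coefficients should satisfy c[k] ~= e^{0.5} / k!.
```

When samples are fewer than terms, this ordinary regression over-fits, which is exactly the regime the Bayesian sparsity machinery of the paper addresses.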

  7. Including operational data in QMRA model: development and impact of model inputs.

    Science.gov (United States)

    Jaidi, Kenza; Barbeau, Benoit; Carrière, Annie; Desjardins, Raymond; Prévost, Michèle

    2009-03-01

    A Monte Carlo model, based on the Quantitative Microbial Risk Analysis (QMRA) approach, has been developed to assess the relative risks of infection associated with the presence of Cryptosporidium and Giardia in drinking water. The impact of various approaches for modelling the initial parameters of the model on the final risk assessments is evaluated. The Monte Carlo simulations that we performed showed that the occurrence of parasites in raw water was best described by a mixed distribution: log-Normal for concentrations above the detection limit (DL) and a uniform distribution for concentrations below the DL; the choice of distribution affected the estimated risks significantly. The mean annual risks for conventional treatment are: 1.97E-03 (removal credit adjusted by log parasite = log spores), 1.58E-05 (log parasite = 1.7 x log spores) or 9.33E-03 (regulatory credits based on the turbidity measurement in filtered water). Using full-scale validated SCADA data, the simplified calculation of CT performed at the plant was shown to largely underestimate the risk relative to a more detailed CT calculation, which takes into consideration the downtime and system failure events identified at the plant (1.46E-03 vs. 3.93E-02 for the mean risk).
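
The QMRA chain (source concentration, treatment removal, ingested dose, dose-response, annual risk) can be sketched in a few lines of Monte Carlo. All numbers below are hypothetical placeholders, not the distributions or credits calibrated in the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical inputs (illustration only):
conc = rng.lognormal(mean=-2.0, sigma=1.0, size=n)   # oocysts per litre in raw water
log_removal = rng.uniform(2.0, 3.0, size=n)          # treatment removal credit (log10 units)
volume = 1.0                                         # litres of water ingested per day
r = 0.004                                            # exponential dose-response parameter

dose = conc * 10.0 ** (-log_removal) * volume        # mean ingested dose per day
p_daily = 1.0 - np.exp(-r * dose)                    # daily probability of infection
p_annual = 1.0 - (1.0 - p_daily) ** 365              # annual probability of infection

mean_annual_risk = p_annual.mean()
```

Varying the input distributions (e.g. the below-DL model for `conc`, or the removal credit) and re-running the simulation is precisely how the sensitivity of the final risk to those modelling choices is assessed.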

  8. Vadose zone-attenuated artificial recharge for input to a ground water model.

    Science.gov (United States)

    Nichols, William E; Wurstner, Signe K; Eslinger, Paul W

    2007-01-01

    Accurate representation of artificial recharge is requisite to calibration of a ground water model of an unconfined aquifer for a semiarid or arid site with a vadose zone that imparts significant attenuation of liquid transmission and substantial anthropogenic liquid discharges. Under such circumstances, artificial recharge occurs in response to liquid disposal to the vadose zone in areas that are small relative to the ground water model domain. Natural recharge, in contrast, is spatially variable and occurs over the entire upper boundary of a typical unconfined ground water model. An improved technique for partitioning artificial recharge from simulated total recharge for inclusion in a ground water model is presented. The improved technique is applied using data from the semiarid Hanford Site. From 1944 until the late 1980s, when Hanford's mission was the production of nuclear materials, the quantities of liquid discharged from production facilities to the ground vastly exceeded natural recharge. Nearly all hydraulic head data available for use in calibrating a ground water model at this site were collected during this period or later, when the aquifer was under the diminishing influence of the massive water disposals. The vadose zone is typically 80 to 90 m thick at the Central Plateau where most production facilities were located at this semiarid site, and its attenuation of liquid transmission to the aquifer can be significant. The new technique is shown to improve the representation of artificial recharge and thereby contribute to improvement in the calibration of a site-wide ground water model.

  9. Variable Relation Parametric Model on Graphics Modelon for Collaboration Design

    Institute of Scientific and Technical Information of China (English)

    DONG Yu-de; ZHAO Han; LI Yan-feng

    2005-01-01

    A new approach to a variable relation parametric model for collaboration design, based on the graphic modelon, has been put forward. The paper gives a parametric description model of the graphic modelon and a relating method for different graphic modelons based on variable constraints. At the same time, with the aim of engineering application in collaboration design, the autonomous constraint within a modelon and the relative constraint between two modelons are given. Finally, with the tool of a variable and relation database, the solving method for variable relating and variable-driven design among different graphic modelons in a part, and the double-acting variable relating parametric method among different parts for collaboration, are given.

  10. Assessing Spatial and Attribute Errors of Input Data in Large National Datasets for use in Population Distribution Models

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, Lauren A [ORNL; Urban, Marie L [ORNL; Myers, Aaron T [ORNL; Bhaduri, Budhendra L [ORNL; Bright, Eddie A [ORNL; Coleman, Phil R [ORNL

    2007-01-01

    Geospatial technologies and digital data have developed and been disseminated rapidly in conjunction with increasing computing performance and internet availability. The ability to store and transmit large datasets has encouraged the development of national datasets in geospatial format. National datasets are used by numerous agencies for analysis and modeling purposes because these datasets are standardized and are considered to be of acceptable accuracy. At Oak Ridge National Laboratory, a national population model incorporating multiple ancillary variables was developed, and one of the required inputs is a school database. This paper examines inaccuracies present within two national school datasets, TeleAtlas North America (TANA) and National Center of Education Statistics (NCES). Schools are an important component of the population model because they serve as locations containing dense clusters of vulnerable populations. It is therefore essential to validate the quality of the school input data, which was made possible by increasing national coverage of high-resolution imagery. Schools were also chosen since a 'real-world' representation of K-12 schools for the Philadelphia School District was produced, thereby enabling 'ground-truthing' of the national datasets. Analyses found the national datasets neither standardized nor complete, containing 76 to 90% of existing schools. Enrollment values in the national datasets were also temporally inaccurate, with 89% failing to match 2003 data. Spatial rectification was required for 87% of the NCES points, of which 58% of the errors were attributed to the geocoding process. Lastly, it was found that combining the two national datasets produced a more useful and accurate solution. Acknowledgment: Prepared by Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee 37831-6285, managed by UT-Battelle, LLC for the U.S. Department of Energy under contract no

  11. Hydrological and sedimentological modeling of the Okavango Delta, Botswana, using remotely sensed input and calibration data

    Science.gov (United States)

    Milzow, C.; Kgotlhang, L.; Kinzelbach, W.; Bauer-Gottwein, P.

    2006-12-01

    medium-term. The Delta's size and limited accessibility make direct data acquisition on the ground difficult. Remote sensing methods are the most promising source of spatially distributed data for both model input and calibration. Besides ground data, METEOSAT and NOAA data are used for precipitation and evapotranspiration inputs, respectively. The topography is taken from a study by Gumbricht et al. (2004), in which the SRTM shuttle mission data are refined using remotely sensed vegetation indexes. The aquifer thickness was determined with an aeromagnetic survey. For calibration, the simulated flooding patterns are compared to patterns derived from satellite imagery: recent ENVISAT ASAR and older NOAA AVHRR scenes. The final objective is to better understand the hydrological and hydraulic aspects of this complex ecosystem and eventually predict the consequences of human interventions. The model will provide a tool for decision makers to assess the impact of possible upstream dams and water abstraction scenarios.

  12. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    Directory of Open Access Journals (Sweden)

    S. C. van Pelt

    2009-12-01

    Full Text Available Studies have demonstrated that precipitation on Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on discharge of the river Meuse. The use of bias correction methods is important when the effect of precipitation change on river discharge is studied. The objective of this paper is to investigate the effect of using two different bias correction methods on output from a Regional Climate Model (RCM simulation. In this study a Regional Atmospheric Climate Model (RACMO2 run is used, forced by ECHAM5/MPIOM under the condition of the SRES-A1B emission scenario, with a 25 km horizontal resolution. The RACMO2 runs contain a systematic precipitation bias on which two bias correction methods are applied. The first method corrects for the wet day fraction and wet day average (WD bias correction and the second method corrects for the mean and coefficient of variance (MV bias correction. The WD bias correction initially corrects well for the average, but it appears that too many successive precipitation days were removed with this correction. The second method performed less well on average bias correction, but the temporal precipitation pattern was better. Subsequently, the discharge was calculated by using RACMO2 output as forcing to the HBV-96 hydrological model. A large difference was found between the simulated discharge of the uncorrected RACMO2 run, the WD bias corrected run and the MV bias corrected run. These results show the importance of an appropriate bias correction.
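
The MV (mean and coefficient of variation) correction described above can be illustrated as a linear rescaling of wet-day precipitation; this is a simplified sketch of the idea, not the exact procedure applied to the RACMO2 fields:

```python
import numpy as np

def mv_bias_correct(p_mod, p_obs, wet_threshold=0.1):
    """Rescale modelled wet-day precipitation so that its mean and
    coefficient of variation match the observations (illustrative sketch)."""
    p_mod = np.asarray(p_mod, float).copy()
    p_obs = np.asarray(p_obs, float)
    wet_mod = p_mod > wet_threshold
    wet_obs = p_obs > wet_threshold
    m_o = p_obs[wet_obs].mean()
    cv_o = p_obs[wet_obs].std() / m_o          # observed coefficient of variation
    m_m = p_mod[wet_mod].mean()
    s_m = p_mod[wet_mod].std()
    b = cv_o * m_o / s_m                       # slope matches the target std (cv_o * m_o)
    a = m_o - b * m_m                          # intercept then matches the target mean
    p_mod[wet_mod] = np.maximum(a + b * p_mod[wet_mod], 0.0)
    return p_mod

# Synthetic demonstration with a wet bias in the model series:
rng = np.random.default_rng(7)
p_obs = 5.0 + 10.0 * rng.random(1000)          # observed wet-day precipitation (mm)
p_mod = 8.0 + 20.0 * rng.random(1000)          # biased modelled precipitation (mm)
p_corr = mv_bias_correct(p_mod, p_obs)
```

A WD-style correction would instead first adjust the wet-day fraction (removing or adding drizzle days) before matching the wet-day average, which is where the two methods diverge in their temporal behaviour.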

  13. Analysis of Design Variables of Annular Linear Induction Electromagnetic Pump using an MHD Model

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Jae Sik; Kim, Hee Reyoung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2015-05-15

    The generated force is affected by many factors, including the electrical input, hydrodynamic flow, geometrical shape, and so on. These factors, which are the design variables of an ALIP, should be suitably analyzed to design an ALIP optimally. Analysis of the developed pressure and efficiency of the ALIP under changes of the design variables is required for an ALIP satisfying the requirements. In this study, the design variables of the ALIP are analyzed using an ideal MHD analysis model. The developed pressure and efficiency of the ALIP are derived and analyzed with respect to changes in the main design variables: pump core length, inner core diameter, flow gap, and number of coil turns.

  14. A Variable Input-Output Model for Inflation, Growth, and Energy for the Korean Economy.

    Science.gov (United States)

    1983-12-01

    ...and the sales price of output as determinants of the technical coefficients were suggested by Walras [Ref. 4] and many other economists [Ref. 5]. Arrow...1967, 1975 and 1979. Seoul, Korea: Research Department. 4. Walras, L. Elements of Pure Economics (English Edition). George Allen and Unwin

  15. Analysis of MODIS snow cover time series over the alpine regions as input for hydrological modeling

    Science.gov (United States)

    Notarnicola, Claudia; Rastner, Philipp; Irsara, Luca; Moelg, Nico; Bertoldi, Giacomo; Dalla Chiesa, Stefano; Endrizzi, Stefano; Zebisch, Marc

    2010-05-01

    Snow extent and the related physical properties are key parameters in hydrology, weather forecasting and hazard warning as well as in climatological models. Satellite sensors offer a unique advantage in monitoring snow cover due to their temporal and spatial synoptic view. The Moderate Resolution Imaging Spectroradiometer (MODIS) from NASA is especially useful for this purpose due to its high revisit frequency. However, in order to evaluate the role of snow in the water cycle of a catchment, such as runoff generation due to snowmelt, remote sensing data need to be assimilated in hydrological models. This study presents a comparison on a multi-temporal basis between snow cover data derived from (1) MODIS images, (2) LANDSAT images, and (3) predictions by the hydrological model GEOtop [1,3]. The test area is located in the catchment of the Matscher Valley (South Tyrol, Northern Italy). The snow cover maps derived from MODIS images are obtained using a newly developed algorithm taking into account the specific requirements of mountain regions with a focus on the Alps [2]. This algorithm requires the standard MODIS products MOD09 and MOD02 as input data and generates snow cover maps at a spatial resolution of 250 m. The final output is a combination of MODIS AQUA and MODIS TERRA snow cover maps, thus reducing the presence of cloudy pixels and no-data values due to topography. By using these maps, daily time series from the winter season (November - May) 2002 until 2008/2009 have been created. Along with snow maps from MODIS images, some snow cover maps derived from LANDSAT images have also been used, due to their high resolution (... [2] ...manto nevoso in aree alpine con dati MODIS multi-temporali e modelli idrologici [snow cover in alpine areas with multi-temporal MODIS data and hydrological models], 13th ASITA National Conference, 1-4.12.2009, Bari, Italy. [3] Zanotti F., Endrizzi S., Bertoldi G. and Rigon R. 2004. The GEOtop snow module. Hydrological Processes, 18: 3667-3679. DOI:10.1002/hyp.5794.

  16. Binary outcome variables and logistic regression models

    Institute of Scientific and Technical Information of China (English)

    Xinhua LIU

    2011-01-01

    Biomedical researchers often study binary variables that indicate whether or not a specific event, such as remission of depression symptoms, occurs during the study period. The indicator variable Y takes two values, usually coded as one if the event (remission) is present and zero if the event is not present (non-remission). Let p be the probability that the event occurs (Y = 1); then 1 - p will be the probability that the event does not occur (Y = 0).
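
A logistic regression model links p to covariates through the logit: log(p/(1-p)) = b0 + b1*x. The sketch below fits the model by Newton-Raphson on simulated remission data (the coefficients are hypothetical, chosen only for the demonstration):

```python
import numpy as np

# Simulate a binary outcome Y from a single covariate x:
rng = np.random.default_rng(1)
x = rng.normal(size=500)
true_b0, true_b1 = -0.5, 2.0
p_true = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * x)))
y = (rng.random(500) < p_true).astype(float)   # 1 = event (remission), 0 = no event

# Maximum-likelihood fit via Newton-Raphson (IRLS):
X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)                          # variance weights
    beta = beta + np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (y - p))

p_hat = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted event probabilities
```

exp(beta[1]) is then the estimated odds ratio for a one-unit increase in x, which is the quantity usually reported in biomedical applications.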

  17. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input subs

  18. The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input

    Science.gov (United States)

    Senner, Jill E.; Baud, Matthew R.

    2017-01-01

    An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…

  19. Ecological input-output modeling for embodied resources and emissions in Chinese economy 2005

    Science.gov (United States)

    Chen, Z. M.; Chen, G. Q.; Zhou, J. B.; Jiang, M. M.; Chen, B.

    2010-07-01

    For the embodiment of natural resources and environmental emissions in the Chinese economy of 2005, a biophysical balance modeling is carried out based on an extension of the economic input-output table into an ecological one, integrating the economy with its various environmental driving forces. The included resource flows into the primary resource sectors and environmental emission flows from the primary emission sectors belong to seven categories: energy resources in terms of fossil fuels, hydropower and nuclear energy, biomass, and other sources; freshwater resources; greenhouse gas emissions in terms of CO2, CH4, and N2O; industrial wastes in terms of waste water, waste gas, and waste solid; exergy in terms of fossil fuel resources, biological resources, mineral resources, and environmental resources; and solar emergy and cosmic emergy in terms of climate resources, soil, fossil fuels, and minerals. The resulting database of embodiment intensities and sectoral embodiments of natural resources and environmental emissions has essential implications in the context of systems ecology and ecological economics in general, and of global climate change in particular.
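
Embodiment accounting of this kind rests on the standard input-output identity: with technical coefficients A and direct resource intensities d, the embodied intensities are eps = d (I - A)^{-1}. A two-sector toy example (hypothetical numbers, not the 2005 Chinese table):

```python
import numpy as np

# Two-sector illustration with invented flows:
Z = np.array([[20.0, 30.0],    # inter-sector economic transactions
              [40.0, 10.0]])
x = np.array([100.0, 100.0])   # total sectoral outputs
R = np.array([50.0, 5.0])      # direct resource inputs per sector (e.g. PJ of energy)

A = Z / x                      # technical coefficients a_ij = z_ij / x_j
d = R / x                      # direct intensity: resource per unit output
eps = d @ np.linalg.inv(np.eye(2) - A)   # embodied intensity per unit final demand

y = x - Z.sum(axis=1)          # final demand from the row balance x = Z·1 + y
```

The balance check `eps @ y == R.sum()` expresses the core of the method: all direct resource use is fully re-allocated to final demand through the Leontief inverse, with nothing lost or double-counted.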

  20. A seismic free field input model for FE-SBFE coupling in time domain

    Institute of Scientific and Technical Information of China (English)

    阎俊义; 金峰; 徐艳杰; 王光纶; 张楚汉

    2003-01-01

    A seismic free-field input formulation for the coupling of the finite element (FE) and scaled boundary finite element (SBFE) methods is proposed to perform unbounded soil-structure interaction analysis in the time domain. Based on the substructure technique, seismic excitation of the soil-structure system is represented by the free-field motion of an elastic half-space. To reduce the computational effort, the acceleration unit-impulse response function of the unbounded soil is decomposed into two functions: linear and residual. The latter converges to zero and can be truncated as required. With the prescribed tolerance parameter, the balance between accuracy and efficiency of the procedure can be controlled. The validity of the model is verified by the scattering analysis of a hemispherical canyon subjected to plane harmonic P, SV and SH wave incidence. Numerical results show that the new procedure is very efficient for seismic problems within a normal range of frequencies. The coupling procedure presented herein can be applied to linear and nonlinear earthquake response analysis of practical structures built on unbounded soil.

  1. LAN Modeling in Rural Areas Based on Variable Metrics Using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Ak. Ashakumar Singh

    2013-03-01

    Full Text Available The present global scenario strongly demands communication between urban and rural areas. A new system for rural broadband access requires the integration of LAN and IEEE 802.11 WLAN technologies. Variable metrics such as the access protocol, user traffic profile, buffer size, and data collision and retransmission are involved in the modeling of such a LAN. In this paper, a fuzzy-logic-based LAN modeling technique is designed for cases in which the variable metrics are imprecise. The technique involves the fuzzification of the input variable metrics, rule evaluation, and aggregation of the rule outputs. The implementation is done using a Mamdani-style Fuzzy Inference System (FIS) in MATLAB 7.6 for the representation of the reasoning and effective analysis. Four LAN systems are tested to analyze the potential variable metrics to bring smooth communication to rural societies.
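
A Mamdani-style FIS of the kind described can be sketched without a toolbox: fuzzify the crisp input with triangular membership functions, clip each rule's output set by its firing strength, aggregate with max, and defuzzify by centroid. The single input, rules, and ranges below are invented for illustration (the paper uses four input metrics):

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def lan_performance(traffic):
    """Tiny Mamdani FIS: input = user traffic load (0-100%),
    output = expected LAN performance (0-100). Hypothetical rules:
      R1: IF traffic is low  THEN performance is high
      R2: IF traffic is high THEN performance is low
    """
    z = np.linspace(0.0, 100.0, 1001)            # output universe of discourse
    # Fuzzification of the crisp input:
    low = trimf(traffic, -1.0, 0.0, 60.0)
    high = trimf(traffic, 40.0, 100.0, 101.0)
    # Mamdani implication (clip by rule strength) and max aggregation:
    agg = np.maximum(np.minimum(low, trimf(z, 40.0, 100.0, 101.0)),
                     np.minimum(high, trimf(z, -1.0, 0.0, 60.0)))
    # Defuzzification by centroid of the aggregated output set:
    return np.sum(z * agg) / np.sum(agg)
```

Light traffic thus yields a high crisp performance score and heavy traffic a low one, mirroring the qualitative reasoning the FIS is meant to encode.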

  2. Regional Agricultural Input-Output Model and Countermeasure for Production and Income Increase of Farmers in Southern Xinjiang,China

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The agricultural input and output status of southern Xinjiang, China is introduced: lack of agricultural input, a low level of agricultural modernization, excessive fertilizer use, serious environmental damage, shortage of water resources, tremendous pressure on the ecological balance, insignificant economic and social benefits of agricultural production, agriculture remaining a weak industry, the agricultural economy being the economic subject of southern Xinjiang, and backward economic development. Taking the Aksu area as an example, and based on input and output data for the years 2002-2007, an input-output model for the regional agriculture of southern Xinjiang is established by principal component analysis. DPS software is used in solving the model. Eviews software is then adopted to revise and test the model in order to analyze and evaluate the economic significance of the results obtained, and to provide additional explanations of the relevant model. Since agricultural economic output is seriously restricted in southern Xinjiang at present, the following countermeasures are put forward: adjusting the structure of agricultural land, improving the utilization ratio of land, increasing agricultural input, realizing agricultural modernization, rationally utilizing water resources, maintaining the eco-environmental balance, enhancing awareness of agricultural insurance, minimizing risk and loss, taking the road of industrialization of characteristic agricultural products, and realizing the transfer of surplus labor.

  3. USING STRUCTURAL EQUATION MODELING TO INVESTIGATE RELATIONSHIPS AMONG ECOLOGICAL VARIABLES

    Science.gov (United States)

    This paper gives an introductory account of Structural Equation Modeling (SEM) and demonstrates its application using a LISREL model with environmental data. Using nine EMAP data variables, we analyzed their correlation matrix with an SEM model. The model characterized...

  4. Parametric modeling of DSC-MRI data with stochastic filtration and optimal input design versus non-parametric modeling.

    Science.gov (United States)

    Kalicka, Renata; Pietrenko-Dabrowska, Anna

    2007-03-01

    In the paper, MRI measurements are used for assessment of brain tissue perfusion and other features and functions of the brain (cerebral blood flow - CBF, cerebral blood volume - CBV, mean transit time - MTT). Perfusion is an important indicator of tissue viability and functioning, since in pathological tissue blood flow and vascular and tissue structure are altered with respect to normal tissue. MRI enables diagnosing diseases at an early stage of their course. The parametric and non-parametric approaches to the identification of MRI models are presented and compared. The non-parametric modeling adopts gamma variate functions. A parametric three-compartmental catenary model, based on the general kinetic model, is also proposed. The parameters of the models are estimated on the basis of experimental data. The goodness of fit of the gamma variate and the three-compartmental models to the data and the accuracy of the parameter estimates are compared. Kalman filtering, smoothing the measurements, was adopted to improve the estimate accuracy of the parametric model. Parametric modeling gives a better fit and better parameter estimates than non-parametric modeling and allows an insight into the functioning of the system. To improve the accuracy, optimal experiment design related to the input signal was performed.
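
The non-parametric gamma variate approach mentioned above fits a bolus-passage curve to the concentration-time data. As an illustrative sketch (not the authors' code): if the bolus arrival time t0 is treated as known, the fit reduces to linear least squares on the log of the curve:

```python
import numpy as np

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma variate bolus-passage curve, zero before arrival time t0:
    C(t) = A * (t - t0)**alpha * exp(-(t - t0)/beta)."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

def fit_gamma_variate(t, c, t0):
    """Log-linear least-squares fit of A, alpha, beta for a known t0:
    ln c = ln A + alpha*ln(t - t0) - (t - t0)/beta."""
    dt = t - t0
    mask = (dt > 0) & (c > 0)               # only points after bolus arrival
    X = np.column_stack([np.ones(mask.sum()), np.log(dt[mask]), dt[mask]])
    coef, *_ = np.linalg.lstsq(X, np.log(c[mask]), rcond=None)
    return np.exp(coef[0]), coef[1], -1.0 / coef[2]
```

In practice t0 is itself uncertain and is typically found by a one-dimensional search wrapped around this linear fit.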

  5. Stochastic modelling of aquifer level temporal fluctuations using the Kalman filter adaptation algorithm and an autoregressive exogenous variable model

    Science.gov (United States)

    Varouchakis, Emmanouil

    2017-04-01

    Reliable temporal modelling of groundwater level is significant for efficient water resources management in hydrological basins and for the prevention of possible desertification effects. In this work we propose a stochastic data driven approach of temporal monitoring and prediction that can incorporate auxiliary information. More specifically, we model the temporal (mean annual and biannual) variation of groundwater level by means of a discrete time autoregressive exogenous variable model (ARX model). The ARX model parameters and its predictions are estimated by means of the Kalman filter adaptation algorithm (KFAA). KFAA is suitable for sparsely monitored basins that do not allow for an independent estimation of the ARX model parameters. Three new modified versions of the original form of the ARX model are proposed and investigated: the first considers a larger time scale, the second a larger time delay in terms of the groundwater level input and the third considers the groundwater level difference between the last two hydrological years, which is incorporated in the model as a third input variable. We apply KFAA to time series of groundwater level values from the Mires basin on the island of Crete. In addition to precipitation measurements, we use pumping data as exogenous variables. We calibrate the ARX model based on the groundwater level for the years 1981 to 2006 and use it to successfully predict the mean annual and biannual groundwater level for recent years (2007-2010).
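
The KFAA idea, estimating ARX parameters recursively by treating them as a slowly drifting state observed through the data, can be sketched as follows. This is a generic Kalman/recursive-least-squares formulation under our own simplifying assumptions (random-walk parameters, scalar noise variances q and r), not the authors' implementation:

```python
import numpy as np

def kfaa_arx(y, exog, q=1e-5, r=1e-2):
    """Recursive estimation of ARX parameters theta in
    y[t] = theta[0]*y[t-1] + theta[1:] . exog[t] + noise,
    treating theta as a random-walk state updated by a Kalman filter."""
    n_par = 1 + exog.shape[1]
    theta = np.zeros(n_par)          # current parameter estimate
    P = np.eye(n_par)                # parameter covariance
    preds = np.zeros(len(y))
    for t in range(1, len(y)):
        x = np.concatenate(([y[t - 1]], exog[t]))
        P = P + q * np.eye(n_par)    # random-walk drift of the parameters
        preds[t] = x @ theta         # one-step-ahead prediction
        S = x @ P @ x + r            # innovation variance
        K = P @ x / S                # Kalman gain
        theta = theta + K * (y[t] - preds[t])
        P = P - np.outer(K, x) @ P
    return theta, preds
```

Here y would be the groundwater level series and the columns of exog the exogenous inputs such as precipitation and pumping.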

  6. Urban pluvial flood prediction: a case study evaluating radar rainfall nowcasts and numerical weather prediction models as model inputs.

    Science.gov (United States)

    Thorndahl, Søren; Nielsen, Jesper Ellerbæk; Jensen, David Getreuer

    2016-12-01

    Flooding produced by high-intensity local rainfall and drainage system capacity exceedance can have severe impacts in cities. In order to prepare cities for these types of flood events - especially in the future climate - it is valuable to be able to simulate them numerically, both historically and in real time. There is a rather untested potential in real-time prediction of urban floods. In this paper, radar data observations with different spatial and temporal resolutions, radar nowcasts with 0-2 h lead time, and numerical weather models with lead times up to 24 h are used as inputs to an integrated flood and drainage systems model in order to investigate the relative difference between the inputs in predicting future floods. The system is tested on the small town of Lystrup in Denmark, which was flooded in 2012 and 2014. Results show that it is possible to generate detailed flood maps in real time with high-resolution radar rainfall data, but forecast performance in predicting floods is rather limited at lead times beyond half an hour.

  7. Rose bush leaf and internode expansion dynamics: analysis and development of a model capturing interplant variability

    Directory of Open Access Journals (Sweden)

    Sabine eDemotes-Mainard

    2013-10-01

    Full Text Available Bush rose architecture, among other factors such as plant health, determines plant visual quality. The commercial product is the individual plant, and interplant variability may be high within a crop. Thus, both mean plant architecture and interplant variability should be studied. Expansion is an important feature of architecture, but it has been little studied at the level of individual organs in bush roses. We investigated the expansion kinetics of primary shoot organs, to develop a model reproducing the organ expansion of real crops from non-destructive input variables. We took into account interplant variability in expansion kinetics and the model's ability to simulate this variability. Changes in leaflet and internode dimensions over thermal time were recorded for primary shoot expansion, on 83 plants from three crops grown in different climatic conditions and densities. An empirical model was developed to reproduce organ expansion kinetics for individual plants of a real crop of bush rose primary shoots. Leaflet or internode length was simulated as a logistic function of thermal time. The model was evaluated by cross-validation. We found that differences in leaflet or internode expansion kinetics between phytomer positions, and between plants at a given phytomer position, were due mostly to large differences in time of organ expansion and expansion rate, rather than differences in expansion duration. Thus, in the model, the parameters linked to expansion duration were predicted by values common to all plants, whereas variability in final size and organ expansion time was captured by input data. The model accurately simulated leaflet and internode expansion for individual plants (RMSEP = 7.3% and 10.2% of final length, respectively). Thus, this study defines the measurements required to simulate expansion and provides the first model simulating organ expansion in bush rose to capture interplant variability.
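
The logistic expansion curve used by the model is easy to sketch directly. Parameter names below are illustrative; in the model described, the final length and the timing of expansion would come from the per-plant, per-phytomer input data, while the rate-related parameter is shared across plants:

```python
import numpy as np

def organ_length(thermal_time, final_length, t_mid, k):
    """Logistic organ expansion: length as a function of cumulative
    thermal time (degree-days); t_mid is the time of half-expansion
    and k controls the expansion rate."""
    return final_length / (1.0 + np.exp(-k * (thermal_time - t_mid)))

# Illustrative leaflet: 60 mm final length, half-expanded at 150 degree-days
tt = np.linspace(0.0, 400.0, 9)
lengths = organ_length(tt, final_length=60.0, t_mid=150.0, k=0.04)
```

The curve is monotonically increasing and saturates at the plant-specific final length.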

  8. RUSLE2015: Modelling soil erosion at continental scale using high resolution input layers

    Science.gov (United States)

    Panagos, Panos; Borrelli, Pasquale; Meusburger, Katrin; Poesen, Jean; Ballabio, Cristiano; Lugato, Emanuele; Montanarella, Luca; Alewell, Christine

    2016-04-01

    Soil erosion by water is one of the most widespread forms of soil degradation in Europe. On the occasion of the 2015 celebration of the International Year of Soils, the European Commission's Joint Research Centre (JRC) published RUSLE2015, a modified modelling approach for assessing soil erosion in Europe using the best available input data layers. The objective of the recent assessment performed with RUSLE2015 was to improve our knowledge and understanding of soil erosion by water across the European Union and to accentuate the differences and similarities between different regions and countries beyond national borders and nationally adapted models. RUSLE2015 maximized the use of available homogeneous, updated, pan-European datasets (LUCAS topsoil, LUCAS survey, GAEC, Eurostat crops, Eurostat management practices, REDES, DEM 25m, CORINE, European Soil Database) and used the best-suited approach at European scale for modelling soil erosion. The collaboration of JRC with many scientists around Europe and numerous prominent European universities and institutes resulted in an improved assessment of the individual risk factors (rainfall erosivity, soil erodibility, cover-management, topography and support practices) and a final harmonized European soil erosion map at high resolution. The mean soil loss rate in the European Union's erosion-prone lands (agricultural, forest and semi-natural areas) was found to be 2.46 t ha-1 yr-1, resulting in a total soil loss of 970 Mt annually; equal to an area the size of Berlin (assuming a removal of 1 meter). According to the RUSLE2015 model, approximately 12.7% of arable land in the European Union is estimated to suffer from moderate to high erosion (>5 t ha-1 yr-1). This equates to an area of 140,373 km2, which equals the surface area of Greece (Environmental Science & Policy, 54, 438-447; 2015). Even the mean erosion rate outstrips the mean formation rate (<1.4 tonnes per ha annually). The recent RUSLE2015
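
RUSLE-type models compute mean annual soil loss as a product of the empirical factors listed above; the multiplicative structure itself is simple, even though deriving each pan-European factor layer is the hard part. A minimal sketch with illustrative factor values, not values from the European assessment:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Mean annual soil loss A (t ha^-1 yr^-1) as the RUSLE product of
    rainfall erosivity R, soil erodibility K, slope length/steepness LS,
    cover-management C, and support-practice P factors."""
    return R * K * LS * C * P

# Illustrative cell: moderate erosivity, erodible soil, gentle slope, arable cover
A = rusle_soil_loss(R=700.0, K=0.03, LS=0.8, C=0.2, P=1.0)
```

In a gridded assessment, each argument becomes a raster layer and the product is taken cell by cell.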

  9. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romanach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets. Models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. 
Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using

  10. A Data Flow Behavior Constraints Model for Branch Decisionmaking Variables

    Directory of Open Access Journals (Sweden)

    Lu Yan

    2012-06-01

    Full Text Available In order to detect attacks on decision-making variables, this paper presents a data flow behavior constraint model for branch decision-making variables. Our model is expanded from the common control flow model; it emphasizes the analysis and verification of the data flow for decision-making variables, to ensure that branch statements execute correctly and to detect attacks on branch decision-making variables easily. The constraints of our model include the collection of variables, the statements that the decision-making variables depend on, and the data flow constraint with the use-def relation of these variables. Our experimental results indicate that the model is effective in detecting attacks on branch decision-making variables as well as attacks on control data.

  11. Uncertainties in the magnitudes of inputs of trace metals to the North Sea. Implications for water quality models

    Energy Technology Data Exchange (ETDEWEB)

    Tappin, A.D.; Burton, J.D. [The University, Dept. of Oceanography, Highfield, Southampton (United Kingdom); Millward, G.E.; Statham, P.J. [Univ. of Plymouth, Dept. of Environmental Sciences, Plymouth (United Kingdom)

    1996-12-31

    Numerical modelling is a powerful tool for studying the concentrations, distributions and fates of contaminants in the North Sea, and for the prediction of water quality. Its usefulness, with respect to developing strategies regarding source reductions for example, depends on how closely the forcing functions and biogeochemical processes that significantly influence contaminant transport and cycling can be reflected in the model. One major consideration is the completeness and quality of data on inputs, which constitute major forcing functions. If estimates of the magnitudes of contaminant inputs are poorly constrained, then model results may become of qualitative value only, rather than quantitative. In this paper, a water quality model for trace metals in the southern North Sea is used to examine how predicted concentrations and distributions of cadmium, copper and lead vary during winter in response to the incorporation into the model of uncertainties in inputs. The model is largely driven by data associated with the Natural Environment Research Council North Sea Project (NERC NSP). The range in predicted concentrations of both the dissolved and particulate phases of these metals in a given grid cell following incorporation of maximum and minimum inputs is relatively narrow, even when the range in inputs is large. For dissolved copper and lead, and particulate copper, there is reasonable agreement between simulated concentrations and those observed during a winter NSP cruise. For dissolved cadmium, and particulate cadmium and lead, concentrations of the right order are predicted, although the detailed scatter in the observations is not. Significant reductions in river inputs of total lead and copper lead to predictions that water column concentrations of dissolved lead and copper decrease just in the coastal zone, and then by only a small fraction. (au) 49 refs.

  12. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    Science.gov (United States)

    2015-07-01

    ...accurately estimated, such as solubility, while others, such as degradation rates, are often far more uncertain. Prior to using improved methods for... meet this purpose, a previous application of TREECS™ was used to evaluate parameter sensitivity and the effects of highly uncertain inputs for... than others. One of the most uncertain inputs in this application is the loading rate (grams/year) of unexploded RDX residue. A value of 1.5 kg/yr was...

  13. A method of aggregating heterogeneous subgrid land cover input data for multi-scale urban parameterization within atmospheric models

    Science.gov (United States)

    Shaffer, S. R.

    2015-12-01

    A method is proposed for representing grid-scale heterogeneous development density in urban climate models from probability density functions of sub-grid-resolution observed data. Derived values are evaluated in relation to normalized Shannon entropy to provide guidance in assessing model input data. Urban fraction for dominant and mosaic urban class contributions is estimated by combining analysis of 30-meter resolution National Land Cover Database 2006 data products for continuous impervious surface area and categorical land cover. The method aims at reducing model error through improvement of urban parameterization and representation of the observations employed as input data. The multi-scale variation of parameter values is demonstrated for several methods of utilizing input. The method provides multi-scale and spatial guidance for determining where parameterization schemes may be misrepresenting the heterogeneity of input data, along with motivation for employing mosaic techniques based upon assessment of input data. The proposed method has wider potential for geographic application and complements data products which focus on characterizing central business districts. The method enables obtaining urban fraction dependent upon resolution and class partition scheme, based upon improved parameterization of observed data, which provides one means of influencing simulation prediction at various aggregated grid scales.
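
Normalized Shannon entropy, used above as guidance for assessing input-data heterogeneity, can be computed per grid cell from sub-grid land-cover class counts. A minimal sketch; normalizing by the logarithm of the number of possible classes is our assumption:

```python
import numpy as np

def normalized_shannon_entropy(counts):
    """Normalized Shannon entropy of sub-grid land-cover class counts:
    0 when a single class dominates, 1 for a uniform mix of all classes."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()           # class proportions, zeros dropped
    if len(p) <= 1:
        return 0.0
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))
```

Cells with entropy near 1 are the ones where a single dominant-class parameterization misrepresents the observed mix, motivating a mosaic treatment.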

  14. Modeling ocean circulation and biogeochemical variability in the Gulf of Mexico

    Science.gov (United States)

    Xue, Z.; He, R.; Fennel, K.; Cai, W.-J.; Lohrenz, S.; Hopkinson, C.

    2013-11-01

    A three-dimensional coupled physical-biogeochemical model is applied to simulate and examine temporal and spatial variability of circulation and biogeochemical cycling in the Gulf of Mexico (GoM). The model is driven by realistic atmospheric forcing, open boundary conditions from a data assimilative global ocean circulation model, and observed freshwater and terrestrial nitrogen input from major rivers. A 7-year model hindcast (2004-2010) was performed and validated against satellite observed sea surface height, surface chlorophyll, and in situ observations including coastal sea level, ocean temperature, salinity, and dissolved inorganic nitrogen (DIN) concentration. The model hindcast revealed clear seasonality in DIN, phytoplankton and zooplankton distributions in the GoM. An empirical orthogonal function analysis indicated a phase-locked pattern among DIN, phytoplankton and zooplankton concentrations. The GoM shelf nitrogen budget was also quantified, revealing that on an annual basis the DIN input is largely balanced by removal through denitrification (an equivalent of ~ 80% of DIN input) and offshore export to the deep ocean (an equivalent of ~ 17% of DIN input).

  15. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitudinal variability (ranging from 400 m a.s.l. on the Dora Baltea river floodplain to 4810 m a.s.l. on Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk throughout the territory, as mass movements affect 70% of the municipal areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The soils analyzed at 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The range of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa) measured, and the median ks (10E-6 m/s) value, are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating input maps of parameters for HIRESS (static data). Other static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and in large areas using parallel computational techniques. The software

  16. Impact of input data uncertainty on environmental exposure assessment models : A case study for electromagnetic field modelling from mobile phone base stations

    NARCIS (Netherlands)

    Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel

    2014-01-01

    BACKGROUND: With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can the

  18. Modeling pCO2 variability in the Gulf of Mexico

    Science.gov (United States)

    Xue, Zuo; He, Ruoying; Fennel, Katja; Cai, Wei-Jun; Lohrenz, Steven; Huang, Wei-Jen; Tian, Hanqin; Ren, Wei; Zang, Zhengchen

    2016-08-01

    A three-dimensional coupled physical-biogeochemical model was used to simulate and examine temporal and spatial variability of sea surface pCO2 in the Gulf of Mexico (GoM). The model was driven by realistic atmospheric forcing, open boundary conditions from a data-assimilative global ocean circulation model, and observed freshwater and terrestrial nutrient and carbon input from major rivers. A 7-year model hindcast (2004-2010) was performed and validated against ship measurements. Model results revealed clear seasonality in surface pCO2 and were used to estimate carbon budgets in the Gulf. Based on the average of model simulations, the GoM was a net CO2 sink with a flux of 1.11 ± 0.84 × 1012 mol C yr-1, which, together with the enormous fluvial inorganic carbon input, was comparable to the inorganic carbon export through the Loop Current. Two model sensitivity experiments were performed: one without biological sources and sinks and the other using river input from the 1904-1910 period as simulated by the Dynamic Land Ecosystem Model (DLEM). It was found that biological uptake was the primary driver making the GoM an overall CO2 sink and that the carbon flux in the northern GoM was very susceptible to changes in river forcing. Large uncertainties in model simulations warrant further process-based investigations.

  19. Numerical implementation of a state variable model for friction

    Energy Technology Data Exchange (ETDEWEB)

    Korzekwa, D.A. [Los Alamos National Lab., NM (United States); Boyce, D.E. [Cornell Univ., Ithaca, NY (United States)

    1995-03-01

    A general state variable model for friction has been incorporated into a finite element code for viscoplasticity. A contact area evolution model is used in a finite element model of a sheet forming friction test. The results show that a state variable model can be used to capture complex friction behavior in metal forming simulations. It is proposed that simulations can play an important role in the analysis of friction experiments and the development of friction models.

  20. Stochastic modeling of interannual variation of hydrologic variables

    Science.gov (United States)

    Dralle, David; Karst, Nathaniel; Müller, Marc; Vico, Giulia; Thompson, Sally E.

    2017-07-01

    Quantifying the interannual variability of hydrologic variables (such as annual flow volumes, and solute or sediment loads) is a central challenge in hydrologic modeling. Annual or seasonal hydrologic variables are themselves the integral of instantaneous variations and can be well approximated as an aggregate sum of the daily variable. Process-based, probabilistic techniques are available to describe the stochastic structure of daily flow, yet estimating interannual variations in the corresponding aggregated variable requires consideration of the autocorrelation structure of the flow time series. Here we present a method based on a probabilistic streamflow description to obtain the interannual variability of flow-derived variables. The results provide insight into the mechanistic genesis of interannual variability of hydrologic processes. Such clarification can assist in the characterization of ecosystem risk and uncertainty in water resources management. We demonstrate two applications, one quantifying seasonal flow variability and the other quantifying net suspended sediment export.
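
The role of the daily autocorrelation structure in the variance of the aggregated annual variable can be illustrated in closed form for an AR(1) daily series. This is a textbook result for sums of autocorrelated variables, shown here as a sketch rather than the paper's specific derivation:

```python
import numpy as np

def annual_sum_variance_ar1(daily_var, rho, n=365):
    """Variance of the sum of n daily values with variance daily_var and
    AR(1) lag-k autocorrelation rho**k:
    Var = daily_var * (n + 2 * sum_{k=1}^{n-1} (n - k) * rho**k).
    Reduces to n * daily_var for uncorrelated days (rho = 0)."""
    k = np.arange(1, n)
    return float(daily_var * (n + 2.0 * np.sum((n - k) * rho ** k)))

# Positive day-to-day correlation inflates interannual variability of the total
iid = annual_sum_variance_ar1(1.0, 0.0)
correlated = annual_sum_variance_ar1(1.0, 0.8)
```

Ignoring the autocorrelation (treating days as independent) therefore understates the interannual variance of annual flow volumes or loads whenever daily flows are positively correlated.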

  1. A NUI Based Multiple Perspective Variability Modelling CASE Tool

    OpenAIRE

    Bashroush, Rabih

    2010-01-01

    With current trends towards moving variability from hardware to software, and given the increasing desire to postpone design decisions as much as is economically feasible, managing the variability from requirements elicitation to implementation is becoming a primary business requirement in the product line engineering process. One of the main challenges in variability management is the visualization and management of industry-size variability models. In this demonstrat...

  2. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    Science.gov (United States)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Imaging using the dynamic susceptibility contrast method was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting from 7 s prior to contrast injection (1.2 ml/kg) at four different time points. For quantitative analysis, CBF was calculated by deconvolution with the AIF, which was obtained from the 10 voxels with the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curve. We observed that when the AIFs obtained in the three different ROIs (whole brain, hemisphere without lesion and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend at the different operative time points; when the AIFs differed, the CBF ratios could differ as well. We concluded that, using local maxima, one can define a proper AIF without knowing the anatomical location of the arteries in a stroke rat model.
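
The semi-quantitative index described, the integral divided by the first moment of the relaxivity-time curve, is straightforward to compute. A sketch using trapezoidal integration, assuming a baseline-subtracted relaxivity curve with no recirculation correction (both simplifications are ours):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integral of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def relative_cbf(t, delta_r2):
    """Semi-quantitative relative CBF: area under the relaxivity-time
    curve divided by its first moment (a proxy for mean transit time)."""
    area = _trapz(delta_r2, t)
    mtt = _trapz(t * delta_r2, t) / area    # intensity-weighted mean time
    return area / mtt
```

For two curves of equal area, the one whose bolus passes later (longer transit time) yields the lower relative CBF, as expected from the central volume principle.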

  3. Impacts of the representation of riverine freshwater input in the community earth system model

    Science.gov (United States)

    Tseng, Yu-heng; Bryan, Frank O.; Whitney, Michael M.

    2016-09-01

    The impacts of the representation of riverine freshwater input on the simulated ocean state are investigated through comparison of a suite of experiments with the Community Earth System Model (CESM). The aspects of river and estuary processes investigated include lateral spreading of runoff, runoff contribution to the surface buoyancy flux within the K-Profile Parameterization (KPP), the use of a local salinity in the virtual salt flux (VSF) formulation, and the vertical redistribution of runoff. The horizontal runoff spreading distribution plays an important role in the regional salinity distribution and significantly changes the vertical stratification and mixing. When runoff is considered to be a contribution to the surface buoyancy flux, the calculation of turbulent length and velocity scales in the KPP can be significantly impacted near larger discharge rivers, resulting in local surface salinity changes of up to 12 ppt. Using the local surface salinity instead of a globally constant reference salinity in the conversion of riverine freshwater flux to VSF can reduce biases in the simulated salinity near river mouths but leads to drift in global mean salinity. This is remedied through a global correction approach. We also explore the sensitivity to the vertical redistribution of runoff, which partially mimics the impacts of vertical mixing process within estuaries and coastal river plumes. The impacts of the vertical redistribution of runoff are largest when the runoff effective mixing depth is comparable with the mixed layer depth, resulting from the enhanced vertical mixing and the increase of the available potential energy. The impacts in all sensitivity experiments are predominantly local, but the regional circulation can advect the influences downstream.

  4. Multi-Wheat-Model Ensemble Responses to Interannual Climate Variability

    Science.gov (United States)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981-2010 grain yield, and we evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal common characteristics of yield response to climate; however, models rarely share the same cluster at all four sites, indicating substantial independence. Only a weak relationship (R2 = 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long-term warming, suggesting that additional processes differentiate climate change impacts from observed climate variability analogs and motivating continuing analysis and model development efforts.

  5. Modeling the Effects of Irrigation on Land Surface Fluxes and States over the Conterminous United States: Sensitivity to Input Data and Model Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong; Sacks, William J.; Lei, Huimin; Leung, Lai-Yung R.

    2013-09-16

    Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.

  6. ABOUT PSYCHOLOGICAL VARIABLES IN APPLICATION SCORING MODELS

    Directory of Open Access Journals (Sweden)

    Pablo Rogers

    2015-01-01

    The purpose of this study is to investigate the contribution of psychological variables and scales suggested by Economic Psychology in predicting individuals' default. To this end, a sample of 555 individuals completed a self-completion questionnaire composed of psychological variables and scales. Using logistic regression, the following psychological and behavioral characteristics were found to be associated with the group of individuals in default: (a) negative dimensions related to money (suffering, inequality and conflict); (b) high scores on the self-efficacy scale, probably indicating a greater degree of optimism and over-confidence; (c) buyers classified as compulsive; (d) individuals who consider it necessary to give gifts to children and friends on special dates, even though many people consider this a luxury; and (e) problems of self-control identified in individuals who drink an average of more than four glasses of alcoholic beverage a day.
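    An application-scoring model of this kind boils down to a logistic regression on questionnaire variables. The sketch below trains one by plain gradient descent on a single invented predictor (e.g. a money-attitude score); the data and hyperparameters are illustrative, not the study's sample.

```python
import math

# Minimal logistic-regression sketch for a default/no-default outcome.
# X holds one hypothetical questionnaire score per individual; y marks default.

def train_logistic(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))       # predicted default probability
            err = p - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))

X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]   # invented scores
y = [0, 0, 0, 1, 1, 1]                           # 1 = in default
w, b = train_logistic(X, y)
```

    On this separable toy data the fitted model assigns low default probability to low scores and high probability to high scores.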

  7. High resolution variability in the Quaternary Indian monsoon inferred from records of clastic input and paleo-production recovered during IODP Expedition 355

    Science.gov (United States)

    Hahn, Annette; Lyle, Mitchell; Kulhanek, Denise; Ando, Sergio; Clift, Peter

    2016-04-01

    The sediment cores obtained from the Indus fan at Site U1457 during Expedition 355 of the International Ocean Discovery Program (IODP) contain a ca. 100 m spliced section covering the past ca. 1 Ma. We aim to make use of this unique long, mostly continuous climate archive to unravel the millennial-scale atmospheric and oceanic processes linked to changes in the Indian monsoon climate over the Quaternary glacial-interglacial cycles. Our aim is to fill this gap using fast, cost-efficient methods (Fourier Transform Infrared Spectroscopy [FTIRS] and X-ray Fluorescence [XRF] scanning) which allow us to study this sequence at a millennial-scale resolution (2-3 cm sampling interval). An important methodological aspect of this study is developing FTIRS as a method for the simultaneous estimation of the sediment total inorganic carbon and organic carbon content by using the specific fingerprint absorption spectra of minerals (e.g. calcite) and organic sediment components. The resulting paleo-production proxies give indications of oceanic circulation patterns and serve as a direct comparison to the XRF scanning data. Initial results show that variability in paleo-production is accompanied by changes in the quantity and composition of clastic input to the site. Phases of increased deposition of terrigenous material are enriched in K, Al, Fe and Si. Both changes in the weathering and erosion focus areas affect the mineralogy and elemental composition of the clastic input, as grain size and mineralogical changes are reflected in the ratios of lighter to heavier elements. Furthermore, trace element compositions (Zn, Cu, Mn) give indications of diagenetic processes and contribute to the understanding of the depositional environment. The resulting datasets will lead to a more comprehensive understanding of the interplay of the local atmospheric and oceanic circulation processes over glacial-interglacial cycles; an essential prerequisite for regional predictions of global climate

  8. A Sequence of Relaxations Constraining Hidden Variable Models

    CERN Document Server

    Steeg, Greg Ver

    2011-01-01

    Many widely studied graphical models with latent variables lead to nontrivial constraints on the distribution of the observed variables. Inspired by the Bell inequalities in quantum mechanics, we refer to any linear inequality whose violation rules out some latent variable model as a "hidden variable test" for that model. Our main contribution is to introduce a sequence of relaxations which provides progressively tighter hidden variable tests. We demonstrate applicability to mixtures of sequences of i.i.d. variables, Bell inequalities, and homophily models in social networks. For the last, we demonstrate that our method provides a test that is able to rule out latent homophily as the sole explanation for correlations on a real social network that are known to be due to influence.

  9. Quantifying variability in earthquake rupture models using multidimensional scaling: application to the 2011 Tohoku earthquake

    KAUST Repository

    Razafindrakoto, Hoby

    2015-04-22

    Finite-fault earthquake source inversion is an ill-posed inverse problem leading to non-unique solutions. In addition, various fault parametrizations and input data may have been used by different researchers for the same earthquake. Such variability leads to large intra-event variability in the inferred rupture models. One way to understand this problem is to develop robust metrics to quantify model variability. We propose a Multi Dimensional Scaling (MDS) approach to compare rupture models quantitatively. We consider normalized squared and grey-scale metrics that reflect the variability in the location, intensity and geometry of the source parameters. We test the approach on two-dimensional random fields generated using a von Kármán autocorrelation function and varying its spectral parameters. The spread of points in the MDS solution indicates different levels of model variability. We observe that the normalized squared metric is insensitive to variability of spectral parameters, whereas the grey-scale metric is sensitive to small-scale changes in geometry. From this benchmark, we formulate a similarity scale to rank the rupture models. As case studies, we examine inverted models from the Source Inversion Validation (SIV) exercise and published models of the 2011 Mw 9.0 Tohoku earthquake, allowing us to test our approach for a case with a known reference model and one with an unknown true solution. The normalized squared and grey-scale metrics are respectively sensitive to the overall intensity and the extension of the three classes of slip (very large, large, and low). Additionally, we observe that a three-dimensional MDS configuration is preferable for models with large variability. We also find that the models for the Tohoku earthquake derived from tsunami data and their corresponding predictions cluster with a systematic deviation from other models. We demonstrate the stability of the MDS point-cloud using a number of realizations and jackknife tests, for
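    One plausible construction of a "normalized squared" distance between two gridded slip models is shown below; it normalizes by the joint peak slip so that differences in both geometry and overall intensity contribute. This is a hedged sketch of the idea, not the paper's exact metric.

```python
# Toy "normalized squared" distance between two equally-sized 2-D slip grids
# (lists of rows). Grids are normalized by their joint peak slip, so geometry
# and overall-intensity differences both contribute (illustration only).

def normalized_squared_metric(model_a, model_b):
    flat_a = [v for row in model_a for v in row]
    flat_b = [v for row in model_b for v in row]
    peak = max(max(flat_a), max(flat_b))
    return sum(((x - y) / peak) ** 2 for x, y in zip(flat_a, flat_b))

slip_a = [[0.0, 1.0],
          [2.0, 3.0]]          # metres of slip, invented
slip_b = [[0.0, 2.0],
          [4.0, 6.0]]          # same pattern, twice the intensity
```

    A pairwise matrix of such distances over many inverted models is exactly the input an MDS algorithm needs to place the models as points in a low-dimensional space.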

  10. Usability Evaluation of Variability Modeling by means of Common Variability Language

    Directory of Open Access Journals (Sweden)

    Jorge Echeverria

    2015-12-01

    Common Variability Language (CVL) is a recent proposal for OMG's upcoming Variability Modeling standard. CVL models variability in terms of Model Fragments. Usability is a widely recognized quality criterion essential to guarantee the successful use of tools that put these ideas into practice. Facing the need to evaluate the usability of CVL modeling tools, this paper presents a usability evaluation of CVL applied to a modeling tool for firmware code of Induction Hobs. This evaluation addresses the configuration, scoping and visualization facets. The evaluation involved the end users of the tool, who are engineers of our Induction Hob industrial partner. Effectiveness and efficiency results indicate that model configuration in terms of model fragment substitutions is intuitive enough, but both scoping and visualization require improved tool support. The results also enabled us to identify a list of usability problems whose resolution may alleviate the scoping and visualization issues in CVL.

  11. Basic relations for the period variation models of variable stars

    OpenAIRE

    Mikulášek, Zdeněk; Gráf, Tomáš; Zejda, Miloslav; Zhu, Liying; Qian, Shen-Bang

    2012-01-01

    Models of period variations are basic tools for period analyses of variable stars. We introduce the phase function and the instant period, and formulate basic relations and equations among them. Some simple period models are also presented.

  12. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model, for vectors of limited dependent variables, I suggest a model, which follows from explicitly using fixed costs, if appropriate of course, in the utility function of the decision-maker.

  13. Methods for Handling Missing Variables in Risk Prediction Models

    NARCIS (Netherlands)

    Held, Ulrike; Kessels, Alfons; Aymerich, Judith Garcia; Basagana, Xavier; ter Riet, Gerben; Moons, Karel G. M.; Puhan, Milo A.

    2016-01-01

    Prediction models should be externally validated before being used in clinical practice. Many published prediction models have never been validated. Uncollected predictor variables in otherwise suitable validation cohorts are the main factor precluding external validation. We used individual patient

  14. Realistic modeling of seismic input for megacities and large urban areas

    Science.gov (United States)

    Panza, G. F.; Unesco/Iugs/Igcp Project 414 Team

    2003-04-01

    The project addressed the problem of pre-disaster orientation: hazard prediction, risk assessment, and hazard mapping, in connection with seismic activity and man-induced vibrations. The definition of realistic seismic input has been obtained from the computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different source and structural models. The innovative modeling technique, which constitutes the common tool of the entire project, takes into account source, propagation and local site effects. This is done using first principles of physics about wave generation and propagation in complex media, and does not require resorting to convolutive approaches, which have proven quite unreliable, mainly when dealing with the complex geological structures that are the most interesting from the practical point of view. In fact, several techniques that have been proposed to empirically estimate site effects, using observations convolved with theoretically computed signals corresponding to simplified models, supply reliable information about the site response to non-interfering seismic phases. They are not adequate in most real cases, when the seismic signal is formed by several interfering waves. The availability of realistic numerical simulations enables us to reliably estimate the amplification effects even in complex geological structures, exploiting the available geotechnical, lithological and geophysical parameters, topography of the medium, tectonic, historical and palaeoseismological data, and seismotectonic models. The realistic modeling of the ground motion is a very important base of knowledge for the preparation of ground-shaking scenarios, which represent a valid and economic tool for seismic microzonation.
This knowledge can be very fruitfully used by civil engineers in the design of new seismo-resistant constructions and in the reinforcement of the existing built environment, and, therefore

  15. Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables

    Science.gov (United States)

    Henson, Robert A.; Templin, Jonathan L.; Willse, John T.

    2009-01-01

    This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…

  16. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...

  17. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas

    2015-01-01

    models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source...

  18. A method for the identification of state space models from input and output measurements

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    1995-07-01

    In this paper we present a simple and general algorithm for the combined deterministic-stochastic realization problem directly from known input and output time series. The solutions to the pure deterministic and the pure stochastic realization problems are special cases of the method presented.

  19. Utilizing Physical Input-Output Model to Inform Nitrogen related Ecosystem Services

    Science.gov (United States)

    Here we describe the development of nitrogen physical input-output tables (PIOTs) for the midwestern US state of Illinois, which has large inputs of nitrogen from agriculture and industry. The PIOTs are used to analyze the relationship between regional economic activities and ecosystem services in order to identify...

  20. Impact of Infralimbic Inputs on Intercalated Amygdale Neurons: A Biophysical Modeling Study

    Science.gov (United States)

    Li, Guoshi; Amano, Taiju; Pare, Denis; Nair, Satish S.

    2011-01-01

    Intercalated (ITC) amygdala neurons regulate fear expression by controlling impulse traffic between the input (basolateral amygdala; BLA) and output (central nucleus; Ce) stations of the amygdala for conditioned fear responses. Previously, stimulation of the infralimbic (IL) cortex was found to reduce fear expression and the responsiveness of Ce…

  1. Influence of the meteorological input on the atmospheric transport modelling with FLEXPART of radionuclides from the Fukushima Daiichi nuclear accident.

    Science.gov (United States)

    Arnold, D; Maurer, C; Wotawa, G; Draxler, R; Saito, K; Seibert, P

    2015-01-01

    In the present paper the role of precipitation as FLEXPART model input is investigated for one possible release scenario of the Fukushima Daiichi accident. Precipitation data from the European Centre for Medium-Range Weather Forecasts (ECMWF), NOAA's National Centers for Environmental Prediction (NCEP), the Japan Meteorological Agency's (JMA) mesoscale analysis and a JMA radar-rain gauge precipitation analysis product were utilized. The accident at Fukushima in March 2011 and the subsequent observations enable us to assess the impact of these precipitation products, at least for this single case. As expected, the differences in the statistical scores are visible but not large. Increasing the resolution of all the ECMWF fields from 0.5° to 0.2° raises the correlation from 0.71 to 0.80 and the overall rank from 3.38 to 3.44. Substituting ECMWF precipitation with the JMA mesoscale precipitation analysis or the JMA radar-rain gauge precipitation data, while the rest of the variables remain unmodified, yields the best results on a regional scale, especially when a new and more robust wet deposition scheme is introduced. The best results are obtained with a combination of ECMWF 0.2° data with precipitation from the JMA mesoscale analyses and the modified wet deposition, with a correlation of 0.83 and an overall rank of 3.58. NCEP-based results with the same source term are generally poorer, giving correlations around 0.66, comparatively large negative biases and an overall rank of 3.05 that worsens when regional precipitation data are introduced.
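    The comparison above rests on simple statistical scores such as correlation and bias between observed and modelled values. Minimal pure-Python versions, with invented observation/model pairs, look like:

```python
# Pearson correlation and mean bias between observed and modelled values,
# two common ingredients of model-evaluation scores. Numbers are invented,
# not the FLEXPART/Fukushima data.

def pearson_r(obs, mod):
    n = len(obs)
    mo, mm = sum(obs) / n, sum(mod) / n
    num = sum((o - mo) * (m - mm) for o, m in zip(obs, mod))
    den = (sum((o - mo) ** 2 for o in obs)
           * sum((m - mm) ** 2 for m in mod)) ** 0.5
    return num / den

def mean_bias(obs, mod):
    # positive value = model overestimates on average
    return sum(m - o for o, m in zip(obs, mod)) / len(obs)

obs = [1.0, 4.0, 2.5, 8.0]
mod = [1.4, 3.5, 3.0, 6.9]
```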

  2. Quantum Discord, CHSH Inequality and Hidden Variables -- Critical reassessment of hidden-variables models

    CERN Document Server

    Fujikawa, Kazuo

    2013-01-01

    Hidden-variables models are critically reassessed. It is first examined whether the quantum discord is classically described by the hidden-variable model of Bell in the Hilbert space with $d=2$. The criterion of vanishing quantum discord is related to the notion of reduction and, surprisingly, the hidden-variable model in $d=2$, which has been believed to be consistent so far, is in fact inconsistent and excluded by the analysis of conditional measurement and reduction. The description of the full contents of quantum discord by deterministic hidden-variables models is not possible. We also re-examine the CHSH inequality. It is shown that the well-known prediction of the CHSH inequality $|B|\leq 2$ for the CHSH operator $B$ introduced by Cirel'son is not unique. This non-uniqueness arises from the failure of the linearity condition in the non-contextual hidden-variables model in $d=4$ used by Bell and CHSH, in agreement with Gleason's theorem, which excludes $d=4$ non-contextual hidden-variables models. If one imposes the l...

  3. Assessing geotechnical centrifuge modelling in addressing variably saturated flow in soil and fractured rock.

    Science.gov (United States)

    Jones, Brendon R; Brouwers, Luke B; Van Tonder, Warren D; Dippenaar, Matthys A

    2017-01-05

    The vadose zone typically comprises soil underlain by fractured rock. Often, surface water and groundwater parameters are readily available, but variably saturated flow through soil and rock are oversimplified or estimated as input for hydrological models. In this paper, a series of geotechnical centrifuge experiments are conducted to contribute to the knowledge gaps in: (i) variably saturated flow and dispersion in soil and (ii) variably saturated flow in discrete vertical and horizontal fractures. Findings from the research show that the hydraulic gradient, and not the hydraulic conductivity, is scaled for seepage flow in the geotechnical centrifuge. Furthermore, geotechnical centrifuge modelling has been proven as a viable experimental tool for the modelling of hydrodynamic dispersion as well as the replication of similar flow mechanisms for unsaturated fracture flow, as previously observed in literature. Despite the imminent challenges of modelling variable saturation in the vadose zone, the geotechnical centrifuge offers a powerful experimental tool to physically model and observe variably saturated flow. This can be used to give valuable insight into mechanisms associated with solid-fluid interaction problems under these conditions. Findings from future research can be used to validate current numerical modelling techniques and address the subsequent influence on aquifer recharge and vulnerability, contaminant transport, waste disposal, dam construction, slope stability and seepage into subsurface excavations.
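    The central scaling result reported (the hydraulic gradient, not the hydraulic conductivity, is scaled) can be made concrete with Darcy's law. The centrifuge level and soil properties below are assumed for illustration.

```python
# Darcy-law sketch of centrifuge seepage scaling: at N gravities the driving
# hydraulic gradient scales by N while the hydraulic conductivity k does not,
# so the model seepage velocity is N times the prototype value.
# The parameter values are illustrative assumptions.

def seepage_velocity(k, hydraulic_gradient):
    return k * hydraulic_gradient  # Darcy's law: v = k * i

N = 50                 # centrifuge acceleration in gravities (assumed)
k = 1e-6               # hydraulic conductivity, m/s (unchanged in flight)
i_prototype = 0.2      # prototype hydraulic gradient

v_prototype = seepage_velocity(k, i_prototype)
v_model = seepage_velocity(k, N * i_prototype)   # the gradient, not k, scales
```

    The N-fold speed-up of seepage is what lets a short centrifuge flight reproduce long prototype drainage times.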

  4. A radiobiological model of radiotherapy response and its correlation with prognostic imaging variables

    Science.gov (United States)

    Crispin-Ortuzar, Mireia; Jeong, Jeho; Fontanella, Andrew N.; Deasy, Joseph O.

    2017-04-01

    Radiobiological models of tumour control probability (TCP) can be personalized using imaging data. We propose an extension to a voxel-level radiobiological TCP model in order to describe patient-specific differences and intra-tumour heterogeneity. In the proposed model, tumour shrinkage is described by means of a novel kinetic Monte Carlo method for inter-voxel cell migration and tumour deformation. The model captures the spatiotemporal evolution of the tumour at the voxel level, and is designed to take imaging data as input. To test the performance of the model, three image-derived variables found to be predictive of outcome in the literature have been identified and calculated using the model's own parameters. Simulating multiple tumours with different initial conditions makes it possible to perform an in silico study of the correlation of these variables with the dose for 50% tumour control (TCD50) calculated by the model. We find that the three simulated variables correlate with the calculated TCD50. In addition, we find that different variables have different levels of sensitivity to the spatial distribution of hypoxia within the tumour, as well as to the dynamics of the migration mechanism. Finally, based on our results, we observe that an adequate combination of the variables may potentially result in higher predictive power.
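    A common way to obtain a TCD50 from a TCP model is Poisson statistics with linear-quadratic cell kill and a bisection search for the 50% control dose. The sketch below uses assumed parameter values and a whole-tumour (not voxel-level) model, so it illustrates the quantity rather than the paper's method.

```python
import math

# Poisson TCP with linear-quadratic cell survival, and a bisection search
# for TCD50 (dose giving 50% tumour control). All parameters are assumed:
# n0 clonogens, alpha/beta in Gy^-1 / Gy^-2, 2 Gy fractions.

def tcp(dose, n0=1e7, alpha=0.3, beta=0.03, dose_per_fraction=2.0):
    n_frac = dose / dose_per_fraction
    surviving_fraction = math.exp(-n_frac * (alpha * dose_per_fraction
                                             + beta * dose_per_fraction ** 2))
    return math.exp(-n0 * surviving_fraction)   # Poisson: P(no clonogen survives)

def tcd50(lo=0.0, hi=200.0, tol=1e-6):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tcp(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Bisection works here because TCP is monotonically increasing in dose.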

  5. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    Science.gov (United States)

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel.
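    The core issue can be seen in a one-state toy example: if only the output of dy/dt = -(a·b)·y is observed, then only the combination a·b is identifiable, since distinct (a, b) pairs with the same product yield identical input-output data. This is an illustration of unidentifiability, not the paper's Gröbner-basis algorithm.

```python
import math

# Toy identifiability check for dy/dt = -(a*b)*y with observed output y(t):
# two different (a, b) pairs with the same product a*b are indistinguishable
# from the output alone, so only the combination a*b is identifiable.

def output(a, b, times, y0=1.0):
    # closed-form solution y(t) = y0 * exp(-(a*b) * t)
    return [y0 * math.exp(-(a * b) * t) for t in times]

times = [0.0, 0.5, 1.0, 2.0]
y1 = output(2.0, 3.0, times)   # a*b = 6
y2 = output(1.5, 4.0, times)   # different parameters, same product
```

    Reparameterizing the model in terms of the single combination p = a·b removes the ambiguity, which is exactly what the identifiable-combinations machinery automates for larger models.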

  6. A Spline Regression Model for Latent Variables

    Science.gov (United States)

    Harring, Jeffrey R.

    2014-01-01

    Spline (or piecewise) regression models have been used in the past to account for patterns in observed data that exhibit distinct phases. The changepoint or knot marking the shift from one phase to the other, in many applications, is an unknown parameter to be estimated. As an extension of this framework, this research considers modeling the…

  7. Development of a MODIS-Derived Surface Albedo Data Set: An Improved Model Input for Processing the NSRDB

    Energy Technology Data Exchange (ETDEWEB)

    Maclaurin, Galen [National Renewable Energy Lab. (NREL), Golden, CO (United States); Sengupta, Manajit [National Renewable Energy Lab. (NREL), Golden, CO (United States); Xie, Yu [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gilroy, Nicholas [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    A significant source of bias in the transposition of global horizontal irradiance to plane-of-array (POA) irradiance arises from inaccurate estimations of surface albedo. The current physics-based model used to produce the National Solar Radiation Database (NSRDB) relies on model estimations of surface albedo from a reanalysis climatalogy produced at relatively coarse spatial resolution compared to that of the NSRDB. As an input to spectral decomposition and transposition models, more accurate surface albedo data from remotely sensed imagery at finer spatial resolutions would improve accuracy in the final product. The National Renewable Energy Laboratory (NREL) developed an improved white-sky (bi-hemispherical reflectance) broadband (0.3-5.0 ..mu..m) surface albedo data set for processing the NSRDB from two existing data sets: a gap-filled albedo product and a daily snow cover product. The Moderate Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua satellites have provided high-quality measurements of surface albedo at 30 arc-second spatial resolution and 8-day temporal resolution since 2001. The high spatial and temporal resolutions and the temporal coverage of the MODIS sensor will allow for improved modeling of POA irradiance in the NSRDB. However, cloud and snow cover interfere with MODIS observations of ground surface albedo, and thus they require post-processing. The MODIS production team applied a gap-filling methodology to interpolate observations obscured by clouds or ephemeral snow. This approach filled pixels with ephemeral snow cover because the 8-day temporal resolution is too coarse to accurately capture the variability of snow cover and its impact on albedo estimates. However, for this project, accurate representation of daily snow cover change is important in producing the NSRDB. 
Therefore, NREL also used the Integrated Multisensor Snow and Ice Mapping System data set, which provides daily snow cover observations of the

  8. Active suspension control of a one-wheel car model using single input rule modules fuzzy reasoning and a disturbance observer

    Institute of Scientific and Technical Information of China (English)

    YOSHIMURA Toshio; TERAMURA Itaru

    2005-01-01

    This paper presents the construction of an active suspension control for a one-wheel car model using fuzzy reasoning and a disturbance observer. The one-wheel car model treated here can be approximately described as a nonlinear two-degrees-of-freedom system subject to excitation from a road profile. The active control is designed as fuzzy control inferred by single input rule modules fuzzy reasoning, and the active control force is delivered by a pneumatic actuator. The excitation from the road profile is estimated by a disturbance observer, and the estimate is used as one of the variables in the precondition part of the fuzzy control rules. A compensator is inserted to counter the performance degradation due to the delay of the pneumatic actuator. The experimental results indicate that the proposed active suspension system substantially improves the vibration suppression of the car model.
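    Single input rule modules (SIRMs) fuzzy reasoning gives each input variable its own one-dimensional rule base and combines the module outputs with importance weights. The generic sketch below uses invented membership functions, consequents, and weights, not the paper's controller design.

```python
# Generic SIRMs sketch: each input (e.g. displacement error, velocity error)
# has its own 1-D rule base; the control force is a weighted sum of the
# per-module outputs. All rule parameters here are invented.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sirm_output(x, rules):
    """rules: list of ((a, b, c), consequent); weighted-average defuzzification."""
    mus = [tri(x, *abc) for abc, _ in rules]
    total = sum(mus)
    if total == 0.0:
        return 0.0
    return sum(mu * cons for mu, (_, cons) in zip(mus, rules)) / total

rules = [((-2.0, -1.0, 0.0), -1.0),   # negative error -> negative force
         ((-1.0,  0.0, 1.0),  0.0),   # near zero      -> no force
         (( 0.0,  1.0, 2.0),  1.0)]   # positive error -> positive force
weights = [0.7, 0.3]                  # importance of each module (assumed)

def control_force(disp_err, vel_err):
    return (weights[0] * sirm_output(disp_err, rules)
            + weights[1] * sirm_output(vel_err, rules))
```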

  9. An introduction to latent variable growth curve modeling concepts, issues, and application

    CERN Document Server

    Duncan, Terry E; Strycker, Lisa A

    2013-01-01

    This book provides a comprehensive introduction to latent variable growth curve modeling (LGM) for analyzing repeated measures. It presents the statistical basis for LGM and its various methodological extensions, including a number of practical examples of its use. It is designed to take advantage of the reader's familiarity with analysis of variance and structural equation modeling (SEM) in introducing LGM techniques. Sample data, syntax, input, and output are provided for EQS, Amos, LISREL, and Mplus on the book's CD. Throughout the book, the authors present a variety of LGM techniques that are useful for many different research designs, and numerous figures provide helpful diagrams of the examples. Updated throughout, the second edition features three new chapters: growth modeling with ordered categorical variables, growth mixture modeling, and pooled interrupted time series LGM approaches. Following a new organization, the book now covers the development of the LGM, followed by chapters on multiple-group is...

  10. Combining data sources to characterise climatic variability for hydrological modelling in high mountain catchments

    Science.gov (United States)

    Pritchard, David; Fowler, Hayley; Bardossy, Andras; O'Donnell, Greg; Forsythe, Nathan

    2016-04-01

    Robust hydrological modelling of high mountain catchments to support water resources management depends critically on the accuracy of climatic input data. However, the hydroclimatological complexity and sparse measurement networks typically characteristic of these environments present significant challenges for determining the structure of spatial and temporal variability in key climatic variables. Focusing on the Upper Indus Basin (UIB), this research explores how different data sources can be combined in order to characterise climatic patterns and related uncertainties at the scales required in hydrological modelling. Analysis of local observations with respect to underlying climatic processes and variability is extended relative to previous studies in this region, which forms a basis for evaluating the domains of applicability and potential insights associated with selected remote sensing and reanalysis products. As part of this, the information content of recent high resolution simulations for understanding climatic patterns is assessed, with particular reference to the High Asia Refined Analysis (HAR). A strategy for integrating these different data sources to obtain plausible realisations of the distributed climatic fields needed for hydrological modelling is developed on the basis of this analysis, which provides a platform for exploring uncertainties arising from potential biases and other sources of error. The interaction between uncertainties in climatic input data and alternative approaches to process parameterisation in hydrological and cryospheric modelling is explored.

  11. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are implemented, including robust standard errors for clustered correlated data, multigroup analyses, non-linear parameter constraints, inference with incomplete data, maximum likelihood estimation with censored and binary observations, and instrumental variable estimators. In addition an extensive simulation...

  12. Modeling, estimation and identification of stochastic systems with latent variables

    OpenAIRE

    Bottegal, Giulio

    2013-01-01

    The main topic of this thesis is the analysis of static and dynamic models in which some variables, although directly influencing the behavior of certain observables, are not accessible to measurements. These models find applications in many branches of science and engineering, such as control systems, communications, natural and biological sciences and econometrics. It is well known that models with inaccessible - or latent - variables usually suffer from a lack of uniqueness of representat...

  13. Evaluating the efficiency of municipalities in collecting and processing municipal solid waste: A shared input DEA-model

    Energy Technology Data Exchange (ETDEWEB)

    Rogge, Nicky, E-mail: Nicky.Rogge@hubrussel.be [Hogeschool-Universiteit Brussel (HUBrussel), Center for Business Management Research (CBMR), Warmoesberg 26, 1000 Brussels (Belgium); Katholieke Universiteit Leuven (KULeuven), Faculty of Business and Economics, Naamsestraat 69, 3000 Leuven (Belgium); De Jaeger, Simon [Katholieke Universiteit Leuven (KULeuven), Faculty of Business and Economics, Naamsestraat 69, 3000 Leuven (Belgium); Hogeschool-Universiteit Brussel (HUBrussel), Center for Economics and Corporate Sustainability (CEDON), Warmoesberg 26, 1000 Brussels (Belgium)

    2012-10-15

    Highlights: ► Complexity in local waste management calls for more in-depth efficiency analysis. ► Shared-input Data Envelopment Analysis can provide a solution. ► Considerable room for the Flemish municipalities to improve their cost efficiency. - Abstract: This paper proposes an adjusted 'shared-input' version of the popular efficiency measurement technique Data Envelopment Analysis (DEA) that enables evaluating municipal waste collection and processing performance in settings in which one input (waste costs) is shared among treatment efforts of multiple municipal solid waste fractions. The main advantage of this version of DEA is that it provides not only an estimate of each municipality's overall cost efficiency but also estimates of its cost efficiency in the treatment of the different fractions of municipal solid waste (MSW). To illustrate the practical usefulness of the shared-input DEA-model, we apply the model to data on 293 municipalities in Flanders, Belgium, for the year 2008.
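As a minimal illustration of DEA-style efficiency scoring (not the paper's shared-input model, which solves one linear program per municipality): with a single input and single output, the CCR efficiency of each unit reduces to its output/input ratio divided by the best observed ratio. All data below are hypothetical.

```python
def ccr_efficiency(units):
    """units: {name: (input, output)} -> {name: efficiency score in (0, 1]}.

    Single-input/single-output special case of the CCR DEA model: each
    unit's ratio is benchmarked against the best ratio in the sample.
    """
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

municipalities = {           # hypothetical: (waste cost, tonnes collected)
    "A": (100.0, 50.0),
    "B": (80.0, 48.0),
    "C": (120.0, 48.0),
}
scores = ccr_efficiency(municipalities)
# "B" has the best cost-to-output ratio, so its score is 1.0
```

The full shared-input model additionally splits the one cost input across the MSW fractions inside the LP, which is what yields per-fraction efficiency estimates.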

  14. Uncertainty squared: Choosing among multiple input probability distributions and interpreting multiple output probability distributions in Monte Carlo climate risk models

    Science.gov (United States)

    Baer, P.; Mastrandrea, M.

    2006-12-01

    Simple probabilistic models which attempt to estimate likely transient temperature change from specified CO2 emissions scenarios must make assumptions about at least six uncertain aspects of the causal chain between emissions and temperature: current radiative forcing (including but not limited to aerosols), current land use emissions, carbon sinks, future non-CO2 forcing, ocean heat uptake, and climate sensitivity. Of these, multiple PDFs (probability density functions) have been published for the climate sensitivity, a couple for current forcing and ocean heat uptake, one for future non-CO2 forcing, and none for current land use emissions or carbon cycle uncertainty (which are interdependent). Different assumptions about these parameters, as well as different model structures, will lead to different estimates of likely temperature increase from the same emissions pathway. Thus policymakers will be faced with a range of temperature probability distributions for the same emissions scenarios, each described by a central tendency and spread. Because our conventional understanding of uncertainty and probability requires that a probabilistically defined variable of interest have only a single mean (or median, or modal) value and a well-defined spread, this "multidimensional" uncertainty defies straightforward utilization in policymaking. We suggest that there are no simple solutions to the questions raised. Crucially, we must dispel the notion that there is a "true" probability: probabilities of this type are necessarily subjective, and reasonable people may disagree. Indeed, we suggest that what is at stake is precisely the question, what is it reasonable to believe, and to act as if we believe? As a preliminary suggestion, we demonstrate how the output of a simple probabilistic climate model might be evaluated regarding the reasonableness of the outputs it calculates with different input PDFs. We suggest further that where there is insufficient evidence to clearly
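The core issue can be sketched in a few lines: the same forcing pushed through two different published climate-sensitivity PDFs yields two different output distributions. The toy warming relation and both stand-in PDFs below are hypothetical, chosen only to make the point runnable.

```python
# Toy model: equilibrium warming = sensitivity * forcing / F_2xCO2.
# Two different input PDFs for climate sensitivity give two different
# output distributions for the *same* emissions-driven forcing.
import random
import statistics

def warming_samples(sensitivity_draw, forcing=3.7, f2x=3.7, n=10000, seed=0):
    rng = random.Random(seed)
    return [sensitivity_draw(rng) * forcing / f2x for _ in range(n)]

# Stand-in "published" PDFs (deg C per CO2 doubling), both hypothetical:
pdf_a = lambda rng: rng.uniform(1.5, 4.5)         # flat over a plausible range
pdf_b = lambda rng: rng.lognormvariate(1.0, 0.4)  # right-skewed alternative

median_a = statistics.median(warming_samples(pdf_a))
median_b = statistics.median(warming_samples(pdf_b))
# The medians (and spreads) differ, so the policymaker sees one output
# distribution per input-PDF choice rather than a single answer.
```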

  15. Using Multi-input-layer Wavelet Neural Network to Model Product Quality of Continuous Casting Furnace and Hot Rolling Mill

    Institute of Scientific and Technical Information of China (English)

    HuanqinLi; JieCheng; BaiwuWan

    2004-01-01

    A new architecture of wavelet neural network with multiple input layers is proposed and implemented for modeling a class of large-scale industrial processes. Because the processes are very complicated, the number of technological parameters that determine the final product quality is quite large, and these parameters do not act at the same time but in different work procedures, conventional feed-forward neural networks cannot model this class of problems efficiently. The network presented in this paper has several input layers arranged according to the sequence of work procedures in large-scale industrial production processes. The performance of such networks is analyzed, and the network is applied to model steel plate quality in a continuous casting furnace and hot rolling mill. Simulation results indicate that the developed methodology is competent and has good prospects for this class of problems.
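For illustration, the basic unit of a wavelet neural network can be sketched as follows: a weighted sum of inputs passed through a translated and dilated mother wavelet (here the Mexican hat). The weights and parameters are hypothetical, and the paper's multi-input-layer wiring is not reproduced.

```python
import math

def mexican_hat(x):
    """Mexican-hat (Ricker) mother wavelet, up to a normalisation constant."""
    return (1.0 - x * x) * math.exp(-x * x / 2.0)

def wavelet_neuron(inputs, weights, translation, dilation):
    """One wavelet 'neuron': weighted input sum fed through the wavelet,
    shifted by `translation` and scaled by `dilation`."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return mexican_hat((s - translation) / dilation)

y = wavelet_neuron([0.5, 0.25], weights=[1.0, 2.0], translation=1.0, dilation=1.0)
# s = 0.5*1.0 + 0.25*2.0 = 1.0, so the wavelet argument is 0 and y = 1.0
```

In the multi-input-layer architecture, each work procedure would feed its own group of such units, with the groups combined downstream.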

  16. Observation-Based Dissipation and Input Terms for Spectral Wave Models, with End-User Testing

    Science.gov (United States)

    2014-09-30

    Zieger, S. “Wave climate in the marginal ice zones of Arctic Seas, observations and modelling”. ONR Sea State DRU project. Studies wave climate in the...Rinke, and H. Matthes, 2014: Projected changes of wind-wave activity in the Arctic Ocean. Proceedings of the 22nd IAHR International Symposium on Ice ...objectives are to use new observation-based source terms for the wind input, wave-breaking (whitecapping) dissipation and swell decay in the third

  17. Effect of Flux Adjustments on Temperature Variability in Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Duffy, P.; Bell, J.; Covey, C.; Sloan, L.

    1999-12-27

    It has been suggested that "flux adjustments" in climate models suppress simulated temperature variability. If true, this might invalidate the conclusion that at least some of observed temperature increases since 1860 are anthropogenic, since this conclusion is based in part on estimates of natural temperature variability derived from flux-adjusted models. We assess variability of surface air temperatures in 17 simulations of internal temperature variability submitted to the Coupled Model Intercomparison Project. By comparing variability in flux-adjusted vs. non-flux adjusted simulations, we find no evidence that flux adjustments suppress temperature variability in climate models; other, largely unknown, factors are much more important in determining simulated temperature variability. Therefore the conclusion that at least some of observed temperature increases are anthropogenic cannot be questioned on the grounds that it is based in part on results of flux-adjusted models. Also, reducing or eliminating flux adjustments would probably do little to improve simulations of temperature variability.

  18. Linear Regression in High Dimension and/or for Correlated Inputs

    Science.gov (United States)

    Jacques, J.; Fraix-Burnet, D.

    2014-12-01

    Ordinary least squares is the common way to estimate linear regression models. When inputs are correlated or too numerous, regression methods using derived input directions or shrinkage methods can be efficient alternatives. Methods using derived input directions build new uncorrelated variables as linear combinations of the initial inputs, whereas shrinkage methods introduce regularization and variable selection by penalizing the usual least squares criterion. Both kinds of methods are presented and illustrated using the R software on an astronomical dataset.
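A minimal sketch of one shrinkage method (ridge regression) for two correlated inputs, solving the penalized normal equations (X'X + λI)β = X'y directly; the data are hypothetical. With nearly collinear inputs, the unpenalized solution concentrates all weight on one input, while the ridge solution spreads it evenly.

```python
def ridge_2d(xs, ys, lam):
    """Return (b1, b2) minimising sum (y - b1*x1 - b2*x2)^2 + lam*(b1^2 + b2^2),
    via the 2x2 penalized normal equations (no intercept, for brevity)."""
    s11 = sum(x1 * x1 for x1, _ in xs) + lam
    s22 = sum(x2 * x2 for _, x2 in xs) + lam
    s12 = sum(x1 * x2 for x1, x2 in xs)
    t1 = sum(x1 * y for (x1, _), y in zip(xs, ys))
    t2 = sum(x2 * y for (_, x2), y in zip(xs, ys))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

X = [(1.0, 1.01), (2.0, 1.99), (3.0, 3.02)]   # two nearly collinear inputs
y = [2.0, 4.0, 6.0]
b_ols = ridge_2d(X, y, lam=0.0)    # collinearity puts all weight on one input
b_ridge = ridge_2d(X, y, lam=1.0)  # shrinkage spreads the weight evenly
```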

  19. Coronal energy input and dissipation in a solar active region 3D MHD model

    CERN Document Server

    Bourdin, Philippe-A; Peter, Hardi

    2015-01-01

    Context. We have conducted a 3D MHD simulation of the solar corona above an active region in full scale and high resolution, which shows coronal loops, and plasma flows within them, similar to observations. Aims. We want to find the connection between the photospheric energy input by field-line braiding with the coronal energy conversion by Ohmic dissipation of induced currents. Methods. To this end we compare the coronal energy input and dissipation within our simulation domain above different fields of view, e.g. for a small loops system in the active region (AR) core. We also choose an ensemble of field lines to compare, e.g., the magnetic energy input to the heating per particle along these field lines. Results. We find an enhanced Ohmic dissipation of currents in the corona above areas that also have enhanced upwards-directed Poynting flux. These regions coincide with the regions where hot coronal loops within the AR core are observed. The coronal density plays a role in estimating the coronal temperatur...

  20. Simulation Evaluation of Pilot Inputs for Real Time Modeling During Commercial Flight Operations

    Science.gov (United States)

    Martos, Borja; Ranaudo, Richard; Oltman, Ryan; Myhre, Nick

    2017-01-01

    Aircraft dynamics characteristics can only be identified from flight data when the aircraft dynamics are excited sufficiently. A preliminary study was conducted into what types and levels of manual piloted control excitation would be required for accurate Real-Time Parameter IDentification (RTPID) results by commercial airline pilots. This includes assessing the practicality for the pilot to provide this excitation when cued, and to further understand if pilot inputs during various phases of flight provide sufficient excitation naturally. An operationally representative task was evaluated by 5 commercial airline pilots using the NASA Ice Contamination Effects Flight Training Device (ICEFTD). Results showed that it is practical to use manual pilot inputs only as a means of achieving good RTPID in all phases of flight and in flight turbulence conditions. All pilots were effective in satisfying excitation requirements when cued. Much of the time, cueing was not even necessary, as just performing the required task provided enough excitation for accurate RTPID estimation. Pilot opinion surveys reported that the additional control inputs required when prompted by the excitation cueing were easy to make, quickly mastered, and required minimal training.

  1. Comparing rainfall variability, model complexity and hydrological response at the intra-event scale

    Science.gov (United States)

    Cristiano, Elena; ten Veldhuis, Marie-claire; Ochoa-Rodriguez, Susana; van de Giesen, Nick

    2017-04-01

    The high variability of rainfall in space and time is one of the main aspects that influence hydrological response and the generation of pluvial flooding. This phenomenon has a bigger impact in urban areas, where response is usually faster and flow peaks are typically higher due to the high degree of imperviousness. Previous researchers have investigated the sensitivity of urban hydrodynamic models to rainfall space-time resolution, as well as interactions with model structure and resolution. They showed that finding a proper match between rainfall resolution and model complexity is important and that sensitivity increases at smaller urban catchment scales. Results also showed high variability in hydrological response sensitivity, the origins of which remain poorly understood. In this work, we investigate the interaction between rainfall input variability and model structure and scale at high resolution, i.e. 1-15 minutes in time and 100 m to 3 km in space. Apart from studying summary statistics such as relative peak flow errors and the coefficient of determination, we look into characteristics of response hydrographs to find explanations for response variability in relation to catchment properties as well as storm event characteristics (e.g. storm scale and movement, single-peak versus multi-peak events). The aim is to identify general relations between storm temporal and spatial scales and catchment scale in explaining variability of hydrological response. Analyses are conducted for the Cranbrook catchment (London, UK), using 3 hydrodynamic models set up in InfoWorks ICM: a low resolution semi-distributed (SD1) model, a high resolution semi-distributed (SD2) model and a fully distributed (FD) model. These models represent the spatial variability of the land in different ways: semi-distributed models divide the surface into subcatchments, each of them modelled in a lumped way (51 subcatchments for the SD1 model and 4409 subcatchments for the SD2 model), while the fully distributed
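The two summary statistics named above can be sketched as follows, for a pair of hydrographs at matching time steps (the flow series are hypothetical, not the study's data):

```python
def relative_peak_error(sim, ref):
    """(simulated peak - reference peak) / reference peak."""
    return (max(sim) - max(ref)) / max(ref)

def coefficient_of_determination(sim, ref):
    """R^2 of the simulated flows against the reference flows."""
    mean_ref = sum(ref) / len(ref)
    ss_res = sum((r - s) ** 2 for r, s in zip(ref, sim))
    ss_tot = sum((r - mean_ref) ** 2 for r in ref)
    return 1.0 - ss_res / ss_tot

ref = [0.0, 1.0, 4.0, 2.0, 0.5]   # reference hydrograph (m3/s), hypothetical
sim = [0.0, 1.2, 3.6, 2.1, 0.4]   # simulated hydrograph (m3/s), hypothetical
rpe = relative_peak_error(sim, ref)       # -0.1: peak underestimated by 10%
r2 = coefficient_of_determination(sim, ref)
```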

  2. A Polynomial Term Structure Model with Macroeconomic Variables

    Directory of Open Access Journals (Sweden)

    José Valentim Vicente

    2007-06-01

    Full Text Available Recently, a myriad of factor models including macroeconomic variables have been proposed to analyze the yield curve. We present an alternative factor model where term structure movements are captured by Legendre polynomials mimicking the statistical factor movements identified by Litterman and Scheinkman (1991). We estimate the model with Brazilian Foreign Exchange Coupon data, adopting a Kalman filter, under two versions: the first uses only latent factors and the second includes macroeconomic variables. We study its ability to predict out-of-sample term structure movements, when compared to a random walk. We also discuss results on the impulse response function of macroeconomic variables.
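The Legendre loadings can be sketched directly: the first three Legendre polynomials on [-1, 1] play the roles of the level, slope, and curvature factors identified by Litterman and Scheinkman, with maturities τ in [0, T] mapped to x = 2τ/T - 1. The factor values below are hypothetical.

```python
def legendre_yield(tau, t_max, beta):
    """Yield at maturity tau from level/slope/curvature factor loadings."""
    x = 2.0 * tau / t_max - 1.0
    p0 = 1.0                         # P0: level
    p1 = x                           # P1: slope
    p2 = 0.5 * (3.0 * x * x - 1.0)   # P2: curvature
    return beta[0] * p0 + beta[1] * p1 + beta[2] * p2

beta = (0.10, 0.02, -0.01)                 # hypothetical factor values
short = legendre_yield(0.0, 10.0, beta)    # tau=0  -> x=-1: 0.10 - 0.02 - 0.01
long = legendre_yield(10.0, 10.0, beta)    # tau=T  -> x=+1: 0.10 + 0.02 - 0.01
mid = legendre_yield(5.0, 10.0, beta)      # tau=T/2 -> x=0: 0.10 + 0.005
```

In the full model these β's become time-varying states estimated by the Kalman filter.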

  3. Gaussian Process Structural Equation Models with Latent Variables

    CERN Document Server

    Silva, Ricardo

    2010-01-01

    In a variety of disciplines such as social sciences, psychology, medicine and economics, the recorded data are considered to be noisy measurements of latent variables connected by some causal structure. This corresponds to a family of graphical models known as the structural equation model with latent variables. While linear non-Gaussian variants have been well-studied, inference in nonparametric structural equation models is still underdeveloped. We introduce a sparse Gaussian process parameterization that defines a non-linear structure connecting latent variables, unlike common formulations of Gaussian process latent variable models. An efficient Markov chain Monte Carlo procedure is described. We evaluate the stability of the sampling procedure and the predictive ability of the model compared against the current practice.

  4. Modelling and forecasting electricity price variability

    Energy Technology Data Exchange (ETDEWEB)

    Haugom, Erik

    2012-07-01

    The liberalization of electricity sectors around the world has induced a need for financial electricity markets. This thesis is mainly focused on calculating, modelling, and predicting volatility for financial electricity prices. The four first essays examine the liberalized Nordic electricity market. The purposes in these papers are to describe some stylized properties of high-frequency financial electricity data and to apply models that can explain and predict variation in volatility. The fifth essay examines how information from high-frequency electricity forward contracts can be used in order to improve electricity spot-price volatility predictions. This essay uses data from the Pennsylvania-New Jersey-Maryland wholesale electricity market in the U.S.A. Essay 1 describes some stylized properties of financial high-frequency electricity prices, their returns and volatilities at the Nordic electricity exchange, Nord Pool. The analyses focus on distribution properties, serial correlation, volatility clustering, the influence of extreme events and seasonality in the various measures. The objective of Essay 2 is to calculate, model, and predict realized volatility of financial electricity prices for quarterly and yearly contracts. The total variation is also separated into continuous and jump variation. Various market measures are also included in the models in order potentially to improve volatility predictions. Essay 3 compares day-ahead predictions of Nord Pool financial electricity price volatility obtained from a GARCH approach with those obtained using standard time-series techniques on realized volatility. The performances of a total of eight models (two representing the GARCH family and six representing standard autoregressive models) are compared and evaluated. Essay 4 examines whether predictions of day-ahead and week-ahead volatility can be improved by additionally including volatility and covariance effects from related financial electricity contracts
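A minimal sketch of the realized-volatility measure underlying essays 2-5, including a crude separation of total variation into continuous and jump parts (the essays use formal bipower-variation tests; the simple cutoff and the return series below are only illustrative):

```python
import math

def realized_volatility(returns):
    """Square root of the sum of squared high-frequency log returns."""
    return math.sqrt(sum(r * r for r in returns))

def separate_jumps(returns, threshold):
    """Illustrative split of total variation: returns larger (in absolute
    value) than the threshold are treated as jump variation."""
    cont = sum(r * r for r in returns if abs(r) <= threshold)
    jump = sum(r * r for r in returns if abs(r) > threshold)
    return cont, jump

rets = [0.001, -0.002, 0.015, 0.001, -0.001]   # one "jump"-sized return
rv = realized_volatility(rets)
cont, jump = separate_jumps(rets, threshold=0.01)
```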

  5. Coevolution of variability models and related software artifacts

    DEFF Research Database (Denmark)

    Passos, Leonardo; Teixeira, Leopoldo; Dinztner, Nicolas;

    2015-01-01

    Variant-rich software systems offer a large degree of customization, allowing users to configure the target system according to their preferences and needs. Facing high degrees of variability, these systems often employ variability models to explicitly capture user-configurable features (e...... to the evolution of different kinds of software artifacts, it is not surprising that industry reports existing tools and solutions ineffective, as they do not handle the complexity found in practice. Attempting to mitigate this overall lack of knowledge and to support tool builders with insights on how variability...... models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel with subsequent changes in Makefiles and C source......

  6. Harmonize input selection for sediment transport prediction

    Science.gov (United States)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM), and a Response Surface Method based on Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are selected manually, based on the maximum correlations of the input variables, in the NN- and RSM-based approaches. The RSM is improved to select the input variables by using the error terms of the training data based on the GHS, giving the response surface method with global harmony search (RSM-GHS) modeling method. The second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 (ton/day), RMSE = 25.16 (ton/day)) compared to the ANN (R = 0.91, MAE = 20.17 (ton/day), RMSE = 33.09 (ton/day)) and RSM (R = 0.91, MAE = 20.06 (ton/day), RMSE = 31.92 (ton/day)) for all types of input variables.
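The shape of the second-order response surface with cross terms can be sketched as follows for three selected inputs; all coefficients below are hypothetical, whereas in the paper they are calibrated from training-data errors.

```python
def rsm_quadratic(x, b0, lin, sq, cross):
    """Full quadratic response surface for 3 inputs:
    b0 + sum b_i*x_i + sum b_ii*x_i^2 + sum b_ij*x_i*x_j (i < j)."""
    y = b0
    y += sum(b * xi for b, xi in zip(lin, x))        # linear terms
    y += sum(b * xi * xi for b, xi in zip(sq, x))    # square terms
    pairs = [(0, 1), (0, 2), (1, 2)]
    y += sum(b * x[i] * x[j] for b, (i, j) in zip(cross, pairs))  # cross terms
    return y

# Hypothetical inputs (e.g. antecedent sediment loads / discharges) and coefficients:
y = rsm_quadratic([1.0, 2.0, 3.0], b0=0.5,
                  lin=[1.0, 0.5, 0.25],
                  sq=[0.1, 0.1, 0.1],
                  cross=[0.01, 0.02, 0.03])
```

In RSM-GHS, the global harmony search would tune which antecedent values enter x and the coefficient vector itself.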

  7. Fractional Langevin model of gait variability

    Directory of Open Access Journals (Sweden)

    Latka Miroslaw

    2005-08-01

    Full Text Available The stride interval in healthy human gait fluctuates from step to step in a random manner and scaling of the interstride interval time series motivated previous investigators to conclude that this time series is fractal. Early studies suggested that gait is a monofractal process, but more recent work indicates the time series is weakly multifractal. Herein we present additional evidence for the weakly multifractal nature of gait. We use the stride interval time series obtained from ten healthy adults walking at a normal relaxed pace for approximately fifteen minutes each as our data set. A fractional Langevin equation is constructed to model the underlying motor control system in which the order of the fractional derivative is itself a stochastic quantity. Using this model we find the fractal dimension for each of the ten data sets to be in agreement with earlier analyses. However, with the present model we are able to draw additional conclusions regarding the nature of the control system guiding walking. The analysis presented herein suggests that the observed scaling in interstride interval data may not be due to long-term memory alone, but may, in fact, be due partly to the statistics.

  8. Modeling and design of energy efficient variable stiffness actuators

    NARCIS (Netherlands)

    Visser, L.C.; Carloni, Raffaella; Ünal, Ramazan; Stramigioli, Stefano

    In this paper, we provide a port-based mathematical framework for analyzing and modeling variable stiffness actuators. The framework provides important insights in the energy requirements and, therefore, it is an important tool for the design of energy efficient variable stiffness actuators. Based

  9. A model for variability design rationale in SPL

    NARCIS (Netherlands)

    Galvao, I.; van den Broek, P.M.; Aksit, Mehmet

    2010-01-01

    The management of variability in software product lines goes beyond the definition of variations, traceability and configurations. It involves a lot of assumptions about the variability and related models, which are made by the stakeholders all over the product line but almost never handled explicit

  10. Automatic Control System with a Single-Input-Dual-Output Model for Controlling Instrument Service-Life Efficiency

    Directory of Open Access Journals (Sweden)

    S.N.M.P. Simamora

    2014-10-01

    Full Text Available Efficiency occurs when the ratio of useful output to the total resources consumed approaches 1 (the absolute limit). An instrument achieves efficiency if its power consumption over its service life decreases significantly compared to its previous condition, i.e., before it was equipped with the additional system (the proposed model improvement). The approach is even more effective if the model's inputs are used in unison to achieve a homogeneous output. In this research, an automatic control system based on a single-input-dual-output model has been designed and implemented, with a lamp and a fan as the sampled instruments. The source voltage is AC (alternating current), and the system was tested using quantitative research methods and instrumentation (observed with measuring instruments). The results show that, under the single-input-dual-output model, each instrument tested separately (lamp and fan) exhibited a significant efficiency gain compared to its previous state, and that the design as built runs well.

  11. Sensitivity of meteorological input and soil properties in simulating aerosols (dust, PM10, and BC) using CHIMERE chemistry transport model

    Indian Academy of Sciences (India)

    Nishi Srivastava; S K Satheesh; Nadège Blond

    2014-08-01

    The objective of this study is to evaluate the ability of a European chemistry transport model, ‘CHIMERE’ driven by the US meteorological model MM5, in simulating aerosol concentrations [dust, PM10 and black carbon (BC)] over the Indian region. An evaluation of a meteorological event (dust storm); impact of change in soil-related parameters and meteorological input grid resolution on these aerosol concentrations has been performed. Dust storm simulation over Indo-Gangetic basin indicates ability of the model to capture dust storm events. Measured (AERONET data) and simulated parameters such as aerosol optical depth (AOD) and Angstrom exponent are used to evaluate the performance of the model to capture the dust storm event. A sensitivity study is performed to investigate the impact of change in soil characteristics (thickness of the soil layer in contact with air, volumetric water, and air content of the soil) and meteorological input grid resolution on the aerosol (dust, PM10, BC) distribution. Results show that soil parameters and meteorological input grid resolution have an important impact on spatial distribution of aerosol (dust, PM10, BC) concentrations.
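The Angstrom exponent used above to evaluate the dust-storm simulation relates AOD measured at two wavelengths; the relation itself is standard, while the wavelengths and AOD values below are hypothetical.

```python
import math

def angstrom_exponent(aod1, aod2, wl1, wl2):
    """alpha = -ln(aod1/aod2) / ln(wl1/wl2).

    Low alpha (near 0) indicates coarse particles such as dust; values
    near 2 indicate fine-mode aerosol such as combustion pollution."""
    return -math.log(aod1 / aod2) / math.log(wl1 / wl2)

# Hypothetical dust-storm case: AOD nearly flat across wavelengths.
alpha_dust = angstrom_exponent(aod1=0.48, aod2=0.50, wl1=870e-9, wl2=440e-9)
# Hypothetical fine-mode case: AOD falls off steeply with wavelength.
alpha_fine = angstrom_exponent(aod1=0.125, aod2=0.5, wl1=880e-9, wl2=440e-9)
```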

  12. Terrestrial ecosystem recovery - Modelling the effects of reduced acidic inputs and increased inputs of sea-salts induced by global change

    DEFF Research Database (Denmark)

    Beier, C.; Moldan, F.; Wright, R.F.

    2003-01-01

    to 3 large-scale "clean rain" experiments, the so-called roof experiments at Risdalsheia, Norway; Gardsjon, Sweden, and Klosterhede, Denmark. Implementation of the Gothenburg protocol will initiate recovery of the soils at all 3 sites by rebuilding base saturation. The rate of recovery is small...... and base saturation increases less than 5% over the next 30 years. A climate-induced increase in storm severity will increase the sea-salt input to the ecosystems. This will provide additional base cations to the soils and more than double the rate of the recovery, but also lead to strong acid pulses...... following high sea-salt inputs as the deposited base cations exchange with the acidity stored in the soil. Future recovery of soils and runoff at acidified catchments will thus depend on the amount and rate of reduction of acid deposition, and in the case of systems near the coast, the frequency...

  13. Variable Selection in the Partially Linear Errors-in-Variables Models for Longitudinal Data

    Institute of Scientific and Technical Information of China (English)

    Yi-ping YANG; Liu-gen XUE; Wei-hu CHENG

    2012-01-01

    This paper proposes a new approach for variable selection in partially linear errors-in-variables (EV) models for longitudinal data by penalizing appropriate estimating functions. We apply the SCAD penalty to simultaneously select significant variables and estimate unknown parameters. The rate of convergence and the asymptotic normality of the resulting estimators are established. Furthermore, with proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure. A new algorithm is proposed for solving the penalized estimating equation. The asymptotic results are augmented by a simulation study.
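The SCAD penalty applied to each candidate coefficient has a standard closed form (shown below with the usual choice a = 3.7): it acts like the lasso near zero, which drives small coefficients to zero and performs the selection, but flattens out beyond a·λ so that large coefficients are not over-shrunk.

```python
def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty on a single coefficient theta with tuning parameter lam."""
    t = abs(theta)
    if t <= lam:                       # lasso-like region
        return lam * t
    if t <= a * lam:                   # quadratic transition region
        return (2.0 * a * lam * t - t * t - lam * lam) / (2.0 * (a - 1.0))
    return lam * lam * (a + 1.0) / 2.0  # constant beyond a*lam

small = scad_penalty(0.05, lam=0.1)    # lasso-like: 0.1 * 0.05 = 0.005
large = scad_penalty(10.0, lam=0.1)    # flat: 0.01 * 4.7 / 2 = 0.0235
```

In the paper this penalty is attached to the estimating functions rather than to a least-squares criterion, but the penalty itself is the same.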

  14. Modeling of the impact of Rhone River nutrient inputs on the dynamics of planktonic diversity

    Science.gov (United States)

    Alekseenko, Elena; Baklouti, Melika; Garreau, Pierre; Guyennon, Arnaud; Carlotti, François

    2014-05-01

    conditions (for which the sea surface layer is well mixed). As a first step, these scenarios will allow us to investigate the impact of changes in the N:P ratios of the Rhone River on the structure of the planktonic community at short time scale (two years). Acknowledgements The present research is a contribution to the Labex OT-Med (n° ANR-11-LABX-0061) funded by the French Government «Investissements d'Avenir» program of the French National Research Agency (ANR) through the A*MIDEX project (n° ANR-11-IDEX-0001-02). We thank our colleague P. Raimbault for the access to the MOOSE project dataset about the nutrient composition of the Rhone River. References Alekseenko E., Raybaud V., Espinasse B., Carlotti F., Queguiner B., Thouvenin B., Garreau P., Baklouti M. (2014) Seasonal dynamics and stoichiometry of the planktonic community in the NW Mediterranean Sea: a 3D modeling approach. Ocean Dynamics IN PRESS. http://dx.doi.org/10.1007/s10236-013-0669-2 Baklouti M, Diaz F, Pinazo C, Faure V, Quequiner B (2006a) Investigation of mechanistic formulations depicting phytoplankton dynamics for models of marine pelagic ecosystems and description of a new model. Prog Oceanogr 71:1-33 Baklouti M, Faure V, Pawlowski L, Sciandra A (2006b) Investigation and sensitivity analysis of a mechanistic phytoplankton model implemented in a new modular tool (Eco3M) dedicated to biogeochemical modelling. Prog Oceanogr 71:34-58 Lazure P, Dumas F (2008) An external-internal mode coupling for a 3D hydrodynamical model for applications at regional scale (MARS). Adv Water Resour 31(2):233-250 Ludwig, W., Dumont, E., Meybeck, M., Heussner, S. (2009). River discharges of water and nutrients to the Mediterranean and Black Sea: Major drivers for ecosystem changes during past and future decades? Progress in Oceanography 80, pp. 199-217 Malanotte-Rizoli, P. and Pan-Med Group. (2012) Physical forcing and physical/biochemical variability of the Mediterranean Sea: A review of unresolved issues and directions of

  15. Modeling Candle Flame Behavior In Variable Gravity

    Science.gov (United States)

    Alsairafi, A.; Tien, J. S.; Lee, S. T.; Dietrich, D. L.; Ross, H. D.

    2003-01-01

    The burning of a candle, as a typical non-propagating diffusion flame, has been used by a number of researchers to study the effects of electric fields on flame, spontaneous flame oscillation and flickering phenomena, and flame extinction. In normal gravity, the heat released from combustion creates buoyant convection that draws oxygen into the flame. The strength of the buoyant flow depends on the gravitational level and it is expected that the flame shape, size and candle burning rate will vary with gravity. Experimentally, there exist studies of candle burning in enhanced gravity (i.e. higher than normal earth gravity, g(sub e)), and in microgravity in drop towers and space-based facilities. There are, however, no reported experimental data on candle burning in partial gravity (g < g(sub e)). In a previously developed model of the candle flame, buoyant forces were neglected. The treatment of the momentum equation was simplified using a potential flow approximation. Although the predicted flame characteristics agreed well with the experimental results, the model cannot be extended to cases with buoyant flows. In addition, because of the use of potential flow, the no-slip boundary condition is not satisfied on the wick surface. So there is some uncertainty on the accuracy of the predicted flow field. In the present modeling effort, the full Navier-Stokes momentum equations with the body force term are included. This enables us to study the effect of gravity on candle flames (with zero gravity as the limiting case). In addition, we consider radiation effects in more detail by solving the radiation transfer equation. In the previous study, flame radiation is treated as a simple loss term in the energy equation. Emphasis of the present model is on the gas-phase processes. Therefore, the detailed heat and mass transfer phenomena inside the porous wick are not treated. Instead, it is assumed that a thin layer of liquid fuel coated the entire wick surface during the burning process. This is the limiting case that the mass

  16. Multi-wheat-model ensemble responses to interannual climatic variability

    DEFF Research Database (Denmark)

    Ruane, A C; Hudson, N I; Asseng, S

    2016-01-01

    evaluate results against the interannual variability of growing season temperature, precipitation, and solar radiation. The amount of information used for calibration has only a minor effect on most models' climate response, and even small multi-model ensembles prove beneficial. Wheat model clusters reveal...... common characteristics of yield response to climate; however models rarely share the same cluster at all four sites indicating substantial independence. Only a weak relationship (R2 ≤ 0.24) was found between the models' sensitivities to interannual temperature variability and their response to long...

  17. Effects of input discretization, model complexity, and calibration strategy on model performance in a data-scarce glacierized catchment in Central Asia

    Science.gov (United States)

    Tarasova, L.; Knoche, M.; Dietrich, J.; Merz, R.

    2016-06-01

    Glacierized high-mountainous catchments are often the water towers for downstream regions, and modeling these remote areas is often the only available tool for the assessment of water resources availability. Nevertheless, data scarcity affects different aspects of hydrological modeling in such mountainous glacierized basins. Using the example of a poorly gauged glacierized catchment in Central Asia, we examined the effects of input discretization, model complexity, and calibration strategy on model performance. The study was conducted with the GSM-Socont model driven with climatic input from the corrected High Asia Reanalysis data set at two different discretizations. We analyze the effects of using long-term glacier volume loss, snow cover images, and interior runoff as additional calibration data. In glacierized catchments of the winter-accumulation type, where the transformation of precipitation into runoff is mainly controlled by snow and glacier melt processes, the spatial discretization of precipitation tends to have less impact on simulated runoff than a correct prediction of the integral precipitation volume. Increasing model complexity by using spatially distributed input or semidistributed parameter values does not increase model performance in the Gunt catchment, as the more complex model tends to be more sensitive to errors in the input data set. In our case, better model performance and quantification of the flow components can be achieved with additional calibration data rather than with more distributed model parameters. However, a semidistributed model better predicts the spatial patterns of snow accumulation and provides more plausible runoff predictions at the interior sites.

  18. Synaptic inputs compete during rapid formation of the calyx of Held: a new model system for neural development.

    Science.gov (United States)

    Holcomb, Paul S; Hoffpauir, Brian K; Hoyson, Mitchell C; Jackson, Dakota R; Deerinck, Thomas J; Marrs, Glenn S; Dehoff, Marlin; Wu, Jonathan; Ellisman, Mark H; Spirou, George A

    2013-08-07

    Hallmark features of neural circuit development include early exuberant innervation followed by competition and pruning to mature innervation topography. Several neural systems, including the neuromuscular junction and climbing fiber innervation of Purkinje cells, are models to study neural development in part because they establish a recognizable endpoint of monoinnervation of their targets and because the presynaptic terminals are large and easily monitored. We demonstrate here that calyx of Held (CH) innervation of its target, which forms a key element of auditory brainstem binaural circuitry, exhibits all of these characteristics. To investigate CH development, we made the first application of serial block-face scanning electron microscopy to neural development with fine temporal resolution and thereby accomplished the first time series for 3D ultrastructural analysis of neural circuit formation. This approach revealed a growth spurt of added apposed surface area (ASA)>200 μm2/d centered on a single age at postnatal day 3 in mice and an initial rapid phase of growth and competition that resolved to monoinnervation in two-thirds of cells within 3 d. This rapid growth occurred in parallel with an increase in action potential threshold, which may mediate selection of the strongest input as the winning competitor. ASAs of competing inputs were segregated on the cell body surface. These data suggest mechanisms to select "winning" inputs by regional reinforcement of postsynaptic membrane to mediate size and strength of competing synaptic inputs.

  19. A new approach to model the variability of karstic recharge

    Directory of Open Access Journals (Sweden)

    A. Hartmann

    2012-02-01

    Full Text Available In karst systems, near-surface dissolution of carbonate rock results in a high spatial and temporal variability of groundwater recharge. Adequately representing the dominating recharge processes in hydrological models is still a challenge, especially in data-scarce regions. In this study, we developed a recharge model that is based on a perceptual model of the epikarst. It represents epikarst heterogeneity as a set of system property distributions to produce not just a single recharge time series but a variety of time series representing the spatial recharge variability. We tested the new model with a unique set of spatially distributed flow and tracer observations in a karstic cave at Mt. Carmel, Israel. We transformed the spatial variability into statistical variables and applied an iterative calibration strategy in which more and more data were added to the calibration. Thereby, we could show that the model is only able to produce realistic results when the information about the spatial variability of the observations is included in the model calibration. We could also show that tracer information improves the model performance if data about the variability are not included.
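The epikarst-as-distribution idea in this record can be illustrated with a deliberately minimal bucket model. Everything below (the single-store structure, the 10% loss term, the capacity values) is a hypothetical sketch, not the authors' model:

```python
import random

def recharge_series(rain, capacity, loss=0.1):
    """Minimal epikarst bucket: storage fills with rain, overflow becomes
    recharge, and an assumed fixed fraction is lost each step."""
    store, out = 0.0, []
    for r in rain:
        store += r
        spill = max(0.0, store - capacity)   # overflow -> recharge
        store = (store - spill) * (1.0 - loss)
        out.append(spill)
    return out

rng = random.Random(0)
rain = [rng.uniform(0.0, 10.0) for _ in range(100)]

# Epikarst heterogeneity as a distribution of storage capacities yields a
# family of recharge time series rather than a single one.
ensemble = {c: recharge_series(rain, c) for c in (5.0, 15.0, 30.0)}
totals = {c: round(sum(s), 1) for c, s in ensemble.items()}
print(totals)  # shallow stores shed the most recharge
```

Running the same rain series through the whole capacity distribution is what produces the "variety of time series" the abstract describes.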

  20. Geometrically nonlinear creeping mathematic models of shells with variable thickness

    Directory of Open Access Journals (Sweden)

    V.M. Zhgoutov

    2012-08-01

    Full Text Available Calculations of strength, stability and vibration of shell structures play an important role in the design of modern devices, machines and structures. However, the behavior of thin-walled structures of variable thickness under geometric nonlinearity, transverse shear, viscoelasticity (creep) of the material, profile variability and thermal deformation has not been studied sufficiently. In this paper, mathematical deformation models of variable-thickness shells (smoothly variable and ribbed shells), experiencing either mechanical load or a permanent temperature field and taking into account geometrical nonlinearity, creep and transverse shear, were developed. The refined geometrical relations for geometrically nonlinear and stability problems are given.

  1. Boolean Variables in Economic Models Solved by Linear Programming

    Directory of Open Access Journals (Sweden)

    Lixandroiu D.

    2014-12-01

    Full Text Available The article analyses the use of logical variables in economic models solved by linear programming. Focus is given to the presentation of the way logical constraints are obtained and of the definition rules based on predicate logic. Emphasis is also put on the possibility to use logical variables in constructing a linear objective function on intervals. Such functions are encountered when costs or unitary receipts are different on disjunct intervals of production volumes achieved or sold. Other uses of Boolean variables are connected to constraint systems with conditions and the case of a variable which takes values from a finite set of integers.
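A common pattern from this line of work is linking a Boolean variable to a continuous one with a big-M constraint, so that a fixed cost is charged only when production is positive. The sketch below brute-forces a tiny instance with invented numbers (no LP solver, integer-discretized production):

```python
# Hypothetical sketch: a Boolean variable y switches a fixed setup cost on
# only when production x is positive, via the classic big-M constraint
# x <= M * y.  Brute force over a tiny integer grid instead of an LP solver.

M = 100                  # big-M: assumed upper bound on production volume
profit_per_unit = 3.0    # illustrative unitary receipt
fixed_cost = 40.0        # illustrative setup cost charged only if y = 1

best = None
for y in (0, 1):                  # Boolean (logical) variable
    for x in range(M + 1):        # integer-discretized production level
        if x > M * y:             # big-M link: x > 0 forces y = 1
            continue
        objective = profit_per_unit * x - fixed_cost * y
        if best is None or objective > best[0]:
            best = (objective, x, y)

print(best)  # (260.0, 100, 1): produce at capacity and pay the setup cost
```

The same big-M device, repeated per interval with one Boolean per interval, is how a linear objective defined on disjunct production intervals is assembled.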

  2. Using structural equation modeling to investigate relationships among ecological variables

    Science.gov (United States)

    Malaeb, Z.A.; Summers, J. Kevin; Pugesek, B.H.

    2000-01-01

    Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships of latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables-sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258.
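In a linear SEM, the total effect of one variable on another decomposes as the sum of the direct path and the indirect paths. With the two coefficients reported in this record:

```python
direct = -0.3251    # direct effect of natural variability on growth potential
indirect = 0.4509   # indirect effect mediated through biodiversity

# In a linear SEM, the total effect is the sum of direct and indirect paths.
total = direct + indirect
print(round(total, 4))  # 0.1258 -- net positive despite a negative direct path
```

This is exactly the sign reversal the abstract highlights: looking at the direct path alone would have misjudged the effect as negative.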

  3. Dynamical analysis of a five-dimensioned chemostat model with impulsive diffusion and pulse input environmental toxicant

    Energy Technology Data Exchange (ETDEWEB)

    Jiao Jianjun, E-mail: jiaojianjun05@126.co [Guizhou Key Laboratory of Economic System Simulation, Guizhou College of Finance and Economics, Guiyang 550004 (China); Ye Kaili [School of Economics and Management, Xinyang Normal University, Xinyang 464000, Henan (China); Chen Lansun [Institute of Mathematics, Academy of Mathematics and System Sciences, Beijing 100080 (China)

    2011-01-15

    Research Highlights: This work improves on existing chemostat models. The proposed model accounts for natural phenomena. This work improves on existing mathematical methods. - Abstract: In this paper, we consider a five-dimensioned chemostat model with impulsive diffusion and pulse input of environmental toxicant. Using the discrete dynamical system determined by the stroboscopic map, we obtain a microorganism-extinction periodic solution and show that it is globally asymptotically stable. The permanence condition of the investigated system is also analyzed by the theory of impulsive differential equations. Our results reveal that chemostat environmental changes play an important role in the outcome of the chemostat.
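The stroboscopic-map technique used here samples an impulsive system once per pulse period, turning stability analysis into the study of a discrete map. A one-dimensional caricature (not the paper's five-dimensional system; decay rate and pulse size invented) shows the globally attracting fixed point:

```python
import math

def strobe(x0, decay=0.3, pulse=2.0, T=1.0, n=100):
    """Iterate the stroboscopic map x -> x*exp(-decay*T) + pulse:
    exponential decay between pulses, an impulsive input each period."""
    x = x0
    for _ in range(n):
        x = x * math.exp(-decay * T) + pulse
    return x

# Unique fixed point x* = pulse / (1 - exp(-decay*T)); it attracts globally
# because the map is a contraction with factor exp(-decay*T) < 1.
fixed = 2.0 / (1.0 - math.exp(-0.3))
print(abs(strobe(0.0) - fixed) < 1e-9, abs(strobe(50.0) - fixed) < 1e-9)  # True True
```

Global asymptotic stability of the periodic solution in the paper is the higher-dimensional analogue of this contraction argument.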

  4. Investigation of the effects of process variables on derived properties of spray dried solid-dispersions using polymer based response surface model and ensemble artificial neural network models.

    Science.gov (United States)

    Patel, Ashwinkumar D; Agrawal, Anjali; Dave, Rutesh H

    2014-04-01

    The objective of this study was to use different statistical tools to understand and optimize the spray drying process to prepare solid dispersions. In this study, we investigated the relationship between input variables (inlet temperature, feed concentration, flow rate, solvent and atomization parameters) and quality attributes (yield, outlet temperature and mean particle size) of spray dried solid dispersions (SSDs) using a response surface model and an ensemble artificial neural network. A Box-Behnken design was developed to investigate the effect of various input variables on quality attributes of final products. Moreover, Pearson correlation analysis, a self-organizing map, contour plots and response surface plots were used to illustrate the relationship between input variables and quality attributes. The influence of different physicochemical properties of solvent on the quality attributes of spray dried products was also investigated. Final validation of the prepared models was done using binary SSDs of six model drugs with PVP. Results demonstrated the effectiveness of the proposed PVP-based model, which can help scientists to gain a detailed understanding of the spray drying process of solid dispersions using minimal resources and time during the early formulation development stage. It will also help them to ensure consistent quality of SSDs using a broad range of input variables. Copyright © 2013 Elsevier B.V. All rights reserved.
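A Box-Behnken design places runs at the midpoints of the design-cube edges plus center points. A sketch of the coded design generator follows (the mapping of coded levels to actual factor settings in the study is not reproduced; the center-point count is a parameter because it varies by application):

```python
from itertools import combinations, product

def box_behnken(k, center_points=1):
    """Coded (-1/0/+1) Box-Behnken runs for k factors: a 2x2 factorial on
    every pair of factors with the rest held at the midpoint, plus centers."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            point = [0] * k
            point[i], point[j] = a, b
            runs.append(tuple(point))
    runs += [(0,) * k] * center_points
    return runs

design = box_behnken(3)   # e.g. inlet temperature, feed concentration, flow rate
print(len(design))        # 13 runs: 12 edge midpoints + 1 center point
```

Because no run sits at a cube corner, the design avoids extreme factor combinations, which is often why it is chosen for process equipment like spray dryers.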

  5. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

  6. Standard Model evaluation of $\\varepsilon_K$ using lattice QCD inputs for $\\hat{B}_K$ and $V_{cb}$

    CERN Document Server

    Bailey, Jon A; Lee, Weonjong; Park, Sungwoo

    2015-01-01

    We report the Standard Model evaluation of the indirect CP violation parameter $\\varepsilon_K$ using inputs from lattice QCD: the kaon bag parameter $\\hat{B}_K$, $\\xi_0$, $|V_{us}|$ from the $K_{\\ell 3}$ and $K_{\\mu 2}$ decays, and $|V_{cb}|$ from the axial current form factor for the exclusive decay $\\bar{B} \\to D^* \\ell \\bar{\

  7. Assimilation of autoscaled data and regional and local ionospheric models as input sources for real-time 3D IRI modeling

    Science.gov (United States)

    Pezzopane, Michael; Zolesi, Bruno; Settimi, Alessandro; Pietrella, Marco; Cander, Ljiljiana; Bianchi, Cesidio; Pignatelli, Alessandro

    This paper describes the three-dimensional (3-D) electron density mapping of the Earth’s ionosphere by the assimilative IRI-SIRMUP-P (ISP) model. Specifically, it highlights how the joint utilization of autoscaled data such as the critical frequency foF2, the propagation factor M(3000)F2, and the electron density profile N(h) coming from several reference ionospheric stations, as input to the regional SIRMUP (Simplified Ionospheric Regional Model Updated) and global IRI (International Reference Ionosphere) models, can provide a valid tool for obtaining a real-time 3-D electron density mapping of the ionosphere. Performance of the ISP model is shown by comparing the electron density profiles given by the model with the ones measured at dedicated testing ionospheric stations for quiet and disturbed geomagnetic conditions. Overall, the representation of the ionosphere made by the ISP model proves to be better than the climatological representation made by only the IRI-URSI and the IRI-CCIR models. However, there are some cases when the assimilation of the autoscaled data from the reference stations causes either a strong underestimation or a strong overestimation of the real conditions of the ionosphere, in which cases the IRI-URSI model performs better. This ISP misrepresentation, occurring mainly when the number of reference stations covering the region mapped by the model is not sufficient to represent disturbed periods during which the ionosphere is highly variable both in space and time, is the theme for further ISP improvements. Synthesized oblique ionograms obtained by the combined application of the ISP model and IONORT (IONOspheric Ray-Tracing) are also described in this paper. The comparison between these and measured oblique ionograms, both in terms of the ionogram shape and the Maximum Usable Frequency characterizing the considered radio path, confirms that the ISP model can represent the real conditions of the ionosphere more accurately than IRI

  8. Bayesian Network Models for Local Dependence among Observable Outcome Variables

    Science.gov (United States)

    Almond, Russell G.; Mulder, Joris; Hemat, Lisa A.; Yan, Duanli

    2009-01-01

    Bayesian network models offer a large degree of flexibility for modeling dependence among observables (item outcome variables) from the same task, which may be dependent. This article explores four design patterns for modeling locally dependent observations: (a) no context--ignores dependence among observables; (b) compensatory context--introduces…

  9. The relationship between cub and loglinear models with latent variables

    NARCIS (Netherlands)

    Oberski, D. L.; Vermunt, J. K.

    2015-01-01

    The "combination of uniform and shifted binomial"(cub) model is a distribution for ordinal variables that has received considerable recent attention and specialized development. This article notes that the cub model is a special case of the well-known loglinear latent class model, an observation tha

  10. Multi-wheat-model ensemble responses to interannual climate variability

    NARCIS (Netherlands)

    Ruane, Alex C.; Hudson, Nicholas I.; Asseng, Senthold; Camarrano, Davide; Ewert, Frank; Martre, Pierre; Boote, Kenneth J.; Thorburn, Peter J.; Aggarwal, Pramod K.; Angulo, Carlos; Basso, Bruno; Bertuzzi, Patrick; Biernath, Christian; Brisson, Nadine; Challinor, Andrew J.; Doltra, Jordi; Gayler, Sebastian; Goldberg, Richard; Grant, Robert F.; Heng, Lee; Hooker, Josh; Hunt, Leslie A.; Ingwersen, Joachim; Izaurralde, Roberto C.; Kersebaum, Kurt Christian; Kumar, Soora Naresh; Müller, Christoph; Nendel, Claas; O'Leary, Garry; Olesen, Jørgen E.; Osborne, Tom M.; Palosuo, Taru; Priesack, Eckart; Ripoche, Dominique; Rötter, Reimund P.; Semenov, Mikhail A.; Shcherbak, Iurii; Steduto, Pasquale; Stöckle, Claudio O.; Stratonovitch, Pierre; Streck, Thilo; Supit, Iwan; Tao, Fulu; Travasso, Maria; Waha, Katharina; Wallach, Daniel; White, Jeffrey W.; Wolf, Joost

    2016-01-01

    We compare 27 wheat models' yield responses to interannual climate variability, analyzed at locations in Argentina, Australia, India, and The Netherlands as part of the Agricultural Model Intercomparison and Improvement Project (AgMIP) Wheat Pilot. Each model simulated 1981-2010 grain yield, and

  11. Energy Efficiency Analysis and Modeling the Relationship between Energy Inputs and Wheat Yield in Iran

    Directory of Open Access Journals (Sweden)

    Fakher Kardoni

    2015-12-01

    Full Text Available Wheat is the dominant cereal crop constituting the first staple food in Iran. This paper studies the energy consumption patterns and the relationship between energy inputs and yield for wheat production in Iranian agriculture during the period 1986–2008. The results indicated that total energy inputs in irrigated and dryland wheat production increased from 29.01 and 9.81 GJ ha-1 in 1986 to 44.67 and 12.35 GJ ha-1 in 2008, respectively. Similarly, total output energy rose from 28.87 and 10.43 GJ ha-1 in 1986 to 58.53 and 15.77 GJ ha-1 in 2008, in the same period. Energy efficiency indicators, the input–output ratio, energy productivity, and net energy have improved over the examined period. The results also revealed that nonrenewable, direct, and indirect energy forms had a positive impact on the output level. Moreover, the regression results showed the significant effect of irrigation water and seed energies in irrigated wheat, and of human labor and fertilizer in dryland wheat, on crop yield. Results of this study indicated that improvement of fertilizer efficiency and reduction of fuel consumption by modifying tillage, harvest method, and other agronomic operations can significantly affect the energy efficiency of wheat production in Iran.

  12. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  13. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    In total variability modeling, variable length speech utterances are mapped to fixed low-dimensional i-vectors. Central to computing the total variability matrix and i-vector extraction, is the computation of the posterior distribution for a latent variable conditioned on an observed feature...... sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows in the heterogeneous case, that using informative priors for computing the posterior......, can lead to favorable results. We focus on modeling the priors using minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) dataset show that our proposed method beats four baselines: For i-vector extraction using an already...
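The contrast at the heart of this record — the posterior of a latent variable under a non-informative versus an informative (source-specific) prior — can be sketched in one dimension with a conjugate Gaussian update. This is a scalar caricature, not the actual i-vector posterior computation; all numbers are illustrative:

```python
def posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update for a latent scalar observed with
    known noise variance; returns (posterior mean, posterior variance)."""
    prec = 1.0 / prior_var + len(obs) / obs_var
    mean = (prior_mean / prior_var + sum(obs) / obs_var) / prec
    return mean, 1.0 / prec

obs = [1.2, 0.8, 1.0]
flat = posterior(0.0, 1e6, obs, 0.5)   # near non-informative prior
info = posterior(0.5, 0.1, obs, 0.5)   # informative source-specific prior
print(round(flat[0], 4), round(info[0], 4))  # 1.0 0.6875
```

Under the near-flat prior the posterior mean is essentially the sample mean; the informative prior pulls the estimate toward the source-specific mean and tightens the posterior variance, which is the effect exploited for heterogeneous datasets.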

  14. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the order of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages in description and physical explanation of the new model are explored by numerical simulation. Further comparisons between the new model and the variable-order fractional derivative model, covering computational efficiency, diffusion behavior and heavy-tail phenomena, are also offered.
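The qualitative signature of a variable-order model can be sketched through the mean-squared displacement: under a fractal/fractional diffusion scaling law, MSD grows as t**alpha, and a variable order makes alpha itself a function of time. The decaying-order profile below is invented purely for illustration:

```python
def msd(t, alpha):
    """Mean-squared displacement under a fractal/fractional diffusion
    scaling law: MSD ~ t**alpha (alpha = 1 is normal diffusion)."""
    return t ** alpha

def alpha_of_t(t):
    """Hypothetical variable order: the exponent decays (e.g. as the medium
    compacts), moving from super-diffusive toward sub-diffusive behavior."""
    return 1.2 - 0.4 * min(t / 10.0, 1.0)

for t in (1.0, 5.0, 10.0):
    print(t, msd(t, 1.0), round(msd(t, alpha_of_t(t)), 3))
```

A constant-order model would commit to a single exponent for all times; the variable-order form lets early-time super-diffusion coexist with late-time sub-diffusion in one description.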

  15. Linear latent variable models: the lava-package

    DEFF Research Database (Denmark)

    Holst, Klaus Kähler; Budtz-Jørgensen, Esben

    2013-01-01

    An R package for specifying and estimating linear latent variable models is presented. The philosophy of the implementation is to separate the model specification from the actual data, which leads to a dynamic and easy way of modeling complex hierarchical structures. Several advanced features are...... interface covering a broad range of non-linear generalized structural equation models is described. The model and software are demonstrated in data of measurements of the serotonin transporter in the human brain....

  16. Instrumental Variable Bayesian Model Averaging via Conditional Bayes Factors

    OpenAIRE

    Karl, Anna; Lenkoski, Alex

    2012-01-01

    We develop a method to perform model averaging in two-stage linear regression systems subject to endogeneity. Our method extends an existing Gibbs sampler for instrumental variables to incorporate a component of model uncertainty. Direct evaluation of model probabilities is intractable in this setting. We show that by nesting model moves inside the Gibbs sampler, model comparison can be performed via conditional Bayes factors, leading to straightforward calculations. This new Gibbs sampler is...

  17. Modeling and Simulation for a Variable Spray-Rate System

    Science.gov (United States)

    Shi, Yan; Liang, Anbo; Yuan, Haibo; Zhang, Chunmei; Li, Junlong

    Variable-rate spraying is an important topic and development direction in current plant protection machinery: it can effectively save pesticide and lighten the burden on the agricultural ecological environment by adjusting to the characteristics of the spray targets and the speed of the vehicle. This paper establishes a mathematical model and transfer function for a variable spraying system based on the designed hardware of the variable spraying machine, and uses a PID control algorithm for simulation in MATLAB. The simulation results show that the model can conveniently control the spray volume and achieve satisfactory control performance.
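The PID loop described can be sketched as a discrete simulation against a first-order spray-rate plant. The gains, plant time constant, and setpoint below are invented for illustration and are not the paper's tuned values:

```python
def simulate(kp=2.0, ki=1.0, kd=0.05, setpoint=1.0, dt=0.01, steps=2000, tau=0.5):
    """Discrete PID loop driving a hypothetical first-order plant
    d(rate)/dt = (u - rate) / tau toward the setpoint."""
    rate, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - rate
        integral += err * dt                 # I term accumulates error
        deriv = (err - prev_err) / dt        # D term damps fast changes
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        rate += (u - rate) / tau * dt        # Euler step of the plant
    return rate

print(round(simulate(), 2))  # settles close to the setpoint of 1.0
```

The integral term is what removes the steady-state offset a pure proportional controller would leave on a first-order plant.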

  18. Modeling the variability of firing rate of retinal ganglion cells.

    Science.gov (United States)

    Levine, M W

    1992-12-01

    Impulse trains simulating the maintained discharges of retinal ganglion cells were generated by digital realizations of the integrate-and-fire model. If the mean rate were set by a "bias" level added to "noise," the variability of firing would be related to the mean firing rate as an inverse square root law; the maintained discharges of retinal ganglion cells deviate systematically from such a relationship. A more realistic relationship can be obtained if the integrate-and-fire mechanism is "leaky"; with this refinement, the integrate-and-fire model captures the essential features of the data. However, the model shows that the distribution of intervals is insensitive to that of the underlying variability. The leakage time constant, threshold, and distribution of the noise are confounded, rendering the model unspecifiable. Another aspect of variability is presented by the variance of responses to repeated discrete stimuli. The variance of response rate increases with the mean response amplitude; the nature of that relationship depends on the duration of the periods in which the response is sampled. These results have defied explanation. But if it is assumed that variability depends on mean rate in the way observed for maintained discharges, the variability of responses to abrupt changes in lighting can be predicted from the observed mean responses. The parameters that provide the best fits for the variability of responses also provide a reasonable fit to the variability of maintained discharges.
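A minimal digital realization of the leaky integrate-and-fire mechanism described here can reproduce the qualitative effect: drift-dominated firing is regular, noise-dominated firing is irregular. Parameters are invented for illustration, not the paper's fitted values:

```python
import math, random

def isi_stats(bias, noise=0.4, leak=0.1, threshold=1.0, dt=1.0,
              n_spikes=2000, seed=1):
    """Leaky integrate-and-fire: V integrates bias + noise, leaks toward 0,
    fires and resets at threshold. Returns (mean ISI, CV of the ISIs)."""
    rng = random.Random(seed)
    v, t, last, isis = 0.0, 0.0, 0.0, []
    while len(isis) < n_spikes:
        v += (bias - leak * v) * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= threshold:
            isis.append(t - last)
            last, v = t, 0.0
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return mean, math.sqrt(var) / mean   # CV = sd / mean

for bias in (0.5, 0.05):   # strong drift vs. noise-driven firing
    mean_isi, cv = isi_stats(bias)
    print(bias, round(mean_isi, 1), round(cv, 2))
```

Lowering the bias lengthens the mean interval and raises the CV, which is the kind of rate-dependent variability relationship the abstract discusses.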

  19. Using Enthalpy as a Prognostic Variable in Atmospheric Modelling with Variable Composition

    Science.gov (United States)

    2016-04-14

    tories, and the equation of state p = ∑_i p_i = ∑_i ρ_i R_i T = ρRT (4). Here R_i = k_B/m_i are individual gas constants for each species and k_B is the... relation between the mass, pressure, and temperature fields via the equation of state (4). The use of virtual temperature in Equation (11) implies that... internal energy equation as a convenient prognostic thermodynamic variable for atmospheric modelling with variable composition, including models of

  20. The role of additive neurogenesis and synaptic plasticity in a hippocampal memory model with grid-cell like input.

    Directory of Open Access Journals (Sweden)

    Peter A Appleby

    Full Text Available Recently, we presented a study of adult neurogenesis in a simplified hippocampal memory model. The network was required to encode and decode memory patterns despite changing input statistics. We showed that additive neurogenesis was a more effective adaptation strategy compared to neuronal turnover and conventional synaptic plasticity as it allowed the network to respond to changes in the input statistics while preserving representations of earlier environments. Here we extend our model to include realistic, spatially driven input firing patterns in the form of grid cells in the entorhinal cortex. We compare network performance across a sequence of spatial environments using three distinct adaptation strategies: conventional synaptic plasticity, where the network is of fixed size but the connectivity is plastic; neuronal turnover, where the network is of fixed size but units in the network may die and be replaced; and additive neurogenesis, where the network starts out with fewer initial units but grows over time. We confirm that additive neurogenesis is a superior adaptation strategy when using realistic, spatially structured input patterns. We then show that a more biologically plausible neurogenesis rule that incorporates cell death and enhanced plasticity of new granule cells has an overall performance significantly better than any one of the three individual strategies operating alone. This adaptation rule can be tailored to maximise performance of the network when operating as either a short- or long-term memory store. We also examine the time course of adult neurogenesis over the lifetime of an animal raised under different hypothetical rearing conditions. These growth profiles have several distinct features that form a theoretical prediction that could be tested experimentally. 
Finally, we show that place cells can emerge and refine in a realistic manner in our model as a direct result of the sparsification performed by the dentate gyrus.

  1. Disentangling Pleiotropy along the Genome using Sparse Latent Variable Models

    DEFF Research Database (Denmark)

    Janss, Luc

    Bayesian models are described that use latent variables to model covariances. These models are flexible, scale up linearly in the number of traits, and allow separating covariance structures in different components at the trait level and at the genomic level. Multi-trait versions of the BayesA (MT......-BA) and Bayesian LASSO (MT-BL) are described that model heterogeneous variance and covariance over the genome, and a model that directly models multiple genomic breeding values (MT-MG), representing different genomic covariance structures. The models are demonstrated on a mouse data set to model the genomic...

  2. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
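The SADM overestimation reported here is a Jensen's-inequality effect: for a concave light response, the response evaluated at the average input exceeds the average of the responses. A toy demonstration follows; the curve shape and all numbers are illustrative, not the paper's coupled model:

```python
import math

def gpp(par, p_max=30.0, k=400.0):
    """Saturating (concave) light-response curve as a stand-in for the
    nonlinear photosynthesis response; parameters are illustrative."""
    return p_max * par / (k + par)

# Hourly PAR over one day: daylight between 06:00 and 18:00, zero at night.
par = [2000.0 * max(0.0, math.sin(2.0 * math.pi * (h - 6.0) / 24.0))
       for h in range(24)]

integrated = sum(gpp(p) for p in par) / len(par)   # IDM/SDM-like: average outputs
simple_avg = gpp(sum(par) / len(par))              # SADM-like: average inputs first

print(simple_avg > integrated)  # True: averaging inputs first overestimates
```

Averaging the diurnal cycle away before applying the nonlinear response is exactly what the integrated and segmented daily models are designed to avoid.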

  3. Modelling of variability of the chemically peculiar star phi Draconis

    CERN Document Server

    Prvák, Milan; Krtička, Jiří; Mikulášek, Zdeněk; Lüftinger, T

    2015-01-01

    Context: The presence of heavier chemical elements in stellar atmospheres influences the spectral energy distribution (SED) of stars. An uneven surface distribution of these elements, together with flux redistribution and stellar rotation, are commonly believed to be the primary causes of the variability of chemically peculiar (CP) stars. Aims: We aim to model the photometric variability of the CP star PHI Dra based on the assumption of inhomogeneous surface distribution of heavier elements and compare it to the observed variability of the star. We also intend to identify the processes that contribute most significantly to its photometric variability. Methods: We use a grid of TLUSTY model atmospheres and the SYNSPEC code to model the radiative flux emerging from the individual surface elements of PHI Dra with different chemical compositions. We integrate the emerging flux over the visible surface of the star at different phases throughout the entire rotational period to synthesise theoretical light curves of...

  4. A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs.

    Science.gov (United States)

    Rosenbaum, Robert

    2016-01-01

    Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public.
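    For context, the Monte Carlo baseline that such Fokker-Planck methods replace can be sketched as an Euler-Maruyama simulation of a leaky integrate-and-fire neuron with a spike-triggered adaptation current and white-noise input. All parameter values below are illustrative, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

dt, T = 1e-4, 20.0                 # time step and duration (s); illustrative
tau_m, tau_a = 0.02, 0.2           # membrane and adaptation time constants (s)
mu, sigma = 1.2, 0.5               # mean drive and noise intensity (threshold units)
v_th, v_reset, b = 1.0, 0.0, 0.5   # threshold, reset, adaptation increment

v, a, spikes = 0.0, 0.0, 0
for _ in range(int(T / dt)):
    # Euler-Maruyama step for the membrane potential with adaptation current a
    v += dt * (mu - v - a) / tau_m + sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
    a -= dt * a / tau_a
    if v >= v_th:                  # threshold crossing: spike, reset, adapt
        v = v_reset
        a += b
        spikes += 1

print(spikes / T)                  # Monte Carlo estimate of the firing rate (Hz)
```

    Estimating full spiking statistics this way requires many long runs, which is exactly the cost the diffusion approximation avoids.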

  5. Nonlinear neural network for hemodynamic model state and input estimation using fMRI data

    KAUST Repository

    Karam, Ayman M.

    2014-11-01

    Originally inspired by biological neural networks, artificial neural networks (ANNs) are powerful mathematical tools that can solve complex nonlinear problems such as filtering, classification, prediction and more. This paper demonstrates the first successful implementation of ANN, specifically nonlinear autoregressive with exogenous input (NARX) networks, to estimate the hemodynamic states and neural activity from simulated and measured real blood oxygenation level dependent (BOLD) signals. Blocked and event-related BOLD data are used to test the algorithm on real experiments. The proposed method is accurate and robust even in the presence of signal noise and it does not depend on sampling interval. Moreover, the structure of the NARX networks is optimized to yield the best estimate with minimal network architecture. The results of the estimated neural activity are also discussed in terms of their potential use.
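    The NARX idea, regressing the current output on lagged outputs and lagged exogenous inputs, can be illustrated with a linear stand-in for the network (the article's networks are nonlinear and trained on BOLD data; everything below is a toy):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear system with NARX structure: y[t] depends on lagged y and lagged u
n = 400
u = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5*y[t-1] - 0.2*y[t-2] + 0.8*u[t-1] + 0.3*u[t-2]

# NARX regressor: lagged outputs, lagged exogenous inputs, and a bias term
X = np.array([[y[t-1], y[t-2], u[t-1], u[t-2], 1.0] for t in range(2, n)])
target = y[2:]

# A linear "network" (least squares) stands in for the nonlinear NARX net
w, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(w, 3))     # ≈ [0.5, -0.2, 0.8, 0.3, 0]
```

    A real NARX network replaces the linear map with a small MLP over the same lagged regressor vector.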

  6. Realistic modelling of the seismic input Site effects and parametric studies

    CERN Document Server

    Romanelli, F; Vaccari, F

    2002-01-01

    We illustrate the work done in the framework of a large international cooperation, showing the very recent numerical experiments carried out within the framework of the EC project 'Advanced methods for assessing the seismic vulnerability of existing motorway bridges' (VAB) to assess the importance of non-synchronous seismic excitation of long structures. The definition of the seismic input at the Warth bridge site, i.e. the determination of the seismic ground motion due to an earthquake with a given magnitude and epicentral distance from the site, has been done following a theoretical approach. In order to perform an accurate and realistic estimate of site effects and of differential motion it is necessary to make a parametric study that takes into account the complex combination of the source and propagation parameters, in realistic geological structures. The computation of a wide set of time histories and spectral information, corresponding to possible seismotectonic scenarios for different sources and stru...

  7. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
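    Penalized least squares with an L1 penalty performs exactly this kind of simultaneous estimation and selection; a minimal coordinate-descent sketch (a generic lasso, not the paper's semiparametric estimator or basis-function setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: only 2 of 6 predictors are truly active
n, p = 200, 6
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual for column j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]  # soft-threshold
    return b

b = lasso_cd(X, y, lam=0.1)
selected = np.nonzero(np.abs(b) > 1e-6)[0]
print(selected)    # indices of the selected variables (the truly active 0 and 2)
```

    Estimation and selection happen in one pass: coefficients that the soft-threshold drives to zero are the deselected variables.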

  8. Embodied water analysis for Hebei Province, China by input-output modelling

    Science.gov (United States)

    Liu, Siyuan; Han, Mengyao; Wu, Xudong; Wu, Xiaofang; Li, Zhi; Xia, Xiaohua; Ji, Xi

    2016-12-01

    With the accelerating coordinated development of the Beijing-Tianjin-Hebei region, regional economic integration is recognized as a national strategy. As water scarcity places Hebei Province in a dilemma, it is of critical importance for Hebei Province to balance water resources as well as make full use of its unique advantages in the transition to sustainable development. To our knowledge, embodied water accounting analyses have been conducted for Beijing and Tianjin, but no similar work has focused on Hebei. In this paper, using the most complete and recent statistics available for Hebei Province, the embodied water use in Hebei Province is analyzed in detail. Based on input-output analysis, a complete systems accounting framework for water resources is presented. In addition, a database of embodied water intensities is proposed which is applicable to both intermediate inputs and final demand. The results suggest that the total amount of embodied water in final demand is 10.62 billion m3, of which the water embodied in urban household consumption accounts for more than half. Hebei Province is a net importer of embodied water: the water embodied in its commodity trade is 17.20 billion m3. The outcome of this work implies that it is particularly urgent to adjust industrial structure and trade policies for water conservation, to upgrade technology and to improve water utilization. To relieve water shortages in Hebei Province, it is of crucial importance to regulate the balance of water use within the province, thus balancing water distribution among the various industrial sectors.
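    The accounting behind such embodied water estimates is the Leontief inverse: direct water-use coefficients are propagated through inter-sector purchases to give total (direct plus indirect) intensities. A three-sector sketch with made-up numbers (not Hebei's actual input-output tables):

```python
import numpy as np

# Hypothetical 3-sector economy; A[i, j] is input from sector i per unit output of j
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.30, 0.15]])
direct_water = np.array([5.0, 1.0, 0.5])      # m3 withdrawn per unit output (illustrative)
final_demand = np.array([100.0, 80.0, 120.0])

# Total output needed to supply final demand: x = (I - A)^-1 f
L = np.linalg.inv(np.eye(3) - A)
x = L @ final_demand

# Embodied (direct + indirect) water intensity of each sector's output,
# and the water embodied in final demand
intensity = direct_water @ L
embodied = intensity * final_demand
print(np.round(intensity, 2), round(float(embodied.sum()), 1))
```

    By construction, the water embodied in final demand equals the direct water used to produce the total output x, which is the balance the paper's accounting framework enforces.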

  9. Application Delay Modelling for Variable Length Packets in Single Cell IEEE 802.11 WLANs

    CERN Document Server

    Sunny, Albert; Aggarwal, Saurabh

    2010-01-01

    In this paper, we consider the problem of modelling the average delay experienced by application packets of variable length in a single cell IEEE 802.11 DCF wireless local area network. The packet arrival process at each node i is assumed to be a stationary and independent increment random process with mean a_i and second moment a_i^(2). The packet lengths at node i are assumed to be i.i.d. random variables P_i with finite mean and second moment. We assume the input arrival processes across queues to be uncorrelated Poisson processes. As the nodes share a single channel, they have to contend with one another for a successful transmission. The mean delay for a packet is approximated by modelling the system as a 1-limited random polling system with zero switchover times, and a closed-form expression is derived for it. Extensive simulations are conducted to verify the analytical results.
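    As a flavor of the queueing machinery involved: the M/G/1-type building block that such polling approximations reduce to has the Pollaczek-Khinchine mean delay, with contention layered on top in the full analysis. Illustrative numbers only:

```python
# Pollaczek-Khinchine mean delay for an M/G/1 queue, the kind of building block
# a 1-limited polling approximation reduces to. All rates are illustrative.
lam = 50.0                                # packet arrivals per second
mean_s, second_moment_s = 0.01, 0.00015   # E[S] and E[S^2] of the transmission time (s)

rho = lam * mean_s                        # utilization (must be < 1 for stability)
wait = lam * second_moment_s / (2.0 * (1.0 - rho))
delay = wait + mean_s                     # mean sojourn time: waiting + service
print(round(delay * 1000, 3))             # → 17.5 (ms)
```

    The second moment of the service time is what makes variable packet lengths matter: longer-tailed length distributions raise the delay even at fixed mean.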

  10. Modeling commuting patterns in a multi-regional input-output framework: impacts of an `urban re-centralization' scenario

    Science.gov (United States)

    Ferreira, J.-P.; Ramos, P.; Cruz, L.; Barata, E.

    2017-10-01

    The paper suggests a modeling approach for assessing economic and social impacts of changes in urban forms and commuting patterns that extends a multi-regional input-output framework by incorporating a set of commuting-related consequences. The Lisbon Metropolitan Area case with an urban re-centralization scenario is used as an example to illustrate the relevance of this modeling approach for analyzing commuting-related changes in regional income distribution on the one side and in household consumption structures on the other.

  11. Modeling Soil Carbon Dynamics in Northern Forests: Effects of Spatial and Temporal Aggregation of Climatic Input Data.

    Science.gov (United States)

    Dalsgaard, Lise; Astrup, Rasmus; Antón-Fernández, Clara; Borgen, Signe Kynding; Breidenbach, Johannes; Lange, Holger; Lehtonen, Aleksi; Liski, Jari

    2016-01-01

    Boreal forests contain 30% of the global forest carbon with the majority residing in soils. While challenging to quantify, soil carbon changes comprise a significant, and potentially increasing, part of the terrestrial carbon cycle. Thus, their estimation is important when designing forest-based climate change mitigation strategies and soil carbon change estimates are required for the reporting of greenhouse gas emissions. Organic matter decomposition varies with climate in complex nonlinear ways, rendering data aggregation nontrivial. Here, we explored the effects of temporal and spatial aggregation of climatic and litter input data on regional estimates of soil organic carbon stocks and changes for upland forests. We used the soil carbon and decomposition model Yasso07 with input from the Norwegian National Forest Inventory (11275 plots, 1960-2012). Estimates were produced at three spatial and three temporal scales. Results showed that a national level average soil carbon stock estimate varied by 10% depending on the applied spatial and temporal scale of aggregation. Higher stocks were found when applying plot-level input compared to country-level input and when long-term climate was used as compared to annual or 5-year mean values. A national level estimate for soil carbon change was similar across spatial scales, but was considerably (60-70%) lower when applying annual or 5-year mean climate compared to long-term mean climate reflecting the recent climatic changes in Norway. This was particularly evident for the forest-dominated districts in the southeastern and central parts of Norway and in the far north. We concluded that the sensitivity of model estimates to spatial aggregation will depend on the region of interest. Further, that using long-term climate averages during periods with strong climatic trends results in large differences in soil carbon estimates. The largest differences in this study were observed in central and northern regions with strongly

  12. Performance assessment of nitrate leaching models for highly vulnerable soils used in low-input farming based on lysimeter data.

    Science.gov (United States)

    Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco

    2014-11-15

    The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices and provide tools for planners for decision making support. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) for lysimeter data of the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the corresponding type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well on all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee a good predictive power of the model. Nevertheless all...
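    The statistical metrics used for this kind of model intercomparison take only a few lines to compute; a sketch of the usual definitions (formulations of the index of agreement vary slightly across papers):

```python
import numpy as np

def metrics(obs, sim):
    """Common goodness-of-fit metrics for comparing simulated vs observed series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    mae = np.abs(err).mean()                                   # mean absolute error
    rmse = np.sqrt((err**2).mean())                            # root mean squared error
    nse = 1.0 - (err**2).sum() / ((obs - obs.mean())**2).sum() # Nash-Sutcliffe efficiency
    d = 1.0 - (err**2).sum() / (
        ((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean()))**2).sum()
    )                                                          # Willmott index of agreement
    r = np.corrcoef(obs, sim)[0, 1]                            # Pearson correlation
    return mae, rmse, nse, d, r

obs = [10.0, 12.0, 15.0, 9.0, 14.0]     # made-up observed and simulated values
sim = [11.0, 11.5, 14.0, 10.0, 13.5]
print([round(v, 3) for v in metrics(obs, sim)])   # → [0.8, 0.837, 0.865, 0.951, 0.983]
```

    Reporting several metrics side by side matters precisely because, as the study found, no model dominates on all of them.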

  13. Incorporation of Failure Into an Orthotropic Three-Dimensional Model with Tabulated Input Suitable for Use in Composite Impact Problems

    Science.gov (United States)

    Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther

    2017-01-01

    The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in various coordinate directions. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. 
In the current...

  14. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

    We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
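    A minimal version of Tabu search for subset selection: move by flipping one variable in or out, forbid recently flipped variables for a few moves, and keep the best subset seen. This sketch scores subsets by BIC rather than the bankruptcy-specific criteria of the article, and omits aspiration rules:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(3)

# Toy data: of 8 candidate variables, only 1 and 4 carry signal
n, p = 120, 8
X = rng.standard_normal((n, p))
y = 1.5 * X[:, 1] - 2.0 * X[:, 4] + 0.2 * rng.standard_normal(n)

def bic(mask):
    """BIC of the least-squares fit on the selected columns (plus intercept)."""
    Xs = np.column_stack([X[:, mask], np.ones(n)])
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    k = mask.sum() + 1
    return n * np.log((resid**2).sum() / n) + k * np.log(n)

mask = np.zeros(p, dtype=bool)
best_mask, best_val = mask.copy(), bic(mask)
tabu = deque(maxlen=3)                       # recently flipped variables are forbidden
for _ in range(30):
    moves = [j for j in range(p) if j not in tabu]
    scores = []
    for j in moves:
        m = mask.copy()
        m[j] = not m[j]                      # neighbor: flip one variable in/out
        scores.append(bic(m))
    j = moves[int(np.argmin(scores))]        # best non-tabu neighbor, even if worse
    mask[j] = not mask[j]
    tabu.append(j)
    if bic(mask) < best_val:
        best_val, best_mask = bic(mask), mask.copy()

print(np.nonzero(best_mask)[0])              # should include the active variables 1 and 4
```

    Unlike stepwise regression, the tabu list lets the search accept temporarily worse subsets and so escape local optima.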

  15. AN ACCURATE MODELING OF DELAY AND SLEW METRICS FOR ON-CHIP VLSI RC INTERCONNECTS FOR RAMP INPUTS USING BURR’S DISTRIBUTION FUNCTION

    Directory of Open Access Journals (Sweden)

    Rajib Kar

    2010-09-01

    This work presents an accurate and efficient model to compute the delay and slew metrics of on-chip interconnects of high speed CMOS circuits for ramp inputs. Our metric is based on Burr's distribution function, which is used to characterize the normalized homogeneous portion of the step response. We use the PERI (Probability distribution function Extension for Ramp Inputs) technique, which extends delay and slew metrics for step inputs to the more general and realistic non-step inputs. The accuracy of our models is justified by comparing the results with those of SPICE simulations.
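    Because the Burr XII CDF F(t) = 1 - (1 + (t/c)^k)^(-m) inverts in closed form, delay and slew metrics follow directly once its three parameters are fitted to the normalized step response. A sketch with made-up parameters (not fitted to any real interconnect):

```python
# Burr XII surrogate for the normalized step response; c, k, m are illustrative,
# not values fitted to an actual RC line.
c, k, m = 1.0, 2.0, 1.5

def t_at(p):
    """Closed-form time at which the response reaches fraction p (inverse CDF)."""
    return c * ((1.0 - p) ** (-1.0 / m) - 1.0) ** (1.0 / k)

delay_50 = t_at(0.5)                  # 50% delay metric
slew_10_90 = t_at(0.9) - t_at(0.1)    # 10-90% slew metric
print(round(delay_50, 4), round(slew_10_90, 4))
```

    The closed-form inverse is what makes the metric cheap compared with transient SPICE runs.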

  16. Optimization of a prognostic biosphere model for terrestrial biomass and atmospheric CO2 variability

    Directory of Open Access Journals (Sweden)

    M. Saito

    2014-08-01

    This study investigates the capacity of a prognostic biosphere model to simulate global variability in atmospheric CO2 concentrations and vegetation carbon dynamics under current environmental conditions. Global data sets of atmospheric CO2 concentrations, above-ground biomass (AGB), and net primary productivity (NPP) in terrestrial vegetation were assimilated into the biosphere model using an inverse modeling method combined with an atmospheric transport model. In this process, the optimal physiological parameters of the biosphere model were estimated by minimizing the misfit between observed and modeled values, and parameters were generated to characterize various biome types. Results obtained using the model with the optimized parameters correspond to the observed seasonal variations in CO2 concentration and their annual amplitudes in both the Northern and Southern Hemispheres. In simulating the mean annual AGB and NPP, the model shows improvements in estimating the mean magnitudes and probability distributions for each biome, as compared with results obtained using prior simulation parameters. However, the model is less efficient in its simulation of AGB for forest type biomes. This misfit suggests that more accurate values of input parameters, specifically, grid mean AGB values and seasonal variabilities in physiological parameters, are required to improve the performance of the simulation model.

  17. Effect of Manure vs. Fertilizer Inputs on Productivity of Forage Crop Models

    Directory of Open Access Journals (Sweden)

    Pasquale Martiniello

    2011-06-01

    Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform the target from a waste to a resource product. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double cropping per year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha−1, respectively. The manure applied to soil by broadcast and injection procedures provides an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics was related to the models. The weather conditions and the manure and CF treatments showed small interactions. The number of MFU ha−1 of biomass crop gross product produced in the autumn and spring sowing models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha−1 under CF ranges from 10.7% to 13.2% relative to the manure models. The effect of manure on organic carbon and total nitrogen of topsoil, compared to model I, followed the same pattern as CF, with amounts higher in models II and III than in model IV. In terms of percentage, relative to model I under manure treatment, organic carbon and total nitrogen were reduced by about 18.5 and 21.9% in models II and III, and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute CF without reducing gross production and sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.

  18. Solving Inverse Problems for Mechanistic Systems Biology Models with Unknown Inputs

    Science.gov (United States)

    2014-10-16

    The effect of frusemide in terms of diuresis and natriuresis can be modeled by an indirect response model [18]. In this project, a modified version of this model was used, relating the effect site excretion rate of frusemide to diuresis. Model parameters were derived from measurements of the time courses of frusemide infusion rate, frusemide urinary excretion rate, diuresis and natriuresis. The "true" parameter values used in the...

  19. Effect of manure vs. fertilizer inputs on productivity of forage crop models.

    Science.gov (United States)

    Annicchiarico, Giovanni; Caternolo, Giovanni; Rossi, Emanuela; Martiniello, Pasquale

    2011-06-01

    Manure produced by livestock activity is a dangerous product capable of causing serious environmental pollution. Agronomic management practices on the use of manure may transform the target from a waste to a resource product. Experiments comparing manure with standard chemical fertilizers (CF) were studied under a double cropping per year regime (alfalfa, model I; Italian ryegrass-corn, model II; barley-seed sorghum, model III; and horse-bean-silage sorghum, model IV). The total amount of manure applied in the annual forage crops of models II, III and IV was 158, 140 and 80 m3 ha(-1), respectively. The manure applied to soil by broadcast and injection procedures provides an amount of nitrogen equal to that supplied by CF. The effect of manure applications on animal feed production and biochemical soil characteristics was related to the models. The weather conditions and the manure and CF treatments showed small interactions. The number of MFU ha(-1) of biomass crop gross product produced in the autumn and spring sowing models under manure applications was 11,769, 20,525, 11,342 and 21,397 in models I through IV, respectively. The reduction of MFU ha(-1) under CF ranges from 10.7% to 13.2% relative to the manure models. The effect of manure on organic carbon and total nitrogen of topsoil, compared to model I, followed the same pattern as CF, with amounts higher in models II and III than in model IV. In terms of percentage, relative to model I under manure treatment, organic carbon and total nitrogen were reduced by about 18.5 and 21.9% in models II and III, and by 8.8 and 6.3% in model IV, respectively. Manure management may substitute CF without reducing gross production and sustainability of cropping systems, thus allowing the opportunity to recycle the waste product for animal forage feeding.

  20. Variability in a Community-Structured SIS Epidemiological Model.

    Science.gov (United States)

    Hiebeler, David E; Rier, Rachel M; Audibert, Josh; LeClair, Phillip J; Webber, Anna

    2015-04-01

    We study an SIS epidemiological model of a population partitioned into groups referred to as communities, households, or patches. The system is studied using stochastic spatial simulations, as well as a system of ordinary differential equations describing moments of the distribution of infectious individuals. The ODE model explicitly includes the population size, as well as the variability in infection levels among communities and the variability among stochastic realizations of the process. Results are compared with an earlier moment-based model which assumed infinite population size and no variance among realizations of the process. We find that although the amount of localized (as opposed to global) contact in the model has little effect on the equilibrium infection level, it does affect both the timing and magnitude of both types of variability in infection level.
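    A stochastic simulation of the community-structured SIS process, the kind of realization the moment equations summarize, is short to write. A mixing parameter splits contacts into local and global shares; all values below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative community-structured SIS; parameters are not from the paper.
C, N = 50, 20            # communities, individuals per community
beta, gamma = 0.15, 0.05 # per-step infection and recovery rates
local = 0.9              # fraction of contacts made within one's own community
steps = 2000

I = np.zeros(C, dtype=int)
I[0] = 5                 # seed infection in one community
for _ in range(steps):
    global_prev = I.sum() / (C * N)
    # Force of infection per community: mix of local and global prevalence
    pressure = local * I / N + (1 - local) * global_prev
    new_inf = rng.binomial(N - I, 1 - np.exp(-beta * pressure))
    recov = rng.binomial(I, 1 - np.exp(-gamma))
    I = I + new_inf - recov

print(I.mean() / N, I.var())   # equilibrium infection level; variance among communities
```

    The among-community variance printed here is exactly the quantity the paper's ODE moment system tracks without simulating individual realizations.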

  1. 'Fingerprints' of four crop models as affected by soil input data aggregation

    DEFF Research Database (Denmark)

    Angulo, Carlos; Gaiser, Thomas; Rötter, Reimund P;

    2014-01-01

    In this study we used four crop models (SIMPLACE, DSSAT-CSM, EPIC and DAISY), differing in the detail of modeling above-ground biomass and yield as well as of modeling soil water dynamics, water uptake and drought effects on plants, to simulate winter wheat in two (agro-climatologically and geo...

  2. Input-Output Modeling and Control of the Departure Process of Congested Airports

    Science.gov (United States)

    Pujet, Nicolas; Delcaire, Bertrand; Feron, Eric

    2003-01-01

    A simple queueing model of busy airport departure operations is proposed. This model is calibrated and validated using available runway configuration and traffic data. The model is then used to evaluate preliminary control schemes aimed at alleviating departure traffic congestion on the airport surface. The potential impact of these control strategies on direct operating costs, environmental costs and overall delay is quantified and discussed.

  3. Reservoir observers: Model-free inference of unmeasured variables in chaotic systems.

    Science.gov (United States)

    Lu, Zhixin; Pathak, Jaideep; Hunt, Brian; Girvan, Michelle; Brockett, Roger; Ott, Edward

    2017-04-01

    Deducing the state of a dynamical system as a function of time from a limited number of concurrent system state measurements is an important problem of great practical utility. A scheme that accomplishes this is called an "observer." We consider the case in which a model of the system is unavailable or insufficiently accurate, but "training" time series data of the desired state variables are available for a short period of time, and a limited number of other system variables are continually measured. We propose a solution to this problem using networks of neuron-like units known as "reservoir computers." The measurements that are continually available are input to the network, which is trained with the limited-time data to output estimates of the desired state variables. We demonstrate our method, which we call a "reservoir observer," using the Rössler system, the Lorenz system, and the spatiotemporally chaotic Kuramoto-Sivashinsky equation. Subject to the condition of observability (i.e., whether it is in principle possible, by any means, to infer the desired unmeasured variables from the measured variables), we show that the reservoir observer can be a very effective and versatile tool for robustly reconstructing unmeasured dynamical system variables.

  4. Modeling pCO2 variability in the Gulf of Mexico

    Directory of Open Access Journals (Sweden)

    Z. Xue

    2014-08-01

    A three-dimensional coupled physical–biogeochemical model was used to simulate and examine temporal and spatial variability of surface pCO2 in the Gulf of Mexico (GoM). The model is driven by realistic atmospheric forcing, open boundary conditions from a data-assimilative global ocean circulation model, and observed freshwater and terrestrial nutrient and carbon input from major rivers. A seven-year model hindcast (2004–2010) was performed and was validated against in situ measurements. The model revealed clear seasonality in surface pCO2. Based on the multi-year mean of the model results, the GoM is an overall CO2 sink with a flux of 1.34 × 10¹² mol C yr−1, which, together with the enormous fluvial carbon input, is balanced by the carbon export through the Loop Current. A sensitivity experiment was performed in which all biological sources and sinks of carbon were disabled. In this simulation surface pCO2 was elevated by ~ 70 ppm, providing evidence that biological uptake is a primary driver of the observed CO2 sink. The model also provided insights about factors influencing the spatial distribution of surface pCO2 and sources of uncertainty in the carbon budget.

  5. A 3-mode, Variable Velocity Jet Model for HH 34

    Science.gov (United States)

    Raga, A.; Noriega-Crespo, A.

    1998-01-01

    Variable ejection velocity jet models can qualitatively explain the appearance of successive working surfaces in Herbig-Haro (HH) jets. This paper presents an attempt to explore which features of the HH 34 jet can indeed be reproduced by such a model.

  6. Manifest Variable Granger Causality Models for Developmental Research: A Taxonomy

    Science.gov (United States)

    von Eye, Alexander; Wiedermann, Wolfgang

    2015-01-01

    Granger models are popular when it comes to testing hypotheses that relate series of measures causally to each other. In this article, we propose a taxonomy of Granger causality models. The taxonomy results from crossing the four variables Order of Lag, Type of (Contemporaneous) Effect, Direction of Effect, and Segment of Dependent Series…

  7. An Alternative Approach for Nonlinear Latent Variable Models

    Science.gov (United States)

    Mooijaart, Ab; Bentler, Peter M.

    2010-01-01

    In the last decades there has been an increasing interest in nonlinear latent variable models. Since the seminal paper of Kenny and Judd, several methods have been proposed for dealing with these kinds of models. This article introduces an alternative approach. The methodology involves fitting some third-order moments in addition to the means and…

  8. Modeling, analysis and control of a variable geometry actuator

    NARCIS (Netherlands)

    Evers, W.J.; Knaap, A. van der; Besselink, I.J.M.; Nijmeijer, H.

    2008-01-01

    A new design of a variable geometry force actuator is presented in this paper. Based upon this design, a model is derived which is used for steady-state analysis, as well as for controller design in the presence of friction. The controlled actuator model is finally used to evaluate the power consumption u...

  9. On the evolution of accretion disc flow in cataclysmic variables. III - Outburst properties of constant and uniform-alpha model discs

    Science.gov (United States)

    Lin, D. N. C.; Faulkner, J.; Papaloizou, J.

    1985-01-01

    Attention is given to the stability and evolution of some simple accretion disk models in which the viscosity is prescribed by an ad hoc, uniform-alpha model. Emphasis is placed on systems in which the mass input rate from the secondary to the disk around the primary is assumed to be constant, although initial calculations with variable mass input rates are also performed. Time-dependent visual magnitude light curves constructed for cataclysmic binaries with a range of disk size, primary mass and mass input rate, and viscosity magnitude, are compared with the observed properties of various cataclysmic variable subclasses. The results obtained indicate that the observational differences between novae and dwarf novae may be due to mass input rate differences. The present models can reproduce the gross observational features of U-Gem-type dwarf nova outbursts.

  10. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    Science.gov (United States)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

This study presents Artificial Intelligence (AI)-based modeling of total bed material load, aimed at improving the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (northwestern Iran) were used for development and validation of the applied techniques. To assess the applied techniques against traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. It was also found that the k-fold test is a practical, though computationally costly, technique for completely scanning the applied data and avoiding over-fitting.
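The k-fold validation mentioned above can be sketched with a plain least-squares model on synthetic data (the predictor, noise level, and fold count below are assumptions for illustration, not the study's sediment data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: one dominant predictor, linear response plus noise.
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=100)

def kfold_rmse(X, y, k=5):
    """RMSE of a least-squares fit, averaged over k train/test splits."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(test)), X[test]]) @ beta
        errs.append(np.sqrt(np.mean((pred - y[test]) ** 2)))
    return float(np.mean(errs))

score = kfold_rmse(X, y)
print(round(score, 2))  # near the noise level when the model generalises
```

The "high cost" noted in the abstract comes from refitting the model k times; every observation is scored exactly once as held-out data.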

  11. Modelling avalanche danger and understanding snow depth variability

    OpenAIRE

    2010-01-01

    This thesis addresses the causes of avalanche danger at a regional scale. Modelled snow stratigraphy variables were linked to [1] forecasted avalanche danger and [2] observed snowpack stability. Spatial variability of snowpack parameters in a region is an additional important factor that influences the avalanche danger. Snow depth and its change during individual snow fall periods are snowpack parameters which can be measured at a high spatial resolution. Hence, the spatial distribution of sn...

  12. Quantification of Inter-Tsunami Model Variability for Hazard Assessment Studies

    Science.gov (United States)

    Catalan, P. A.; Alcantar, A.; Cortés, P. I.

    2014-12-01

There is a wide range of numerical models capable of modeling tsunamis, most of which have been validated and verified against standard benchmark cases and particular field or laboratory case studies. Consequently, these models are regularly used by scientists and consulting companies as essential tools for estimating the tsunami hazard to coastal communities, with model results treated in a deterministic way. Most of these models derive from the same set of equations, typically the nonlinear shallow water equations, to which ad hoc terms are added to include physical effects such as friction, the Coriolis force, and others. However, these models are rarely used in unison to address the variability in their results. In this contribution, we therefore perform a large number of simulations using a set of numerical models and quantify the variability in the results. To reduce the influence of input data on the results, a single tsunami scenario is used over a common bathymetry. We then perform model comparisons to assess sensitivity to changes in grid resolution and Manning roughness coefficients. Results are presented both as intra-model comparisons (sensitivity to changes using the same model) and inter-model comparisons (sensitivity to changing models). For the case tested, most models reproduced the arrival and periodicity of the tsunami waves fairly consistently. However, variations in amplitude, characterized by the standard deviation between model runs, could be as large as the mean signal. This level of variability is considered too large for deterministic assessment, reinforcing the idea that uncertainty needs to be included in such studies.
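The inter-model statistic described above (standard deviation between model runs, compared with the mean signal) can be illustrated with toy series: hypothetical models that agree on arrival time and period but differ in amplitude. The numbers are illustrative, not real model output:

```python
import numpy as np

# Three hypothetical model runs on the same scenario.
t = np.linspace(0.0, 6.0, 200)           # hours after generation
base = np.sin(2 * np.pi * t / 1.5)       # common arrival and periodicity
runs = np.stack([
    1.0 * base,                          # model A
    1.6 * base,                          # model B: larger amplitude
    0.5 * base,                          # model C: smaller amplitude
])

mean = runs.mean(axis=0)
spread = runs.std(axis=0)                # inter-model standard deviation

# Peak spread relative to the peak mean signal; values near 1 would mean
# the variability is as large as the signal itself.
ratio = float(spread.max() / np.abs(mean).max())
print(round(ratio, 2))  # → 0.44
```

In the study's worst cases this ratio approached 1, which is what motivates treating the hazard estimate probabilistically rather than deterministically.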

  13. Uncertainty of modelled urban peak O3 concentrations and its sensitivity to input data perturbations based on the Monte Carlo analysis

    Science.gov (United States)

    Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.

    2016-09-01

A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer carry the larger uncertainties (up to 47 ppb). In addition, multiple linear regression analysis is applied at selected receptors to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that improving its estimation would enhance the ability of the model to simulate peak O3 concentrations in the MABA.
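The Monte Carlo procedure can be sketched on a toy response function: perturb each input around its nominal value with an assumed error distribution, propagate, and see which input dominates the output variance. The function and all numbers below are illustrative stand-ins, not the DAUMOD-GRS equations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy response: modelled peak O3 as a function of the regional
# background level and a local photochemical term (illustrative only).
def peak_o3(background, precursor):
    return background + 0.6 * precursor

n = 10_000
# Perturb each input with assumed error magnitudes.
background = rng.normal(20.0, 5.0, n)   # ppb; assumed dominant uncertainty
precursor = rng.normal(30.0, 2.0, n)    # ppb

samples = peak_o3(background, precursor)
uncertainty = float(samples.std())      # spread of modelled peak O3

# With independent inputs, the background's share of the output variance:
share = (5.0 ** 2) / (5.0 ** 2 + (0.6 * 2.0) ** 2)
print(round(uncertainty, 1), round(share, 2))
```

The regression step in the paper plays the role of the `share` calculation here: apportioning the Monte Carlo output variance among the perturbed inputs at each receptor.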

  14. Internal variability of a 3-D ocean model

    Directory of Open Access Journals (Sweden)

    Bjarne Büchmann

    2016-11-01

The Defence Centre for Operational Oceanography runs operational forecasts for the Danish waters. The core setup is a 60-layer baroclinic circulation model based on the General Estuarine Transport Model code. At intervals, the model setup is tuned to improve ‘model skill’ and overall performance. It has been an area of concern that the uncertainty inherent to the stochastic/chaotic nature of the model is unknown. Thus, it is difficult to state with certainty that a particular setup is improved, even if the computed model skill increases. This issue also extends to cases where the model is tuned iteratively, with model results fed back to improve model parameters such as bathymetry.

An ensemble of identical model setups with slightly perturbed initial conditions is examined. It is found that the initial perturbation causes the models to deviate from each other exponentially fast, causing differences of several PSU and several kelvin within a few days of simulation. The ensemble is run for a full year, and the long-term variability of salinity and temperature is found for different regions within the modelled area. Further, the time scale over which this variability develops is estimated for each region, and great regional differences are found in both variability and time scale. It is observed that periods with very high ensemble variability are typically short-term and spatially limited events.

A particular event is examined in detail to shed light on how the ensemble ‘behaves’ in periods of large internal model variability. It is found that the ensemble does not seem to follow any particular stochastic distribution: both the ensemble variability (standard deviation or range) and the ensemble distribution within that range seem to vary with time and place. Further, it is observed that a large spatial variability due to mesoscale features does not necessarily correlate with large ensemble variability. These findings bear
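The exponential divergence of perturbed ensemble members is generic to chaotic systems and can be demonstrated with a one-line stand-in, the logistic map at r = 4 (this is a toy illustration, not the GETM-based ocean model):

```python
# Two identical chaotic "models" started from slightly perturbed initial
# conditions: the gap grows roughly exponentially until it saturates.
def run(x0, steps):
    x = x0
    traj = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)   # logistic map, fully chaotic regime
        traj.append(x)
    return traj

a = run(0.3, 40)
b = run(0.3 + 1e-10, 40)          # tiny initial perturbation

gap_early = abs(a[5] - b[5])
gap_late = max(abs(a[i] - b[i]) for i in range(25, 35))
print(gap_early < 1e-6, gap_late > 1e-3)  # small early, large late
```

As in the ocean ensemble, the perturbation is invisible at first and then the members decorrelate completely, after which only statistical statements about the spread are meaningful.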

  15. A linear input-varying framework for modeling and control of morphing aircraft

    Science.gov (United States)

    Grant, Daniel T.

    2011-12-01

Morphing, which changes the shape and configuration of an aircraft, is being adopted to expand aircraft mission capabilities. Biologically-inspired morphing is particularly attractive in that highly agile birds present examples of desired shapes and configurations. A previous study adopted such morphing by designing a multiple-joint wing that represented the shoulder and elbow joints of a bird. The resulting variable-gull aircraft could rotate the wing section vertically at these joints to alter the flight dynamics. This paper extends that multiple-joint concept to allow a variable-sweep wing with independent inboard and outboard sections. The aircraft is designed and analyzed to demonstrate the range of flight dynamics that result from the morphing. In particular, the vehicle is shown to have enhanced crosswind rejection, which is certainly a critical metric for the urban environments in which these aircraft are anticipated to operate. Mission capability can be enabled by morphing an aircraft to optimize its aerodynamics and associated flight dynamics for each maneuver. Such optimization often considers the steady-state behavior of the configuration; however, the transient behavior must also be analyzed. In particular, the time-varying inertias have an effect on the flight dynamics that can adversely affect mission performance if not properly compensated. These inertia terms cause coupling between the longitudinal and lateral-directional dynamics even for maneuvers around trim. A simulation of a variable-sweep aircraft undergoing a symmetric morphing for an altitude change shows a noticeable lateral translation in the flight path because of the induced asymmetry. The flight dynamics of morphing aircraft must be analyzed to ensure shape-changing trajectories have the desired characteristics. The tools for describing the flight dynamics of fixed-geometry aircraft are not valid for time-varying systems such as morphing aircraft.
This paper introduces

  16. Analysis models for variables associated with breastfeeding duration.

    Science.gov (United States)

    dos S Neto, Edson Theodoro; Zandonade, Eliana; Emmerich, Adauto Oliveira

    2013-09-01

OBJECTIVE To analyze the factors associated with breastfeeding duration using two statistical models. METHODS A population-based cohort study was conducted with 86 mothers and newborns from two areas primarily covered by the National Health System, with high rates of infant mortality, in Vitória, Espírito Santo, Brazil. Over 30 months, 67 (78%) children and mothers were visited seven times at home by trained interviewers, who filled out survey forms. Data on feeding and sucking habits and on socioeconomic and maternal characteristics were collected. Variables were analyzed by Cox regression models, with duration of breastfeeding as the dependent variable, and by logistic regression (the dependent variable was the presence of a breastfeeding child at different post-natal ages). RESULTS In the logistic regression model, pacifier sucking (adjusted odds ratio: 3.4; 95%CI 1.2-9.55) and bottle feeding (adjusted odds ratio: 4.4; 95%CI 1.6-12.1) increased the chance of weaning a child before one year of age. Variables associated with breastfeeding duration in the Cox regression model were pacifier sucking (adjusted hazard ratio 2.0; 95%CI 1.2-3.3) and bottle feeding (adjusted hazard ratio 2.0; 95%CI 1.2-3.5). However, protective factors (maternal age and family income) differed between the models. CONCLUSIONS Risk and protective factors associated with the cessation of breastfeeding may be analyzed by different statistical regression models. Cox regression models are adequate for analyzing such factors in longitudinal studies.
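The odds ratio reported by the logistic model for a single binary exposure can be illustrated from a 2x2 table. The counts below are hypothetical, not the study's data:

```python
import math

# Hypothetical 2x2 table: weaning before one year of age by pacifier use.
#                 weaned < 1 yr   still breastfed
pacifier    = [30, 10]
no_pacifier = [15, 25]

# Unadjusted odds ratio: the quantity a logistic regression estimates
# (exponentiated coefficient) for one binary exposure with no covariates.
odds_exposed = pacifier[0] / pacifier[1]
odds_unexposed = no_pacifier[0] / no_pacifier[1]
OR = odds_exposed / odds_unexposed
print(round(OR, 1))  # → 5.0

# Approximate 95% CI on the log scale (Woolf method).
se = math.sqrt(sum(1.0 / c for c in pacifier + no_pacifier))
lo = math.exp(math.log(OR) - 1.96 * se)
hi = math.exp(math.log(OR) + 1.96 * se)
print(lo > 1.0)  # CI excluding 1 indicates an association
```

The Cox model's hazard ratio is the analogous exponentiated coefficient for the time-to-weaning outcome; the two differ because one conditions on status at fixed ages while the other uses the full duration, which is why the paper's protective factors did not coincide across models.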

  17. Linear and Non-linear Multi-Input Multi-Output Model Predictive Control of Continuous Stirred Tank Reactor

    Directory of Open Access Journals (Sweden)

    Muayad Al-Qaisy

    2013-04-01

In this article, a multi-input multi-output (MIMO) linear model predictive controller (LMPC) based on a state-space model and a nonlinear model predictive controller based on a neural network (NNMPC) are applied to a continuous stirred tank reactor (CSTR). The idea is to have a good control system that is able to give optimal performance, reject high load disturbances, and track set-point changes. In order to study the performance of the two model predictive controllers, a MIMO proportional-integral-derivative (PID) control strategy is used as a benchmark. The LMPC, NNMPC, and PID strategies are used for controlling the residual concentration (CA) and reactor temperature (T). The NNMPC shows superior performance over the LMPC and PID controllers, presenting a smaller overshoot and a shorter settling time.
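The receding-horizon idea behind both MPC variants can be sketched on an assumed scalar linear model: predict over a horizon, solve a small regularised least-squares problem for the input sequence, and apply only the first move. This is an illustration of the principle, not the article's CSTR state-space or neural-network models:

```python
import numpy as np

a, b = 0.9, 0.5           # assumed plant/model: x[k+1] = a*x[k] + b*u[k]
N = 5                     # prediction horizon
setpoint = 1.0

def mpc_step(x, lam=0.01):
    """Pick the input sequence minimising tracking error + lam*||u||^2,
    then return only the first move (receding horizon)."""
    # Predicted states: x[k] = a^k * x + sum_j a^(k-1-j) * b * u[j]
    G = np.zeros((N, N))
    for k in range(1, N + 1):
        for j in range(k):
            G[k - 1, j] = a ** (k - 1 - j) * b
    free = np.array([a ** k * x for k in range(1, N + 1)])
    u = np.linalg.solve(G.T @ G + lam * np.eye(N),
                        G.T @ (setpoint - free))
    return u[0]

x = 0.0
for _ in range(20):
    x = a * x + b * mpc_step(x)   # here the plant matches the model exactly
print(abs(x - setpoint) < 0.05)   # tracks the set point
```

The LMPC in the article follows this structure with a multivariable state-space model; the NNMPC replaces the linear prediction step with a neural-network model, which turns the inner problem into a nonlinear optimisation.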

  18. Input-Dependent Integral Nonlinearity Modeling for Pipelined Analog-Digital Converters

    OpenAIRE

    Samer Medawar; Peter Händel; Niclas Björsell; Magnus Jansson

    2010-01-01

    Integral nonlinearity (INL) for pipelined analog-digital converters (ADCs) operating at RF is measured and characterized. A parametric model for the INL of pipelined ADCs is proposed, and the corresponding least-squares problem is formulated and solved. The INL is modeled both with respect to the converter output code and the frequency stimuli, which is dynamic modeling. The INL model contains a static and a dynamic part. The former comprises two 1-D terms in ADC code that are a sequence of z...
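The static part of such a model, a low-order polynomial in the output code fitted by least squares, can be sketched as follows. The bow-shaped INL and noise level are assumed examples, not measured data, and the dynamic (stimulus-frequency-dependent) part of the article's model is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed 12-bit ADC with a bow-shaped static INL plus measurement noise.
codes = np.arange(4096)                          # output codes
true_inl = 0.5 * ((codes - 2048) / 2048.0) ** 2  # LSB
measured = true_inl + 0.01 * rng.normal(size=codes.size)

# Least-squares fit of a 2nd-order polynomial in (normalised) code.
A = np.vander(codes / 4096.0, 3)                 # columns: x^2, x, 1
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
fitted = A @ coef

rms_residual = float(np.sqrt(np.mean((fitted - measured) ** 2)))
print(rms_residual < 0.02)  # residual is at the assumed noise level
```

The article's full model adds frequency-dependent terms so the same formulation becomes a larger, still linear, least-squares problem over code and stimulus frequency jointly.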

  19. Validation of input-noise model for simulations of supercontinuum generation and rogue waves

    DEFF Research Database (Denmark)

    Frosz, Michael Henoch

    2010-01-01

A new model of pump noise in supercontinuum and rogue wave generation is presented. Simulations are compared with experiments and show that the new model provides significantly better agreement than the currently ubiquitous one-photon-per-mode model. The new model also allows for a study of the influence of the pump spectral line width on the spectral broadening mechanisms. Specifically, it is found that for four-wave mixing (FWM) a narrow spectral line width (~0.1 nm) initially leads to a build-up of FWM from quantum noise, whereas a broad spectral line width (~1 nm) initially leads to a gradual

  20. Error analysis of the quantification of hepatic perfusion using a dual-input single-compartment model

    Science.gov (United States)

    Miyazaki, Shohei; Yamazaki, Youichi; Murase, Kenya

    2008-11-01

We performed an error analysis of the quantification of liver perfusion from dynamic contrast-enhanced computed tomography (DCE-CT) data using a dual-input single-compartment model for various disease severities, based on computer simulations. In the simulations, the time-density curves (TDCs) in the liver were generated from an actually measured arterial input function using a theoretical equation describing the kinetic behavior of the contrast agent (CA) in the liver. The rate constants for the transfer of CA from the hepatic artery to the liver (K1a), from the portal vein to the liver (K1p), and from the liver to the plasma (k2) were estimated from simulated TDCs with various plasma volumes (V0s). To investigate the effect of the shapes of the input functions, the original arterial and portal-venous input functions were stretched in the time direction by factors of 2, 3 and 4 (stretching factors). The above parameters were estimated with the linear least-squares (LLSQ) and nonlinear least-squares (NLSQ) methods, and the root mean square errors (RMSEs) between the true and estimated values were calculated. Sensitivity and identifiability analyses were also performed. The RMSE of V0 was the smallest, followed by those of K1a, k2 and K1p in increasing order. The RMSEs of K1a, K1p and k2 increased with increasing V0, while that of V0 tended to decrease. The stretching factor also affected parameter estimation in both methods. The LLSQ method estimated the above parameters faster and with smaller variations than the NLSQ method. Sensitivity analysis showed that the magnitude of the sensitivity function of V0 was the greatest, followed by those of K1a, K1p and k2 in decreasing order, while the variance of V0 obtained from the covariance matrices was the smallest, followed by those of K1a, K1p and k2 in increasing order.
The magnitude of the sensitivity function and the variance increased and decreased, respectively, with increasing disease severity and decreased
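A simplified sketch of the LLSQ approach: generate a liver TDC from a dual-input single-compartment ODE, then recover the rate constants from the integrated form, which is linear in the parameters. The plasma-volume term V0 is omitted here, and both input functions are synthetic assumed shapes, not the measured inputs used in the article:

```python
import numpy as np

# Model: dC/dt = K1a*Ca(t) + K1p*Cp(t) - k2*C(t)
t = np.linspace(0.0, 60.0, 601)
dt = t[1] - t[0]
Ca = t * np.exp(-t / 8.0)                  # synthetic arterial input (a.u.)
Cp = 0.07 * t ** 1.5 * np.exp(-t / 12.0)   # synthetic portal-venous input

K1a, K1p, k2 = 0.02, 0.15, 0.05            # assumed "true" rate constants

# Forward-Euler generation of the liver time-density curve.
C = np.zeros_like(t)
for i in range(1, t.size):
    C[i] = C[i - 1] + dt * (K1a * Ca[i - 1] + K1p * Cp[i - 1] - k2 * C[i - 1])

# Integrating the ODE gives C(t) = K1a*int(Ca) + K1p*int(Cp) - k2*int(C),
# which is linear in (K1a, K1p, k2) and solvable by ordinary least squares.
def cumint(f):
    return dt * np.concatenate(([0.0], np.cumsum(f[:-1])))

A = np.column_stack([cumint(Ca), cumint(Cp), -cumint(C)])
est, *_ = np.linalg.lstsq(A, C, rcond=None)
print(np.round(est, 3))  # ≈ [0.02, 0.15, 0.05]
```

This linearity is what makes the LLSQ method faster and lower-variance than iterative NLSQ fitting, at the cost of noise-amplifying integration terms, which is where the error analysis in the paper comes in.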